Concurrency with fork/join in JDK 7

A few days ago, Brian Goetz came to visit a colleague of mine in our offices, and I had the opportunity to hear about some of the new concurrency features coming in JDK 7. We talked about the fork/join framework, and Brian kindly showed us the presentation he gave at JavaOne.

The new fork/join model is a Java implementation of a parallel loop. It allows you to perform concurrent processing on relatively large amounts of data by distributing the load among different threads. So if your hardware platform has a lot of CPUs/cores, you will be able to see significant performance improvements.

This is how it works: you create a number of tasks and split the workload among them. Then you run all of your tasks concurrently using the coInvoke() method. Tasks are allocated to a pool of worker threads for processing. Once all of your tasks have executed, coInvoke() returns. This allows you to run many independent operations over a data set.
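For the record, coInvoke() is the name from the jsr166y preview Brian was demoing; in the API that eventually shipped in JDK 7 the same operation is called invokeAll(). A minimal sketch of the pattern, splitting an array-increment workload among tasks (the class name and the threshold of 4 are arbitrary choices for illustration):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class Increment extends RecursiveAction {
    static final int THRESHOLD = 4;          // arbitrary split threshold
    final int[] data;
    final int lo, hi;                        // this task covers data[lo, hi)

    Increment(int[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override protected void compute() {
        if (hi - lo <= THRESHOLD) {          // small enough: process directly
            for (int i = lo; i < hi; i++) data[i]++;
        } else {                             // otherwise split the workload in
            int mid = (lo + hi) >>> 1;       // two and run both halves
            invokeAll(new Increment(data, lo, mid),   // concurrently; returns
                      new Increment(data, mid, hi));  // when both are done
        }
    }

    public static void run(int[] data) {
        new ForkJoinPool().invoke(new Increment(data, 0, data.length));
    }
}
```

The call to invokeAll() plays the role of coInvoke() described above: it blocks until every subtask has completed, while the pool's worker threads steal and execute the queued tasks.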

Moreover, the framework supports recursion, which means that tasks can create and execute other tasks. This is very important because it allows you to parallelize practically any divide-and-conquer algorithm. The example Brian showed was mergesort, but you are not limited to array data structures: you could, for example, do a BFS on a graph, or solve the maximum flow problem.
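A sketch of the mergesort case, using the recursive style described above (this is my own reconstruction, not Brian's slide code; the cutoff of 8 below which it falls back to sequential sorting is an arbitrary choice):

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ParallelMergeSort extends RecursiveAction {
    static final int CUTOFF = 8;            // below this, sort sequentially
    final int[] a;
    final int lo, hi;                       // this task sorts a[lo, hi)

    ParallelMergeSort(int[] a, int lo, int hi) {
        this.a = a; this.lo = lo; this.hi = hi;
    }

    @Override protected void compute() {
        if (hi - lo <= CUTOFF) {
            Arrays.sort(a, lo, hi);
            return;
        }
        int mid = (lo + hi) >>> 1;
        invokeAll(new ParallelMergeSort(a, lo, mid),   // the two halves are
                  new ParallelMergeSort(a, mid, hi));  // themselves tasks
        merge(mid);
    }

    void merge(int mid) {                   // standard sequential merge of the
        int[] left = Arrays.copyOfRange(a, lo, mid);   // two sorted halves
        int i = 0, j = mid, k = lo;
        while (i < left.length && j < hi)
            a[k++] = (left[i] <= a[j]) ? left[i++] : a[j++];
        while (i < left.length) a[k++] = left[i++];
    }

    public static void sort(int[] a) {
        new ForkJoinPool().invoke(new ParallelMergeSort(a, 0, a.length));
    }
}
```

Each task spawns two child tasks for the halves, waits for both, and then merges — exactly the recursive task-creates-tasks structure the framework is built for.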

On top of the fork/join model, a very useful parallel array data structure has been implemented that allows you to perform map, filter, sort and reduce operations. This is where things get interesting, because you start to express your application logic in terms of array operations.
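As it turned out, that ParallelArray class (from the jsr166y/extra166y preview) never shipped with JDK 7 itself; the same map/filter/reduce style landed later as parallel streams in Java 8. A sketch of the style in that form — the sum-of-even-squares computation is just an illustrative example, not anything from the presentation:

```java
import java.util.stream.LongStream;

public class ArrayOps {
    // filter -> map -> reduce over a parallel pipeline: the same shape of
    // computation the parallel array structure was designed to express.
    public static long sumOfEvenSquares(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()
                         .filter(x -> x % 2 == 0)  // keep even values
                         .map(x -> x * x)          // square each one
                         .sum();                   // reduce by addition
    }
}
```

Under the hood this splits the range, runs the pipeline on a fork/join pool, and combines the partial sums — the whole loop body is expressed as array (stream) operations.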

The framework is nicely designed, smartly implemented, and will become increasingly useful as the number of processor cores grows. But is this really the best framework available? Several Java libraries already implement the worker-farm model, and many languages have native support for this kind of parallelism (HPJava, Fortran, Occam). Some systems even go one level further and distribute the load across a cluster of machines (MapReduce, Hadoop).

What struck me most of all was that they didn't integrate this parallelism into the language and the JVM. That was their most significant advantage over the existing solutions, and they didn't make use of it. It would have resulted in a much cleaner syntax, and important optimizations could have been done at the JVM level. Maybe this will be a good testbed for something more radical in JDK 8… or perhaps Scala?

PS: I apologize about some missing references, I’ll try to add these asap.



  1. Maybe the syntax can be improved when closures are introduced.

    But fine-grained parallelism is going to be subject to a lot of problems. In the last few weeks I have found, in 3 different systems, HashMaps being used in a multithreaded environment without any synchronization (the structure was not read-only). So there is still a long way to go before using something as advanced as the fork/join framework.

  2. Hunter Wright

    I’m a mechanical engineering student at the University of Texas doing some research in the process control world, specifically in software vendors using predictive analytics to design better maintenance tools and procedures. Usually this involves analytics on large data volumes, and I believe that many of these vendors could benefit from a shift to software that utilizes concurrent processing on multi-core machines. The fork/join framework looks like something worth looking into… have you heard of DataRush? It’s another Java-based framework for developing apps that take advantage of multi-core hardware.

  3. jaksa

    Funny enough, my previous project involved rotordynamic analysis (and was also based on the Eclipse platform). This DataRush framework looks very interesting. But if you’re doing analysis and don’t require fast responses (like in a real-time system), go for the big gun: the grid. Having 32 cores is good, but having 100 machines with 32 cores is better. There are interesting new projects for distributing computations across large clusters. The last one I’ve heard of is GridGain, and it looks very promising.

  4. Hunter Wright

    Thanks for the response, jaksa.
    I’m just getting involved in this research, so I’m a little green but extremely interested. The GridGain website states that GridGain is not a data grid solution but a computational grid product. What does this mean specifically? Both Google and Yahoo are developing grid computing services as well, but the grid-versus-core debate seems to be very polarizing. I’ve read that grids carry a lot of overhead in the message-passing interface, while advances from the major processor developers (Intel, AMD, etc.) make 1000+ core processors in the near future look almost certain. Has anybody integrated both cloud and multi-core computing successfully? And if you don’t mind me picking your brain, I would be very interested to learn more about your work in rotordynamic analysis.
