Today was workshop day at EuroSys 2011, and I spent the day at the inaugural SFMA workshop. The aim of the workshop was to bring together practitioners from the fields of operating systems, programming languages and computer architecture research, and to provoke discussion about new trends in parallel computing. The most notable thing about the workshop was the number of practitioners it attracted: it started with standing room only at 9am, and maintained a respectable audience of 35 people through to 5pm. I was on the program committee for the workshop, and Ross did a great job of organising the whole thing.
Andrew Baumann (MSR) started the day with a keynote on Barrelfish, which is a multi-core OS that will likely be familiar to readers of this blog. For me, one of the most interesting parts of the talk was his mention of the Drawbridge project, which has reimplemented many parts of Windows as a library OS, and which Andrew is now porting to run on Barrelfish. The recent ASPLOS paper seems like a good place to start reading.
In the main sessions, the first talk was by Daniel Waddington (Samsung), who described a new queue management scheme for lightweight tasks in a parallel programming runtime called SNAPPLE. He showed how it works on the (cache-coherent) Tilera TilePro64 processor, and showed that their proposed adaptive scheme achieves speedups on some benchmarks. We should expect a fuller paper on SNAPPLE in due course.
Next, Benjamin Oechslein (Friedrich-Alexander University Erlangen-Nuremberg) introduced Invasive Computing, which is a new project looking at resource allocation in multi-core systems, based on the metaphor of processes invading, and retreating from, cores. More information can be found on the project website.
Standing between us and lunch was our own Malte Schwarzkopf, who presented some recent work we've done on porting CIEL to many-core architectures. I've already introduced CIEL in a previous post, but today's talk dived into some of the performance overheads when working at a fine granularity. We reckon CIEL is a good candidate for programming future multi-core systems, because it is already designed for a world without cache-coherent shared memory (i.e. distributed clusters), and it has the potential to scale up to clusters of multi-core machines. In today's talk, Malte showed how we slashed the overheads of fine-grained tasks from 42x to 1.4x, and we have a large number of possible improvements to make before we're finished. As ever, stay tuned for more about this.
After lunch, I stepped into the role of session chair, and we had four talks on the topic of concurrency. First off, we had two work-in-progress (WIP) talks: the first was from Ronald Strebelow (Augsburg University), who has been investigating design patterns for message handling on multi-core servers, and the second was from Irina Calciu (Brown University), who presented a new take on transactional memory using a shared-nothing (replicated data) approach.
The next full talk was from Vladimir Gajinov (Barcelona Supercomputing Center), and was about integrating data-flow abstractions with a transactional memory system. This was interesting to me as a developer of a rather different dataflow-based system, and the basic idea here was to use TM techniques to monitor writes to variables that would trigger subsequent tasks. This would reduce the amount of contention compared to a naïve TM-based implementation of some algorithms, by blocking tasks that couldn't possibly run until their data became available. I'll be interested to watch this project as it develops.
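To make the idea concrete, here is a minimal Python sketch of my own (all names are illustrative, not from the talk): tasks declare the variables they read, and a write to a watched variable — the sort of event a TM runtime could detect from its write-sets — releases any task whose inputs are now all available, so tasks that cannot possibly run yet never contend.

```python
# Hypothetical sketch of dataflow-triggered tasks: a task stays blocked until
# every variable it reads has been written, mimicking how a TM system's
# write-set monitoring could release dependent tasks. Illustrative names only.

class DataflowStore:
    def __init__(self):
        self.values = {}    # variable name -> written value
        self.waiting = []   # (needed_vars, task) pairs not yet runnable

    def when_ready(self, needed, task):
        """Register a task to run once all 'needed' variables are written."""
        self.waiting.append((set(needed), task))
        self._fire()

    def write(self, var, value):
        """A (transactionally monitored) write; may unblock dependent tasks."""
        self.values[var] = value
        self._fire()

    def _fire(self):
        still_waiting = []
        for needed, task in self.waiting:
            if needed <= self.values.keys():
                # All inputs present: run the task with its inputs.
                task(*(self.values[v] for v in sorted(needed)))
            else:
                still_waiting.append((needed, task))
        self.waiting = still_waiting

results = []
store = DataflowStore()
store.when_ready(["a", "b"], lambda a, b: results.append(a + b))
store.write("a", 1)   # task still blocked: 'b' is missing
store.write("b", 2)   # both inputs present, so the task fires
```

A real implementation would of course do this detection inside the TM runtime rather than in a central dispatcher, but the blocking-until-data-available behaviour is the same.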
The final talk took a more theoretical approach to the problem of concurrency: Maurice Herlihy (Brown University) spoke on the nature of progress, and in particular about progress conditions in concurrent code. The talk grew out of his textbook, The Art of Multiprocessor Programming (with Nir Shavit), and contained a new classification of progress conditions in terms of whether they depend on a benevolent scheduler, and whether they guarantee maximal or minimal progress. This led to some speculation on trade-offs that could be made in application programming if more information about the OS scheduler's policy were available (or vice versa). The talk also made the best use of Prezi that I've seen so far [sorry, Anil!].
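As an illustration of the minimal/maximal distinction (my own sketch, not from the talk), consider a lock-free counter built on compare-and-swap: lock-freedom guarantees minimal progress — some thread's CAS always succeeds — but any individual thread may retry indefinitely, unlike a wait-free algorithm, which guarantees maximal progress for every thread.

```python
# Sketch of a lock-free counter. The retry loop guarantees *minimal* progress:
# whenever threads contend, at least one CAS succeeds, but a particular thread
# can in principle lose every race (so this is not *wait-free*).
import threading

class LockFreeCounter:
    def __init__(self):
        self.value = 0
        self._cas_lock = threading.Lock()  # models a hardware atomic CAS

    def _cas(self, expected, new):
        with self._cas_lock:               # real CAS does this in one instruction
            if self.value == expected:
                self.value = new
                return True
            return False

    def increment(self):
        while True:                        # retry until our CAS wins
            old = self.value
            if self._cas(old, old + 1):
                return

counter = LockFreeCounter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)  # 4000: no increments are lost despite contention
```

The scheduler-dependence axis from the talk shows up here too: even an obstruction-free variant of this loop makes progress only if the scheduler eventually runs a thread in isolation.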
As if that weren't enough, we rounded off the day with an invigorating panel session, filled with controversial opinion, and lots of audience participation. Judging by the interest shown during the workshop, this is an exciting area of research to be working in right now, and I hope to see some of you at a future installment of SFMA in the years to come!