If parallel programming is to become as widespread as sequential programming, the languages that support it should offer the standard abstraction mechanisms: higher-order functions, recursion, pattern matching, and so on. Yet for such languages to be practical, scalable programming tools, abstraction should not come at the price of predictable performance. Unfortunately, many parallel languages do not specify data placement, so performance cannot be predicted from the source program alone: placement is determined by the language implementation rather than by its semantics.
By contrast, the bulk synchronous parallel (BSP) computing paradigm shows that programs written explicitly for a static number p of processors can have predictable execution costs on a wide variety of architectures. However, the combination of BSP algorithms with high-level language features is not well understood, and this hinders the evolution of BSP languages. We investigate this question from a functional programming perspective.
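To make the predictability claim concrete, here is a minimal sketch of the standard BSP cost model in OCaml. The function names, and the use of lists of per-processor costs, are our own illustrative choices; the parameters w, h, g and l follow the usual BSP conventions (local work, h-relation size, communication throughput cost and synchronisation latency) and do not refer to any BSMLlib interface.

  (* Cost of one BSP superstep on p processors: the maximal local work,
     plus the cost of the h-relation (h words exchanged at throughput g),
     plus the global synchronisation latency l. *)
  let superstep_cost ~w ~h ~g ~l =
    let max_list = List.fold_left max 0. in
    max_list w +. g *. max_list h +. l

  (* Cost of a whole BSP program: the sum of its supersteps' costs. *)
  let program_cost ~g ~l supersteps =
    List.fold_left
      (fun acc (w, h) -> acc +. superstep_cost ~w ~h ~g ~l)
      0. supersteps

Because g and l are machine parameters and w and h can be read off the algorithm, this sum can be evaluated for a given architecture before the program is ever run, which is the sense in which BSP costs are predictable.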
We have defined extensions of the lambda-calculus with BSP operations (BSlambda), which serve as the basis for the design of a functional bulk synchronous parallel language (BSML).
A library for the Objective Caml language, called BSMLlib, has been designed; it implements all of our flat BSP operations. Parallel composition operations have also been proposed (juxtaposition and superposition); BSML extended with parallel juxtaposition is no longer a pure functional language, and these operations are not yet implemented. Within the Caraml project we investigated the use of variants of BSMLlib for meta-computing.
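The following purely sequential sketch illustrates the kind of flat primitives BSML is built around. The names mkpar, apply, put and proj follow the published BSML papers, but the simulation below is ours and does not reproduce BSMLlib's actual implementation or exact signatures.

  let bsp_p = 4                        (* static number of processors (assumed here) *)

  type 'a par = 'a array               (* a parallel vector: one value per processor *)

  (* mkpar f builds the parallel vector <f 0, ..., f (p-1)>. *)
  let mkpar (f : int -> 'a) : 'a par = Array.init bsp_p f

  (* apply applies, pointwise, a vector of functions to a vector of values. *)
  let apply (fs : ('a -> 'b) par) (xs : 'a par) : 'b par =
    Array.init bsp_p (fun i -> fs.(i) xs.(i))

  (* put performs the communication step of a superstep: at processor i,
     msgs.(i) dst is the value sent to dst; afterwards, processor dst holds
     a function giving the value it received from each source. *)
  let put (msgs : (int -> 'a) par) : (int -> 'a) par =
    Array.init bsp_p (fun dst -> fun src -> msgs.(src) dst)

  (* proj turns a parallel vector back into an ordinary function. *)
  let proj (v : 'a par) : int -> 'a = fun i -> v.(i)

For instance, mkpar (fun i -> i) builds the vector of processor identifiers, and combining mkpar with put expresses a total exchange in a single superstep, whose cost can be read off the BSP model above.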
In the Propac project, we work on improving the safety of parallel programming based on BSML. The project has three main research directions:
Please read the following papers for further information: