Scottish Programming Languages Seminar (SPLS)

University of the West of Scotland, Paisley Campus, 26th June 2017


About SPLS

The Scottish Programming Languages Seminar (SPLS) is a venue for discussion of all aspects of programming languages, including theory, implementation and design. SPLS has a long lineage of meetings, going back to 2004, and the host cycles between several Scottish universities. This meeting, on the 26th of June, will be the first to be held at the University of the West of Scotland, at its Paisley campus.

For further information on SPLS meetings and activities, join the SPLS Mailing List.

Time and Location

The meeting will run from 12:00 until 17:30 in F Block (also known as Henry Building West) on the University of the West of Scotland's Paisley campus. Refreshments will be available in room F124, and the talks will be held in room F113.

Travel information for the campus and a campus map can be found here.

If you arrive on foot from Paisley Gilmour Street train station, you will reach the east or north entrance to the campus first; for rooms F113 and F124, however, the west entrance makes navigation simpler. F Block (Henry Building West) is the three-storey building in the centre of this Google Street View image, taken from the west entrance on Lady Lane.

Registration

Please register using this doodle poll so that we can get an idea of numbers for catering. If you have any dietary requirements, please contact one of the organisers.

Speakers

Nick Brown - An implementation of Python for the micro-core Epiphany co-processor
Christopher Brown - ParaFormance: Democratizing Parallel Software Development
Andrew Gozillon - Programmable address spaces
Conor McBride - Why walk when you can take the tube?
Michel Steuwer - Towards Composable GPU Programming: Programming GPUs with Eager Actions and Lazy Views
Rob Stewart - Mapping dataflow programs to FPGAs
Thomas Wright - Process algebra meets cellular biology

Programme

Time / Speaker / Affiliation / Title / Abstract
12:00
Lunch
13:00
Conor McBride
University of Strathclyde
Why walk when you can take the tube?
Substituting terms for free variables in terms is a recurrent task in implementations of programming languages and proof assistants. It is not unusual for substitution operations to spend much time searching in vain for free variables in closed subterms. I'll show a way to zip around between the interesting nodes in terms by constructing a "tube network", built from sequences of one-hole contexts. Expect traversable functors, free monads, and a touch of differential calculus.
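As a rough illustration of the one-hole-context idea behind this abstract (a generic zipper sketch, not the construction presented in the talk; the types and names below are invented for the example), a context frame records the step taken from a parent node, and a path of frames lets a focused subterm be edited and plugged back in without revisiting the closed subterms hanging off the spine:

    #include <memory>
    #include <vector>

    // A tiny binary "term": either a leaf value or an application of two subterms.
    struct Term {
        int leaf = 0;
        std::shared_ptr<Term> left, right;
    };

    // One-hole context frame: records whether we descended left or right,
    // and keeps the sibling subterm we did not enter.
    struct Frame {
        bool went_left;
        std::shared_ptr<Term> sibling;
    };

    // Plug a focused subterm back through its path of frames, rebuilding only
    // the spine from the hole up to the root; untouched subterms are shared,
    // never traversed.
    std::shared_ptr<Term> plug(std::shared_ptr<Term> focus,
                               const std::vector<Frame>& path) {
        for (auto it = path.rbegin(); it != path.rend(); ++it) {
            auto node = std::make_shared<Term>();
            if (it->went_left) { node->left = focus;       node->right = it->sibling; }
            else               { node->left = it->sibling; node->right = focus; }
            focus = node;
        }
        return focus;
    }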
13:30
Thomas Wright
University of Edinburgh
Process algebra meets cellular biology
Cells and concurrent programs are not so different - both have to communicate, process information, and respond to their environment. However, biological systems consist of many more agents than even the largest computer networks, and are highly heterogeneous with no apparent overall design, presenting formidable challenges in modelling and understanding their behaviour. Worse still, biologists don't have access to the high level programming languages we rely on in designing concurrent programs, instead working at the level of machine code (DNA) and circuit diagrams (protein signalling networks). This has inspired many to investigate ways to apply ideas from concurrent programming and process algebra to build higher level languages for modelling biochemical systems.

We propose a new high-level concurrent programming language for modelling biological systems. We build upon the π-calculus to model cells as communicating agents, and show how concepts such as parallel composition, name binding, and synchronisation correspond to concepts in biology. We apply our language to V. A. Kuznetsov's classic model of immune response to tumour growth, and show how we are able to capture complex features including nonlinear interaction dynamics, n-party interactions, and dynamic binding of agents to form new agents.
14:00
Coffee Break
14:30
Andrew Gozillon
University of the West of Scotland
Programmable Address Spaces
In the last decade, high-performance computing has made increasing use of heterogeneous many-core parallelism. The individual processor cores within such a system are typically far simpler than their predecessors, and a growing portion of the challenge of executing programs efficiently is reassigned: tasks that were previously the responsibility of hardware are now delegated to software. Fast on-chip memory, for which a traditional CPU would provide a hardware data cache, is instead exposed within a series of trivially distinct programming languages through a handful of address space annotations, which associate discrete sections of memory with pointers, or through similar low-level abstractions. Our work aims to improve the programmability of address spaces by exposing new functionality within the existing template metaprogramming system of C++. This is achieved firstly via a new LLVM attribute, ext_address_space, which facilitates integration with the non-type template parameters of C++. We also present a type traits API which encapsulates the address space annotations, so that code can be used with both conventional and extended C++ compilers.
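As a purely illustrative sketch in standard C++ (the tag names and trait below are hypothetical and are not the authors' actual ext_address_space API), an address space can be carried in a non-type template parameter and queried through a type trait:

    #include <type_traits>

    // Hypothetical address-space tags, in the spirit of OpenCL's global/local/constant.
    enum class addr_space { generic, global, local, constant };

    // A pointer wrapper whose address space is part of its type,
    // via a non-type template parameter.
    template <typename T, addr_space AS = addr_space::generic>
    struct as_ptr {
        T* ptr;
        static constexpr addr_space space = AS;
    };

    // Type-trait style query over the wrapper, usable in compile-time code.
    template <typename P> struct address_space_of;
    template <typename T, addr_space AS>
    struct address_space_of<as_ptr<T, AS>>
        : std::integral_constant<addr_space, AS> {};

    int main() {
        float buffer[8] = {};
        as_ptr<float, addr_space::local> p{buffer};
        static_assert(address_space_of<decltype(p)>::value == addr_space::local,
                      "the address space is visible to compile-time machinery");
        return 0;
    }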
15:00
Christopher Brown
University of St Andrews
ParaFormance: Democratizing Parallel Software Development
Emerging multicore and manycore architectures offer major advantages in terms of performance and low energy usage. We are already seeing designs for 100+ core CPUs and 1000+ core GPUs, offering significant potential for parallelism. However, programming models are lagging behind: exploiting the potential of new parallel systems, even using higher-level programming models, is highly challenging.

Fundamentally: "Parallelism is too hard for programmers today"
Bjarne Stroustrup, Inventor of C++

ParaFormance is a novel software toolset for C and C++ that allows software developers to optimise systems for performance and energy consumption by exploiting parallelism quickly and easily. The ParaFormance tool discovers the areas of an application with potential for parallelism, automatically refactors them to introduce the parallel business logic, and checks the result for thread-safety and runtime bugs. In our case studies, 2.5 million lines of code have been analysed and refactored using ParaFormance, reducing roughly a month of manual effort to around 5 minutes. In this talk I will introduce the ParaFormance toolset and demonstrate it on a realistic use-case.
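For a flavour of the kind of rewrite involved (a generic illustration using the C++17 parallel algorithms, not ParaFormance output), a sequential loop and its pattern-based parallel counterpart might look like this:

    #include <algorithm>
    #include <execution>
    #include <vector>

    // Before: a sequential per-element update.
    void scale_seq(std::vector<double>& v, double k) {
        for (auto& x : v) x *= k;
    }

    // After: the same computation expressed as a parallel pattern.
    void scale_par(std::vector<double>& v, double k) {
        std::for_each(std::execution::par, v.begin(), v.end(),
                      [k](double& x) { x *= k; });
    }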
15:30
Michel Steuwer
University of Edinburgh
Towards Composable GPU Programming: Programming GPUs with Eager Actions and Lazy Views
In this work, we advocate a composable approach to programming systems with Graphics Processing Units (GPUs): programs are developed as compositions of generic, reusable patterns. Current GPU programming approaches either rely on low-level, monolithic code without patterns (CUDA and OpenCL), which achieves high performance at the cost of cumbersome and error-prone programming, or they improve programmability by using pattern-based abstractions (e.g., Thrust) but pay a performance penalty due to inefficient implementations of pattern composition.

We develop an API for GPU programming, based on C++ with STL-style patterns, together with its compiler-based implementation. Our API gives application developers native C++ means (views and actions) to specify precisely which pattern compositions should be automatically fused during code generation into a single efficient GPU kernel, thereby ensuring high target performance. We implement our approach by extending the range-v3 library, which is currently being developed for the forthcoming C++ standards. Composable programming in our approach is done exclusively in standard C++14, with STL algorithms used as patterns, which we have re-implemented in parallel for the GPU. Our compiler implementation is based on the LLVM and Clang frameworks, and we use advanced multi-stage programming techniques for aggressive runtime optimizations.

We experimentally evaluate our approach using a set of benchmark applications and a real-world case study from the area of image processing. Our codes achieve performance competitive with monolithic CUDA implementations, and we outperform pattern-based codes written using Nvidia's Thrust.
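To illustrate the lazy-views/eager-actions split on which this approach rests (shown here with the C++ standard ranges that grew out of range-v3; the GPU code generation and kernel fusion described above are not part of this snippet):

    #include <ranges>
    #include <vector>

    int main() {
        std::vector<int> xs = {1, 2, 3, 4, 5, 6, 7, 8};

        // Views are lazy: composing them records the pipeline but computes nothing.
        auto squares_of_evens = xs
            | std::views::filter([](int x) { return x % 2 == 0; })
            | std::views::transform([](int x) { return x * x; });

        // The eager step forces the whole composition in a single pass; in the
        // approach above, this is where a single fused GPU kernel would be generated.
        int sum = 0;
        for (int x : squares_of_evens) sum += x;   // 4 + 16 + 36 + 64 = 120
        return sum == 120 ? 0 : 1;
    }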
16:00
Coffee Break
16:30
Nick Brown
EPCC, University of Edinburgh
An implementation of Python for the micro-core Epiphany co-processor
The Epiphany is a many-core co-processor with low power consumption and limited on-chip memory, typical of a number of innovative micro-core architectures. The very low power nature of these architectures means that there is potential for their use in future HPC machines, and their low cost makes them ideal for HPC education and prototyping. However, there is a high barrier to entry in programming them, due to the associated complexity and the immaturity of supporting tools.
17:00
Rob Stewart
Heriot-Watt University
Mapping dataflow programs to FPGAs
FPGAs are unlike fixed, conventional processor architectures. FPGAs are programmable, meaning that logic gates and special-purpose hardware blocks can be configured to precisely meet the needs of an algorithm. Dataflow languages map naturally to the distributed hardware layout of FPGA fabric, and offer a high-level programming abstraction for designing FPGA accelerators.

This talk will cover dataflow language models, from synchronous actors supporting signal processing to asynchronous FSM actors supporting complex, non-trivial algorithms. I will demonstrate how a hardware cost modeller is used in conjunction with a graphical program refactoring tool we have developed to trade off throughput time against space. Our Petri net dataflow abstraction aids parallelism discovery in stateful actors, increasing the generality of the program transformations. I will briefly present our image processing DSL, showing how the Dataflow Process Network model is an effective intermediary between higher-order algorithm skeletons and FPGAs.
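As a minimal sketch of the dataflow-actor model being mapped to hardware (a generic token-firing actor, not the authors' toolchain or DSL):

    #include <queue>

    // A dataflow actor fires only when tokens are available on its input FIFOs;
    // a network of such actors maps onto independent regions of FPGA fabric
    // connected by channels.
    struct AddActor {
        std::queue<int> in_a, in_b, out;

        bool can_fire() const { return !in_a.empty() && !in_b.empty(); }

        void fire() {   // consume one token per input, emit one result token
            int a = in_a.front(); in_a.pop();
            int b = in_b.front(); in_b.pop();
            out.push(a + b);
        }
    };

    int main() {
        AddActor add;
        add.in_a.push(1); add.in_b.push(2);
        while (add.can_fire()) add.fire();
        return add.out.front() == 3 ? 0 : 1;
    }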
17:30
End

Acknowledgement

This meeting of SPLS has been supported by the Theory, Modelling and Computation theme of the Scottish Informatics and Computer Science Alliance (SICSA).

Organisers

Paul Keir, Paul.Keir@uws.ac.uk
Andrew Gozillon, Andrew.Gozillon@uws.ac.uk