| Time | Session |
| --- | --- |
| 09:30 - 10:00 | Welcome and communications check |
| 10:00 - 10:30 | Ralf Jung (MPI-SWS & Saarland Informatics Campus, Germany): Stacked Borrows: An Aliasing Model for Rust (Vimeo) |
| 10:45 - 11:15 | Ivan Čukić (University of Belgrade, Serbia): Linear types can save the API (Vimeo) |
| 11:30 - 12:00 | Vadim Zaytsev (University of Twente, The Netherlands): Hidden Mainstream: The Mainframe Languages (Vimeo) |
| 12:15 - 13:55 | Lunch Break |
| 14:00 - 14:30 | Jeremy Singer (University of Glasgow, UK): Python programmers have GPUs too (Vimeo) |
| 14:45 - 15:15 | Joël Falcou (Paris-Sud University, France): Designing the future of computation the C++ way (Vimeo) |
| 15:15 - 16:00 | Coffee and Chat |
Type systems are useful not just for the safety guarantees they provide, but also for helping compilers generate more efficient code by simplifying important program analyses. In Rust, the type system imposes a strict discipline on pointer aliasing, and it is an express goal of the Rust compiler developers to make use of that alias information for the purpose of program optimizations that reorder memory accesses. The problem is that Rust also supports unsafe code, and programmers can write unsafe code that bypasses the usual compiler checks to violate the aliasing discipline. To strike a balance between optimizations and unsafe code, the language needs to provide a set of rules such that unsafe code authors can be sure, if they are following these rules, that the compiler will preserve the semantics of their code despite all the optimizations it is doing.
In this work, we propose *Stacked Borrows*, an operational semantics for memory accesses in Rust. Stacked Borrows defines an aliasing discipline and declares programs violating it to have *undefined behavior*, meaning the compiler does not have to consider such programs when performing optimizations. We give formal proofs (mechanized in Coq) showing that this rules out enough programs to enable optimizations that reorder memory accesses around unknown code and function calls, based solely on intraprocedural reasoning. We also implemented this operational model in an interpreter for Rust and ran large parts of the Rust standard library test suite in the interpreter to validate that the model permits enough real-world unsafe Rust code.
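As a loose illustration of the kind of optimization the abstract refers to (a sketch of ours, not code from the paper; all names are illustrative): because a `&mut` reference is assumed unique under Stacked Borrows, the compiler may forward a stored value across an opaque call using only intraprocedural reasoning, instead of re-reading memory.

```rust
// Sketch: why &mut uniqueness enables reordering memory accesses.
// After `*x = 42`, no unknown code may legally alias `*x` (doing so
// would be undefined behavior under Stacked Borrows), so the second
// load is optimizable to the constant 42 without inspecting `f`.
fn example(x: &mut i32, f: impl Fn()) -> i32 {
    *x = 42;
    f(); // opaque call: cannot legally write to *x
    *x   // may be replaced by 42
}

fn main() {
    let mut v = 0;
    let r = example(&mut v, || println!("unknown code runs"));
    println!("{r}");
}
```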
C++ is a language celebrated for abstraction mechanisms that incur no performance penalty at runtime. It is used in mission-critical systems, games, and everywhere else where speed and safety are paramount.
That does not mean all C++ programs are fast by default. One of the most costly mistakes we can make in C++ is creating unnecessary copies. C++11 move semantics were a move in the right direction: they allow us to give away data that we no longer want to use, without the penalty of creating a copy and without the risk of data races from sharing the data between multiple entities.
But, while generally cheaper than copying, moving is still not a free operation. For this reason, many people dismiss FP-style APIs as inefficient and create impure, stateful APIs instead.
We are going to cover "linear types" - an idea from Phil Wadler originally meant to simplify and optimize pure, GC-based languages such as Haskell. We will show how the same concept that allows pure functional programming languages to efficiently "change the world" can improve software written in a traditionally imperative, impure language such as C++, and that with linear types, FP-style APIs need not be inherently slower than their impure imperative counterparts.
In this talk, we will give an overview of typical mainframe languages and the features that make them both undesirable/unmaintainable and hard to migrate. There will also be examples of migration projects with their typical challenges, and possible research opportunities inspired by them. The presenter worked for five years as an analyst, developer and CSO of the largest independent compiler company specialising in such projects.
Auto-parallelization fits the Python philosophy, provides effective performance, and is convenient for non-expert developers. Despite Python being a dynamic language, we show that it is a suitable target for auto-parallelization, and that staging dependence analysis is an effective way to maximize performance. We apply classical dependence analysis techniques, then leverage the Python runtime’s rich introspection capabilities to resolve additional loop bounds and variable types in a just-in-time manner. We use a cost model to predict which available target device will provide the fastest execution time for each loop. In relevant cases, loop nest code is converted to CUDA kernels for GPU execution. We achieve orders of magnitude speedup over baseline interpreted execution and some speedup over CPU JIT-compiled execution, across 12 loop-intensive standard benchmarks.
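To make "dependence analysis" concrete, here is a toy sketch (ours, not the authors' staged analyzer) for loops of the restricted shape `a[i + w] = a[i + r] + 1`: the dependence distance is `w - r`, and only when it is zero do the iterations touch disjoint work, making the loop safe to run in any order.

```python
# Toy dependence test for loops shaped `a[i + w] = a[i + r] + 1`.
# Names and the restriction to constant offsets are ours, for illustration.

def parallelizable(w: int, r: int) -> bool:
    """Zero dependence distance => no iteration reads another's write."""
    return w - r == 0

def run(a, w, r, order):
    """Execute the loop over a copy of `a`, visiting iterations in `order`."""
    a = list(a)
    n = len(a) - max(w, r)
    for i in order(range(n)):
        a[i + w] = a[i + r] + 1
    return a

base = [0, 0, 0, 0, 0]
# Independent loop (w == r == 0): forward and reversed orders agree.
assert run(base, 0, 0, list) == run(base, 0, 0, reversed)
# Recurrence (w=1, r=0): iteration order changes the result, so the
# analysis must keep this loop sequential.
assert run(base, 1, 0, list) != run(base, 1, 0, reversed)
assert not parallelizable(1, 0)
```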
Numeric simulations have become a pillar of the Modern Scientific Method, replacing experiments altogether in some fields. Within this new paradigm, people doing science had to switch quickly to writing software. As with any new endeavour, this change came with teething problems, as few science experts are also experts in software design, development or even research. So-called scientific software needs to be fast and developed quickly, as the race to publications, patents or viable products is faster than ever. But how can our scientists become efficient in software design? Dynamic languages like Python, Matlab or R fill a niche by allowing non-computer-scientists to write software quickly.
So why turn to C++? Well, the resources required for this kind of computation are not cheap, and having to wait for results because the software is slow is a bad experience. So why not use our good old, close-to-the-metal infrastructure language, C++?
In this talk we will explore why C++, in its latest incarnation of C++20, fits the bill. We will see how designing languages as libraries gives us the best of both worlds: a high level of domain abstraction as well as a high level of performance. We shall see that recent C++ standards draw their power from borrowing powerful concepts from classical languages; concepts such as higher-order functions, code as data, and many more.