Events

Upcoming events

Upcoming seminars

None currently planned.

Past events

Past seminars

Conjecture: A QuickCheck for everyone
David R. MacIver
2016-02-12, 15:00, room S7.06

QuickCheck is a Haskell library for property-based testing: "smart" unit tests that generate example inputs to test against as part of the test execution.

Unfortunately, porting its ideas to mainstream languages has proven challenging. Hypothesis (my Python library for property-based testing) is probably the most thorough implementation of these ideas in any mainstream language, including a number of features that go beyond what QuickCheck can do. However, it is even more work to port to new languages than QuickCheck is.
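To illustrate the style (a minimal sketch; the run-length encoding functions are invented for this example), a Hypothesis test states a property over all inputs and lets the library search for, and shrink, a counterexample:

```python
# Property-based test with Hypothesis: state a law that should hold for all
# inputs and let the library generate, and shrink, counterexamples.
from hypothesis import given, strategies as st

def run_length_encode(xs):
    """Toy function under test (invented for this sketch)."""
    runs = []
    for x in xs:
        if runs and runs[-1][0] == x:
            runs[-1] = (x, runs[-1][1] + 1)
        else:
            runs.append((x, 1))
    return runs

def run_length_decode(runs):
    return [x for x, n in runs for _ in range(n)]

@given(st.lists(st.integers()))
def test_decode_inverts_encode(xs):
    # The property: decoding an encoding gives back the original list.
    assert run_length_decode(run_length_encode(xs)) == xs
```

Running this under pytest (or calling the decorated function directly) makes Hypothesis execute the property against many generated lists and report a minimal failing example if the law is ever violated.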

Conjecture takes these ideas further. Using a clever choice of intermediate representation, Conjecture expands Hypothesis's feature set while shrinking its implementation complexity. The result is an approach to property-based testing that is more powerful than anything previously seen, while simultaneously being easier to use and vastly easier to port to new languages than QuickCheck is.

I'll give an overview of the expanded feature set and some of the challenges to making this work, then I'll outline the approach that Conjecture takes and the magic that makes it work. I'll finish by describing some of the future directions for my research on this.

How to Become Rich, Famous, and Popular While Using Your Programming Skills to Make the World a Better Place (slides)
Greg Wilson
2015-11-17, 16:00, room S1.12 (directions)

Computers are now as important to science as telescopes and test tubes, but most scientists are still never taught how to build programs they can read, validate, and re-use. As a result, they spend days or weeks doing things that should take minutes or hours, and often have no idea how reliable or reproducible their results are.

Software Carpentry was founded in 1998 with the mission of giving researchers the skills they need to build and use software as proficiently as they build and use other kinds of laboratory equipment. Since rebooting in 2010, this volunteer non-profit organization has run more than 500 workshops for more than 16,000 people in 29 countries. Along the way, we have learned a lot about the gaps and overlooked opportunities in today's computing ecosystems. This talk will explore several of these and use them to illustrate how professional programmers and computer scientists can support some crucial but neglected parts of 21st-century science.

Bio: Greg Wilson is the co-founder of Software Carpentry, a crash course in computing skills for scientists and engineers. He has worked for 30 years in both industry and academia, and is the author or editor of several books on computing (including the 2008 Jolt Award winner "Beautiful Code") and two for children. Greg received a Ph.D. in Computer Science from the University of Edinburgh in 1993.

Bidirectional Transformation is Effectful
Jeremy Gibbons, University of Oxford
2015-03-11, 11:00, S1.12

Bidirectional transformations (bx) are programs that maintain consistency between different information sources. Given that some information sources may contain information that is absent from the others, consistency restoration cannot simply be a pure function of the immediately changed source. We observe that this is inherently an effectful computation, for example treating the existing consistent sources as a state to be modified. This insight generalizes readily in a number of directions: from asymmetric ("master-slave") to symmetric ("peer-to-peer") situations; from binary to multiary relationships; and to additional effects such as I/O, non-determinism, and exceptions. We conjecture that it will serve as a foundation for unifying many different strands of work in the bx community. In particular, we are studying the structure of witnesses as to how sources are consistent, rather than merely that they are consistent, and using this to understand the notion of "least change" in restoring consistency.

This is joint work with Perdita Stevens, James Cheney, James McKinna, and Faris Abou-Saleh.
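A very loose sketch of the observation (in Python, with invented record fields; not the formal account given in the talk): because the view discards information, the put direction cannot be a pure function of the edited view alone, so it consults state holding the discarded complement.

```python
# Illustrative sketch only: an asymmetric lens between a full "source" record
# and a "view" that drops the address field. Because the view loses
# information, put cannot be a pure function of the new view alone: it reads
# (and may rewrite) a complement stored in mutable state.

state = {"complement": None}   # the effect: state shared between get and put

def get(source):
    # Project the view and stash the information the view discards.
    state["complement"] = {"address": source["address"]}
    return {"name": source["name"]}

def put(new_view):
    # Restore a consistent source by combining the edited view with the
    # stashed complement; a richer system might also update the complement.
    return {"name": new_view["name"], **state["complement"]}

source = {"name": "Ada", "address": "Cambridge"}
view = get(source)
view["name"] = "Grace"
assert put(view) == {"name": "Grace", "address": "Cambridge"}
```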

Making the R language FasteR
Adam Welc, Oracle Labs
2014-11-03, 11:00, B4 (North Wing)

In this talk I will first briefly introduce the R programming language and discuss some of the bottlenecks in its current reference implementation. I will then describe an experimental infrastructure developed at Oracle Labs that enables efficient implementation of programming language execution environments. This infrastructure consists of a framework for constructing self-specializing interpreters (Truffle) and an optimizing compiler (Graal), and it is the foundation underlying FastR, a new open-source implementation of R developed at Oracle Labs in collaboration with our academic partners. I will also explain how Truffle and Graal can help to alleviate some of R's existing performance issues and discuss the current status of the FastR project.
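The self-specialization idea itself can be sketched in a few lines (an illustrative Python sketch, not Truffle's Java API): an AST node observes the values it actually encounters, rewrites itself into a type-specialized node guarded by a cheap check, and falls back to the generic case if the guard ever fails.

```python
# Python sketch of a self-specializing AST node for "a + b": the node starts
# uninitialized, observes its first operands at run time, and rewrites itself
# into a type-specialized node with a guard.

class GenericAddNode:
    def execute(self, a, b):
        return a + b                          # fully general (and slower) semantics

class IntAddNode:
    def __init__(self, parent):
        self.parent = parent
    def execute(self, a, b):
        if isinstance(a, int) and isinstance(b, int):
            return a + b                      # fast path for the common case
        # Guard failed: give up the specialization and fall back to generic.
        self.parent.node = GenericAddNode()
        return self.parent.node.execute(a, b)

class UninitializedAddNode:
    def __init__(self, parent):
        self.parent = parent
    def execute(self, a, b):
        # Specialize according to the values actually seen on first execution.
        if isinstance(a, int) and isinstance(b, int):
            self.parent.node = IntAddNode(self.parent)
        else:
            self.parent.node = GenericAddNode()
        return self.parent.node.execute(a, b)

class AddExpr:
    """Parent AST node holding the (rewritable) implementation node."""
    def __init__(self):
        self.node = UninitializedAddNode(self)
    def execute(self, a, b):
        return self.node.execute(a, b)

expr = AddExpr()
print(expr.execute(2, 3))      # first call specializes to IntAddNode
print(expr.execute(4, 5))      # later calls take the guarded fast path
print(expr.execute("a", "b"))  # guard fails, node rewrites itself to generic
```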

Adam Welc is a Principal Member of Technical Staff at Oracle Labs. His work is in the area of programming language design and implementation, with specific interests in concurrency control and in compiler and run-time system optimizations. He is currently involved in an effort to create an efficient implementation of the R programming language that utilizes an experimental open-source infrastructure consisting of a framework for constructing self-specializing interpreters (Truffle) and an optimizing compiler (Graal). Adam holds a PhD in Computer Science from Purdue University.

Larch: partially visual, interactive programming
Geoffrey French, University of East Anglia
2014-09-04, 11:30, K1.27

This talk will demonstrate the Larch Environment, an interactive, visual programming environment for Python. On the surface it is an interactive notebook-based programming environment that mixes rich text, code and execution output within a live document, similar to IPython or Mathematica notebooks.

Underneath, Larch operates as a structured document editor that stores source code in a form similar to an AST, while presenting it to the user in a form similar to text, both in appearance and in editing behaviour.

Finally, I will discuss Larch's approach to language extension and partially visual programming. We make computer programs take on the form of interactive technical literature, retaining the strengths of textual source code while augmenting it with visual cues, spatial layout, tables and other visual elements to represent programming constructs where they are beneficial, just as you would use visual elements alongside prose in a textbook.

Building Virtual Machines for Language Designers
Stefan Marr, INRIA Lille
2014-05-20, 11:00, S6.06

Building VMs is a challenging undertaking. One issue is that VM implementers seem to disregard all notions of abstraction, modularity, and software architectures. So, we set out to solve the modularity issue of VMs with an Architecture Definition Language. It worked, kind of... We still need to do something about it. Eventually.

With the multicore 'crisis', concurrent programming became the next hot problem. So, what can we do to give language designers better means to solve it? After an odyssey through decades of research, exposing an ownership-based metaobject protocol to language designers seemed to be a good idea. It allows a language designer to implement concurrent programming abstractions such as actors, active objects, and software transactional memory with ease. However, the performance of metaobject protocols has always been an issue. Here, meta-tracing and self-optimizing interpreters seem to be promising candidates for making metaobject protocols feasible. And indeed, AST rewriting combined with meta-tracing makes for a great foundation for getting the performance of our metaobject protocol to a practical level. What remains to be done now is to give language designers basic building blocks for concurrent programming so that they do not need to reinvent the wheel every time. But that's for future work.
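As a rough illustration of what an ownership-based metaobject protocol gives a language designer (a Python sketch with invented names, not the protocol from the talk): every object is owned by a domain, and the domain's handlers intercept field reads and writes, so policies such as isolation can be plugged in without changing the object model.

```python
# Sketch: each managed object belongs to a "domain"; field reads and writes
# are routed through the owning domain, which decides the policy.

class Domain:
    def read_field(self, obj, name):
        return object.__getattribute__(obj, name)
    def write_field(self, obj, name, value):
        object.__setattr__(obj, name, value)

class ImmutableDomain(Domain):
    # Example policy: objects owned by this domain reject all writes,
    # roughly what safely shared state in an actor language needs.
    def write_field(self, obj, name, value):
        raise RuntimeError(f"object is owned by an immutable domain; cannot set {name!r}")

class ManagedObject:
    def __init__(self, domain):
        object.__setattr__(self, "_domain", domain)
    def __getattribute__(self, name):
        domain = object.__getattribute__(self, "_domain")
        if name == "_domain":
            return domain
        return domain.read_field(self, name)
    def __setattr__(self, name, value):
        self._domain.write_field(self, name, value)

# Usage: the same object model, two different policies.
o = ManagedObject(Domain())
o.x = 1                      # routed through Domain.write_field
shared = ManagedObject(ImmutableDomain())
try:
    shared.x = 2             # rejected by the owning domain's policy
except RuntimeError as e:
    print(e)
```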

Fast dynamic type-checking of unsafe code
Stephen Kell, University of Cambridge
2014-03-12, 14:00, S4.36

Huge amounts of widely-used code are written in C, C++ and other unsafe languages (meaning languages which do not enforce any type- or memory-safety invariants). Considerable existing work has added some kind of invariant checking to such languages (usually just to C), but with several caveats: sacrificing source- or binary-level compatibility, imposing high run-time overheads, and (almost always) checking spatial and/or temporal memory correctness but ignoring type-correctness.

To start, I'll describe libcrunch, a language-agnostic infrastructure for dynamically checking the type-correctness of unsafe code. Using a novel disjoint metadata implementation and careful integration with existing toolchains (including a C front-end), libcrunch allows fast dynamic type checking with full source- and binary-level compatibility and with generally low run-time overheads.
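A loose sketch of the disjoint-metadata idea (Python standing in for the C toolchain, and all names invented; the real checks are inserted automatically at pointer casts): type information for each allocation lives in a side table keyed by address, and a cast is checked against that table rather than against any header stored inside the object.

```python
# Illustrative sketch: type metadata lives in a table *disjoint* from the
# objects themselves. A "cast" is checked by looking up the allocation's
# recorded type, instead of trusting the cast.

allocation_types = {}   # address -> recorded type name (the disjoint metadata)

def checked_malloc(address, type_name):
    # In the real system the allocation site determines the type; here the
    # caller just tells us, and we record it in the side table.
    allocation_types[address] = type_name
    return address

def check_is_a(address, claimed_type):
    actual = allocation_types.get(address)
    if actual != claimed_type:
        raise TypeError(
            f"bad cast: object at {address:#x} was allocated as "
            f"{actual!r}, not {claimed_type!r}")
    return address

# Usage: the second "cast" is caught at run time.
p = checked_malloc(0x1000, "struct widget")
check_is_a(p, "struct widget")       # fine
try:
    check_is_a(p, "struct gadget")   # mismatched cast
except TypeError as e:
    print(e)
```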

Towards the end of the talk I'll zoom out a little to place libcrunch alongside some other projects, as part of a wider agenda: closing the gap between "static" and dynamic language infrastructure, hence enabling greater degrees of cross-language composition, reasoning and tool support.

One VM to Rule Them All
Chris Seaton, University of Manchester / Oracle Labs
2014-02-28, 12:00, S1.12

The Virtual Machines research group at Oracle Labs is exploring ways to implement high-performance virtual machines for a wide range of languages, building on two exciting new technologies in the JVM: Graal and Truffle.

Graal is a JVM compiler, written in Java, that exposes a rich API to the running program. Using Graal, a program can directly access its own IR and supporting mechanisms such as dynamic code generation and invalidation. However, this can be tricky to use, so we also have Truffle, a framework for implementing languages in Java as simple AST interpreters, which are then compiled using Graal.

Over the past year we have started to implement the Ruby programming language using Graal and Truffle. Ruby is an extremely dynamic language, and an implementation such as JRuby that uses bytecode to interface with the JVM must perform a lot of redundant checks and boxing. We will show how our techniques entirely remove these obstacles to high performance.

Our implementation of Ruby is both strikingly simple, written as just a Java program that interprets Ruby ASTs, and strikingly fast, running compute-intensive benchmarks significantly faster than any other implementation of Ruby. We are now integrating this work into the main JRuby implementation, where it will be part of the next major release.

Grace: An adaptable educational programming language
Michael Homer, Victoria University of Wellington, New Zealand
2013-09-23, 14:00, S5.20

We are engaged in the design of a new object-oriented educational programming language called Grace. Our motivation is frustration with available languages, most of which are approaching 20 years old and showing their wear. Grace has been designed to support many teaching approaches and orders, and to allow an extensible lattice of sub-languages tailored to different approaches and stages of learning.

In this talk, I'll outline the principal features of Grace and the motivations behind the major design choices. I will discuss the design and function of the "dialect" system that supports those sub-languages and teaching orders, and show how a relatively simple basis enables not only pedagogical language variants but also the coexistence of multiple complex user-defined domain-specific languages within the same program. Finally, I'll demonstrate an interactive visual editor for Grace combining a "jigsaw puzzle" view of syntax with ordinary textual source code across many dialects.
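As a very loose analogy in Python rather than Grace (all names invented): a dialect can be thought of as supplying both the names a module may use and a checker that runs before the module executes, which is enough to carve pedagogical sub-languages out of the full language.

```python
# Rough analogy of a "dialect": a restricted sub-language defined by the names
# it puts in scope plus an extra check run before the program executes.
BEGINNER_DIALECT = {"print": print, "len": len, "range": range, "list": list}

def run_in_dialect(source, dialect):
    code = compile(source, "<student program>", "exec")
    # The dialect's "checker": refuse any name it does not provide.
    for name in code.co_names:
        if name not in dialect:
            raise NameError(f"'{name}' is not part of this dialect")
    # The dialect's "scope": only its names are available at run time.
    exec(code, {"__builtins__": {}, **dialect})

run_in_dialect("print(len(list(range(3))))", BEGINNER_DIALECT)   # prints 3
try:
    run_in_dialect("open('data.txt')", BEGINNER_DIALECT)         # not in the dialect
except NameError as e:
    print(e)
```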

Rethinking the stack for distributed runtime systems
Tim Harris, Oracle Labs
2013-05-29, 17:00, K6.29

Cluster computing is becoming increasingly important because the size of workloads continues to grow faster than the size of individual machines. In this talk I will argue that:

  • The resource demands of emerging workloads (e.g., distributed graph analytics) look different from those of software traditionally deployed on clusters (HPC and distributed/replicated servers).
  • With jobs spanning multiple machines, no individual system is in control of traditional OS functions such as scheduling and resource management. This leads to poor interactions (e.g., where cluster-wide scheduling of jobs to machines is unaware of the exact load on individual machines) and wasted resources (e.g., if machines or VMs are statically assigned, but resources go unused).

I will describe some of the trends I am seeing, and research directions I am exploring in the design of distributed runtime systems.