syslog
1 Sep 2014

ICFP 2014: Day 1

Posted by Leonhard Markert

This is day 1 of the 19th International Conference on Functional Programming.

Opening by the General Chair, Johan Jeuring

Quick facts about ICFP 2014:

  • 340 people registered for ICFP
  • 466 people registered for any event
  • most people are from Sweden, UK and USA
  • 21 sponsors, $60k in sponsorship

Keynote: Using Formal Methods to Enable More Secure Vehicles -- DARPA's HACMS Program

Kathleen Fisher (Tufts University)

A four-and-a-half-year program, 2012--2017.

Many everyday objects today are computers! Infrastructure, medical devices, refrigerators, cars ... examples given for these things being hacked.

Focus on cars: a "computer on wheels" -- hundreds of interconnected computing units. Everything is under the control of software: braking, accelerating, locking.

First attack vector was via diagnostic bus but this requires physical control (you need to be in the car). Later found that physical control is not necessary to take over a car (Bluetooth, emergency number, audio CD).

Documentation for car systems exists and spare parts can be ordered online and analyzed (so no air gap, no security by obscurity).

Use the desktop computer approach? -- anti-virus scanning, intrusion detection systems, patching infrastructure. Problem: "security software" tends to introduce vulnerabilities. Also: embedded systems don't have the resources to monitor themselves all the time the way desktop computers do.

SAT solvers are getting faster; this allows us to use them to verify real software and hardware.

Clean-slate methods for high-assurance software: code synthesis, DSLs, Interactive Theorem Prover as PL.

Structure of the HACMS program: the technical areas are vehicle experts, OS (building on the verified seL4 kernel), control systems, research integration (management; composition -- how to put high-assurance components together to create a bigger high-assurance component?; integration), and red team (this hints at the problem of how to assess the security of a system -- the current heuristic is: how long does it take for hackers to take over the system?).

Air team works on helicopter and quadcopter; ground team works on cars.

Demo video: WiFi-controlled quadcopter being taken over using aircrack-ng.

Every 18 months the red team gets access to the system and tries to find vulnerabilities.

Started with monolithic software, no RTOS, no security. After the first iteration: use a faster processor, put FreeRTOS and a HAL under the still-monolithic ArduPilot software. Next iterations: generate more and more code from the new DSL.

SMACCMCopter 18-month assessment: it flies! The air team can prove system-wide security properties like memory safety, ignoring malformed / non-authenticated messages, and that all good messages received will reach the motor controller. The red team found no security flaws in six weeks with full access to the source code. "Probably the most secure UAV on the planet" ("kinda scary").

Research integration (Rockwell Collins) used AADL to describe how hardware and software components fit together. Developed tools (AGREE, Resolute) to reason about the system. They are used to check assumptions about the system (example: there is only one way to communicate with the quadcopter).

Control systems (Galois) created Ivory and Tower, open-source EDSLs for synthesizing safe low-level code and tasks, respectively. Designed and built SMACCMCopter, the first high-assurance UAV software, in under two engineer-years.

OS (NICTA) proved full functional correctness of a fast OS microkernel, seL4 (now open source). Also working on eChronos, a high-assurance RTOS product line, and formally verified OS components (drivers, file systems etc.). CAmkES is a system for connecting and configuring these components.

Vehicle experts (Boeing) integrate HACMS technologies into the Unmanned Little Bird.

The air team in phase two tries to re-architect (split) the system so it runs on a flight computer and a mission computer, as is usual in real-world scenarios. The red team will then try to gain control over the flight computer from the mission computer.

More detailed description of individual tasks follows.

Tech transition: a huge problem. The cyber-physical systems industry knows it has a cyber-security problem, but there are many barriers to adoption of HACMS-like technology.

C: Make it cool! Come up with a catastrophic event!
A: Well, this is a response to "you can take over a car wirelessly". Another aspect: it will probably be mandated soon for cars to talk to each other -- but remote code execution was never considered.

Q: Why not have quad-core processors in the car?
A: Very price-sensitive; the industry has tiny margins.

Q: How much does this project cost?
A: $70m.

Q: Research: top-down or bottom-up, which is better?
A: We need both. DARPA is bottom-up, NSF is top-down.

Q: You expect to pay more for high-assurance code. What is the cost? Especially for the second, third, ... project.
A: High-assurance code will always be more expensive. We hope to bring down the additional cost by developing tools that aid building high-assurance code. But have to consider: traditionally we pay people to write sloppy code; the cost from running sloppy code is never properly accounted for! Maybe high-assurance software is cheaper in the long run.

Q: Why did you replace the Arduino processor?
A: We wanted to start with an existing open source hardware platform.

Q: (bit unclear)
A: We're close to having one high-assurance system, the SMACCMCopter. Still far away from having a high-assurance car for example (one problem: no-one knows how many lines of code a normal car has!)

Q: How do you combine secure and unsecure code?
A: That's one of the challenges of course.

Q: How do you know the hardware does what you think it does?
A: In this project, we simply trusted the hardware. Other people are working on hardware issues.

Session 1: Domain Specific Languages I (Chair: Anil Madhavapeddy)

Building Embedded Systems with Embedded DSLs (Experience Report)

Patrick Hickey (presenting), Lee Pike, Trevor Elliott, James Bielman, John Launchbury (Galois Inc)

Open source: github.com/GaloisInc

Goal: Build a high assurance quadcopter controller (long-term: helicopter controller). Small embedded system (microcontroller), hard real time, safe, secure. Had three engineers, 18 months, starting from scratch. Expected to write ~50kloc C/C++ code.

Embedded systems are everywhere. They're basically computers from the 80s: small and cheap. Development tools are from the 80s too (C). You get all the security flaws you'd expect -- buffer overflows etc. Can't push patches, and there are more attack surfaces than ever.

Approach. Haskell, OCaml? Problem: very resource-limited system; GC is incompatible with hard real-time. C, C++? Tradeoff between safety and productivity -- e.g. the NASA Jet Propulsion Lab writes high-assurance C; extremely costly.

Alternative: build your own tools. Start with a clean slate, use a language-based approach, correct by construction. Built an embedded DSL in just a few months: Ivory, embedded in Haskell, compiles to a safe subset of C. It is memory safe and allows no undefined or implementation-defined behavior. No heap, only global and stack allocation.

Haskell type system guarantees properties of Ivory language programs; Haskell is Ivory's macro language.

Ivory: expressions are pure Haskell values; statements are effectful, embedded in a monad. Examples from the paper of what has been embedded in the type system are given.
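The paper's actual Ivory code is not reproduced here; the following is a minimal hedged sketch of that design split in plain Haskell (the names Expr, Stmt, assign, ret and the use of a Writer monad are illustrative assumptions, not the Ivory API): expressions are pure Haskell values carrying their C rendering, statements are collected monadically, and ordinary Haskell functions over them play the role of the macro language.

import Control.Monad.Writer (Writer, execWriter, tell)

-- Pure expressions: Haskell values that know their C rendering.
newtype Expr a = Expr { renderE :: String }

litI :: Int -> Expr Int
litI n = Expr (show n)

(.+) :: Expr Int -> Expr Int -> Expr Int
Expr a .+ Expr b = Expr ("(" ++ a ++ " + " ++ b ++ ")")

-- Effectful statements: a monad accumulating generated C statements.
type Stmt = Writer [String]

assign :: String -> Expr Int -> Stmt (Expr Int)
assign v e = do
  tell ["int32_t " ++ v ++ " = " ++ renderE e ++ ";"]
  return (Expr v)

ret :: Expr a -> Stmt ()
ret e = tell ["return " ++ renderE e ++ ";"]

-- An ordinary Haskell function acts as a macro generating a C procedure body.
body :: Expr Int -> [String]
body x = execWriter $ do
  y <- assign "y" (x .+ litI 1)
  ret (y .+ litI 2)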

Ivory programs (modules) are a collection of top level C procedures and variables.

Tower DSL: organising entire applications. Composes Ivory procedures into applications. Initial problem: multithreading with message passing; code generation of low-level primitives. Tower is implemented as a Haskell macro that generates Ivory programs. Tower contains the concepts of tasks and channels. It became the DSL for describing software components.

(Side story: a third-party Python tool generating C output was modified to output Ivory code as well; this code could then be type checked to make sure it was correct. This way, the Python tool didn't need to be trusted.)

SMACCMPilot: 10kloc (and growing) of drivers, 3kloc application (stabilizing the quadcopter and flying), 10kloc message marshaling. Generates about 48kloc of correct-by-construction C.

Red team results: found no buffer overflow, no undefined behaviours, no denial of service. After five months they found two subtle architecture level bugs. Lifted the layer of abstraction: no need to worry about low-level problems, can concentrate on architecture.

Conclusion. Quickly and cheaply built a large, complex application; caught lots of errors early by Haskell type checking, could focus on application design; used Haskell as a macro system making it possible to build compositional programs.

Q: How is this an improvement over the OCaml code generator?
A: Cross-compilation was a specific concern, making sure the code runs on 8-bit, 16-bit, 32-bit architectures.

Q: Restricted amount of memory: how did you deal with that?
A: Don't yet have a way to assess the maximum stack use of a program. But we think it is solvable.

Q: You construct correct-by-construction code that is a subset of C, e.g. you don't do pointer arithmetic.
A: Yes.

Q: Do all arrays have a size known at compile time?
A: Yes.

Q: Type error messages were a problem?
A: Yes.

Q: You don't allow undefined behavior. How do you prevent overflows?
A: This is not done at the type level; instead we insert assertions in the C code (at the moment ~2500 assertions). Hope to analyze and discharge these assertions statically in the future.

Q: How about Rust?
A: Big fan. Rust was in its infancy when the project started two years ago. But when using Haskell we get type checking for our macros.

Concurrent NetCore: From Policies to Pipelines

Cole Schlesinger (presenting), Michael Greenberg, David Walker (Princeton University)

Review of networking: we have an enterprise network with hosts, connected to a switch. In an SDN, the controller tells the switch which rules to use to deal with packets. OpenFlow 1.0 uses prioritised rules for this, with predicates and actions. But switch hardware is not that simple: there are lots of tables in a switch pipeline. New configurable designs are available now.

OpenFlow 2.0: splits the interaction between controller and switch into two phases: 1. configuration (think flashing an FPGA); 2. population of the tables.

Contributions: a core calculus for packet-processing pipelines and controller policies; hardware models for three switch architectures; compilation algorithms for deploying general controller policies to target switch architectures.

Example: Broadcom "Trident" switch. Each component can be thought of as a function from packets to sets of packets (empty: drop, singleton: forward, multiple: e.g. flooding).

Have forwarding table; sequencing; if-statements over predicates; rewrite component.
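As a rough illustration of the "function from packets to sets of packets" view (a hedged sketch, not the paper's calculus; Policy, seqP, filterP and ifP are made-up names), components and their composition can be modelled directly in Haskell:

import qualified Data.Set as Set
import Data.Set (Set)

-- A component maps a packet to a set of output packets:
-- empty = drop, singleton = forward, several = e.g. flooding.
type Policy pkt = pkt -> Set pkt

-- Sequencing: feed every output of p into q and union the results.
seqP :: Ord pkt => Policy pkt -> Policy pkt -> Policy pkt
seqP p q pk = Set.unions [ q pk' | pk' <- Set.toList (p pk) ]

-- A predicate used as a policy: keep the packet or drop it.
filterP :: (pkt -> Bool) -> Policy pkt
filterP f pk = if f pk then Set.singleton pk else Set.empty

-- If-statement over a predicate.
ifP :: (pkt -> Bool) -> Policy pkt -> Policy pkt -> Policy pkt
ifP f p q pk = if f pk then p pk else q pk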

Intel "flexpipe" supports reconfigurability. Each table can match on and write to every field. Also supports concurrency; introduce concurrency operator to model this.

What about the controller? Easiest for programmer to define virtual pipeline that describes the requirements.

New operator: packet duplication -- copy and run packet through multiple components.

Example: compilation to the Barefoot "RMT" architecture -- three passes: 1. consolidate packet duplication (output ports must be known at the time of packet duplication, also need to add metadata to model processing as it would have taken place in the virtual pipeline); 2. refactor field modification; 3. place generated tables into physical tables (use commutativity of concurrency and other tricks -- the algorithm is a combination of brute force and dynamic programming).

Metatheory. Predicates form a Boolean algebra; grammar given for Predicates and Policies. Have a type system and small-step semantics.

Typing judgement of the concurrency operator prevents race conditions. Can prove normalization and confluence, in combination giving strong normalization (determinacy). Adequacy can be shown too.

Future work: implement the compiler and transformer on a real chipset.

Q: Does your system support encryption of packets inside the network, like IPSEC?
A: Yes and no -- you could think of encryption as an atomic action but it has not been fully worked out.

Q: Plus operator?
A: Plus operator is duplication. Semantics: both packets are sequentially processed by the remaining pipeline.

Session 2: Static Analysis (Chair: Ken Friis Larsen)

SeLINQ: Tracking Information Across Application-Database Boundaries

Daniel Schoepe, Daniel Hedin, Andrei Sabelfeld (Chalmers University of Technology)

Motivation: confidential data is entrusted to many applications, which are composed of many components. Information leakage often occurs at boundaries, e.g. SQL injections. Need to look at confidentiality as end-to-end security to prevent attacks.

Contributions: track information flow for code using database queries. Security type system ensures noninterference and proof of soundness. Compiles to executable F# code; realistic demo.

Approach: traditionally, attacks often occur at component boundaries, e.g. SQL injections. Tierless approach: correct communication across the boundary is ensured at compile time. Example: write everything in F#, some of which compiles to SQL.

Intuition: untrusted program shouldn't leak private data to third parties. Need to track flow of information through the program.

Noninterference: private inputs must not influence public outputs. Attacker doesn't learn anything about secrets by observing public behavior of the program.

Language-integrated query -- build queries via meta-programming. Example: LINQ in F#. Guarantees user input is sanitized and SQL is syntactically correct.

Idea: types annotated with security levels (L for public, H for private).

Security policy specified by giving a typing to each database. Code example given.
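The talk's policy example is not reproduced here; as a minimal sketch of the idea of level-annotated types (SeLINQ itself is an F#-like language, so Level, Tagged and the example column types below are illustrative assumptions only):

{-# LANGUAGE DataKinds, KindSignatures #-}

-- Security levels attached to types: L is public, H is private.
data Level = L | H

-- A value tagged with a security level at the type level.
newtype Tagged (l :: Level) a = Tagged a

-- A policy assigns a level-annotated type to each column, e.g. a customer's
-- city might be public while the exact street address is private.
type City    = Tagged 'L String
type Address = Tagged 'H String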

Introduce low-equivalence relation for values that are indistinguishable for the attacker. Defined structurally based on type; components with level L have to be equal.

Each secure program should be noninterfering with itself.

Security type system has soundness: if program is well typed then it is noninterfering.

Implementation: a type checker (based on constraint solving and unification) and compiler implemented in Haskell using BNFC. The language compiles to executable F# code; compilation consists mostly of removing security types. The type system and soundness proofs allow user-defined algebraic datatypes.

Case study: near-realistic movie rental database. Want to protect names and exact addresses of customers and staff. Examples given of valid and invalid queries.

Summary: light-weight framework for tracking information flow; security type system ensures noninterference; proof-of-concept implementation given; extension to ADTs; case study.

Future work: extend to client code (JS).

Q: Can I make the security policy express things like "only certain people should be able to access this"?
A: No, can't do this at the moment.

Q: Could this be embedded into a computational framework where one could assist the system in showing that some access is in fact safe when the type system's approximation cannot show it?
A: That would be an interesting extension.

Q: Do you deal with aggregation?
A: This has not been considered yet.

Type-Based Parametric Analysis of Program Families

Sheng Chen (presenting), Martin Erwig (Oregon State University)

Can often think of programs as "program families" which by configuration can be refined to single programs. But build & install may fail in unexpected ways. Trying all combinations of options is infeasible.

Idea: use variational analysis directly on the program family / variational program. Introduce a framework for doing variational analysis in the type system.

Notation for choice in variational programs introduced.

Program by search.

Creating variational analyses: 1. add variation data structures, e.g. lists become variational lists (VL, variation in values and list structures); 2. adapt the analysis, e.g. analysis of sort becomes analysis of VL sort; 3. correctness proof (tedious); 4. performance analysis.
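As a rough sketch of what a variational value looks like (hedged: the names V, Plain, Choice and select below are illustrative and differ from the talk's notation):

type Dim = String

-- A variational value is either plain or a named (dimensioned) choice
-- between two alternatives, which may themselves contain further choices.
data V a = Plain a
         | Choice Dim (V a) (V a)

-- Configuring one dimension resolves every choice in that dimension.
select :: Dim -> Bool -> V a -> V a
select _ _ (Plain a) = Plain a
select d left (Choice d' l r)
  | d == d'   = select d left (if left then l else r)
  | otherwise = Choice d' (select d left l) (select d left r)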

Example: variational typing rules shown.

This framework consists of annotations (sets), types and constraints. All three have extension points. Application and Choice typing rules are shown.

A constraint solving example is shown involving merging and splitting.

Instantiating the framework for variational 0CFA: no extensions required for annotations, types and constraints but rules need to be extended.

Correctness evaluation of the framework. Implemented Flyweight Java version of the framework and proved it correct.

Performance evaluation shows that the framework performs almost as well as a manually lifted 0CFA analysis.

Q: Is the aggressive (faster) approach powerful enough to check types?
A: Yes.

Q: Have you tried to do effect analysis in this framework?
A: The framework should be expressive enough to do this but it hasn't been done yet.

Session 3: Binding Structure (Chair: Tarmo Uustalu)

Romeo: A System for More Flexible Binding-Safe Programming

Paul Stansifer (presenting), Mitchell Wand (Northeastern University)

Introduction: metaprogramming allows us to automatically generate programs. Name binding is a problem -- example given. Recent survey found that 8 out of 9 metaprogramming systems had name binding problems.

Binding safety: alpha-equivalent inputs should produce alpha-equivalent outputs. In theory this works: FreshML, MetaML. But these only support simple binding forms like λ. Real languages have more than λ, e.g. let* in Racket, "local" construct in SML.

Contributions: a binding-safe PL, Romeo, with a notation to express complex binding, clear big-step semantics, proof of correctness.

Explains the concepts of "imports" and "exports" in the context of name binding.

In Romeo, binding information is in the types.

What makes two let*s alpha-equivalent? Follow-up: when are two Romeo declarations alpha-equivalent? -- must have exactly the same behavior. Thus their exported binders must be the same.

Romeo's runtime enforces safety. It automatically avoids name clashes. Accidentally unbound names are a dynamic error ... which Romeo's optional deduction system can statically rule out.

Conclusion: Romeo is a binding-safe language that supports term languages with complex binding structure.

Q: What about de Bruijn indices? Effects in the metalanguage?
A: Romeo is side-effect free.

Q: Can I play with Romeo?
A: I'll try to convince my supervisor to release the source code. Currently the implementation is out of sync with the formalism.

Q: This looks similar to λ-m.
A: λ-m is not powerful enough. We give you syntax case.

C: Thanks for starting with a concrete example.
A: Thanks to my peers for the criticism!

Q: How does hygienic macro expansion relate to this?
A: If your macro gets larger and larger, there is less safety -- hygienic macros are all about protecting macros from each other.

Q: Complexity of expanding Romeo code?
A: Quadratic, so that's not good. Macro systems have the same problem though. Trick: perform the actual substitution lazily, and we think the same technique could be used with Romeo also.

Q: How much expressiveness does Romeo have?
A: Can we express all Racket macros -- probably, we think most are just simple extensions of what we have right now.

Maximal Sharing in the Lambda Calculus with letrec

Clemens Grabmayer (presenting, VU University Amsterdam), Jan Rochel (Universiteit Utrecht)

Motivation: increasing sharing is desirable -- compact code, avoid duplicated work at run-time, enables us to check equality of unfolding semantics of programs.

1. interpretation of λ-letrec terms as term graphs; 2. bisimilarity and bisimulation collapse of λ term graphs; 3. readback of λ term graphs.

This method extends common subexpression elimination and is targeted at maximizing sharing statically.

Example of unfolding equivalence of fixpoint operator.

Example of graph interpretation given. First we generate a higher-order λ term graph. Transform it into a first-order term graph with scope vertices and backlinks, then into a λ term graph.

Then we do a bisimulation check between λ term graphs.

The class of eager-scope λ term graphs is closed under functional bisimilarity.

Implementation: "maxsharing" on Hackage, uses Utrecht University Attribute Grammar Compiler.

Time complexity analysis of the individual steps follows.

Possible extensions: support for full functional languages (would require work on a core language with constructors and case statements); prevent space leaks caused by disadvantageous sharing.

Applications: use maximal sharing at run-time, possibly directly on supercombinator graphs; can be coupled with GC; code improvement (statically) to detect code duplication; checking function equivalence (relevant for proof assistants, theorem provers, dependently-typed PLs).

Q: Explicit weakening. Have you tried to implement reduction of the first order term graph?
A: It's possible. Used port graphs to implement reduction because that seemed easier.

Q: Does this subsume Kleene algebra?
A: Maybe? Interesting idea.

Session 4: Program Optimisation (Chair: John Launchbury)

Practical and Effective Higher-Order Optimizations

Lars Bergstrom (presenting, Mozilla Research), Matthew Fluet (Rochester Institute of Technology), John Reppy (University of Chicago), Nora Sandler (University of Chicago), Matthew Le (Rochester Institute of Technology)

Control-flow analysis (CFA) allows higher-order inlining. In practice, high-performance functional programs avoid functions.

Two subproblems: 1. what is the concrete function called from some computed function call? -- already handled by 0CFA; 2. does the potential inlining point have a compatible environment? -- "reflow", JITs.

Contribution: determine when higher-order inlining is safe. Key insight: don't check "is the environment the same?" but "could any of the free variables have been rebound between the closure capture location and the call site / potential inlining point?". This supersedes ad-hoc higher-order inlining tricks.

Technique: 1. normalize and number source code (annotations for variable binding sites); 2. build control-flow graph; 3. add CFA-informed edges; 4. Perform rebinding search on paths between closure capture location and potential inlining point

Safe and unsafe example given; control flow graphs shown.

Reflow analysis is strictly stronger. But this analysis costs only about 3% of the entire compilation time.

Conclusion: start using closures in benchmarks!

Higher-order inlining heuristics still missing; correctness proof still incomplete.

Q (SPJ): You showed that the analysis is cheap but what do we get?
A: Benchmarks are in the paper.

Q: If I want to use this: is it clear for me where the optimization will kick in?
A: Well, yes, if you're the compiler writer (laughs from the audience). But this is a problem indeed.

Q: I usually pass closures to "map" and "fold", so these will have many call sites. Can this still be optimized in this framework?
A: "map" and "fold" are usually handled in a special way by the compiler anyway.

Q: Is it imaginable to prove that the optimization actually makes the program run faster?
A: Next presentation will try to answer this. But it's very difficult.

Q: What's the relationship to MLton's optimizations?
A: Not entirely sure. Is defunctionalization or CFA-style analysis better? An interesting question.

Worker/Wrapper/Makes It/Faster

Jennifer Hackett (presenting), Graham Hutton (University of Nottingham)

Context: lots of research into optimization focuses on correctness: is meaning preserved?; comparatively little work on improvement -- ensuring programs become "better".

Example: list reversal function. How can we prove that the improved program is both faithful to the original version and also more efficient?

The Worker/Wrapper transformation: Original program has type A, worker has type B, wrapper has type B -> A.

Formally, given

f : A -> A, g : B -> B, abs : B -> A, rep : A -> B

and

abs . rep = id, g . rep = rep . f

then

fix f = abs (fix g)
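For the list-reversal example mentioned above, a standard rendering of the transformation in Haskell (a sketch, not the talk's slides) takes A = [a] -> [a] and B = [a] -> [a] -> [a], with abs w = \xs -> w xs [] and rep f = \xs acc -> f xs ++ acc:

-- Original program: quadratic reverse.
reverseSlow :: [a] -> [a]
reverseSlow []     = []
reverseSlow (x:xs) = reverseSlow xs ++ [x]

-- Worker at the richer type B: accumulator-passing reverse.
revWorker :: [a] -> [a] -> [a]
revWorker []     acc = acc
revWorker (x:xs) acc = revWorker xs (x:acc)

-- Wrapper: abs applied to the worker recovers a program of the original type A.
reverseFast :: [a] -> [a]
reverseFast xs = revWorker xs []

The improvement question is then whether reverseFast is provably at least as efficient as reverseSlow in every context, which is what the theory below addresses.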

Reasoning about efficiency. Need some way to compare the cost of two programs. Naive approach: count steps taken to evaluate terms to some normal form. But this fails in a lazy context! Need precongruence.

Precongruence: a preorder and a congruence -- this excludes negative interaction between improvements at different points in the program.

Improvement theory: counting steps, also have a notion of improvement within a constant factor.

Improvement induction principle shown. "tick" idea. It's like guarded coinduction.

Shows the Worker/Wrapper theorem for improvement theory (untyped).

Structure of improvement proofs is essentially the same as that of correctness proofs.

Conclusions: we can make rigorous arguments about the efficiency of lazy programs; make performance guarantees for a general-purpose optimization.

Further work: structured recursion; turn it into a typed theory (advantage: only need to worry about contexts where the types make sense); look at improvement of space usage and other resources; quantify improvement (how much faster? which optimization should be applied if there are alternatives?); automate it.

Q: Do you have completeness results?
A: It's not clear whether Worker/Wrapper improvement proofs can be translated from fixpoint style to improvement theory.

Q: Do you have any examples where Worker/Wrapper transformation makes things slower?
A: Of course we have come across these, but we wouldn't put that in the paper! (Laughs from the audience). [Tree reversal example follows]

Q: Problem in GHC is the cost of building closures. How is this reflected in your framework?
A: Read early papers on improvement theory.

Q: How about local improvements?
A: This work does not look at the surroundings.

Q: Would things change dramatically if you applied this to a call by value language?
A: There has been some work on that.

Session 5: Context Dependence (Chair: Yukiyoshi Kameyama)

Compositional Semantics for Composable Continuations: From Abortive to Delimited Control

Paul Downen, Zena M. Ariola (University of Oregon)

Classical control: callcc is the classic control operator, going back to Scheme. It corresponds to classical logic (the way the λ-calculus corresponds to intuitionistic logic).

Delimited control: delimit the scope of effects; continuations compose like functions. These little changes make delimited control vastly more expressive than classical control.

The λ-calculus + callcc operator and the λμ-calculus are equivalent. Relaxing λμ syntactically gives Λμ, which allows delimited control! Λμ in turn is equivalent to the λ-calculus + shift0 + reset0.

Classical control in detail. The operational semantics of callcc as an extension of the CBV λ-calculus is given. Alternatively, we can understand callcc using an equational theory. But it is weaker than the operational semantics: some programs can be evaluated to a value, yet the equational theory for callcc cannot reach a value! How can we know that we have the whole context?

Calling a continuation never returns -- it "jumps". A jump is the same when inside a larger evaluation context so it delimits the usable extent of a continuation.

λμ syntactically distinguishes jumps as "commands".

Relaxing the syntax. Λμ collapses the term/command distinction. It is interpreted with the same rules, only with more expressive meta-variables. No new constructs, no new rules. As typed calculi, Λμ is considered equivalent to Parigot's λμ. What happens if you ignore types?

shift and reset: common basis for delimited control. Continuations return and can be composed like functions.
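A small hedged illustration of composable continuations in Haskell, using shiftT/resetT from the transformers package (shift/reset, rather than the shift0/reset0 discussed next): the captured continuation k returns a value, so it can be invoked like a function, here twice.

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)

twice :: IO Int
twice = evalContT $ resetT $ do
  x <- shiftT $ \k -> do
         a <- lift (k 10)   -- run the delimited rest-of-computation on 10 ...
         lift (k a)         -- ... then feed its result back in again
  return (x + 1)
-- twice returns (10 + 1) + 1 = 12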

λ-calculus + shift + reset is a true subset of Λμ. What's missing?

Difference between shift and shift0: the latter removes its surrounding delimiter.

λ-calculus + shift0 + reset0 is equivalent to Λμ.

Λμ as a framework for delimited control gives provable observational guarantees about the operators, e.g. idempotency of reset.

More in the paper: can get nice reasoning principles using call-by-value, call-by-name, call-by-need; equational correspondence with compositional transformations.

Q: What do you mean by equality in this presentation?
A: Observational equality -- this is about tools for reasoning about programs rather than expressivity of the calculi.

Q: λμ has a well known type system but to introduce delimited control you had to relax the syntax. Is there a type system for this?
A: (?)

Q: Classical logic, intuitionistic logic -- delimited logic?
A: YES! Should investigate this.

Coeffects: A Calculus of Context-Dependent Computation

Tomas Petricek (presenting), Dominic Orchard, Alan Mycroft (University of Cambridge)

Coeffects is the dual of Effects. Done!

This talk is about the context in a type judgement.

Effect systems: Γ ⊢ e : τ & σ
Coeffect systems: Γ & x ⊢ e : τ

Interesting properties of Γ ("view from the extreme left"): variable related -- liveness, bounded linear logic, data-flow programming, provenance; environment related -- implicit parameters, type classes, distributed computing, platform versioning.

Coeffect calculus for bounded reuse: annotate each variable in the environment with the number of its uses in the expression, essentially tracking the number of variable uses.

Variable, abstraction and contraction rules shown.

Another possibility: coeffect calculus with historical values. Replace addition with max operation in the contraction rule.

Environment related: coeffect calculus for implicit parameters. Example: implicit parameter for time zone and time. We can calculate how many parameters are still required (i.e. not given by the current environment). The corresponding rule is non-deterministic.

What's in the paper? Unified system: coeffect scalar structure (a generalized semiring structure) and shape-indexed coeffects (context splitting and merging, per-variable or whole-context). Also have monadic and comonadic semantics -- not simple as λ-calculus is asymmetric (many to one)!

Why do coeffects matter? Generalize interesting systems (liveness, data-flow, implicit parameters, type classes) and (indexed) comonads are cool.

Q: What happens if you have (co)monads in the classical calculus?
A: Interesting question.

Program Chair's Report

Quick paper stats:

28 accepted papers out of 97 submissions.

2/9 functional pearls and 2/3 experience reports accepted.

299 reviews -- at least three per paper.

72 hour author response period (as usual).

It was decided not to allow papers from members of the program committee.

Comparison with previous years: number of papers approximately stable, share of accepted papers even more so.

Functional pearls are accepted less often than normal papers.

Author's location: USA, UK, France, Japan, Netherlands, Sweden, Germany, other.

UK has highest acceptance rate, then USA.

Papers submitted only a couple of hours before the deadline were less likely to be accepted -- before that there seems to be no correlation between submission time and acceptance probability.

Big thanks to the program committee!

The Deputy Lord Mayor of Gothenburg: invitation for drinks, and a short history of the city.

31 Aug 2014

ICFP 2014: Day 0 (WGP)

Posted by Leonhard Markert

The 19th International Conference on Functional Programming will begin tomorrow. Two affiliated events, the 10th Workshop on Generic Programming (WGP) and the Workshop on Higher-Order Programming with Effects (HOPE), take place today.

WGP Session 1

Invited talk: Functional Programming, Object-Oriented Programming and Algebras

Bruno Oliveira (University of Hong Kong)

How to achieve modularity, type-safety and reuse without requiring a PhD in type theory? Bruno Oliveira proposes algebras as an alternative to algebraic datatypes and OO hierarchies, more specifically: variants of F-Algebras. Fold Algebras (using products of functions) and F-Algebras (using sums of products) are isomorphic.

An example is given of encoding generalized algebraic datatypes (GADTs) using type classes, in a way similar to Church encodings, due to Hinze in 2003, when there were no GADTs in Haskell.

Similarly, in OO, Church encodings were used to model the visitor pattern.

The Expression Problem: a problem of modularity. Given an existing type of expressions, how do you extend it (e.g. how do you extend data Exp = Lit Int | Add Exp Exp with multiplication)?

Solution 1: encode algebras as type classes, extend using subclasses.
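A minimal Haskell sketch of Solution 1 (my own rendering, not the speaker's code): the algebra is a type class, the extension is a subclass, and each interpretation is an instance, so existing terms keep working with the extended signature.

class ExpAlg a where
  lit :: Int -> a
  add :: a -> a -> a

-- Extension with multiplication: a subclass, no change to existing code.
class ExpAlg a => MulAlg a where
  mul :: a -> a -> a

-- One interpretation: evaluation.
newtype Eval = Eval { eval :: Int }

instance ExpAlg Eval where
  lit n   = Eval n
  add x y = Eval (eval x + eval y)

instance MulAlg Eval where
  mul x y = Eval (eval x * eval y)

-- Terms are written against any algebra, so old terms work with the extension.
three :: ExpAlg a => a
three = add (lit 1) (lit 2)

six :: MulAlg a => a
six = mul three (lit 2)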

Solution 2: use F-Algebras (Datatypes à la Carte)

In OO languages, can also solve the Expression Problem with algebras: use interface inheritance.

All current solutions using algebras still require heavy encodings. In the future: programming language support for algebras?

WGP Session 2

Generic Constructors and Eliminators from Descriptions

Larry Diehl (presenting), Tim Sheard (Portland State University)

Abstract: Dependently typed languages with an “open” type theory introduce new datatypes using an axiomatic approach. Each new datatype introduces axioms for constructing values of the datatype, and an elimination axiom (which we call the standard eliminator) for consuming such values. In a “closed” type theory a single introduction rule primitive and a single elimination rule primitive can be used for all datatypes, without adding axioms to the theory.

We review a closed type theory, specified as an Agda program, that uses descriptions for datatype construction. Descriptions make datatype definitions first class values, but writing programs using such datatypes requires low-level understanding of how the datatypes are encoded in terms of descriptions. In this work we derive constructors and standard eliminators, by defining generic functions parameterized by a description. Our generic type theory constructions are defined as generic wrappers around the closed type theory primitives, which are themselves generic functions in the Agda model. Thus, we allow users to write programs in the model without understanding the details of the description-based encoding of datatypes, by using open type theory constructions as an internal domain-specific language (IDSL).

Ornaments in Practice

Thomas Williams (presenting), Pierre-Évariste Dagand, Didier Rémy (INRIA)

We often write very similar functions for structurally similar data types, e.g. "add" for natural numbers and "append" for lists. Natural numbers and lists have the same recursive structure and a coherent property, a "length" attribute:

add (length ml) (length nl) = length (append ml nl)

A simple projection function exists from lists to natural numbers: the "length" function.

The motivation for this paper was to add ornaments to ML. Naturals/lists example: an ornament is defined by a projection function from the ornamented type to the bare type, e.g. "length". Declare an ornament as

ornament from length : a list -> nat

The system checks whether recursive structure is preserved.

Syntactic lifting is how "append" can be derived from "add" almost automatically. Given

let rec add m n = match m with
  | Z -> n
  | S m' -> S (add m' n)

and the lifting expression

let lifting append from add with
  {length} -> {length} -> {length}

the system can derive

let rec append ml nl = match ml with
  | Nil -> nl
  | Cons(x, ml') -> Cons(?, append ml' nl)

But the ? is ambiguous (it's the "creative part") and needs to be completed manually, by code inference (heuristically) or a "patch".

A patch would look like

let append from add
  with {length} -> {length} -> {length}
  patch fun _ -> match _ with Cons(x, _) -> Cons({x}, _)

Applications of this: for refactoring (see question further down), or to derive an evaluation function (uniquely) from "conv", a bijective function between expression types.

More complex data structures can be lifted too (e.g. Sets and Maps); so can higher-order functions. Lifting for GADTs is also possible and is automatic for some invariants given the expected type of the function.

Conclusions: describing ornaments by projection is a good fit for ML, syntactic lifting gives good predictable results. Future work: better patches, integration into ML, combining ornaments.

Q: Projections seem very similar to abstraction functions in abstract interpretation.
A: Could be seen as abstract interpretation.

Q: Is this weaker than what was presented in the 2012 paper? There you could compute ornaments out of indices.
A: Indices flow in the other direction in the presented approach.

Q: Refactoring example. Normally want to throw old code away and keep new code. Here we get some conversion between old and new.
A: Yes, could remove old code.

Q: How about code maintenance?
A: Patches are part of the answer.

Q: Can the patch language also talk about the logical parts?
A: That would not be easy.

Type Inference for the Spine View of Data

Matthew Roberts (presenting), Tony Sloane (Macquarie University)

Context: rho-Calculus (theory of term rewriting), Pattern Calculus and bondi, Stratego, "Scrap your Boilerplate" (SYB paper), kiama, Reloaded/Revolutions.

Motivation: precisely characterize the type inference machinery needed to support the spine view of data.

Result: need to add "ispair" expression (see further down) to first-class polymorphism (FCP), which is a small extension of Hindley-Milner.

Fully applied constructor view means data equals constructor with all arguments (fully applied). When arguments are missing this is a function, so we can't pattern match against it.

Spine view only knows nodes of arity two or zero. This means that all pattern matching can be done using

ispair d bind (x, y) in e1 else e2

which is morally equivalent to

case c of o x y -> e1
          _     -> e2

Here x refers to the constructor and all its arguments except the last one, and y is the last argument.
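A hedged Haskell rendering of the spine view (the paper's setting is an ML-like language with the ispair construct above; Spine, Constr and :<> here are illustrative names): every value is either a bare constructor (arity zero) or an application node pairing a partially applied constructor with its last argument (arity two).

{-# LANGUAGE GADTs #-}

data Spine a where
  Constr :: a -> Spine a                    -- arity-zero node
  (:<>)  :: Spine (b -> a) -> b -> Spine a  -- arity-two node: x and y above

infixl 4 :<>

fromSpine :: Spine a -> a
fromSpine (Constr c) = c
fromSpine (f :<> x)  = fromSpine f x

-- e.g. fromSpine (Constr (,) :<> (1 :: Int) :<> True) == (1, True)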

Examples: generic map and fold.

Implementations: the "decidedly generic compiler" dgen and "kiama".

Q: Why is complete type inference so important?
A: The semantics of doing spine view with only one construct took many attempts; this one allows for type inference so it was chosen over the others.

Q: Have you tried to use FCP on other spine view representations?
A: With FCP, you trade off type annotations with data annotations.

WGP Session 3

First-class Isomorphic Specialization by Staged Evaluation

Alexander Slesarenko (presenting), Alexander Filippov, Alexey Romanov (Huawei Technologies)

There are three types of program specialization: partial evaluation, supercompilation and isomorphic specialization. Idea: we have A, B, P such that P : A -> B; also A', B' which are isomorphic to A and B, respectively, and P' : A' -> B' which can be computed more efficiently than P. Potentially there are multiple options for A', B' and P'. How can we use P' on A and B? (A and B: 'domain' data; A', B': 'core' data)

The example used is matrix-vector multiplication. Matrices, for example, can be represented as dense (a vector of vectors) or sparse. This is implemented as an EDSL (embedded domain-specific language) in Scala. Here we can abstract over dense and sparse vectors and matrices. The specialization can take place at runtime, and we can allow for alternative specialized versions. Specialization can lead to faster code.

Generic composition of isomorphisms -- idea: build isomorphisms for each constructor in the core language. These isomorphisms are first class citizens of the framework.

How it works: staged method invocation with graphs.

Take-home agenda:

  1. Develop algorithm in EDSL
  2. Identify isomorphic representations in the domain language
  3. Implement ADTs using a core language and isomorphic representations
  4. Generate representation-specific implementations in the core language

Implementation can be found at github.com/scalan.

Q: How did you proceed? Paper includes theory and implementation notes
A: Original idea was to use graphs, that was the starting point

Q: You talk about large performance increases compared to original Scala implementation. Have you compared with C/C++?
A: No.

Algebraic Effects and Effect Handlers for Idioms and Arrows

Sam Lindley (University of Edinburgh)

Idea due to Plotkin: algebraic effects describe abstract computations. Abstract computations are trees (modulo equations).

Monad trees are unrestricted abstract computation trees (dynamic control and data flow). Arrow trees are monad trees where only static control flow is allowed (static control flow, dynamic data flow). Idiom trees (idiom = applicative functor) are arrow trees where only static data flow is allowed (static control and data flow).

Examples are given for each monad, arrow, and idiom; the control flow and data flow trees show the dynamic / static nature.

Static computations are useful as they allow optimisations and low-level implementations (arrows correspond approximately to circuits, for example).

Key contribution: using Flow Effects, we can write monad/arrow/idiom code in a single style -- this is very different from Haskell, where different notations exist for each.

A handler is an interpreter for an abstract computation. It is defined as a fold over a computation tree, specifying how return values and operations are interpreted. Given a monad handler, we can derive an arrow and idiom handler.
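A hedged sketch of the "computation trees, handlers as folds" idea in Haskell (this is not the Flow Effects language; Free, handle, Get and runGet are illustrative names): a free monad over a signature functor gives unrestricted monadic computation trees, and a handler folds over them by interpreting returns and operations.

data Free f a = Return a | Op (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Return a) = Return (g a)
  fmap g (Op t)     = Op (fmap (fmap g) t)

-- A handler is a fold: say what to do with a return value and with each operation.
handle :: Functor f => (a -> r) -> (f r -> r) -> Free f a -> r
handle ret _  (Return a) = ret a
handle ret op (Op t)     = op (fmap (handle ret op) t)

-- Example signature with a single operation reading an Int.
newtype Get k = Get (Int -> k)
instance Functor Get where fmap g (Get k) = Get (g . k)

-- A handler that supplies a fixed value for every Get.
runGet :: Int -> Free Get a -> a
runGet n = handle id (\(Get k) -> k n)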

Q: Can it be inferred from a Flow Effects expression whether the computation it defines is a monad, an arrow or an idiom?
A: Yes.

Q: What's the status of the Flow Effects syntax?
A: Not finalised yet -- the syntax shown in the paper is a "mathematical" syntax with braces, indices etc.

A short discussion followed on whether a unifying notation is a good idea -- maybe it is a good thing that we can tell from the syntax whether a computation is a monad, an arrow or an idiom?

Scoping Rules on a Platter -- A Framework for Understanding and Specifying Name Binding

Larisse Voufo, Marcin Zalewski (presenting), Andrew Lumsdaine (Indiana University)

Name binding: for a given reference (use), where is the corresponding declaration (what it refers to)?

In order to understand and unambiguously define name binding rules, a number of name binding combinators have been developed.

Scope combinators are: hiding, merging, opening, weak hiding.

Together they provide a language that describes how scopes are combined to look up a reference. These are then used to describe how a specific language does name binding. The example is C++: the standard devotes over 60 pages in total to name binding rules (and it turns out both Clang and GCC get operator resolution wrong).

Discussant: Ilya Sergey

Q: Could we use attribute grammars for name binding?
A: Finding a grammar for C++ has been attempted many times... It's hard to take a principled approach and apply it to C++.

Q: Did you find inconsistencies in the C++ standard when you tried to collect the scoping rules?
A: No, to our surprise -- it's complex but consistent.

Q: Did you come up with any ideas to simplify the rules along the way?
A: There don't seem to be any ways to simplify the rules (except using a different language).

WGP Session 4

Composing and Decomposing Data Types -- A Closed Type Families Implementation of Data Types à la Carte

Patrick Bahr (University of Copenhagen)

This paper was inspired by the recent addition of closed type families to the Glasgow Haskell Compiler (GHC). Closed type families were used to implement Data Types à la Carte, specifically the subtyping constraint :<:

Data Types à la Carte -- idea: decompose data types into two-level types. Express recursive data types as the fixpoint of a functor. These functors can be combined by coproduct construction :+:.

Example:

-- recursive data type
data Exp = Val Int | Add Exp Exp

-- as fixpoint of a functor
data Arith a = Val Int | Add a a
type Exp = Fix Arith

-- combine by coproduct construction
data Mul a = Mul a a
type Exp' = Fix (Arith :+: Mul)

The subtyping constraint is easily expressed as a type class with an injection function inj and a projection function prj. But this approach treats :+: asymmetrically, doesn't inspect the left-hand side and is ambiguous.
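For reference, the classic encoding being re-implemented looks roughly like this (a sketch along the lines of Swierstra's paper and compdata; it relies on overlapping-instance resolution, and note how the instances only ever search the right-hand side of :+:, which is exactly the asymmetry and ambiguity mentioned above):

{-# LANGUAGE TypeOperators, MultiParamTypeClasses, FlexibleInstances, FlexibleContexts #-}

data (f :+: g) a = Inl (f a) | Inr (g a)

class (Functor f, Functor g) => f :<: g where
  inj :: f a -> g a
  prj :: g a -> Maybe (f a)

instance Functor f => f :<: f where
  inj = id
  prj = Just

instance (Functor f, Functor g) => f :<: (f :+: g) where
  inj = Inl
  prj (Inl x) = Just x
  prj _       = Nothing

-- The search never inspects the left summand, only recurses into the right one.
instance {-# OVERLAPPABLE #-} (Functor f, Functor g, Functor h, f :<: g) => f :<: (h :+: g) where
  inj = Inr . inj
  prj (Inr x) = prj x
  prj _       = Nothing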

Contribution of this paper: a re-implementation of :<: which behaves as expected (f :<: g <==> there exists a unique injection from f to g), avoids ambiguous subtyping (by rejecting multiple occurrences of signatures) and allows expressing isomorphism.

Implementation: Have a type-level function Embed taking signatures f, g as arguments. Produces proof object p for f :<<: g -- a relaxed version of :<: allowing ambiguity -- and checks whether p proves f :<: g. We can then derive implementations of inj and prj using the proof object as an oracle in instance declarations.

The code for Embed, Sub and helper type classes is shown. It's all contained in the "compdata" package.

Caveats: unhelpful error messages; compile time performance is unpredictable.

Discussant: Wouter Swierstra

With this approach we gain lots of expressive power, e.g. control over how the search proceeds, at the price of more type level programming.

C: A solution for the error messages apparently exists. Also, Idris does this really nicely (it's also much more complicated).

Q: Motivation: want to use sums of products. Interesting tradeoff by maybe focusing on coproducts of products?
A: ?

Q: Would instance chains have solved your problem?
A: Yes.

Q: All this type hackery for deriving injections and projections -- how about just providing them every time?
A: Of course you can do that, but it's inconvenient and obvious from the type. If you need more flexibility, you can still do it manually.

Q: No canonical ordering for injection (e.g. a sum of Int and Bool) -- is this a problem?
A: Yes, at the moment you have to annotate.

True Sums of Products

Edsko de Vries, Andres Löh (presenting) (Well-Typed LLP)

A new approach to datatype-generic programming in Haskell; notable features: faithful representation of datatypes as n-ary sums of n-ary products; a library of high-level combinators to encourage concise and reusable functions; separation of metadata in the representation.

Available on Hackage as "generics-sop". "basic-sop", "lens-sop" and "json-sop" provide examples, generic lenses and generic JSON (de)serialization respectively.

Motivation was to improve on existing models: GHC.Generics representation is too constrained (binary sums and products) but at the same time too flexible (arbitrary nesting). Also has noisy metadata everywhere. Functions end up making implicit assumptions about the structure of the data which is not guaranteed by the type system.
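A simplified sketch of the n-ary sums-of-products representation (the real generics-sop types are richer and keep metadata separately; this is only to convey the shape):

{-# LANGUAGE DataKinds, GADTs, KindSignatures, PolyKinds, TypeOperators #-}

-- An n-ary product over a type-level list, interpreted through a functor f.
data NP (f :: k -> *) (xs :: [k]) where
  Nil  :: NP f '[]
  (:*) :: f x -> NP f xs -> NP f (x ': xs)
infixr 5 :*

-- An n-ary sum: choose exactly one position in the type-level list.
data NS (f :: k -> *) (xs :: [k]) where
  Z :: f x     -> NS f (x ': xs)
  S :: NS f xs -> NS f (x ': xs)

-- A datatype is a sum over its constructors of products over their fields.
newtype SOP (f :: k -> *) (xss :: [[k]]) = SOP (NS (NP f) xss)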

Discussant: Patrik Jansson

Q: What kind of code does this generate?
A: Not looked at yet. But lots of implicit arguments are potentially being passed around at runtime.

Q: Can this be done better in Agda?
A: Library was completely developed in Haskell. Can probably be done more nicely in Agda, but in this case the development was driven by an actual need to deal with certain generics problems in Haskell.

Q: n-ary lists are left-biased: don't you run into similar performance problems as with binary representation?
A: List-based representation gives uniform encoding for all constructors. But probably not ideal for data types with thousands of constructors.

2 Nov 2013

Live-blog from LADIS 2013

Posted by Malte Schwarzkopf

Several of us are attending the LADIS workshop in advance of SOSP in Nemacolin Woodlands in colourful Pennsylvania today and tomorrow. We will cover some of the papers and keynotes in our live blog below.

2 Oct 2013

Liveblogging the first Human Data Interaction workshop

Posted by Anil Madhavapeddy

I'm at the Open Data Institute, with Richard Mortier, Jon Crowcroft, Amir Chaudhry and Hamed Haddadi, live-blogging a day-long workshop about our emerging Human-Data Interaction research initiative. The room is packed with notable researchers from all over the UK, so this promises to be an exciting day!

30 Sep 2013

Liveblogging LCDNets 2013

Posted by hh360

Good morning from Miami. I'm here today with Arjuna Sathiaseelan at the ACM MobiCom Workshop on Lowest Cost Denominator Networking for Universal Access (LCDNet 2013).