syslog
3 Sep 2014

ICFP 2014: Day 3

Posted by hh360

Good morning from the third and final day of the 19th International Conference on Functional Programming. As ever, Leo and I (Heidi) are here to bring you the highlights.

Keynote (Chair: Edwin Brady)

Depending on Types

Stephanie Weirich (University of Pennsylvania)

Is GHC a dependently typed PL? Yes*

The story of dependently typed Haskell! We are not claiming that every Agda program could be ported to Haskell.

Example: red-black trees (RBTs) from Agda to Haskell

All the code for today's talk is on github: https://github.com/sweirich/dth

Example of insertion into RBT from Okasaki, 1993.

How do we know that insert preserves RBT invariants ?
We are going to use types

Example RBT in Agda from Licata. We can now port this to GHC using GADTs and datatype promotion.

Haskell distinguishes types from terms; Agda does not.

In Haskell, types are erased at runtime, unlike Agda.

Datatype promotion: a recent extension to GHC; we can't promote things like GADTs though.
GADTs: introduced to GHC 10 years ago; the real challenge was integration with Hindley-Milner type inference.
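As a rough illustration of what that encoding looks like (a minimal sketch in the style of Licata's Agda version, not the code from the talk), a promoted Color kind and a colour- and black-height-indexed GADT make ill-formed red-black trees unrepresentable:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Color = Red | Black        -- promoted to the kind level by DataKinds
data Nat   = Z | S Nat          -- type-level black height

-- Invariants baked in: red nodes have black children and keep the black
-- height; black nodes increase it by one.
data Tree :: Color -> Nat -> * -> * where
  Leaf   :: Tree 'Black 'Z a
  RedN   :: Tree 'Black n a -> a -> Tree 'Black n a -> Tree 'Red n a
  BlackN :: Tree cl n a -> a -> Tree cr n a -> Tree 'Black ('S n) a
```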

Silly Type Families: GADTs from 20 years ago

Our current RBT insert needs to temporarily suspend the RBT invariants, but how can we express this in the Haskell type system?

Singleton Types: Standard trick for languages with type-term distinctions.
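For readers unfamiliar with the trick, here is a minimal sketch (my own, not from the slides): a singleton is a run-time value whose type mirrors an otherwise erased type-level index, letting term-level code compute with it.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- SNat n has exactly one inhabitant per n, so pattern matching on it
-- reveals the type-level number at run time.
data SNat :: Nat -> * where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

data Vec :: Nat -> * -> * where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Driven by the singleton, the result length is tracked in the type.
replicateV :: SNat n -> a -> Vec n a
replicateV SZ     _ = Nil
replicateV (SS n) x = Cons x (replicateV n x)
```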

The difference between Agda and Haskell: totality.

What's next for GHC ?
Datatype promotion only works once

Wishlist
- TDD: type-driven development (as described by Norman Ramsey yesterday)
- Totality checking in GHC
- Extended type inference
- Programmable error messages
- IDE support - automatic case splitting, code completion and synthesis

GHC programmers can use dependent types*

Q: Why not use incr in the definition to unify the two constructors ?
A: You can, I was just simplifying for explanation

Q: It seems like this increases cost; is that the case?
A: Yes, but not too much

Q: Are dependent types infectious ?
A: Good question. No, we can add a Set interface on top of this RBT to abstract over it.

Q: To what extent is Haskell the right place for these new features?
A: It's a great testbed at industrial scale, and we can take lessons back into dependently typed languages.

Q: Can you do this without type families ?
A: Yes, I wanted to show type families.

Q: Why not just use Agda ?
A: Haskell has a large user base, a significant ecosystem, and two decades of compiler development.

Q: What's the story for keeping our type inference predictable ?

Q: What level of students could use these ideas ?
A: I've used these ideas in an advanced programming course for seniors.


Session 10: Homotopy Type Theory (Chair: Derek Dreyer)

Homotopical Patch Theory

Carlo Angiuli (Carnegie Mellon University); Ed Morehouse (Carnegie Mellon University); Daniel Licata (Wesleyan University); Robert Harper (Carnegie Mellon University)

Idea: define a patch theory within Homotopy Type Theory (HoTT)

Functorial semantics: define equational theories as functors out of some category. A group is a product-preserving functor G -> Set

Homotopy Type Theory: constructive, proof-relevant theory of equality inside dependent type theory. Equality proofs a = b are identifications of a with b

Can have identifications of identifications.

Higher Inductive Types introduce non-sets: arbitrary spaces.

Patch Theory defines repositories and changes in the abstract -- model repositories, changes, patch laws governing them.

Key idea: think of patches as identifications!

Bijections between sets X and Y yield identifications X = Y.

Example: repository is an integer; allow adding one. Everything works out nicely.

Next example: repository is a natural number; allow adding one. Problem: can't always subtract one (which would be the inverse of adding one)! -- solved using singleton types.

Closing thoughts: computation vs. homotopy. There is a tension between equating terms by identifications and distinguishing them by computations. Analogy: function extensionality already equates bubble sort and quicksort -- they are the same function but different programs. Computation is finer-grained than equality.

C: Very understandable talk! But I still need to see a talk / paper where Homotopy Type Theory guides me to something new.
A: The functorial part comes for free within HoTT. Also we're forced to do everything functorially.

Q: Patch computation -- does your theory say anything about that?
A: Didn't work out nicely.

Q: Can you give me some intuition about the singleton type used in the naturals example?
A: This is a hack. We want a directed type that does not have symmetry.

Q: What about higher layers?
A: Don't need to go further than the second layer when dealing with Sets.

Pattern Matching without K

Jesper Cockx (KU Leuven); Dominique Devriese (KU Leuven); Frank Piessens (KU Leuven)

Dependent pattern matching is very powerful, but HoTT is incompatible with dependent pattern matching. The reason: it depends on the K axiom (uniqueness of identity proofs), which HoTT does not allow. But not all forms of dependent pattern matching depend on the K axiom!

Dependent pattern matching. Have a dependent family of types -- example of pattern matching given with dependent pattern matching and without.

K is incompatible with univalence.

How to avoid K: don't allow deleting reflexive equations, and when applying injectivity on an equation c s = c t of type D u, the indices u should be self-unifiable.

Proof-relevant unification: eliminating dependent pattern matching. 1. basic case analysis -- translate each case split to an eliminator; 2. specialization by unification -- solve the equations on the indices; 3. structural recursion -- fill in the recursive calls.

Instead of McBride's heterogeneous equality, use homogeneous telescopic equality.

Possible extensions: detecting types that satisfy K (i.e. sets -- allow deletion of "obvious" equations like 2 = 2); implementing the translation to eliminators (this is being worked on or is done but not yet released); extending pattern matching to higher inductive types.

Conclusion: by restricting the unification algorithm, we can make sure that K is never used -- no longer have to worry when using pattern matching for HoTT!

Q: What was your experience contributing to Agda?
A: Not a huge change -- not too hard.

Q: Is there a different way of fixing the "true = false" proof?
A: K infects the entire theory.

Q: You said the criterion is conservative. Have you tried recompiling Agda libraries? What did you observe?
A: 36 errors in the standard library, 16 of which are easily fixable, 20 are unfixable or it is unknown how to fix them.

Q: Is K infectious?
A: Yes -- when activating the "without K" option, you cannot use a library that defines something equivalent to K.


Session 11: Abstract Interpretation (Chair: Patricia Johann)

Refinement Types For Haskell

Niki Vazou (UC San Diego); Eric L. Seidel (UC San Diego); Ranjit Jhala (UC San Diego); Dimitrios Vytiniotis (Microsoft Research); Simon Peyton-Jones (Microsoft Research)

Refinement Types: types with a predicate from a logical sub-language

can be used to give functions preconditions

Example: division by zero.
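For context, a tiny LiquidHaskell-style sketch of the idea (annotation syntax as used by LiquidHaskell; not the speaker's exact example):

```haskell
-- The refinement {v:Int | v /= 0} is a precondition on the divisor:
-- callers must prove it is non-zero, so the division can never fail.
{-@ safeDiv :: Int -> {v:Int | v /= 0} -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv x y = x `div` y

good :: Int
good = safeDiv 10 2      -- accepted: 2 /= 0 holds trivially

-- bad = safeDiv 10 0    -- rejected by the refinement type checker
```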

ML, F#, F* have had refinement type implementations but they're CBV! How can this be translated to Haskell?

CBV-style typing is unsound under call-by-name evaluation.

Encode subtyping as logical verification condition (VC)

Under call-by-name, binders may not be values! How to encode "x has a value" in CBN? Introduce labels to types (giving stratified types): label values that provably reduce to a value.

How to enforce stratification? -- Termination analysis! Can use refinement types for this.

Can check non-terminating code. (collatz example)

How well does it work in practice? LiquidHaskell is an implementation of a refinement type checker for Haskell.

Evaluation: most Haskell functions can be proved to be terminating automatically.

Q: What's a stratified algebraic data type?
A: It's an ADT with a label.

Q: How do you prove termination of structurally recursive functions?
A: Map to numbers (e.g. list to length)

Q: Refinement types for call-by-push-value?
A: There is some work on refinement types which does not depend on the evaluation strategy.

Q: In the vector-algorithms library, every function had to be annotated.
A: Here the first argument usually increases rather than decreasing, so we had to write a custom metric, but this was simple.

Q: How does this interact with other features of Haskell like GADTs?
A: LiquidHaskell works on Core so there should be no problem.

Q: Did you find any bugs in your termination analysis of different libraries?
A: Yes a tiny bug in Text, has been fixed now.

A Theory of Gradual Effect Systems

Felipe Bañados Schwerter (University of Chile); Ronald Garcia (University of British Columbia); Éric Tanter (University of Chile)

Programs produce results and side-effects. Can use type and effect systems to control this. Here, functions have an effect annotation -- a set of effects that the function may cause. Expressions are also annotated with their effects.

Get into trouble when passing an effectful function to another function: as a safe approximation, the higher-order function must always assume that the function passed in may have any possible effect.

Introduce an "unknown" effect (upside-down question mark). Combine this with a generic type-and-effect system which comes with "adjust" and "check" functions.

Use abstract interpretation to lift adjust and check from static to gradual typing. Idea: extend the domains of sets to include "unknown" side-effects.

Define concretization function from consistent privileges to the feasible set of effect sets.

Introduce internal language: with "has" and "restrict" constructs.

Future work: implementation (currently targeting Scala); Blame

Q: Can you always derive the concretization function?
A: It is provided by us.

Q: If you're doing dynamic checking and dealing with arrays, but you can't raise errors -- what are you supposed to do?
A: You don't want to produce a side effect when you're not allowed to do so.

Q: Does your system allow parameterization on effect sets?
A: No, not in this paper. The implementation requires it though so this is future work.


Session 12: Dependent Types (Chair: Ulf Norell)

How to Keep Your Neighbours in Order

Conor McBride (University of Strathclyde)

Conclusion first: Push types in to type less code!

A very engaging and amusing talk. Conor demonstrates (with lots of Agda code!) how to use types to build correct binary heaps and 2-3 trees.

Take-aways: Write types which push requirements in. Write programs which generate evidence.

Q: What does "push requirements in" mean?
A: Start from the requirements. The information that's driving the maintenance of the requirements is passed in. (?)

A Relational Framework for Higher-Order Shape Analysis

Gowtham Kaki (Purdue); Suresh Jagannathan (Purdue)

Shape analysis in functional languages: ADTs track shapes of data. Types can document shape invariants of functions -- but we want to be able to refine these constraints!

Can we refine ML types to express and automatically verify rich relationships? Need a common language to express fine-grained shapes. Build up from relations and relational algebra. Example: in-order and forward-order.

Relational operators are union and cross
Predicates are equality and inclusion

But: relational types for polymorphic and higher-order functions must be general enough to relate different shapes at different call sites. Examples: id, treefoldl : a tree -> b -> (b -> a -> b) -> b

Consider id : a-> a -- shape of argument is the shape of its result.

Functions can be parameterized over relations. Also get parametric relations: relations can be parameterized over relations.

We consider the "effectively propositional fragment" of many-sorted first-order logic. Key observation for translation: a fully instantiated parametric relation can be defined in terms of its component non-parametric relations.

Implementation: CATALYST, tycon.github.io/catalyst/

Q: Does your type system allow the user to introduce new relations?
A: Yes.

Q: Can you compile any kind of parametric relation or are there restrictions?
A: You can also write relations on infinite data structures.


Session 13: Domain Specific Languages II (Chair: Yaron Minsky)

There is no Fork: an Abstraction for Efficient, Concurrent, and Concise Data Access

Simon Marlow (Facebook); Louis Brandy (Facebook); Jonathan Coens (Facebook); Jon Purdy (Facebook)

Imagine you have a web program -- the server-side code for generating a webpage -- with many data sources. For efficiency, we need concurrency, batching and caching. We could brute-force it, but that destroys modularity and it's messy.

What about using explicit concurrency? It relies on the programmer too much, e.g. they forget to fork, or add false dependencies.

A rule engine: input, evaluate rules and return.

Consider coding a blog; example code including the Fetch monad. The concurrency is implicit in the code: just use monad and applicative.

No manual batching, again implicit in code.

Demonstrating code for map and applicative. The monadic bind gives a tree of dependencies for IO.
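A much-simplified sketch of how that works (hypothetical types, not the real haxl API): the Applicative instance gathers the requests both sides are blocked on, which is what makes batching implicit, while monadic bind expresses genuine data dependencies and so stays sequential.

```haskell
-- Hypothetical request type standing in for "fetch this post", "fetch this user", etc.
type Request = String

data Fetch a
  = Done a                        -- finished, no data needed
  | Blocked [Request] (Fetch a)   -- requests we are blocked on, plus a continuation

instance Functor Fetch where
  fmap f (Done a)       = Done (f a)
  fmap f (Blocked rs k) = Blocked rs (fmap f k)

instance Applicative Fetch where
  pure = Done
  Done f       <*> Done a        = Done (f a)
  Done f       <*> Blocked rs a  = Blocked rs (fmap f a)
  Blocked rs f <*> Done a        = Blocked rs (f <*> Done a)
  Blocked rs f <*> Blocked rs' a = Blocked (rs ++ rs') (f <*> a)  -- batch both sides

instance Monad Fetch where
  return = pure
  Done a       >>= k = k a
  Blocked rs c >>= k = Blocked rs (c >>= k)   -- real dependency: stays sequential
```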

This was developed for the Haxl project at FB, migrating from an in-house DSL called FXL to Haskell. The framework is open source: github.com/facebook/haxl and cabal install haxl.

Implicit parallelism is hard but data fetching doesn't suffer from this problem.

Q: Is bind linear in the depth of blocked requests?
A: Take it offline

Q: Is there accidental sequential code ?
A: We haven't rolled it out yet

Folding Domain-Specific Languages: Deep and Shallow Embeddings (Functional Pearl)

Jeremy Gibbons (University of Oxford); Nicolas Wu (University of Oxford)

Introducing a trivial toy language for examples. Defined a simple deep and a shallow embedding for the toy language. A shallow embedding must be compositional. The toy language has eval and print; print is messy to add to the shallow embedding.

Dependent interpretations are introduced for the deep embedding; this is again OK for the shallow embedding but not great. Context-sensitive interpretation can again be expressed as a fold.

Deep embeddings are ADTs; shallow embeddings are folds over that AST.
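A tiny illustration of the contrast on a toy expression language (my own example, not the paper's):

```haskell
-- Deep: syntax as a data type, interpretations as folds over it.
data Expr = Lit Int | Add Expr Expr

evalD :: Expr -> Int
evalD (Lit n)   = n
evalD (Add x y) = evalD x + evalD y

printD :: Expr -> String
printD (Lit n)   = show n
printD (Add x y) = "(" ++ printD x ++ " + " ++ printD y ++ ")"

-- Shallow: a term *is* its meaning, so adding a second interpretation
-- (like print) means changing the domain of the embedding itself.
type ExprS = Int

litS :: Int -> ExprS
litS = id

addS :: ExprS -> ExprS -> ExprS
addS = (+)
```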

Q: Has your method for shallow embedding sacrificed extensibility?
A: Yes but we can work around it

Q: What about binders, what about recursion? Your example is too simple.
A: Not sure


Session 14: Abstract Machines (Chair: David Van Horn)

Krivine Nets

Olle Fredriksson (University of Birmingham); Dan Ghica (University of Birmingham)

We're good at writing machine-independent code. But how about architecture independence? (CPU, FPGA, distributed system.)

Work: an underlying execution mechanism for supporting node annotations t @ A.

Approach: put your conventional abstract machines in the cloud.

Krivine net = instantiation of a network model with Distributed Krivine machine. Network model: synchronous message passing and asynchronous message passing.

Basic idea of execution: remote stack extension.

Krivine nets perform much better than previous approaches (GOI and GAMC).

Future: work on runtime system -- failure detection and recovery, may need GC.

Implementation and formalization: bitbucket.org/ollef

Q: How about more complex data like arrays which are expensive to send over the network?
A: We're working on this -- introduce a notion of mobile and immobile.

Distilling Abstract Machines

Beniamino Accattoli (University of Bologna); Pablo Barenbaum (University of Buenos Aires); Damiano Mazza (Université Paris 13)

Setting: lambda-calculus. There has been a lot of research into the relationship between abstract machines and substitution. We introduce the Linear Substitution Calculus which decomposes beta-reduction. It is equipped with a notion of structural equivalence.

Origins: proof nets: a graphical language for linear logic. Multiplicative cut (symmetric monoidal closed category), exponential cut (for every A, !A is a commutative comonoid)

Introduce Linear Substitution Calculus -- proof nets in disguise. Have multiplicative and exponential step.

Then define reduction strategies on it: by name / by value (LR / RL) / by need require a specific form of evaluation context together with specific versions of the multiplicative and the exponential step.

Distilleries: choice of a deterministic multiplicative and exponential evaluation rule and an equivalence relation which is a strong bisimulation; an abstract machine with transitions defined by the chosen rules; a decoding function. Moral: the LSC is exactly the abstract machine "forgetting" the commutative transitions. In all cases, distillation preserves complexity.

Q: What does the fourth possible combination of multiplicative and exponential evaluation correspond to (both by value)?
A: That would be a sort of call-by-value, depending on how you define the context.

Q: Can you bring some order to the seemingly unrelated evaluation contexts?
A: With these contexts, you tell the machine where to look for the next redex. You can even do non-deterministic machines. At some point, your choice is limited if you want execution to be deterministic.

Q: Are these evaluation contexts maximally liberal while staying deterministic?
A: Yes.

Q: From reduction semantics to an abstract machine is a mechanical process. Is it possible to find an algorithm that generates distilleries?
A: I can do it by hand ... don't know if there is an algorithm.

Student Research Competition Award Presentation

14 submissions; 13 accepted (7 grad, 6 undergrad); 3 winners for each category.

ICFP 2015 Advert & Closing

ICFP 2015 will take place in Vancouver, Canada: icfpconference.org/icfp2015

The venue is great -- connectivity, location etc.

New: will have a Mentoring Workshop for beginning graduate students doing research. Workshop proposals and paper submission deadlines are on the website.

2 Sep 2014

ICFP 2014: Day 2

Posted by Leonhard Markert

This is day 2 of the 19th International Conference on Functional Programming. Leo and I (Heidi) are here bright and early to bring you the highlights from Gothenburg, Sweden. We are not the only livebloggers here at ICFP; check out Edward Yang's excellent notes.

Keynote (Chair: Jesse Tov)

Behavioral Software Contracts
Speaker: Robert Bruce Findler (Northwestern University)

Intro: Robby's work on behavioural software contracts is beautiful :)

Instead of an abstract, the keynote has a bibliography of interesting work in the area

Contracts are infectious

Example of Racket code, working in a call-by-value language, here a point-in function. Demonstration of a contract violation including blame, from applying booleans instead of integers. Robby adds a contract to guard access to point-in and the blame now points to the top level.

Graph demonstrating how the number of contracts in the Racket git repository has steadily increased. Robby identifies the outliers, e.g. big libraries with lots of contracts being introduced. Most commits add one contract, if they add any.

Everyone in the audience is thinking: why use contracts when you can use types? -- as Robby acknowledges.

Contracts versus types: the previous example is redone with types instead of contracts, and negative integers are used to demonstrate how types are insufficient. He introduces a dependent contract, hinting at a comparison to dependent types. We have the arguments at the time of checking, unlike dependent types, hence it's actually possible in real languages.

Switch to an example of a function called dc, which takes a drawing function and a bounding box and draws as a side effect. Robby introduces a broken function, which introduces state pollution. We could check that the state after equals the state before, but this would be too late. Here's a contract example: generating a random state and testing for state pollution. This doesn't give us complete correctness, but it's good enough in this context.

Don't think about contracts in terms of projections, but in terms of boundaries (work by Christos)

Addition of contracts to the simply typed lambda calculus, in particular how monitors add contracts to boundaries. Example: if we have the contract (x <= 3), values of x > 3 cause the function to abort. If we have another contract checking that f's arguments and return values are <= 3, we allow the function to be passed through, with extra monitors on return values.

What happens if we have a bug in the monitor ?

Applications: how can contracts work if you don't have access to the code on the other side of the boundary? We can use the contract exercise function to generate random functions to test monitors. We have a notion of whether one contract is stronger than another.

Gradual typing: how we interoperate between typed and untyped languages; see how we do it with Typed Racket.
Static analysis: Havic, exhaustive boundary checking.

This presentation is actually written in Racket; all output is real output.

Takeaways: Contracts are not just crappy types, contracts infect code bases, boundaries are vital.

ICFP knows how to prove specifications. Contract infectiousness is our opportunity to share that knowledge with the world

Q: Randomisation would break formal tools, why not use NaN ?
A: Just setting to default may not find errors

Q: Contracts are checked at runtime, what about the overhead ? Is there a switch to turn it off ?
A: There's no kill switch, but there are option contracts

Q: Effects are alarming !
A: Well, it doesn't alarm me; it's vital for practical code and we need to look into this further.

Q: Why can't we introduce randomness to type checking ... ?
A: That's the point.

Q: Thank you for the cross community impact !

Q: How is the blaming system implemented? Can we add it to Python or C?
A: Fundamentally we could do this, we would love to see more of this.

Q: The opposite of gradual typing: often checks aren't just type-based. Can we use types to check for the placement of contracts?
A: Contracts are specification, they can be wrong like normal programs.

Q: Dependent contracts seem to explicitly pass all arguments; it looks a bit strange?
A: Explicitly labelling the dependency makes it cheaper.

Session 6: Contracts & Scheme (Chair: Michael Sperber)

Soft Contract Verification
Phuc C. Nguyen (University of Maryland); Sam Tobin-Hochstadt (Indiana University); David Van Horn (University of Maryland)

This is the snake game with contracts: most computation time is spent executing contracts, not game code; there is serious overhead.

Contract and blame avoidance cycle; how can we introduce static contract verification?

We will consider soft contract verification:
- we can analyse first-class contracts
- using an SMT solver
- competitive
- works even without apparent contracts

CPCF

Introduce the PCF language and contract notation.

How can we verify CPCF? We abstract a CPCF function to a type; symbolic values are sets of contracts.
Soundness: all concretisations are approximated by the abstract CPCF.

Checks refine values as they flow through; these refinements can influence the computation.

Theorem: verified modules can't be blamed -- proved.
We compare well to soft typing, occurrence typing, higher-order recursion schemes and dependent refinements, and to the video game from the beginning.

Contracts fail at first order, so we only need a first-order solver for verification.

Q: If we replace pos? with >4, were you shortcutting the SMT solver?
A: Yes, the name was just a shortcut

Q: How do you handle recursion?
A: This is described in the paper

On Teaching How to Design Programs: Observations from a Newcomer
Norman Ramsey (Tufts University)

This is a course in problem solving, not functional programming.

Process:
1) Describe data, input, making data examples
2) Describing functions, making function examples
3) Creating code templates from types, fill in the template
4) Test, review and re-factor

Example of 2D-tree data definition and writing a function to find the nearest point.

What's hard for students?

- Type-directed development is difficult even though there are mechanical rules to apply
- "Purpose statements" are hard: students just ignore them and read the code; this breaks down when they get to recursion

The tech:
- You don't need 5 language levels
- Don't be fooled by the DrRacket IDE - designed for full Racket, but every time you compile, untested code is thrown in your face
- Go deep into "world programs" - interactive apps built by composing pure functions

Open problem - how can we evaluate students' code?
- Is this code the product of systematic design?
- Experienced instructors use point deductions

HtDP: low cost, high reward

Q: Design recipes suggest how to evaluate student assignments, this is the solution to your problem ?
A: We'll sit down and talk about it

Session 7: Tools Used in Anger (Chair: Jacques Garrigue)

SML# in Industry: A Practical ERP System Development (Experience Report)
Atsushi Ohori (Tohoku University); Katsuhiro Ueno (Tohoku University); Kazunori Hoshi (NEC Software Tohoku, Ltd.); Shinji Nozaki (NEC Software Tohoku, Ltd.); Takashi Sato (NEC Software Tohoku, Ltd.); Tasuku Makabe (NEC Software Tohoku, Ltd.); Yuki Ito (NEC Software Tohoku, Ltd.)

Experience report from using SML# in production systems

With SML# we can produce robust systems, rigorous specifications and high-quality, flexible code. We conducted a pilot of using SML# for industry software: a 15-month project of two man-months.

Features of SML# for practical software development:
Normal ML features plus record polymorphism, a direct C interface, seamless SQL integration and a fully concurrent non-moving GC => multi-core is simple.

The system represents a typical business application.

Lessons learned: we had various issues with ML as a production language, and it's essential to select best-of-breed components.

To deploy ML we need to train people and set up software quality control systems.

Record polymorphism is very useful for data intensive application.

Q: What's the evidence for its reliability/productivity? So no quantifiable evidence?
A: We need to work on this

Q: Is accounting interesting ?
A: Yes, I enjoyed it

Lem: Reusable Engineering of Real-World Semantics
Dominic P. Mulligan (University of Cambridge); Scott Owens (University of Kent); Kathryn E. Gray (University of Cambridge); Tom Ridge (University of Leicester); Peter Sewell (University of Cambridge)

Sorry for the bombastic title

The CS community has built many models of real-world artefacts that were built by engineers rather than computer scientists, e.g. memory models, processor architectures etc. These take many postdoc-years to build.

Models are developed inside a theorem prover or PL and then hand-ported; porting may be machine-assisted (Perl scripts :)). This language-to-language translation is ugly. We have theorem-prover lock-in. Porting requires experiments in both systems. There is no library of off-the-shelf components.

Example of C++11 relaxed memory models

Can we have a write once, prove anywhere model ?

Example model: Power and ARM operational model of relaxed memory (Sarkar et al.)
Build from multiprocessor memory subsystem and thread semantics
Another Example model: Axiomatic model for C/C++11 concurrency
Another Example model: CakeML language semantics & verified compiler (Kumar et al.)

Introducing Lem:
Source -> Lem -> HOL, OCaml, Documentation
Lem syntax is similar to OCaml
Lem is a pragmatic tool; we are happy to accept code that causes problems for TPs.
Example of Lem, including highlighting how whitespace and comments are presented.

Get Lem from cl.cam.ac.uk/~pes20/lem

Q: How do you support naming and binding?
A: We would like to add nominal naming


 

Session 8: Type Systems (Chair: Geoffrey Mainland)

Safe Zero-Cost Coercions for Haskell
Joachim Breitner (Karlsruhe Institute of Technology); Richard A. Eisenberg (presenting, University of Pennsylvania); Simon Peyton Jones (Microsoft Research); Stephanie Weirich (University of Pennsylvania)

Generative type abstractions like newtype HTML = MkH String are useful to prevent programmer errors. The constructor has no runtime overhead. But stringList :: [HTML] -> [String] requires a "map", taking linear time!

Answer: a new equivalence relation, ≈ (written in Haskell syntax as Coercible a b instead of a ≈ b): coerce :: Coercible a b => a -> b which should do nothing at runtime!

Want: if Coercible a b, then Coercible [a] [b].
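In code, the interface being described looks roughly like this (using GHC 7.8's Data.Coerce; the HTML example is the one from the talk):

```haskell
import Data.Coerce (coerce)

newtype HTML = MkH String

-- [HTML] and [String] have identical run-time representations, so no O(n)
-- map is needed: coerce is free at run time.
stringList :: [HTML] -> [String]
stringList = coerce
```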

Need instances of Coercible for newtype ("wrapping") and data ("lifting") declarations.

Naive approach is too permissive: can derive Int ≈ Bool -> Bool when type families are used. Reason: have two equalities, nominal (compile time) and representational (run time) equality. Type families do not respect representational equality.

Answer: assign roles to type parameters -- nominal, representational, or phantom. Example for phantom: instance Proxy p1 ≈ Proxy p2

Role Inference. Goal is to determine the most permissive yet safe role for type parameters.

Application: GeneralizedNewtypeDeriving has been safely reimplemented in terms of coerce. Still only want to allow newtype (un)wrapping when constructor is in scope.

Abstraction: when do we want to allow coercion? The Default Debate: "preserve abstraction! Make roles default to nominal!" vs. "Be backward compatible! Allow GeneralizedNewtypeDeriving!" (latter was chosen).

Trouble on the horizon? Problem with Monads including "join".

Conclusion: efficient, safe, straightforward interface, implemented and released in GHC 7.8.

Q: Miranda dealt with ADTs rather differently. It later turned out that the Haskell and Miranda approach can be translated into each other.
A: Miranda's ADT system might run into trouble with type families...

Q: How many packages would need role annotations?
A: Not clear how to assess that. Probably very few (2?).

Q: How about Safe Haskell?
A: Coercible is not yet in the safe subset.

Q: Why not put these role annotations in the kind syntax?
A: Too big a change to the kind system. Makes it less flexible.

Hindley-Milner Elaboration in Applicative Style (Functional Pearl)
Francois Pottier (INRIA)

The story of ML type inference and how it has been explained over the years.

70s: Milner invents ML type system and polymorphism. Publishes declarative presentation, Algorithm W, and imperative one, Algorithm J. Contains global variable for "current substitution". Both compose substitutions produced by unification, and create new variables as needed.

80s: Cardelli, Wand formulate type inference as a two-stage process: generating and solving a conjunction of equations. Benefits: higher-level thinking: equations and conjunction instead of substitutions and composition; also greater modularity: constraint solving in library, generation by user. But new variables still created via global side effects.

90s: Kirchner and Jouannaud, Rémy explain new variables as existential quantification and constraint solving as rewriting -- necessary step on the road towards explaining polymorphic inference.

2000s: Gustavsson and Svenningsson explain polymorphic type inference using constraint satisfaction.

Constraint solving: on paper, every constraint can be rewritten until it is either solved or you get an error.

A problem: submitting a closed ML term to the generator yields a closed constraint which the solver rewrites to false or true (is well typed or is not well typed).

Question: can one perform elaboration without compromising the modularity and elegance of the constraint-based approach?

A "low level" solution: the generator could produce a pair of a constraint and a template for an elaborated term, sharing mutable placeholders for evidence so that after the constraint is solved, the template can be "solidified" into an elaborated term.

This approach has three stages: generation, solving, solidification; each user construct is dealt with twice -- not very elegant.

Can we describe generation and solidification in a unified manner? Yes: give the user a DSL to express computations which generate constraints and read their solutions simultaneously. The implementation of this is simple, and allows the user to define inference and elaboration in one inductive function.
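A rough sketch of that idea (my own simplification, not Pottier's actual library): an inference computation pairs the constraint it emits with a reader of the eventual solution, and the Applicative instance combines both parts at once.

```haskell
data Ty         = TVar Int | TArrow Ty Ty
data Constraint = CTrue | CAnd Constraint Constraint | CEq Ty Ty

type Solution = Int -> Ty          -- stand-in for a solved substitution

-- A computation that emits a constraint and, once that constraint is
-- solved, reads an elaborated result out of the solution.
newtype Infer a = Infer (Constraint, Solution -> a)

instance Functor Infer where
  fmap f (Infer (c, k)) = Infer (c, f . k)

instance Applicative Infer where
  pure x = Infer (CTrue, const x)
  Infer (c1, f) <*> Infer (c2, x) = Infer (CAnd c1 c2, \s -> f s (x s))

-- Example: assert an equality while elaborating to nothing interesting.
assertEq :: Ty -> Ty -> Infer ()
assertEq t1 t2 = Infer (CEq t1 t2, const ())
```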

Conclusion: solution is simple, modular, elegant, performant. Could be used in other settings potentially -- e.g. higher-order pattern unification?

Q: Would this work with lazy evaluation?
A: Not sure.

C: Notation of the algorithm is awkward.
A: Yes, this shows that we need a nicer way to express computations with applicative functors.

Q: How about a constructive proof instead of returning true or false?
A: Possible but then the proof tree would have to be inspected -- would still need to do work twice.

Session 9: Incremental Computing (Chair: Tiark Rompf)

Settable and Non-Interfering Signal Functions for FRP
Daniel Winograd-Cort (presenting), Paul Hudak (Yale University)

FRP: programming with continuous values and streams of events, like signal processing diagrams.

Used in Yampa, Nettle, Euterpea.

Event-based vs. continuous: introduce notation for differentiating the two from each other. Standard arrow operators: partial application, feedback loops, composition; stateful arrows (delay), arrow choice -- running the signal function becomes a dynamic decision.

Higher order arrows (with switch) are inherently dynamic, can replace arrow function. Arrows with switch are equivalent to Monad. Makes it harder to optimize.

Why arrows with switch? -- "power start and stop".

Contribution: introduce resettability and non-interfering choice. Get general resettability and arrowized recursion.

Example: IntegralReset. Want to be able to reset it with an event. Easy to do with switch: just replace the integral function. Without switch we can simulate a reset but we can't modify the function itself. Not nice: doesn't generalize.

Resetting state: state is in delays and loops.

General resettability: can take any signal function and transform it into a settable signal function.

Presents "settable laws": identity, uniformity, default.

Other example: pausing an arrow. IntegralWhen, a signal function that performs an integral only under a given condition. Can't do this with arrow choice. Solution: use non-interfering choice.

Non-interfering choice gives recursion! Two kinds: fixpoint-like recursion and structural recursion (which is static).

Arrowized recursion is "predictably dynamic" which nicely allows optimization.

Causal Commutative Arrows (CCA) can be heavily optimized. Allows choice but not switch. Reduces arrows to one of two forms.

Summary: introduced settability, a new model for controlling FRP state; non-interfering choice allows new forms of expression and arrowized recursion. Switch is really only needed for true higher order expressions!

Q: Is settability like a transistor?
A: Yes.

Q: Are there other circuit design ideas that we could re-use?
A: Not sure. Interesting idea.

Q: Can you use CCA factorization with GHC e.g. using rewrite rules?
A: Unresolved so far... There is a Template Haskell library for doing something like this.

Functional Programming for Dynamic and Large Data with Self-Adjusting Computation
Yan Chen (Max Planck Institute for Software Systems); Umut Acar (Carnegie Mellon University); Kanat Tangwongsan (Mahidol University)

Context: Big Data. Challenges are variety, volume, velocity -> requirements: expressive, parallel, incremental.

Why not use FP for computing big data? Parallelism can be done. How about incremental data?

Implicit self-adjusting computation: in addition to normal output, the program creates a dependency graph so when an input changes, only the necessary recomputation is done.

Result: faster but needs more memory to keep dependency graph -- time vs. space trade-off. This paper: controlling memory use.

Speedup as a function of memory usage has diminishing returns. Want to find the sweet spot on the speedup vs. memory usage curve.

Type-directed translation: extend SML types with Changeable / Stable.

Need a technique to control granularity of the dependency graph. Treat blocks of data as a single reference cell. Fixed block size leads to disproportionately slow update time -- so instead use probabilistic chunking scheme.

Applications and Evaluation: incremental PageRank, graph connectivity, social circles. Very high speedups achieved in prototype implementations.

Future work: parallel self-adjusting computation.

Q: Have you done experiments with larger changes?
A: Yes.

Q: Is it realistic for companies to use 10x more memory in order to speed up computations?
A: It's a trade-off.

Q: What would be the speedup when you don't use additional memory?
A: Zero (?)

Q: This looks like it's intrinsically sequential. How do you plan to parallelize this?
A: It's all about dependency tracking -- dependencies can be used to decide where to parallelize.

Q: How would a non-pure merge sort perform?
A: Need to check.

Q: Can you incrementalize the whole or parts of SML?
A: Interesting question. Non-pure parts of SML would be problematic.

ICFP Contest Presentation by Duncan Coutts and Nicolas Wu

Created by members of the Department of Computer Science at Oxford University and Well-Typed.

The ICFP Programming Contest started in 1998 (in response to ACM programming contest allowing only mainstream imperative programming languages).

Open to anyone, any PL, teams of any size, takes place on the internet, 24h lightning round, 3-day full round.

Certain similarities between the problem ten years ago and this year's. But this year: two compilers instead of only one!

Setting the scene for this year's contest: 80s pop culture, 80s PL technology, 70s PL research.

LambdaMan runs around in a maze eating pills and evading ghosts ... striking resemblance to a more well-known game!

LamCo (producer of LambdaMan) had some interesting but weird technology and a spectacularly dysfunctional development process: team of 8-bit microcontroller fanatics and team of LISP fans combined.

Lightning round: LambdaMan AI running on LISP cpu
Full round: ghost AI running on 8-bit microcontroller in addition

Task description: many pages of cpu specs.

Simulator and debugger given -- in the "lingua franca of the internets", JavaScript (actually written in Haskell and then compiled using GHCJS).

Before the contest, pointless hints were tweeted.

Major panic as the git-based site deployment system went haywire just when the contest started, resolved manually. Next 5 hours spent with all hands on deck. Participants could see specification diffs on GitHub.

Judging process: collected and verified all submissions. Had enough resources to run all vs all rather than multiple-round tournament (one team's lambda man against another team's ghosts and the other way round). Ran 200220 games.

First prize: Supermassive Black Hom-set (Pavel Lepin) -- Haskell is the PL of choice for discriminating hackers!

Second prize: UnaGi -- C++ is a fine tool for many applications.

Third prize: DiamondPrincess -- Perl is also not too shabby.

Winner lightning round: jabber.ru (Alexey Shchepin) -- OCaml is very suitable for rapid prototyping.

Judge's prize: gagallium -- Team gagallium are an extremely cool bunch of hackers! Used OCaml. Best ghost. Used some intermediate stage of the OCaml compiler to compile their code from.

Pavel Lepin gives a short presentation of his approach to solving the contest.

ICFP 2004 Most Influential Paper Award presented by Derek Dreyer

Goes to "Scrap More Boilerplate: Reflections, Zips, and Generalised Casts" by Ralf Lämmel and Simon Peyton Jones.

Significantly extended the authors' earlier "Scrap Your Boilerplate" paper and library.

Simon's observations: 1. an unequal partnership made this possible 2. good titles are powerful!

Ralf: used inspiration from different fields.

1 Sep 2014

ICFP 2014: Day 1 (another perspective)

Posted by hh360

This is day 1 of the 19th International Conference on Functional Programming. Leo and I (Heidi) are here live blogging from Gothenburg, Sweden. His posts are here; we will combine them (with photos :)) in the coming weeks.

[Photo: the beautiful conference location]

Opening

Good morning and time to get started. Official definition of ICFP: it's a forum for discussion among researchers. 466 people have registered for the conference or associated events. 21 sponsors giving $60K of sponsorship.

We can get a boat to the conference banquet :)

A big big thank you to the PC.

Keynote: Using Formal Methods to Enable More Secure Vehicles: DARPA's HACMS Program
Author & Speaker: Kathleen Fisher (Tufts University)

DARPA: a US funding organisation with rotating project management, so constant surprises. Clean-slate formal methods for proving security of embedded systems. Power systems, TVs, medical devices, phones and vehicles all have remote security risks with amusing exploits. A car is a computer on wheels, with many ECUs connected by buses, both high- and low-speed. If you can connect to the network, you can control anything under software control, and getting onto the network is easier than you'd think, e.g. using the diagnostic plug under the steering wheel. Almost all car systems are under software control, including brakes. You can also get access remotely in four ways: e.g. via Bluetooth, via the telematics system, via a virus on the mechanic's diagnostic equipment, or via music files that buffer-overflow the car's software.

The traditional fix: air gaps and obscurity. Air gaps are no longer realistic; cars need to be networked, e.g. for remote monitoring.
There is lots of documentation for these systems nowadays.

The desktop-approach fix: this approach barely works on the desktop. Security software itself has security issues -- around a third of a recent vulnerability watch list.
Current solutions are millions of LOC, and extensive monitoring needs extra CPU. Embedded code goes through lots of review; there is no "patch every Tuesday" approach.

The new fix: a formal-methods-based approach. Why now? Lots of tools and infrastructure are now mature, plus advances in SAT solvers in the last 10 years.

3 approaches:

code synthesis: people can't write secure C code, computers can and provide proofs. DSLs, Interactive theorem prover as PL.
HACMS program: collection of performers, bid for technical areas such as:
- Vehicle experts
- OS
- Control systems, e.g. flight control
- Research Integration: managing rest of performers, composition, tools
- Red team: white hat hackers, measure effort needed to hack

Air team
- Boeing - unmanned little bird helicopter & open source version
- NICTA
- Galois
- RC*/U.Minn

All cars are vulnerable, no point in naming names.

Composition means: how do you take high-assurance components and build a high-assurance system?
Started with pen-testing 4 vehicles, 2 proprietary and 2 open source. The systems had been engineered to be easy to connect, so this wasn't a surprise. Hijacking demonstration of a drone using aircrack-ng. This can be quite a shock to the user, e.g. a car driver. Every 18 months, the red teams report on how much effort is required to hack the system. Starting with a monolithic quadcopter: they began by upgrading the processor and refactoring the monolithic software; the hardware abstraction layer was a big hit with the community. Gradually swapping out legacy code for high-assurance code.

What should you do if you know you're under a DoS attack? It depends on the context.

It's been proved that the system is memory safe, ignores malformed messages, ignores non-authenticated messages, and that all good messages will reach the motor controller.

"The most secure UAV on the planet" :)

AADL: a language for architecture specification, used to propagate system-wide properties down until an assurance case is reached. A formal semantics has been added to AADL and is supported by the Resolute tool, assurance cases and synthesis tools. The glue code can be generated; this is typically the weak point.

Galois have written two DSLs in Haskell: Ivory for C-like code and Tower for tasks/connections.

NICTA have written a fully functionally verified microkernel.

Boeing are using the tools built to develop the Unmanned Little Bird. They're on track for running real HACMS code on a helicopter in 2 years. There's a Linux partition for mission-specific code; the red team will have full access to this partition.

Q: Are you working on the wrong problem? We don't have the talented people to make this commonplace; this isn't cool enough.
A: Hacking demonstrations get people interested. The US government is considering mandating that all cars are networked, but this has dramatic security implications, e.g. remote code execution.

Q: Why is the hardware so constrained?
A: It is produced at very high scale; the margins are very small.

Q: What did this cost? What if the researchers want to do something different from what management wants?
A: $70 million. We discuss it -- that's a difference from NSF.

Q: How much more expensive will high-assurance systems be?
A: We don't know yet; that's what we are working on. We focus on reusability, open source and scale to reduce costs.

Q: What's the multi-core story?
A: It's much more difficult to verify multi-core. Boeing want single-core, but single-core parts may not be available for much longer.

Q: (missed)
A: The AR drone work started from an existing open-source platform.

Q: How will this approach scale?
A: 80% of the quadcopter is high assurance. But we don't even know how many lines of code there are in a car. Linux is big, but we can add high-assurance code progressively.

Q: How can insecure and secure software co-exist?
A: That's exactly what we are testing with the red team and linux partition.

Q: How do you know that hardware isn't vulnerable ?
A: Our project is big enough, other teams work this question.


Session 1: Domain Specific Languages I (Chair: Anil Madhavapeddy)

Building Embedded Systems with Embedded DSLs (Experience Report)
Speaker: Patrick Hickey (Galois Inc)

A big thank you to Kathleen.

Goal: build a high assurance helicopter (quadcopter)

Challenges: it's an embedded system and a hard real-time system (1 ms); the system needs to be safe and secure.
Constraints: 3 engineers and 18 months; similar projects are around 50K lines of code.

There are billions of embedded systems, and cost sensitivity is very high, down to the penny. Basically the choice is between C, C or C.
Typical security issues; it is difficult to push a patch.

Approach: we would love to use Haskell or OCaml, but GC is not compatible with hard real-time. C or C++ (a subset) is a trade-off between safety and developer productivity.

Build your own tools, starting with a clean-slate language, correct by construction. We built an embedded DSL compiler in just a few months. The language is embedded in Haskell and compiles to C; it's open source and called Ivory. Ivory is a safe subset of C with a memory and type system and no undefined behaviour. There is no heap: you can only allocate on the stack or globally. We embedded it in Haskell so we can use its type checker; Haskell is Ivory's macro language.
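To illustrate the "Haskell as macro language" idea with a toy (a hypothetical mini-DSL, not the real Ivory API): the EDSL is just a Haskell data type that we print to C, and ordinary Haskell functions act as compile-time macros over it.

```haskell
data CExpr = CInt Int | CVar String | CAdd CExpr CExpr

-- Pretty-print the embedded expression as C source text.
toC :: CExpr -> String
toC (CInt n)   = show n
toC (CVar x)   = x
toC (CAdd a b) = "(" ++ toC a ++ " + " ++ toC b ++ ")"

-- A Haskell-level "macro": an unrolled summation, expanded before C is generated.
sumOf :: [CExpr] -> CExpr
sumOf = foldr CAdd (CInt 0)

main :: IO ()
main = putStrLn (toC (sumOf [CVar "x", CVar "y", CInt 1]))
-- prints: (x + (y + (1 + 0)))
```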

Modules are composed of expressions and globals; there's a global namespace.

Tower DSL - for organising whole applications. Built on top of Ivory, to combine procedures into applications.

The quadcopter story - we have built almost all of the code in Ivory. The bulk of the code is drivers, about 10 kloc; the application was only 3 kloc. We needed to interoperate with an existing messaging format, so we mangled some Python code to generate Ivory. In total, about 48 kloc.

Evaluation:
The code was delivered to the red team, and there are now no undefined behaviours, buffer overflows etc. Some broad system-level issues remain.

This was a large, complex application, but we built high-assurance code very quickly. Embedding the DSL in Haskell allowed us to build compositional programs.

Source code is at ivorylang.org

Q: You and OCaml both target ARM, so why not use OCaml?
A: We wanted compatibility with niche systems

Q: Did you run out of memory? How do you communicate this to the programmer?
A: We don't currently have a way to determine the maximum stack depth; we are working on it.

Q: Correct-by-construction C code -- you mean a subset of C, right?
A: Yes, no pointer arithmetic

Q: Array handling
A: All arrays are static length

Q: What about error messages for embedded DSLs?
A: It's tough, and we would love to improve it.

Q: How do you stop integer overflow ?
A: Using assertions instead of expressing it in the type system. The user can manually add annotations.

Q: You re-targeted a Python script to generate Ivory instead of C; how many bugs did you find?
A: Not sure, it was more of a clean slate.

Concurrent NetCore: From Policies to Pipelines
Speaker: Cole Schlesinger (Princeton University)

Policy language for describing network policies. A quick intro to basic networking: traditionally we use distributed protocols such as OSPF, but now we have a centralized server to coordinate. We use OpenFlow 1.0: each switch is a simple predicate-and-action table. There isn't just one table, but different tables for many different purposes, e.g. different layers. Some switches have reconfigurable pipelines. We are now looking at OpenFlow 2.0, which splits switch interaction into two phases: configuration (like flashing the router) and population.

We have developed a core calculus for pipelines, we have hardware models for 3 switch architectures and add virtual pipelines to our compiler.

The language:

Example using the Broadcom Trident switch: each table is a function from a packet to a set of packets.

The syntax for an example pipeline (trident switch)
W;(all,{port,vport}) ; x:(dst MAC,port) ; if -(vport=1) then y:(vport,port) else id ; z:(all,all)

The controller can connect to many switches; the programmer can express a virtual pipeline as a minimum requirement for each physical pipeline.

example controller
m:(typ, port) + r:(dst MAC, port)

We developed three general techniques, designed for the "RMT" architecture:
1. Consolidate packet duplication
2. Refactor field modification
3. Find field sizes

For the syntax of the language, see the paper. We are inspired by the Frenetic family of languages.

We have added concurrency and thus race conditions; we use typing judgements for read and write permissions on each field.
Our normalisation lemma handles rules such as no overlap between read and write sets.

Q: Your plus operator seems to compile away parallelism?
A: It's not compiled away, just implicit.

Session 2: Static Analysis (Chair: Ken Friis Larsen)

SeLINQ: Tracking Information Across Application-Database Boundaries

Speaker: Daniel Schoepe (Chalmers University of Technology)

Motivation: information flow control for computation and database queries. We give data to many complex applications that we don't trust. The communication boundary between applications is a weak spot, so we consider end-to-end security. Our contribution is end-to-end information flow control with a type checker, compiling to F#.

Typically we interface with an SQL database; if we don't do it properly there are lots of issues. Nowadays people write code that generates the interfacing SQL code.

Information flow control: code should not leak private data. Non-interference means private data channels shouldn't interact with public data channels. Language-integrated query using a toy language (a subset of F#) which includes meta-programming. Security policies and security levels are expressed in the type system. We can then annotate database columns, e.g. name is public but age is private.

Low equivalence relation is equivalence taking into consideration the new types for security levels. We can then use this relation to encode non-interference.

The type system is based on flowCaml, by Pottier and Simonet, 2003.

Our type checker and compiler implementation is written in Haskell using BNFC; we support ADTs.

Using a movie rental database as an example, we consider addresses as private but cities as public. We can query for all rentals from a city as public data, but if we then try to get addresses, the type checker will reject it.

Q: Do you support more complex labelling schemes ?
A: Yes, we support lattices

Q: Can we classify fields or rows ?
A: No

Q: The exists function you demonstrated doesn't depend on type of elements, does it ?
A: No, but we can express that

Q: Do you deal with aggregation ?
A: We haven't considered that yet

Type-Based Parametric Analysis of Program Families
Speaker: Sheng Chen (Oregon State University)

A software system is a family of programs with a configuration; this makes error messages difficult to read. The usual route is to start with a program family, select a program and analyse it, but variational analysis takes a different route. We will look at variational analysis (VA) today:

Programming by search: applying an analysis gives us results R', but we expected R. We can instead introduce variations into data structures, e.g. list -> variational list. Then we adapt the analysis, create a correctness proof and evaluate performance.

There are a lot of very helpful code extracts, syntax definitions etc. that I didn't have time to copy; see the paper for them.

Q: In your performance results, did your examples fit the more or the less aggressive approach?
A: They work even in the more aggressive case.
We are breaking for lunch, so we'll be back at 2:00.


Session 3: Binding Structure (Chair: Tarmo Uustalu)

Romeo: A System for More Flexible Binding-Safe Programming
Paul Stansifer (Northeastern University); Mitchell Wand (Northeastern University)

Programming is hard. Meta-programming allows us to write programs that write programs, but it can introduce naming issues. Alpha-equivalent inputs should produce alpha-equivalent outputs. We have developed a binding-safe programming language, with notation to express complex binding. Romeo expresses binding information in types.

Example Romeo code:
stx = (Lamda (a) (+ a 1))

Romeo automatically renames a (as it's not exposed) to make sure no names clash.

Again, have a look at the paper for cool code examples

Romeo is a binding-safe language that supports term languages with complex binding structure.

Q: In meta-programming we want effects; how do you deal with, e.g., meta-programming for loop-invariant code motion?
A: Romeo is side-effect-free; effects in meta-programming -- we'll take this offline.

Q: Is this online ?
A: Not yet

Q: This is very similar to lambda-M?
A: That was our inspiration, but it's not powerful enough, as we have syntax cases.

Q: Hygienic macro expansion ?
A: We indirectly mentioned it

Q: Thanks for getting starting with an example
A: Thanks

Q: What's the complexity of expanding Romeo code?
A: Potentially n^2; we think we could perform the substitutions lazily to improve this.

Q: Can we represent practical things?
A: We don't have a story for every possible Racket macro you could want; some are simple.

Maximal Sharing in the Lambda Calculus with letrec
Speaker: Clemens Grabmayer (VU University Amsterdam)

People desire increased sharing, e.g. to reduce code duplication and check equality of unfolding semantics of programs.

3 steps:
- interpretation: turning lambda-letrec into lambda term graphs
- bisimulation check and collapse of the lambda term graphs
- readback

Our tool is available on Hackage.
Possible extensions include adding support for full functional languages.

Q: How is this similar to deciding type equivalence in System F?
A: I will look into this

Q: Is this subsumed by Kleene algebra ?
A: I will look into this


Session 4: Program Optimisation (Chair: John Launchbury)

Practical and Effective Higher-Order Optimizations
Speaker: Lars Bergstrom (Mozilla Research)

Many CFA-based optimizations are addressed in other ways, but higher-order inlining is the missing optimization. In practice, high-performance functional programs avoid higher-order functions, e.g. the OCaml compiler example.

What is the concrete function called from some computed function call ? (handled by 0CFA)

We have come up with a new analysis to determine when we can do higher-order inlining: we check whether any free variables have been rebound between the closure capture location and the call site.

Technique:
- Normalize and number source code
- Build the control-flow graph
- Add CFA-informed edges
- Perform rebinding

Q: How can I as a programmer understand when the optimisation will kick in ?
A: If you're the compiler writer, yes; otherwise we don't yet have a story for this.

Q: What happens with things like List.map and List.map ?
A: These are often special-cased in the compiler to be inlined.

Worker/Wrapper/Makes It/Faster
Speaker: Jennifer Hackett (University of Nottingham)

Research focuses on the correctness of optimisations, but there's a lot less work on how we check that a transformation is actually an improvement.
Example of adding an accumulator to the List.rev function.

The worker/wrapper theorems capture the induction.
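The example mentioned above, sketched in Haskell (the talk used List.rev; this is the standard accumulator version, not the speaker's code):

```haskell
-- Wrapper-style original: quadratic because (++) rebuilds the spine each time.
reverseSlow :: [a] -> [a]
reverseSlow []     = []
reverseSlow (x:xs) = reverseSlow xs ++ [x]

-- Worker: carries an accumulator, so each element is consed exactly once.
reverseWork :: [a] -> [a] -> [a]
reverseWork []     acc = acc
reverseWork (x:xs) acc = reverseWork xs (x:acc)

-- Wrapper: the original interface, now defined in terms of the worker.
reverseFast :: [a] -> [a]
reverseFast xs = reverseWork xs []
```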

I'm heading off now and direct the reader over to Leo at http://www.syslog.cl.cam.ac.uk/2014/09/01/icfp-2014-day-1/

1 Sep 2014

ICFP 2014: Day 1

Posted by Leonhard Markert

This is day 1 of the 19th International Conference on Functional Programming.

Opening by the General Chair, Johan Jeuring

Quick facts about ICFP 2014:

  • 340 people registered for ICFP
  • 466 people registered for any event
  • most people are from Sweden, UK and USA
  • 21 sponsors, $60k in sponsoring

Keynote: Using Formal Methods to Enable More Secure Vehicles -- DARPA's HACMS Program

Kathleen Fisher (Tufts University)

Four and a half years long program, 2012--2017.

Many everyday objects today are computers! Infrastructure, medical devices, refrigerators, cars ... examples given for these things being hacked.

Focuses on cars: a "computer on wheels" -- 100s of interconnected computing units. Everything is under the control of software: braking, accelerating, locking.

First attack vector was via diagnostic bus but this requires physical control (you need to be in the car). Later found that physical control is not necessary to take over a car (Bluetooth, emergency number, audio CD).

Documentation for car systems exist and spare parts can be ordered online and analyzed (so no air gap, no security by obscurity).

Use the desktop computer approach? -- anti-virus scanning, intrusion detection systems, patching infrastructure. Problem: "security software" tends to introduce vulnerabilities. Also: embedded systems don't have the resources to monitor themselves all the time the way desktop computers do.

SAT solvers are getting faster; this allows us to use them to verify real software and hardware.

Clean-slate methods for high-assurance software: code synthesis, DSLs, Interactive Theorem Prover as PL.

Structure of the HACMS program: technical areas are vehicle experts, OS (building on the verified seL4 kernel), control systems, research integration (management, composition: how to put high-assurance components together to create a bigger high-assurance component?, integration), red team (hints at problem of how to assess the security of a system -- currently a heuristic is: how long does it take for hackers to take over the system?).

Air team works on helicopter and quadcopter; ground team works on cars.

Demo video: WiFi-controlled quadcopter being taken over using aircrack-ng.

Every 18 months the red team gets access to the system and tries to find vulnerabilities.

Started with monolithic software, no RTOS, no security. After first iteration: use faster processor, put FreeRTOS and HAL under still monolithic Ardupilot software. Next iterations: generate more and more code from new DSL.

SMACCMCopter 18 months assessment: it flies! Air team can prove system-wide security properties like memory safety, ignoring malformed / non-authenticated messages; all good messages received will reach the motor controller. Red team found no security flaws in six weeks with full access to source code. "Probably the most secure UAV on the planet" ("kinda scary").

Research integration (Rockwell Collins) used AADL to describe how hardware and software components fit together. Developed tools (AGREE, Resolute) to reason about the system. They are used to check assumptions about the system (example: there is only one way to communicate with the quadcopter).

Control systems (Galois) created Ivory and Tower, open-source EDSLs for synthesizing safe low-level code and tasks, respectively. Designed and built SMACCMCopter, the first high-assurance UAV software, in under two engineer-years.

OS (NICTA) proved full functional correctness of a fast OS microkernel, seL4 (now open source). Also working on eChronos, a high-assurance RTOS product line, and formally verified OS components (drivers, file system etc). CAmkES is a system for connecting and configuring these components.

Vehicle experts (Boeing) integrate HACMS technologies into the Unmanned Little Bird.

Air team in phase two tries to re-architect (split) the system so it runs on a flight computer and a mission computer, as is usual in real-world scenarios. Red team will then try to gain control over the flight computer from the mission computer.

More detailed description of individual tasks follows.

Tech transition: a huge problem. The cyber-physical systems industry know they have a cyber-security problem but there are many barriers to adoption of HACMS-like technology.

C: Make it cool! Come up with a catastrophic event!
A: Well this is a response to "you can take over a car wirelessly". Another aspect: it will probably be mandated soon for cars to talk to each other -- but remote code execution was never considered.

Q: Why not have quadcores in the car?
A: Very price-sensitive; the industry has tiny margins.

Q: How much does this project cost?
A: $70m.

Q: Research: top-down or bottom-up, which is better?
A: We need both. DARPA is bottom-up, NSF is top-down.

Q: You expect to pay more for high-assurance code. What is the cost? Especially for the second, third, ... project.
A: High-assurance code will always be more expensive. We hope to bring down the additional cost by developing tools that aid building high-assurance code. But have to consider: traditionally we pay people to write sloppy code; the cost from running sloppy code is never properly accounted for! Maybe high-assurance software is cheaper in the long run.

Q: Why did you replace the Arduino processor?
A: We wanted to start with an existing open source hardware platform.

Q: (bit unclear)
A: We're close to having one high-assurance system, the SMACCMCopter. Still far away from having a high-assurance car for example (one problem: no-one knows how many lines of code a normal car has!)

Q: How do you combine secure and unsecure code?
A: That's one of the challenges of course.

Q: How do you know the hardware does what you think it does?
A: In this project, we simply trusted the hardware. Other people are working on hardware issues.

Session 1: Domain Specific Languages I (Chair: Anil Madhavapeddy)

Building Embedded Systems with Embedded DSLs (Experience Report)

Patrick Hickey (presenting), Lee Pike, Trevor Elliott, James Bielman, John Launchbury (Galois Inc)

Open source: github.com/GaloisInc

Goal: Build a high assurance quadcopter controller (long-term: helicopter controller). Small embedded system (microcontroller), hard real time, safe, secure. Had three engineers, 18 months, starting from scratch. Expected to write ~50kloc C/C++ code.

Embedded systems are everywhere. They're basically computers from the 80s: small and cheap. Development tools are from the 80s too (C). You get all the security flaws you'd expect -- buffer overflows etc. Can't push patches, and there are more attack surfaces than ever.

Approach. Haskell, OCaml? Problem: very resource limited system; GC incompatible with hard real-time. C, C++? Tradeoff between safety and productivity -- e.g. NASA Jet Propulsion Lab writes high assurance C; extremely costly.

Alternative: build your own tools. Start with clean slate, use language approach, correct by construction. Built an embedded DSL in just a few months: Ivory, embedded in Haskell, compiles to safe subset of C. It's memory safe and allows no undefined or implementation defined behavior. No heap, only global and stack allocation.

Haskell type system guarantees properties of Ivory language programs; Haskell is Ivory's macro language.

Ivory: expressions are pure Haskell values; statements are effectful, embedded in monad. Examples for what has been embedded in the type system from the paper are given.
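
To give a flavour of this embedding style, here is a generic Haskell sketch (my own illustration, not Ivory's actual API): pure expressions as an ordinary Haskell datatype, and statements in a monad that accumulates generated C.

import Control.Monad.Writer (Writer, execWriter, tell)

-- Expressions are pure Haskell values ...
data Expr = IntLit Int | Var String | Add Expr Expr

cExpr :: Expr -> String
cExpr (IntLit n) = show n
cExpr (Var x)    = x
cExpr (Add a b)  = "(" ++ cExpr a ++ " + " ++ cExpr b ++ ")"

-- ... while statements are effectful: here they simply emit lines of C.
type Stmt = Writer [String]

assign :: String -> Expr -> Stmt ()
assign x e = tell [x ++ " = " ++ cExpr e ++ ";"]

genC :: Stmt () -> String
genC = unlines . execWriter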

Ivory programs (modules) are a collection of top level C procedures and variables.

Tower DSL: organising entire applications. Composes Ivory procedures into applications. Initial problem: multithreading with message passing; code generation of low level primitives. Tower is implemented as a Haskell macro that generates Ivory programs. Tower contains concept of Tasks and Channels. It became the DSL describing software components.

(Side story: a third-party Python tool generating C output was modified to output Ivory code as well; this code could then be type checked to make sure it was correct. This way, the Python tool didn't need to be trusted.)

SMACCMPilot: 10kloc (and growing) of drivers, 3kloc application (stabilizing the quadcopter and flying), 10kloc message marshaling. Generates about 48kloc of correct-by-construction C.

Red team results: found no buffer overflow, no undefined behaviours, no denial of service. After five months they found two subtle architecture level bugs. Lifted the layer of abstraction: no need to worry about low-level problems, can concentrate on architecture.

Conclusion. Quickly and cheaply built a large, complex application; caught lots of errors early by Haskell type checking, could focus on application design; used Haskell as a macro system making it possible to build compositional programs.

Q: How is this an improvement over the OCaml code generator?
A: Cross-compilation was a specific concern, making sure the code runs on 8-bit, 16-bit, 32-bit architectures.

Q: Restricted amount of memory: how did you deal with that?
A: Don't yet have a way to assess the maximum stack use of a program. But we think it is solvable.

Q: You construct correct-by-construction code that is a subset of C, e.g. you don't do pointer arithmetic.
A: Yes.

Q: Do all arrays have a size known at compile time?
A: Yes.

Q: Type error messages were a problem?
A: Yes.

Q: You don't allow undefined behavior. How do you prevent overflows?
A: This is not done at the type level; instead we insert assertions in the C code (at the moment ~2500 assertions). Hope to analyze and discharge these assertions statically in the future.

Q: How about Rust?
A: Big fan. Rust was in its infancy when the project started two years ago. But when using Haskell we get type checking for our macros.

Concurrent NetCore: From Policies to Pipelines

Cole Schlesinger (presenting), Michael Greenberg, David Walker (Princeton University)

Review on networking: have an enterprise network with hosts, connected to a switch. In a SDN, the controller tells the switch which rules to use to deal with packets. Openflow 1.0 uses prioritised rules for this. Has predicates and actions. But switch hardware is not that simple: lots of tables in a switch pipeline. New configurable designs available now.

Openflow 2.0: splits interaction between controller and switch into two phases: 1. configuration (think flashing an FPGA); 2. population of the tables.

Contributions: core calculus for packet processing pipelines and controller policies; hardware models for three switch architectures; compilation algorithms for deploying general controller polices to target switch architectures.

Example: Broadcom "trident" switch. Each component can be thought of as a function from packets to sets of packets (empty: drop, singleton: forward, multiple: e.g. flooding).

Have forwarding table; sequencing; if-statements over predicates; rewrite component.

Intel "flexpipe" supports reconfigurability. Each table can match on and write to every field. Also supports concurrency; introduce concurrency operator to model this.

What about the controller? Easiest for programmer to define virtual pipeline that describes the requirements.

New operator: packet duplication -- copy and run packet through multiple components.

Example: compilation to the barefoot "RMT" architecture -- three passes: 1. consolidate packet duplication (output ports must be known at the time of packet duplication, also need to add metadata to model processing as it would have taken place in the virtual pipeline); 2. refactor field modification; 3. place generated tables into physical tables (use commutativity of concurrency and other tricks -- the algorithm is a combination of brute force and dynamic programming).

Metatheory. Predicates form a Boolean algebra; grammar given for Predicates and Policies. Have a type system and small-step semantics.

Typing judgement of the concurrency operator prevents race conditions. Can prove normalization and confluence, in combination giving strong normalization (determinacy). Adequacy can be shown too.

Future work: implement the compiler and transformer on a real chipset.

Q: Does your system support encryption of packets inside the network, like IPSEC?
A: Yes and no -- you could think of encryption as an atomic action but it has not been fully worked out.

Q: Plus operator?
A: Plus operator is duplication. Semantics: both packets are sequentially processed by the remaining pipeline.

Session 2: Static Analysis (Chair: Ken Friis Larsen)

SeLINQ: Tracking Information Across Application-Database Boundaries

Daniel Schoepe, Daniel Hedin, Andrei Sabelfeld (Chalmers University of Technology)

Motivation: confidential data is entrusted to many applications which are composed of many components. Information leakage often occurs at boundaries, e.g. SQL injections. Need to look at confidentiality as end-to-end security to prevent attacks.

Contributions: track information flow for code using database queries. Security type system ensures noninterference and proof of soundness. Compiles to executable F# code; realistic demo.

Approach: Traditionally, attacks often occur at component boundary, e.g. SQL injections. Tierless approach: correct communication across boundary ensured at compile-time. Example: write everything in F#, some of which compiles to SQL.

Intuition: untrusted program shouldn't leak private data to third parties. Need to track flow of information through the program.

Noninterference: private inputs must not influence public outputs. Attacker doesn't learn anything about secrets by observing public behavior of the program.

Language-integrated query -- build queries via meta-programming. Example: LINQ in F#. Guarantees user input is sanitized and SQL is syntactically correct.

Idea: types annotated with security levels (L for public, H for private).
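
As a toy illustration of level-annotated types (a phantom-type sketch of my own, not SeLINQ's actual syntax):

{-# LANGUAGE DataKinds, KindSignatures #-}

data Level = L | H                      -- L public, H private

newtype Tagged (l :: Level) a = Tagged a

-- Public output may only consume L-tagged data; passing an H value is a type error.
publicOutput :: Show a => Tagged 'L a -> IO ()
publicOutput (Tagged x) = print x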

Security policy specified by giving a typing to each database. Code example given.

Introduce low-equivalence relation for values that are indistinguishable for the attacker. Defined structurally based on type; components with level L have to be equal.

Each secure program should be noninterfering with itself.

Security type system has soundness: if program is well typed then it is noninterfering.

Implementation: type checker (based on constraint solving and unification) and compiler implemented in Haskell using BNFC. Language compiles to executable F# code; compilation consists mostly of removing security types. Type system and soundness proofs allow user-defined algebraic datatypes.

Case study: near-realistic movie rental database. Want to protect names and exact addresses of customers and staff. Examples given of valid and invalid queries.

Summary: light-weight framework for tracking information flow; security type system ensures noninterference; proof-of-concept implementation given; extension to ADTs; case study.

Future work: extend to client code (JS).

Q: Can I make the security policy express things like "only certain people should be able to access this"
A: No, can't do this at the moment.

Q: Could this be embedded into a computational framework where one could assist the system in showing that some access is in fact safe when the type system's approximation cannot show it?
A: That would be an interesting extension.

Q: Do you deal with aggregation?
A: This has not been considered yet.

Type-Based Parametric Analysis of Program Families

Sheng Chen (presenting), Martin Erwig (Oregon State University)

Can often think of programs as "program families" which by configuration can be refined to single programs. But build & install may fail in unexpected ways. Trying all combinations of options is infeasible.

Idea: use variational analysis directly on the program family / variational program. Introduce a framework for doing variational analysis in the type system.

Notation for choice in variational programs introduced.

Program by search.

Creating variational analyses: 1. add variation data structures, e.g. lists become variational lists (VL, variation in values and list structures); 2. adapt analysis, e.g. analysis of sort becomes analysis of VL sort; 3. correctness proof (tedious); 4. performance analysis

Example: variational typing rules shown.

This framework consists of annotations (sets), types and constraints. All three have extension points. Application and Choice typing rules are shown.

A constraint solving example is shown involving merging and splitting.

Instantiating the framework for variational 0CFA: no extensions required for annotations, types and constraints but rules need to be extended.

Correctness evaluation of the framework. Implemented Flyweight Java version of the framework and proved it correct.

Performance evaluation shows that the framework performs almost as well as a manually lifted 0CFA analysis.

Q: Is the aggressive (faster) approach powerful enough to check types?
A: Yes.

Q: Have you tried to do effect analysis in this framework?
A: The framework should be expressive enough to do this but it hasn't been done yet.

Session 3: Binding Structure (Chair: Tarmo Uustalu)

Romeo: A System for More Flexible Binding-Safe Programming

Paul Stansifer (presenting), Mitchell Wand (Northeastern University)

Introduction: metaprogramming allows us to automatically generate programs. Name binding is a problem -- example given. Recent survey found that 8 out of 9 metaprogramming systems had name binding problems.

Binding safety: alpha-equivalent inputs should produce alpha-equivalent outputs. In theory this works: FreshML, MetaML. But these only support simple binding forms like λ. Real languages have more than λ, e.g. let* in Racket, "local" construct in SML.

Contributions: a binding-safe PL, Romeo, with a notation to express complex binding, clear big-step semantics, proof of correctness.

Explains the concepts of "imports" and "exports" in the context of name binding.

In Romeo, binding information is in the types.

What makes two let*s alpha-equivalent? Follow-up: when are two Romeo declarations alpha-equivalent? -- must have exactly the same behavior. Thus their exported binders must be the same.

Romeo's runtime enforces safety. It automatically avoids name clashes. Accidentally unbound names are a dynamic error ... which Romeo's optional deduction system can statically rule out.

Conclusion: Romeo is a binding-safe language that supports term languages with complex binding structure.

Q: What about de Bruijn indices? Effects in the metalanguage?
A: Romeo is side-effect free.

Q: Can I play with Romeo?
A: I'll try to convince my supervisor to release the source code. Currently the implementation is out of sync with the formalism.

Q: This looks similar to λ-m.
A: Lambda-m is not powerful enough. We give you syntax case.

C: Thanks for starting with a concrete example.
A: Thanks to my peers for the criticism!

Q: How does hygienic macro expansion relate to this?
A: If your macro gets larger and larger, there is less safety -- hygienic macros are all about protecting macros from each other.

Q: Complexity of expanding Romeo code?
A: O(n^2), so that's not good. Macro systems have the same problem though. Trick: perform actual substitution lazily, and we think the same technique could be used with Romeo also.

Q: How much expressiveness does Romeo have?
A: Can we express all Racket macros -- probably, we think most are just simple extensions of what we have right now.

Maximal Sharing in the Lambda Calculus with letrec

Clemens Grabmayer (presenting, VU University Amsterdam), Jan Rochel (Universiteit Utrecht)

Motivation: increasing sharing is desirable -- compact code, avoid duplicated work at run-time, enables us to check equality of unfolding semantics of programs.

1. interpretation of λ-letrec terms as term graphs; 2. bisimilarity and bisimulation collapse of λ term graphs; 3. readback of λ term graphs.

This method extends common subexpression elimination and is targeted at maximizing sharing statically.

Example of unfolding equivalence of fixpoint operator.

Example of graph interpretation given. First we generate a higher-order λ term graph, transform it into a first-order term graph with scope vertices and backlinks, then into a λ term graph.

Then we do a bisimulation check between λ term graphs.

The class of eager-scope λ term graphs is closed under functional bisimilarity.

Implementation: "maxsharing" on Hackage, uses Utrecht University Attribute Grammar Compiler.

Time complexity analysis of the individual steps follows.

Possible extensions: support for full functional languages (would require work on a core language with constructors and case statements); prevent space leaks caused by disadvantageous sharing.

Applications: use maximal sharing at run-time, possibly directly on supercombinator graphs; can be coupled with GC; code improvement (statically) to detect code duplication; checking function equivalence (relevant for proof assistants, theorem provers, dependently-typed PLs).

Q: Explicit weakening. Have you tried to implement reduction of the first order term graph?
A: It's possible. Used port graphs to implement reduction because that seemed easier.

Q: Does this subsume Kleene algebra?
A: Maybe? Interesting idea.

Session 4: Program Optimisation (Chair: John Launchbury)

Practical and Effective Higher-Order Optimizations

Lars Bergstrom (presenting, Mozilla Research), Matthew Fluet (Rochester Institute of Technology), John Reppy (University of Chicago), Nora Sandler (University of Chicago), Matthew Le (Rochester Institute of Technology)

Control flow analysis (CFA) allows higher-order inlining. In practice, high-performance functional programs avoid higher-order functions.

Two subproblems: 1. what is the concrete function called from some computed function call? -- already handled by 0CFA; 2. does the potential inlining point have a compatible environment? -- "reflow", JITs.

Contribution: determine when higher-order inlining is safe. Key insight: don't check "is the environment the same?" but "could any of the free variables have been rebound between the closure capture location and the call site / potential inlining point?". This supersedes ad-hoc higher-order inlining tricks.

Technique: 1. normalize and number source code (annotations for variable binding sites); 2. build control-flow graph; 3. add CFA-informed edges; 4. Perform rebinding search on paths between closure capture location and potential inlining point

Safe and unsafe example given; control flow graphs shown.

Reflow analysis is strictly stronger. But this analysis costs only about 3% of the total compilation time.

Conclusion: start using closures in benchmarks!

Higher-order inlining heuristics still missing; correctness proof still incomplete.

Q (SPJ): You showed that the analysis is cheap but what do we get?
A: Benchmarks are in the paper.

Q: If I want to use this: is it clear for me where the optimization will kick in?
A: Well, yes, if you're the compiler writer (laughs from the audience). But this is a problem indeed.

Q: I usually pass closures to "map" and "fold", so these will have many call sites. Can this still be optimized in this framework?
A: "map" and "fold" are usually handled in a special way by the compiler anyway.

Q: Is it imaginable to prove that the optimization actually makes the program run faster?
A: Next presentation will try to answer this. But it's very difficult.

Q: What's the relationship to MLton's optimizations?
A: Not entirely sure. Is defunctionalization or CFA-style analysis better? An interesting question.

Worker/Wrapper/Makes It/Faster

Jennifer Hackett (presenting), Graham Hutton (University of Nottingham)

Context: lots of research into optimization focuses on correctness: is meaning preserved?; comparatively little work on improvement -- ensuring programs become "better".

Example: list reversal function. How can we prove that the improved program is both faithful to the original version and also more efficient?
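
For concreteness, a minimal Haskell sketch of that example (the standard accumulator version; the names are mine):

-- Original program: quadratic list reversal.
rev :: [a] -> [a]
rev []     = []
rev (x:xs) = rev xs ++ [x]

-- Worker: carries an accumulator and runs in linear time.
revW :: [a] -> [a] -> [a]
revW []     acc = acc
revW (x:xs) acc = revW xs (x:acc)

-- Wrapper: recovers a drop-in replacement at the original type.
rev' :: [a] -> [a]
rev' xs = revW xs []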

The Worker/Wrapper transformation: Original program has type A, worker has type B, wrapper has type B -> A.

Formally, given

f : A -> A, g : B -> B, abs : B -> A, rep : A -> B

and

abs . rep = id, g . rep = rep . f

then

fix f = abs (fix g)

Reasoning about efficiency. Need some way to compare the cost of two programs. Naive approach: count steps taken to evaluate terms to some normal form. But this fails in a lazy context! Need precongruence.

Precongruence: a preorder and a congruence -- this excludes negative interaction between improvements at different points in the program.

Improvement theory: counting steps, also have a notion of improvement within a constant factor.

Improvement induction principle shown. "tick" idea. It's like guarded coinduction.

Shows the Worker/Wrapper theorem for improvement theory (typeless).

Structure of improvement proofs is essentially the same as that of correctness proofs.

Conclusions: we can make rigorous arguments about the efficiency of lazy programs; make performance guarantees for a general-purpose optimization.

Further work: structured recursion; turn it into a typed theory (advantage: only need to worry about contexts where the types make sense); look at improvement of space usage and other resources; quantify improvement (how much faster? which optimization should be applied if there are alternatives?); automate it.

Q: Do you have completeness results?
A: It's not clear whether Worker/Wrapper improvement proofs can be translated from fixpoint style to improvement theory.

Q: Do you have any examples where Worker/Wrapper transformation makes things slower?
A: Of course we have come across these, but we wouldn't put that in the paper! (Laughs from the audience). [Tree reversal example follows]

Q: Problem in GHC is the cost of building closures. How is this reflected in your framework?
A: Read early papers on improvement theory.

Q: How about local improvements?
A: This work does not look at the surroundings.

Q: Would things change dramatically if you applied this to a call by value language?
A: There has been some work on that.

Session 5: Context Dependence (Chair: Yukiyoshi Kameyama)

Compositional Semantics for Composable Continuations: From Abortive to Delimited Control

Paul Downen, Zena M. Ariola (University of Oregon)

Classical control: callcc is the classic control operator, going back to Scheme. It corresponds to classical logic (the way the λ-calculus corresponds to intuitionistic logic).

Delimited control: delimit the scope of effects; continuations compose like functions. These little changes make delimited control vastly more expressive than classical control.

λ-calculus + callcc operator and λμ-calculus are equivalent. Relaxing λμ syntactically gives Λμ which allows delimited control! Λμ in turn is equivalent to lambda + shift0 + reset0.

Classical control in detail. Operational semantics of callcc as an extension of CBV lambda-calculus given. Alternatively, we can understand callcc using an equational theory. But: it's weaker than operational semantics. Some programs can be evaluated to a value but equational theory for callcc cannot reach a value! How can we know that we have the whole context?

Calling a continuation never returns -- it "jumps". A jump is the same when inside a larger evaluation context so it delimits the usable extent of a continuation.
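
A small illustration of this jumping behaviour (my own example, using callCC from Haskell's Control.Monad.Cont rather than Scheme):

import Control.Monad (when)
import Control.Monad.Cont (Cont, runCont, callCC)

-- Invoking the captured continuation k aborts the rest of the block:
-- for negative input, 'return (n + 1)' is never reached.
clamp :: Int -> Int
clamp n = runCont (callCC $ \k -> do
            when (n < 0) (k 0)
            return (n + 1)) id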

λμ syntactically distinguishes jumps as "commands".

Relaxing the syntax. Λμ collapses term/command distinction. It is interpreted with the same rules only that we have more expressive meta-variables. No new constructs, no new rules. As typed calculi, Λμ is considered equivalent to Parigot's λμ. What happens if you ignore types?

shift and reset: common basis for delimited control. Continuations return and can be composed like functions.

λ-calculus + shift + reset is a proper subset of Λμ. What's missing?

Difference between shift and shift0: the latter removes its surrounding delimiter.

λ-calculus + shift0 + reset0 is equivalent to Λμ.

Λμ as a framework for delimited control gives provable observational guarantees about the operators, e.g. idempotency of reset.

More in the paper: can get nice reasoning principles using call-by-value, call-by-name, call-by-need; equational correspondence with compositional transformations.

Q: What do you mean by equality in this presentation?
A: Observational equality -- this is about tools for reasoning about programs rather than expressivity of the calculi.

Q: λμ has a well known type system but to introduce delimited control you had to relax the syntax. Is there a type system for this?
A: (?)

Q: Classical logic, intuitionist logic -- delimited logic?
A: YES! Should investigate this.

Coeffects: A Calculus of Context-Dependent Computation

Tomas Petricek (presenting), Dominic Orchard, Alan Mycroft (University of Cambridge)

Coeffects is the dual of Effects. Done!

This talk is about the context in a type judgement.

Effect systems: Γ ⊢ e : τ & σ
Coeffects: Γ & x ⊢ e : τ

Interesting properties of Γ ("view from the extreme left"): variable related -- liveness, bounded linear logic, data-flow programming, provenance; environment related -- implicit parameters, type classes, distributed computing, platform versioning.

Coeffect calculus for bounded reuse: annotate each variable in the environment with the number of its uses in the expression, essentially tracking the number of variable uses.
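
A plausible instance, purely as illustration (using the judgement shape above, not necessarily the paper's exact notation): x : Int & 2 ⊢ x + x : Int, i.e. the context records that x is used twice in the body.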

Variable, abstraction and contraction rules shown.

Another possibility: coeffect calculus with historical values. Replace addition with max operation in the contraction rule.

Environment related: coeffect calculus for implicit parameters. Example: implicit parameter for time zone and time. We can calculate how many parameters are still required (i.e. not given by the current environment). The corresponding rule is non-deterministic.

What's in the paper? Unified system: coeffect scalar structure (a generalized semiring structure) and shape-indexed coeffects (context splitting and merging, per-variable or whole-context). Also have monadic and comonadic semantics -- not simple as λ-calculus is asymmetric (many to one)!

Why do coeffects matter? Generalize interesting systems (liveness, data-flow, implicit parameters, type classes) and (indexed) comonads are cool.

Q: What happens if you have (co)monads in the classical calculus?
A: Interesting question.

Program Chair's Report

Quick paper stats:

28 accepted papers out of 97 submissions.

2/9 functional pearls and 2/3 experience reports accepted.

299 reviews -- at least three per paper.

72 hour author response period (as usual).

It was decided not to allow papers from members of the program committee.

Comparison with previous years: number of papers approximately stable, share of accepted papers even more so.

Functional pearls are accepted at a lower rate than regular papers.

Author's location: USA, UK, France, Japan, Netherlands, Sweden, Germany, other.

UK has highest acceptance rate, then USA.

Papers submitted only a couple of hours before the deadline were less likely to be accepted -- before that there seems to be no correlation between submission time and acceptance probability.

Big thanks to the program committee!

Deputy Lord Mayor of Gothenburg: invitation for drinks, short history of the city.

31Aug/140

ICFP 2014: Day 0 (WGP)

Posted by Leonhard Markert

The 19th International Conference on Functional Programming will begin tomorrow. Two affiliated events, the 10th Workshop on Generic Programming (WGP) and the Workshop on Higher-Order Programming with Effects (HOPE), take place today.

WGP Session 1

Invited talk: Functional Programming, Object-Oriented Programming and Algebras

Bruno Oliveira (University of Hong Kong)

How to achieve modularity, type-safety and reuse without requiring a PhD in type theory? Bruno Oliveira proposes algebras as an alternative to algebraic datatypes and OO hierarchies, more specifically: variants of F-Algebras. Fold Algebras (using products of functions) and F-Algebras (using sums of products) are isomorphic.

An example is given of encoding generalized algebraic datatypes (GADTs) using type classes in a way similar to Church encodings, due to Hinze (2003), from the time when there were no GADTs in Haskell.

Similarly, in OO, Church encodings were used to model the visitor pattern.

The Expression Problem: a problem of modularity. Given an existing type of expressions, how do you extend it (e.g. how do you extend data Exp = Lit Int | Add Exp Exp with multiplication)?

Solution 1: encode algebras as type classes, extend using subclasses.
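
A minimal Haskell sketch of Solution 1 (my own names, not code from the talk):

-- The base algebra: literals and addition.
class ExpAlg a where
  lit :: Int -> a
  add :: a -> a -> a

-- Extend the language by extending the algebra with a subclass.
class ExpAlg a => MulAlg a where
  mul :: a -> a -> a

-- An interpretation is an instance; existing instances need no changes.
newtype Eval = Eval { eval :: Int }

instance ExpAlg Eval where
  lit n                 = Eval n
  add (Eval x) (Eval y) = Eval (x + y)

instance MulAlg Eval where
  mul (Eval x) (Eval y) = Eval (x * y)

-- A term in the extended language, polymorphic in the interpretation.
example :: MulAlg a => a
example = mul (lit 2) (add (lit 3) (lit 4))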

Solution 2: use F-Algebras (Datatypes à la Carte)

In OO languages, can also solve the Expression Problem with algebras: use interface inheritance.

All current solutions using algebras still require heavy encodings. In the future: programming language support for algebras?

WGP Session 2

Generic Constructors and Eliminators from Descriptions

Larry Diehl (presenting), Tim Sheard (Portland State University)

Abstract: Dependently typed languages with an “open” type theory introduce new datatypes using an axiomatic approach. Each new datatype introduces axioms for constructing values of the datatype, and an elimination axiom (which we call the standard eliminator) for consuming such values. In a “closed” type theory a single introduction rule primitive and a single elimination rule primitive can be used for all datatypes, without adding axioms to the theory.

Abstract continued: We review a closed type theory, specified as an Agda program, that uses descriptions for datatype construction. Descriptions make datatype definitions first class values, but writing programs using such datatypes requires low-level understanding of how the datatypes are encoded in terms of descriptions. In this work we derive constructors and standard eliminators, by defining generic functions parameterized by a description. Our generic type theory constructions are defined as generic wrappers around the closed type theory primitives, which are themselves generic functions in the Agda model. Thus, we allow users to write programs in the model without understanding the details of the description-based encoding of datatypes, by using open type theory constructions as an internal domain-specific language (IDSL).

Ornaments in Practice

Thomas Williams (presenting), Pierre-Évariste Dagand, Didier Rémy (INRIA)

We often write very similar functions for structurally similar data types, e.g. "add" for natural numbers and "append" for lists. Natural numbers and lists have the same recursive structure and a coherent property, a "length" attribute:

add (length ml) (length nl) = length (append ml nl)

A simple projection function exists from lists to natural numbers: the "length" function.

The motivation for this paper was to add ornaments to ML. Naturals/lists example: an ornament is defined by a projection function from the ornamented type to the bare type, e.g. "length". Declare an ornament as

ornament from length : a list -> nat

The system checks whether recursive structure is preserved.

Syntactic lifting is how "append" can be derived from "add" almost automatically. Given

let rec add m n = match m with
  | Z -> n
  | S m' -> S (add m' n)

and the lifting expression

let lifting append from add with
  {length} -> {length} -> {length}

the system can derive

let rec append ml nl = match ml with
  | Nil -> nl
  | Cons(x, ml') -> Cons(?, append ml' nl)

But the ? is ambiguous (it's the "creative part") and needs to be completed manually, by code inference (heuristically) or a "patch".

A patch would look like

let append from add
  with {length} -> {length} -> {length}
  patch fun _ -> match _ with Cons(x, _) -> Cons({x}, _)

Applications of this: for refactoring (see question further down), or to derive an evaluation function (uniquely) from "conv", a bijective function between expression types.

More complex data structures can be lifted too (e.g. Sets and Maps); so can higher-order functions. Lifting for GADTs is also possible and is automatic for some invariants given the expected type of the function.

Conclusions: describing ornaments by projection is a good fit for ML, syntactic lifting gives good predictable results. Future work: better patches, integration into ML, combining ornaments.

Q: Projections seem very similar to abstraction functions in abstract interpretation.
A: Could be seen as abstract interpretation.

Q: Is this weaker than what was presented in the 2012 paper? There you could compute ornaments out of indices.
A: Indices flow in the other direction in the presented approach.

Q: Refactoring example. Normally want to throw old code away and keep new code. Here we get some conversion between old and new.
A: Yes, could remove old code.

Q: How about code maintenance?
A: Patches are part of the answer.

Q: Can the patch language also talk about the logical parts?
A: That would not be easy.

Type Inference for the Spine View of Data

Matthew Roberts (presenting), Tony Sloane (Macquarie University)

Context: rho-Calculus (theory of term rewriting), Pattern Calculus and bondi, Stratego, "Scrap your Boilerplate" (SYB paper), kiama, Reloaded/Revolutions.

Motivation: precisely characterize the type inference machinery needed to support the spine view of data.

Result: need to add "ispair" expression (see further down) to first-class polymorphism (FCP), which is a small extension of Hindley-Milner.

Fully applied constructor view means data equals constructor with all arguments (fully applied). When arguments are missing this is a function, so we can't pattern match against it.

Spine view only knows nodes of arity two or zero. This means that all pattern matching can be done using

ispair d bind (x, y) in e1 else e2

which is morally equivalent to

case c of o x y -> e1
          _     -> e2

Here x refers to the constructor and all its arguments except the last one, and y is the last argument.
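
For intuition, a minimal Haskell sketch of the spine view in the style of "Scrap your Boilerplate Reloaded" (the names are mine, not the paper's):

{-# LANGUAGE GADTs #-}

-- Every value is either a bare constructor (arity zero) or a "pair":
-- a partially applied constructor together with its last argument.
data Spine a where
  Con    :: a -> Spine a
  (:<>:) :: Spine (b -> a) -> b -> Spine a

-- Rebuild the original value from its spine.
fromSpine :: Spine a -> a
fromSpine (Con c)    = c
fromSpine (f :<>: x) = fromSpine f x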

Examples: generic map and fold.

Implementations: the "decidedly generic compiler" dgen and "kiama".

Q: Why is complete type inference so important?
A: The semantics of doing spine view with only one construct took many attempts; this one allows for type inference so it was chosen over the others.

Q: Have you tried to use FCP on other spine view representations?
A: With FCP, you trade off type annotations with data annotations.

WGP Session 3

First-class Isomorphic Specialization by Staged Evaluation

Alexander Slesarenko (presenting), Alexander Filippov, Alexey Romanov (Huawei Technologies)

There are three types of program specialization: partial evaluation, supercompilation and isomorphic specialization. Idea: we have A, B, P such that P : A -> B; also A', B' which are isomorphic to A and B, respectively, and P' : A' -> B' which can be computed more efficiently than P. Potentially there are multiple options for A', B' and P'. How can we use P' on A and B? (A and B: 'domain' data; A', B': 'core' data)

The example used is matrix-vector multiplication. Matrices, for example, can be represented as dense (a vector of vectors) or sparse. This is implemented as an EDSL (embedded domain specific language) in Scala. Here we can abstract over dense and sparse vectors and matrices. The specialization can take place at runtime, and we can allow for alternative specialized versions. Specialization can lead to faster code.

Generic composition of isomorphisms -- idea: build isomorphisms for each constructor in the core language. These isomorphisms are first class citizens of the framework.

How it works: staged method invocation with graphs.

Take-home agenda:

  1. Develop algorithm in EDSL
  2. Identify isomorphic representations in the domain language
  3. Implement ADTs using a core language and isomorphic representations
  4. Generate representation-specific implementations in the core language

Implementation can be found at github.com/scalan.

Q: How did you proceed? Paper includes theory and implementation notes
A: Original idea was to use graphs, that was the starting point

Q: You talk about large performance increases compared to original Scala implementation. Have you compared with C/C++?
A: No.

Algebraic Effects and Effect Handlers for Idioms and Arrows

Sam Lindley (University of Edinburgh)

Idea due to Plotkin: algebraic effects describe abstract computations. Abstract computations are trees (modulo equations).

Monad trees are unrestricted abstract computation trees (dynamic control and data flow). Arrow trees are monad trees where only static control flow is allowed (static control flow, dynamic data flow). Idiom trees (idiom = applicative functor) are arrow trees where only static data flow is allowed (static control and data flow).

Examples are given for each monad, arrow, and idiom; the control flow and data flow trees show the dynamic / static nature.

Static computations are useful as they allow optimisations and low-level implementations (arrows correspond approximately to circuits, for example).

Key contribution: using Flow Effects, we can write monad/arrow/idiom code in a single style -- this is very different from Haskell, where different notations exist for each.

A handler is an interpreter for an abstract computation. It is defined as a fold over a computation tree, specifying how return values and operations are interpreted. Given a monad handler, we can derive an arrow and idiom handler.

Q: Can it be inferred from a Flow Effects expression whether the computation it defines is a monad, an arrow or an idiom?
A: Yes.

Q: What's the status of the Flow Effects syntax?
A: Not finalised yet -- the syntax shown in the paper is a "mathematical" syntax with braces, indices etc.

A short discussion followed on whether a unifying notation is a good idea -- maybe it is a good thing that we can tell from the syntax whether a computation is a monad, an arrow or an idiom?

Scoping Rules on a Platter -- A Framework for Understanding and Specifying Name Binding

Larisse Voufo, Marcin Zalewski (presenting), Andrew Lumsdaine (Indiana University)

Name binding: for a given reference (use), where is the corresponding declaration (what it refers to)?

In order to understand and unambiguously define name binding rules, a number of name binding combinators has been developed.

Scope combinators are: hiding, merging, opening, weak hiding.

Together they provide a language that describes how scopes are combined to look up a reference. These are then used to describe how a specific language does name binding. The example is C++: the standard devotes over 60 pages in total to name binding rules (and it turns out both Clang and GCC get operator resolution wrong).

Discussant: Ilya Sergey

Q: Could we use attribute grammars for name binding?
A: Finding a grammar for C++ has been attempted many times... It's hard to take a principled approach and apply it to C++.

Q: Did you find inconsistencies in the C++ standard when you tried to collect the scoping rules?
A: No, to our surprise -- it's complex but consistent.

Q: Did you come up with any ideas to simplify the rules along the way?
A: There don't seem to be any ways to simplify the rules (except using a different language).

WGP Session 4

Composing and Decomposing Data Types -- A Closed Type Families Implementation of Data Types à la Carte

Patrick Bahr (University of Copenhagen)

This paper was inspired by the recent addition of closed type families to the Glasgow Haskell Compiler (GHC). Closed type families were used to implement Data Types à la Carte, specifically the subtyping constraint :<:

Data Types à la Carte -- idea: decompose data types into two-level types. Express recursive data types as the fixpoint of a functor. These functors can be combined by coproduct construction :+:.

Example:

-- recursive data type
data Exp = Val Int | Add Exp Exp

-- as fixpoint of a functor
data Arith a = Val Int | Add a a
type Exp = Fix Arith

-- combine by coproduct construction
data Mul a = Mul a a
type Exp' = Fix (Arith :+: Mul)

The subtyping constraint is easily expressed as a type class with an injection function inj and a projection function prj. But this approach treats :+: asymmetrically, doesn't inspect the left-hand side and is ambiguous.
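
For reference, the classic Data Types à la Carte definitions under discussion look roughly like this (a hedged sketch following Swierstra's original formulation, with prj as in compdata-style variants, not this paper's new implementation):

{-# LANGUAGE TypeOperators, MultiParamTypeClasses #-}

newtype Fix f = In (f (Fix f))

-- Coproduct of two signature functors.
data (f :+: g) a = Inl (f a) | Inr (g a)

-- Subtyping constraint: f can be injected into, and projected out of, g.
class (Functor f, Functor g) => f :<: g where
  inj :: f a -> g a
  prj :: g a -> Maybe (f a)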

Contribution of this paper: a re-implementation of :<: which behaves as expected (f :<: g <==> there exists a unique injection from f to g), avoids ambiguous subtyping (by rejecting multiple occurrences of signatures) and allows expressing isomorphism.

Implementation: Have a type-level function Embed taking signatures f, g as arguments. Produces proof object p for f :<<: g -- a relaxed version of :<: allowing ambiguity -- and checks whether p proves f :<: g. We can then derive implementations of inj and prj using the proof object as an oracle in instance declarations.

The code for Embed, Sub and helper type classes is shown. It's all contained in the "compdata" package.

Caveats: unhelpful error messages; compile time performance is unpredictable.

Discussant: Wouter Swierstra

With this approach we gain lots of expressive power, e.g. control over how the search proceeds, at the price of more type level programming.

C: A solution for the error messages apparently exists. Also, Idris does this really nicely (it's also much more complicated).

Q: Motivation: want to use sums of products. Interesting tradeoff by maybe focusing on coproducts of products?
A: ?

Q: Would instance chains have solved your problem?
A: Yes.

Q: All this type hackery for deriving injections and projections -- how about just providing them every time?
A: Of course you can do that, but it's inconvenient and obvious from the type. If you need more flexibility, you can still do it manually.

Q: No canonical ordering for injection (eg sum of int and bool) -- is this a problem?
A: Yes, at the moment you have to annotate.

True Sums of Products

Edsko de Vries, Andres Löh (presenting) (Well-Typed LLP)

A new approach to datatype-generic programming in Haskell; notable features: faithful representation of datatypes as n-ary sums of n-ary products; a library of high-level combinators to encourage concise and reusable functions; separation of metadata in the representation.
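
For orientation, the n-ary sums and products at the heart of such a representation look roughly like the following (my transcription of the idea; the released library may differ in details):

{-# LANGUAGE DataKinds, GADTs, KindSignatures, PolyKinds, TypeOperators #-}

-- n-ary product over a type-level list of field types.
data NP (f :: k -> *) (xs :: [k]) where
  Nil  :: NP f '[]
  (:*) :: f x -> NP f xs -> NP f (x ': xs)

-- n-ary sum: pick exactly one branch.
data NS (f :: k -> *) (xs :: [k]) where
  Z :: f x -> NS f (x ': xs)
  S :: NS f xs -> NS f (x ': xs)

-- A datatype is then a sum (choice of constructor) of products (its fields).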

Available on Hackage as "generics-sop". "basic-sop", "lens-sop" and "json-sop" provide examples, generic lenses and generic JSON (de)serialization respectively.

Motivation was to improve on existing models: GHC.Generics representation is too constrained (binary sums and products) but at the same time too flexible (arbitrary nesting). Also has noisy metadata everywhere. Functions end up making implicit assumptions about the structure of the data which is not guaranteed by the type system.

Discussant: Patrik Jansson

Q: What kind of code does this generate?
A: Not looked at yet. But lots of implicit arguments are potentially being passed around at runtime.

Q: Can this be done better in Agda?
A: Library was completely developed in Haskell. Can probably be done more nicely in Agda, but in this case the development was driven by an actual need to deal with certain generics problems in Haskell.

Q: n-ary lists are left-biased: don't you run into similar performance problems as with binary representation?
A: List-based representation gives uniform encoding for all constructors. But probably not ideal for data types with thousands of constructors.