{"id":1819,"date":"2014-09-03T05:16:41","date_gmt":"2014-09-03T05:16:41","guid":{"rendered":"http:\/\/www.syslog.cl.cam.ac.uk\/?p=1819"},"modified":"2016-10-04T13:21:14","modified_gmt":"2016-10-04T13:21:14","slug":"icfp-2014-day-3","status":"publish","type":"post","link":"https:\/\/www.syslog.cl.cam.ac.uk\/2014\/09\/03\/icfp-2014-day-3\/","title":{"rendered":"ICFP 2014: Day 3"},"content":{"rendered":"

Good morning from the third and final day of the 19th International Conference on Functional Programming. As ever, Leo and I (Heidi) are here to bring you the highlights.

#### Keynote (Chair: Edwin Brady)

#### Depending on Types

Stephanie Weirich (University of Pennsylvania)

Is GHC a dependently typed PL? Yes*

The story of dependently typed Haskell! We are not claiming that every Agda program could be ported to Haskell.

Example: red-black trees (RBTs), from Agda to Haskell.

All the code for today's talk is on GitHub: https://github.com/sweirich/dth

Example of insertion into an RBT, following Okasaki (1999).

How do we know that insert preserves the RBT invariants? We are going to use types.

Example: RBTs in Agda, from Licata. We can now port this to GHC using GADTs and datatype promotion.
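To make this concrete, here is a hedged sketch (not Weirich's actual code, which is in the repository above) of the kind of GADT plus promoted-datatype encoding being described: the colour and black height of a tree are tracked in its type, so ill-formed trees are unrepresentable.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

-- Colours and black heights, promoted to the type level by DataKinds.
data Color = Red | Black
data Nat   = Z | S Nat

-- A tree indexed by its root colour and black height. The constructors
-- encode the invariants: red nodes have black children, and both
-- children of a node have the same black height.
data Tree :: Color -> Nat -> Type where
  Leaf   :: Tree 'Black 'Z
  RedN   :: Tree 'Black n -> Int -> Tree 'Black n -> Tree 'Red n
  BlackN :: Tree cl n -> Int -> Tree cr n -> Tree 'Black ('S n)

-- A well-formed red-black tree: black root, any black height.
data RBT where
  Root :: Tree 'Black n -> RBT
```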

Haskell distinguishes types from terms; Agda does not.

In Haskell, types are erased at runtime, unlike in Agda.

Datatype promotion: a recent extension to GHC; we can't promote things like GADTs, though.
GADTs: introduced to GHC 10 years ago; the real challenge was integration with Hindley-Milner type inference.

Silly Type Families: GADTs from 20 years ago.

Our insert for RBTs needs to temporarily suspend the RBT invariants, but how can we express this in Haskell's type system?

Singleton types: the standard trick for languages with a type-term distinction.
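As a generic illustration (my own minimal sketch, not the talk's code): a singleton type has exactly one value for each promoted index, so pattern matching on the term-level witness refines the type-level index.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

data Nat = Z | S Nat

-- The singleton: one constructor per promoted Nat constructor.
data SNat :: Nat -> Type where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- Length-indexed vectors.
data Vec :: Nat -> Type -> Type where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Matching on the SNat witness tells the type checker what n is.
vreplicate :: SNat n -> a -> Vec n a
vreplicate SZ     _ = VNil
vreplicate (SS n) x = VCons x (vreplicate n x)
```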

The key difference between Agda and Haskell: totality.

What's next for GHC? Datatype promotion only works once.

Wishlist:
- TDD: type-driven development (as described by Norman yesterday)
- totality checking in GHC
- extended type inference
- programmable error messages
- IDE support: automatic case splitting, code completion and synthesis

GHC programmers can use dependent types*

Q: Why not use incr in the definition to unify the two constructors?
A: You can; I was just simplifying for the explanation.

Q: It seems like this increases cost; is that the case?
A: Yes, but not by too much.

Q: Are dependent types infectious?
A: Good question. No: we can add a Set interface on top of this RBT to abstract it away.

Q: To what extent is Haskell the right place for these new features?
A: It's a great testbed at industrial scale, and we can take lessons back into dependently typed languages.

Q: Can you do this without type families?
A: Yes; I just wanted to show type families.

Q: Why not just use Agda?
A: Haskell has a large user base, a significant ecosystem, and two decades of compiler development.

Q: What's the story for keeping type inference predictable?

Q: What level of students could use these ideas?
A: I've used them in an advanced programming course for seniors.



#### Session 10: Homotopy Type Theory (Chair: Derek Dreyer)

#### *Homotopical Patch Theory*

Carlo Angiuli (Carnegie Mellon University), Ed Morehouse (Carnegie Mellon University), Daniel Licata (Wesleyan University), Robert Harper (Carnegie Mellon University)

Idea: define a patch theory within Homotopy Type Theory (HoTT).

Functorial semantics: define equational theories as functors out of some category. A group is a product-preserving functor G -> Set.

Homotopy Type Theory: a constructive, proof-relevant theory of equality inside dependent type theory. Equality proofs a = b are identifications of a with b.

We can also have identifications of identifications.

Higher inductive types introduce non-sets: arbitrary spaces.

A patch theory defines repositories and changes in the abstract: it models repositories, changes, and the patch laws governing them.

Key idea: think of patches as identifications!

Bijections between sets X and Y yield identifications X = Y.

Example: the repository is an integer; the only patch adds one. Everything works out nicely.

Next example: the repository is a natural number; the only patch adds one. Problem: we can't always subtract one (which would be the inverse of adding one)! This is solved using singleton types.
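Not HoTT, but as a rough intuition in plain Haskell (my own sketch): in the integer example a patch is an invertible change, and patches compose and invert the way identifications do.

```haskell
-- Patches on an integer repository: "add one" and its inverse.
data Patch = AddOne | SubOne
  deriving Show

apply :: Patch -> Integer -> Integer
apply AddOne = (+ 1)
apply SubOne = subtract 1

invert :: Patch -> Patch
invert AddOne = SubOne
invert SubOne = AddOne

-- Applying a patch and then its inverse is the identity.
applyAll :: [Patch] -> Integer -> Integer
applyAll ps repo = foldl (flip apply) repo ps

-- For a natural-number repository, SubOne would no longer be total
-- (0 has no predecessor), which is the problem the singleton-type
-- trick in the talk works around.
```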

Closing thoughts: computation vs. homotopy. There is a tension between equating terms by identifications and distinguishing them by computations. Analogy: function extensionality already equates bubble sort and quicksort -- they are the same function but different programs. Computation is finer-grained than equality.

C: Very understandable talk! But I still need to see a talk or paper where Homotopy Type Theory guides me to something new.
A: The functorial part comes for free within HoTT. Also, we're forced to do everything functorially.

Q: Patch computation -- does your theory say anything about that?
A: It didn't work out nicely.

Q: Can you give me some intuition about the singleton type used in the naturals example?
A: This is a hack. We want a directed type that does not have symmetry.

Q: What about higher layers?
A: We don't need to go further than the second layer when dealing with sets.

#### *Pattern Matching without K*

Jesper Cockx (KU Leuven), Dominique Devriese (KU Leuven), Frank Piessens (KU Leuven)


Dependent pattern matching is very powerful, but HoTT is incompatible with it. The reason: it depends on the K axiom (uniqueness of identity proofs), which HoTT does not allow. But not all forms of dependent pattern matching depend on the K axiom!
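For reference (the standard formulation, not quoted from the paper), the K axiom says that a predicate on self-identifications of x holds as soon as it holds for reflexivity, i.e. every proof of x = x may be treated as refl:

```latex
K \;:\; \prod_{A : \mathsf{Type}} \; \prod_{x : A} \;
        \prod_{P : (x = x) \to \mathsf{Type}} \;
        P(\mathsf{refl}_x) \to \prod_{p : x = x} P(p)
```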

Dependent pattern matching: we have a dependent family of types; an example is given of pattern matching with and without dependent pattern matching.

K is incompatible with univalence.

How to avoid K: don't allow deleting reflexive equations, and when applying injectivity on an equation c s = c t of type D u, the indices u should be self-unifiable.

Proof-relevant unification: eliminating dependent pattern matching. 1. Basic case analysis: translate each case split to an eliminator. 2. Specialization by unification: solve the equations on the indices. 3. Structural recursion: fill in the recursive calls.

Instead of McBride's heterogeneous equality, use homogeneous telescopic equality.

Possible extensions: detecting types that satisfy K (i.e. sets -- allow deletion of "obvious" equations like 2 = 2); implementing the translation to eliminators (this is being worked on, or is done but not yet released); extending pattern matching to higher inductive types.

Conclusion: by restricting the unification algorithm, we can make sure that K is never used -- we no longer have to worry when using pattern matching for HoTT!

Q: What was your experience contributing to Agda?
A: Not a huge change -- not too hard.

Q: Is there a different way of fixing the "true = false" proof?
A: K infects the entire theory.

Q: You said the criterion is conservative. Have you tried recompiling Agda libraries? What did you observe?
A: 36 errors in the standard library, 16 of which are easily fixable; 20 are unfixable or it is unknown how to fix them.

Q: Is K infectious?
A: Yes -- when activating the "without K" option, you cannot use a library that defines something equivalent to K.



#### Session 11: Abstract Interpretation (Chair: Patricia Johann)

#### *Refinement Types For Haskell*

Niki Vazou (UC San Diego), Eric L. Seidel (UC San Diego), Ranjit Jhala (UC San Diego), Dimitrios Vytiniotis (Microsoft Research), Simon Peyton Jones (Microsoft Research)

Refinement types: types with a predicate from a logical sub-language.

These can be used to give functions preconditions.

Example: division by zero.
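A small sketch of what this looks like in LiquidHaskell-style syntax (treat the details as approximate rather than lifted from the paper): the refinement on the divisor rules out division by zero at compile time.

```haskell
-- The divisor's type is refined to the non-zero integers.
{-@ safeDiv :: Int -> {v:Int | v /= 0} -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv n d = n `div` d

good :: Int
good = safeDiv 42 7        -- accepted: 7 /= 0 is provable

-- bad = safeDiv 42 0      -- rejected by the refinement type checker
```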

ML, F# and F* have had refinement type implementations, but they are call-by-value languages! How can this be translated to Haskell?

CBV-style typing is unsound under call-by-name evaluation.

Encode subtyping as a logical verification condition (VC).

Under call-by-name, binders may not be bound to values! How do we encode "x has a value" in CBN? Introduce labels on types (giving stratified types): label the types of expressions that provably reduce to a value.

How to enforce stratification? Termination analysis! Refinement types can be used for this too.

Non-terminating code can still be checked (Collatz example).

How well does it work in practice? LiquidHaskell is an implementation of a refinement type checker for Haskell.

Evaluation: most Haskell functions can be proved terminating automatically.

Q: What's a stratified algebraic data type?
A: It's an ADT with a label.

Q: How do you prove termination of structurally recursive functions?
A: Map to numbers (e.g. a list to its length).

Q: Refinement types for call-by-push-value?
A: There is some work on refinement types that does not depend on the evaluation strategy.

Q: In the vector algorithms library, every function had to be annotated.
A: There the first argument usually increases rather than decreasing, so we had to write a custom metric, but this was simple.

Q: How does this interact with other features of Haskell, like GADTs?
A: LiquidHaskell works on Core, so there should be no problem.

Q: Did you find any bugs with your termination analysis of the different libraries?
A: Yes, a tiny bug in Text; it has been fixed now.

#### *A Theory of Gradual Effect Systems*

Felipe Bañados Schwerter (University of Chile), Ronald Garcia (University of British Columbia), Éric Tanter (University of Chile)

Programs produce results and side effects. We can use type-and-effect systems to control this. Here, functions have an effect annotation: a set of effects that the function may cause. Expressions are also annotated with their effects.

We get into trouble when passing an effectful function to another function: as a safe approximation, the higher-order function must always assume that the function passed in may have any possible effect.

Introduce an "unknown" effect (written as an upside-down question mark). Combine this with a generic type-and-effect system, which comes with "adjust" and "check" functions.

Use abstract interpretation to lift adjust and check from static to gradual typing. Idea: extend the domain of effect sets to include "unknown" side effects.

Define a concretization function from consistent privileges to the feasible set of effect sets.

Introduce an internal language with "has" and "restrict" constructs.
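A hedged Haskell sketch of the flavour of this (my own modelling, not the paper's formalism): privileges are either a known effect set or unknown; the unknown annotation concretizes to every effect set, so checks that might succeed pass statically and are deferred to run-time checks in the internal language.

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

data Effect = Read | Write | Alloc
  deriving (Eq, Ord, Show, Enum, Bounded)

-- Gradual privileges: a known set of effects, or the unknown annotation.
data Privileges = Known (Set Effect) | Unknown
  deriving Show

-- Concretization: all concrete effect sets a gradual annotation may denote.
concretize :: Privileges -> [Set Effect]
concretize (Known s) = [s]
concretize Unknown   =
  Set.toList (Set.powerSet (Set.fromList [minBound .. maxBound]))

-- Consistent check: succeed statically if *some* concretization permits the
-- effect; Unknown therefore passes statically and is checked dynamically.
mayPerform :: Privileges -> Effect -> Bool
mayPerform p e = any (Set.member e) (concretize p)
```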

Future work: implementation (currently targeting Scala); blame.

Q: Can you always derive the concretization function?
A: It is provided by us.

Q: If you're doing dynamic checking and dealing with arrays, but you can't raise errors -- what are you supposed to do?
A: You don't want to produce a side effect when you're not allowed to do so.

Q: Does your system allow parameterization over effect sets?
A: No, not in this paper. The implementation requires it, though, so this is future work.



#### Session 12: Dependent Types (Chair: Ulf Norell)

#### *How to Keep Your Neighbours in Order*

Conor McBride (University of Strathclyde)

Conclusion first: push types in to type less code!

A very engaging and amusing talk. Conor demonstrates (with lots of Agda code!) how to use types to build correct binary heaps and 2-3 trees.

Take-aways: write types which push requirements in; write programs which generate evidence.
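The talk's code is Agda; as a rough Haskell analogue of "pushing the requirement in" (my own sketch, not Conor's), here is a list type indexed by a lower bound, so that only ordered lists can be built and the ordering evidence is generated as the list is constructed.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

data Nat = Z | S Nat

-- Runtime witnesses for type-level naturals.
data SNat :: Nat -> Type where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- Evidence that m <= n: the requirement that is "pushed in".
data LE :: Nat -> Nat -> Type where
  LEZ :: LE 'Z n
  LES :: LE m n -> LE ('S m) ('S n)

-- A non-decreasing list whose elements are all bounded below by b.
data OList :: Nat -> Type where
  Nil  :: OList b
  Cons :: LE b n -> SNat n -> OList n -> OList b

-- [1, 2] with lower bound 0; the LE proofs are the generated evidence.
example :: OList 'Z
example = Cons LEZ (SS SZ) (Cons (LES LEZ) (SS (SS SZ)) Nil)
```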

Q: What does \"push requirements in\" mean?
\nA: Start from the requirements. The information that's driving the maintenance of the requirements is passed in. (?)<\/p>\n

#### *A Relational Framework for Higher-Order Shape Analysis*

Gowtham Kaki (Purdue), Suresh Jagannathan (Purdue)

Shape analysis in functional languages: ADTs track the shapes of data. Types can document the shape invariants of functions -- but we want to be able to refine these constraints!

Can we refine ML types to express and automatically verify rich relationships? We need a common language to express fine-grained shapes, built up from relations and relational algebra. Example: in-order and forward-order.

Relational operators are union and cross; predicates are equality and inclusion.

But relational types for polymorphic and higher-order functions must be general enough to relate different shapes at different call sites. Examples: id, treefoldl : a tree -> b -> (b -> a -> b) -> b.

Consider id : a -> a -- the shape of its argument is the shape of its result.

Functions can be parameterized over relations. We also get parametric relations: relations can be parameterized over relations.

We consider the "effectively propositional" fragment of many-sorted first-order logic. Key observation for the translation: a fully instantiated parametric relation can be defined in terms of its component non-parametric relations.

Implementation: CATALYST, tycon.github.io/catalyst/

Q: Does your type system allow the user to introduce new relations?
A: Yes.

Q: Can you compile any kind of parametric relation, or are there restrictions?
A: You can also write relations on infinite data structures.



#### Session 13: Domain Specific Languages II (Chair: Yaron Minsky)

#### *There is no Fork: an Abstraction for Efficient, Concurrent, and Concise Data Access*

Simon Marlow (Facebook), Louis Brandy (Facebook), Jonathan Coens (Facebook), Jon Purdy (Facebook)

Imagine you are writing a web program: the server-side code for generating a webpage, drawing on many data sources. For efficiency, we need concurrency, batching and caching. We could brute-force this, but that destroys modularity and gets messy.

What about using explicit concurrency? It relies on the programmer too much: they forget to fork, or add false dependencies.

A rule engine: take input, evaluate the rules and return.

Consider coding a blog; the example code uses the Fetch monad. The concurrency is implicit in the code: just use the monad and applicative operations.

There is no manual batching either; again, it is implicit in the code.

Demonstrating code for map and applicative. The monadic bind gives a tree of dependencies for I/O.
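A much-simplified sketch of the idea (not Haxl's real types; the names here are illustrative): the Applicative instance is where implicit concurrency and batching come from, because `<*>` can collect the blocked requests from both sides before continuing, whereas monadic bind must run its left side first.

```haskell
-- A toy Fetch: either a result, or a batch of pending requests plus a
-- continuation to resume once they have been fetched.
type Request = String   -- stand-in for a real data-source request

data Fetch a
  = Done a
  | Blocked [Request] (Fetch a)

instance Functor Fetch where
  fmap f (Done a)       = Done (f a)
  fmap f (Blocked rs k) = Blocked rs (fmap f k)

instance Applicative Fetch where
  pure = Done
  Done f       <*> x            = fmap f x
  Blocked rs k <*> Done a       = Blocked rs (k <*> Done a)
  Blocked rs k <*> Blocked ss l = Blocked (rs ++ ss) (k <*> l)  -- batch both sides

instance Monad Fetch where
  Done a       >>= f = f a
  Blocked rs k >>= f = Blocked rs (k >>= f)   -- bind cannot look ahead

-- A single data fetch blocks on one request.
fetchReq :: Request -> Fetch ()
fetchReq r = Blocked [r] (Done ())
```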

This was developed for the Haxl project at Facebook, migrating from an in-house DSL called FXL to Haskell. The framework is open source: github.com/facebook/haxl, and cabal install haxl.

Implicit parallelism is hard in general, but data fetching doesn't suffer from this problem.

Q: Is bind linear in the depth of blocked requests?
A: Let's take it offline.

Q: Is there accidental sequential code?
A: We haven't rolled it out yet.

#### *Folding Domain-Specific Languages: Deep and Shallow Embeddings (Functional Pearl)*

Jeremy Gibbons (University of Oxford), Nicolas Wu (University of Oxford)

They introduce a trivial toy language for the examples, and define a simple deep embedding and a shallow embedding for it. A shallow embedding must be compositional. The toy language has eval and print; print is messy to add to the shallow embedding.

Dependent interpretations are introduced for the deep embedding; this is again possible for the shallow embedding, but not great. Context-sensitive interpretation can again be expressed as a fold.

Deep embeddings are ADTs; shallow embeddings are folds over that AST.
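For a flavour of the contrast (a generic toy of my own, not necessarily the pearl's own language): the deep embedding is a data type with interpretations as folds over it, while the shallow embedding interprets each construct immediately, so supporting both eval and print means folding into a product of semantics.

```haskell
-- Deep embedding: syntax as an ADT; eval and print are folds over it.
data Expr = Lit Int | Add Expr Expr

evalD :: Expr -> Int
evalD (Lit n)   = n
evalD (Add x y) = evalD x + evalD y

printD :: Expr -> String
printD (Lit n)   = show n
printD (Add x y) = "(" ++ printD x ++ " + " ++ printD y ++ ")"

-- Shallow embedding: each construct is its interpretation. To add a
-- second interpretation we change the semantic domain to a pair.
type ExprS = (Int, String)

litS :: Int -> ExprS
litS n = (n, show n)

addS :: ExprS -> ExprS -> ExprS
addS (m, s) (n, t) = (m + n, "(" ++ s ++ " + " ++ t ++ ")")

-- The same "program" in both embeddings:
deepExample :: Expr
deepExample = Add (Lit 1) (Lit 2)

shallowExample :: ExprS
shallowExample = addS (litS 1) (litS 2)
```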

Q: Has your method for shallow embedding sacrificed extensibility?
A: Yes, but we can work around it.

Q: What about binders, what about recursion? Your example is too simple.
A: Not sure.



#### Session 14: Abstract Machines (Chair: David Van Horn)

#### *Krivine Nets*

Olle Fredriksson (University of Birmingham), Dan Ghica (University of Birmingham)

We're good at writing machine-independent code. But how about architecture independence (CPU, FPGA, distributed system)?

This work: an underlying execution mechanism for supporting node annotations t @ A.

Approach: put your conventional abstract machines in the cloud.

A Krivine net is the instantiation of a network model with the distributed Krivine machine. Network models: synchronous message passing and asynchronous message passing.

Basic idea of execution: remote stack extension.
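As background (a minimal single-node sketch of my own; the paper's contribution is distributing this, which is not shown here), the Krivine machine evaluates lambda terms to weak head normal form using an environment of closures and a stack of pending arguments; "remote stack extension" grows parts of that stack on other nodes.

```haskell
-- Lambda terms with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term
  deriving Show

data Closure = Closure Term Env
type Env     = [Closure]
type Stack   = [Closure]

-- One transition of the machine; Nothing means a final state.
step :: (Term, Env, Stack) -> Maybe (Term, Env, Stack)
step (App t u, e, s)                = Just (t, e, Closure u e : s)  -- push argument
step (Lam t,   e, c : s)            = Just (t, c : e, s)            -- bind argument
step (Var 0,   Closure t e' : _, s) = Just (t, e', s)               -- enter closure
step (Var n,   _ : e, s) | n > 0    = Just (Var (n - 1), e, s)      -- walk environment
step _                              = Nothing

run :: Term -> (Term, Env, Stack)
run t = go (t, [], [])
  where go st = maybe st go (step st)
```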

Krivine nets perform much better than previous approaches (GOI and GAMC).

Future work on the runtime system: failure detection and recovery; may need GC.

Implementation and formalization: bitbucket.org/ollef

Q: What about more complex data, like arrays, which are expensive to send over the network?
A: We're working on this -- introducing a notion of mobile and immobile data.

#### *Distilling Abstract Machines*

Beniamino Accattoli (University of Bologna), Pablo Barenbaum (University of Buenos Aires), Damiano Mazza (Université Paris 13)

Setting: the lambda-calculus. There has been a lot of research into the relationship between abstract machines and substitution. We introduce the Linear Substitution Calculus (LSC), which decomposes beta-reduction. It is equipped with a notion of structural equivalence.

Origins: proof nets, a graphical language for linear logic. Multiplicative cut (symmetric monoidal closed category); exponential cut (for every A, !A is a commutative comonoid).

Introduce the Linear Substitution Calculus -- proof nets in disguise. It has a multiplicative and an exponential step.
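Roughly (recalled from the LSC literature rather than taken from the talk's slides), writing t[x\u] for an explicit substitution, L for a possibly empty list of explicit substitutions and C⟨·⟩ for a context, the two steps are:

```latex
% multiplicative step: beta "at a distance"
L\langle \lambda x.\, t \rangle\; u \;\to_{\mathsf{m}}\; L\langle\, t[x \backslash u] \,\rangle

% exponential step: replace one occurrence of x, keeping the substitution
C\langle x \rangle[x \backslash u] \;\to_{\mathsf{e}}\; C\langle u \rangle[x \backslash u]
```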

Then define reduction strategies on it: by name / by value (LR / RL) / by need each require a specific form of evaluation context together with specific versions of the multiplicative and the exponential step.

Distilleries: a choice of deterministic multiplicative and exponential evaluation rules plus an equivalence relation which is a strong bisimulation; an abstract machine with transitions defined by the chosen rules; and a decoding function. Moral: the LSC is exactly the abstract machine, "forgetting" the commutative transitions. In all cases, distillation preserves complexity.

Q: What does the fourth possible combination of multiplicative and exponential evaluation correspond to (both by value)?
A: That would be a sort of call-by-value, depending on how you define the context.

Q: Can you bring some order to the seemingly unrelated evaluation contexts?
A: With these contexts, you tell the machine where to look for the next redex. You can even do non-deterministic machines. At some point, your choice is limited if you want execution to be deterministic.

Q: Are these evaluation contexts maximally liberal while staying deterministic?
A: Yes.

Q: Going from reduction semantics to an abstract machine is a mechanical process. Is it possible to find an algorithm that generates distilleries?
A: I can do it by hand ... I don't know if there is an algorithm.

#### Student Research Competition Award Presentation

14 submissions; 13 accepted (7 graduate, 6 undergraduate); 3 winners for each category.

#### ICFP 2015 Advert & Closing

ICFP 2015 will take place in Vancouver, Canada: icfpconference.org/icfp2015

The venue is great -- connectivity, location, etc.

New: there will be a Mentoring Workshop for beginning graduate students doing research. Workshop proposal and paper submission deadlines are on the website.
