
ICFP 2014: Day 1

This is day 1 of the 19th International Conference on Functional Programming.

Opening by the General Chair, Johan Jeuring

Quick facts about ICFP 2014:

  • 340 people registered for ICFP
  • 466 people registered across all co-located events
  • most people are from Sweden, UK and USA
  • 21 sponsors, $60k in sponsorship

Keynote: Using Formal Methods to Enable More Secure Vehicles -- DARPA's HACMS Program

Kathleen Fisher (Tufts University)

A four-and-a-half-year program, running 2012--2017.

Many everyday objects today are computers! Infrastructure, medical devices, refrigerators, cars ... examples given for these things being hacked.

Focuses on cars: a "computer on wheels" -- hundreds of interconnected computing units. Everything is under control of software: braking, accelerating, locking.

The first attack vector was via the diagnostic bus, but this requires physical access (you need to be in the car). Later it was found that physical access is not necessary to take over a car (Bluetooth, emergency number, audio CD).

Documentation for car systems exists, and spare parts can be ordered online and analyzed (so no air gap, no security by obscurity).

Use the desktop computer approach? -- anti-virus scanning, intrusion detection systems, patching infrastructure. Problem: "security software" tends to introduce vulnerabilities. Also: embedded systems don't have the resources to monitor themselves all the time the way desktop computers do.

SAT solvers are getting faster; this allows us to use them to verify real software and hardware.

Clean-slate methods for high-assurance software: code synthesis, DSLs, Interactive Theorem Prover as PL.

Structure of the HACMS program: the technical areas are vehicle experts, OS (building on the verified seL4 kernel), control systems, research integration (management; composition: how to put high-assurance components together to create a bigger high-assurance component?; integration), and red team (which hints at the problem of how to assess the security of a system -- a current heuristic: how long does it take hackers to take over the system?).

Air team works on helicopter and quadcopter; ground team works on cars.

Demo video: WiFi-controlled quadcopter being taken over using aircrack-ng.

Every 18 months the red team gets access to the system and tries to find vulnerabilities.

Started with monolithic software, no RTOS, no security. After the first iteration: use a faster processor, put FreeRTOS and a HAL under the still-monolithic ArduPilot software. Later iterations: generate more and more code from new DSLs.

SMACCMCopter 18-month assessment: it flies! The air team can prove system-wide security properties like memory safety, ignoring malformed / non-authenticated messages; all good messages received will reach the motor controller. The red team found no security flaws in six weeks with full access to the source code. "Probably the most secure UAV on the planet" ("kinda scary").

Research integration (Rockwell Collins) used AADL to describe how hardware and software components fit together. Developed tools (AGREE, Resolute) to reason about the system. They are used to check assumptions about the system (example: there is only one way to communicate with the quadcopter).

Control systems (Galois) created Ivory and Tower, open-source EDSLs for synthesizing safe low-level code and tasks, respectively. Designed and built SMACCMCopter, the first high-assurance UAV software, in under two engineer-years.

OS (NICTA) proved full functional correctness of a fast OS microkernel, seL4 (now open source). Also working on eChronos, a high-assurance RTOS product line, and formally verified OS components (drivers, file system, etc.). CAmkES is a system for connecting and configuring these components.

Vehicle experts (Boeing) integrate HACMS technologies into the Unmanned Little Bird.

In phase two, the air team tries to re-architect (split) the system so it runs on a flight computer and a mission computer, as is usual in real-world scenarios. The red team will then try to gain control over the flight computer from the mission computer.

More detailed description of individual tasks follows.

Tech transition: a huge problem. The cyber-physical systems industry knows it has a cyber-security problem, but there are many barriers to adoption of HACMS-like technology.

C: Make it cool! Come up with a catastrophic event!
A: Well, this is a response to "you can take over a car wirelessly". Another aspect: it will probably be mandated soon for cars to talk to each other -- but remote code execution was never considered.

Q: Why not have quad-core processors in the car?
A: Very price-sensitive; the industry has tiny margins.

Q: How much does this project cost?
A: $70m.

Q: Research: top-down or bottom-up, which is better?
A: We need both. DARPA is bottom-up, NSF is top-down.

Q: You expect to pay more for high-assurance code. What is the cost? Especially for the second, third, ... project.
A: High-assurance code will always be more expensive. We hope to bring down the additional cost by developing tools that aid building high-assurance code. But have to consider: traditionally we pay people to write sloppy code; the cost from running sloppy code is never properly accounted for! Maybe high-assurance software is cheaper in the long run.

Q: Why did you replace the Arduino processor?
A: We wanted to start with an existing open source hardware platform.

Q: (bit unclear)
A: We're close to having one high-assurance system, the SMACCMCopter. Still far away from having a high-assurance car for example (one problem: no-one knows how many lines of code a normal car has!)

Q: How do you combine secure and unsecure code?
A: That's one of the challenges of course.

Q: How do you know the hardware does what you think it does?
A: In this project, we simply trusted the hardware. Other people are working on hardware issues.

Session 1: Domain Specific Languages I (Chair: Anil Madhavapeddy)

Building Embedded Systems with Embedded DSLs (Experience Report)

Patrick Hickey (presenting), Lee Pike, Trevor Elliott, James Bielman, John Launchbury (Galois Inc)

Open source: github.com/GaloisInc

Goal: build a high-assurance quadcopter controller (long-term: helicopter controller). Small embedded system (microcontroller), hard real-time, safe, secure. Had three engineers, 18 months, starting from scratch. Expected to write ~50kloc of C/C++ code.

Embedded systems are everywhere. They're basically computers from the 80s: small and cheap. Development tools are from the 80s too (C). You get all the security flaws you'd expect -- buffer overflows etc. Can't push patches, and there are more attack surfaces than ever.

Approach. Haskell, OCaml? Problem: very resource-limited system; GC is incompatible with hard real-time. C, C++? Tradeoff between safety and productivity -- e.g. NASA's Jet Propulsion Laboratory writes high-assurance C; extremely costly.

Alternative: build your own tools. Start with a clean slate, use a language approach, correct by construction. Built an embedded DSL in just a few months: Ivory, embedded in Haskell, compiles to a safe subset of C. It's memory safe and allows no undefined or implementation-defined behavior. No heap; only global and stack allocation.

Haskell type system guarantees properties of Ivory language programs; Haskell is Ivory's macro language.

Ivory: expressions are pure Haskell values; statements are effectful, embedded in a monad. Examples from the paper of what has been embedded in the type system are given.

Ivory programs (modules) are a collection of top level C procedures and variables.
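
To give a flavour, here is my own sketch modelled on the examples published with the ivory package (details of the real API may differ):

    {-# LANGUAGE DataKinds, TypeOperators #-}
    import Ivory.Language

    -- A procedure that compiles to a C function
    -- int32_t add_i32(int32_t a, int32_t b):
    add_i32 :: Def ('[Sint32, Sint32] :-> Sint32)
    add_i32 = proc "add_i32" $ \a b -> body $ ret (a + b)

    -- An Ivory module collects top-level procedures for code generation.
    addModule :: Module
    addModule = package "add_module" $ incl add_i32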

Tower DSL: organising entire applications. Composes Ivory procedures into applications. Initial problem: multithreading with message passing; code generation of low level primitives. Tower is implemented as a Haskell macro that generates Ivory programs. Tower contains concept of Tasks and Channels. It became the DSL describing software components.

(Side story: a third-party Python tool generating C output was modified to output Ivory code as well; this code could then be type checked to make sure it was correct. This way, the Python tool didn't need to be trusted.)

SMACCMPilot: 10kloc (and growing) of drivers, 3kloc application (stabilizing the quadcopter and flying), 10kloc message marshaling. Generates about 48kloc of correct-by-construction C.

Red team results: found no buffer overflows, no undefined behaviour, no denial of service. After five months they found two subtle architecture-level bugs. Raising the level of abstraction meant there was no need to worry about low-level problems; the team could concentrate on architecture.

Conclusion. Quickly and cheaply built a large, complex application; caught lots of errors early by Haskell type checking, could focus on application design; used Haskell as a macro system making it possible to build compositional programs.

Q: How is this an improvement over the OCaml code generator?
A: Cross-compilation was a specific concern, making sure the code runs on 8-bit, 16-bit, 32-bit architectures.

Q: Restricted amount of memory: how did you deal with that?
A: Don't yet have a way to assess the maximum stack use of a program. But we think it is solvable.

Q: You construct correct-by-construction code that is a subset of C, e.g. you don't do pointer arithmetic.
A: Yes.

Q: Do all arrays have a size known at compile time?
A: Yes.

Q: Type error messages were a problem?
A: Yes.

Q: You don't allow undefined behavior. How do you prevent overflows?
A: This is not done at the type level; instead we insert assertions in the C code (at the moment ~2500 assertions). Hope to analyze and discharge these assertions statically in the future.

Q: How about Rust?
A: Big fan. Rust was in its infancy when the project started two years ago. But when using Haskell we get type checking for our macros.

Concurrent NetCore: From Policies to Pipelines

Cole Schlesinger (presenting), Michael Greenberg, David Walker (Princeton University)

Review of networking: consider an enterprise network with hosts connected to a switch. In an SDN, the controller tells the switch which rules to use to deal with packets. OpenFlow 1.0 uses prioritised rules for this, with predicates and actions. But switch hardware is not that simple: there are lots of tables in a switch pipeline. New configurable designs are available now.

OpenFlow 2.0: splits interaction between controller and switch into two phases: 1. configuration (think flashing an FPGA); 2. population of the tables.

Contributions: a core calculus for packet-processing pipelines and controller policies; hardware models for three switch architectures; compilation algorithms for deploying general controller policies to target switch architectures.

Example: Broadcom "Trident" switch. Each component can be thought of as a function from packets to sets of packets (empty set: drop; singleton: forward; multiple: e.g. flooding).

Have forwarding table; sequencing; if-statements over predicates; rewrite component.
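
One way to picture this packets-to-packet-sets model (a hypothetical Haskell sketch, not the paper's calculus):

    import Data.Set (Set)
    import qualified Data.Set as Set

    -- A pipeline component maps one packet to a set of output packets.
    type Component p = p -> Set p

    dropC :: Component p          -- empty set: drop the packet
    dropC _ = Set.empty

    forwardC :: Component p       -- singleton: forward one copy
    forwardC = Set.singleton

    -- Sequencing feeds every output of the first component to the second.
    seqC :: Ord p => Component p -> Component p -> Component p
    seqC f g p = Set.unions [ g q | q <- Set.toList (f p) ]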

Intel "flexpipe" supports reconfigurability. Each table can match on and write to every field. Also supports concurrency; introduce concurrency operator to model this.

What about the controller? Easiest for programmer to define virtual pipeline that describes the requirements.

New operator: packet duplication -- copy and run packet through multiple components.

Example: compilation to the Barefoot "RMT" architecture -- three passes: 1. consolidate packet duplication (output ports must be known at the time of packet duplication; also need to add metadata to model processing as it would have taken place in the virtual pipeline); 2. refactor field modification; 3. place generated tables into physical tables (use commutativity of concurrency and other tricks -- the algorithm is a combination of brute force and dynamic programming).

Metatheory. Predicates form a Boolean algebra; grammar given for Predicates and Policies. Have a type system and small-step semantics.

Typing judgement of the concurrency operator prevents race conditions. Can prove normalization and confluence, in combination giving strong normalization (determinacy). Adequacy can be shown too.

Future work: implement the compiler and transformer on a real chipset.

Q: Does your system support encryption of packets inside the network, like IPSEC?
A: Yes and no -- you could think of encryption as an atomic action but it has not been fully worked out.

Q: Plus operator?
A: Plus operator is duplication. Semantics: both packets are sequentially processed by the remaining pipeline.

Session 2: Static Analysis (Chair: Ken Friis Larse)

SeLINQ: Tracking Information Across Application-Database Boundaries

Daniel Schoepe, Daniel Hedin, Andrei Sabelfeld (Chalmers University of Technology)

Motivation: confidential data is entrusted to many applications, which are composed of many components. Information leakage often occurs at boundaries, e.g. SQL injections. Need to look at confidentiality as end-to-end security to prevent attacks.

Contributions: track information flow for code using database queries. Security type system ensures noninterference and proof of soundness. Compiles to executable F# code; realistic demo.

Approach: Traditionally, attacks often occur at component boundary, e.g. SQL injections. Tierless approach: correct communication across boundary ensured at compile-time. Example: write everything in F#, some of which compiles to SQL.

Intuition: untrusted program shouldn't leak private data to third parties. Need to track flow of information through the program.

Noninterference: private inputs must not influence public outputs. Attacker doesn't learn anything about secrets by observing public behavior of the program.

Language-integrated query -- build queries via meta-programming. Example: LINQ in F#. Guarantees user input is sanitized and SQL is syntactically correct.

Idea: types annotated with security levels (L for public, H for private).
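
The idea in miniature, as a hypothetical Haskell sketch (SeLINQ itself is an F#-like language; the names here are made up):

    {-# LANGUAGE DataKinds, KindSignatures #-}

    data Level = L | H                        -- public / private

    newtype Labeled (l :: Level) a = MkLabeled a

    -- A public sink accepts only L-labeled data; trying to pass a
    -- (Labeled 'H String) is a type error -- noninterference by typing.
    publicOutput :: Labeled 'L String -> IO ()
    publicOutput (MkLabeled s) = putStrLn s

    -- Raising a label is always safe; no function lowers H to L.
    classify :: Labeled 'L a -> Labeled 'H a
    classify (MkLabeled x) = MkLabeled x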

Security policy specified by giving a typing to each database. Code example given.

Introduce low-equivalence relation for values that are indistinguishable for the attacker. Defined structurally based on type; components with level L have to be equal.

Each secure program should be noninterfering with itself.

Security type system has soundness: if program is well typed then it is noninterfering.

Implementation: a type checker (based on constraint solving and unification) and compiler implemented in Haskell using BNFC. The language compiles to executable F# code; compilation consists mostly of removing security types. The type system and soundness proofs allow for user-defined algebraic datatypes.

Case study: near-realistic movie rental database. Want to protect names and exact addresses of customers and staff. Examples given of valid and invalid queries.

Summary: light-weight framework for tracking information flow; security type system ensures noninterference; proof-of-concept implementation given; extension to ADTs; case study.

Future work: extend to client code (JS).

Q: Can I make the security policy express things like "only certain people should be able to access this"?
A: No, can't do this at the moment.

Q: Could this be embedded into a computational framework where one could assist the system in showing that some access is in fact safe when the type system's approximation cannot show it?
A: That would be an interesting extension.

Q: Do you deal with aggregation?
A: This has not been considered yet.

Type-Based Parametric Analysis of Program Families

Sheng Chen (presenting), Martin Erwig (Oregon State University)

Can often think of programs as "program families" which by configuration can be refined to single programs. But build & install may fail in unexpected ways. Trying all combinations of options is infeasible.

Idea: use variational analysis directly on the program family / variational program. Introduce a framework for doing variational analysis in the type system.

Notation for choice in variational programs introduced.

Program by search.

Creating variational analyses: 1. add variation data structures, e.g. lists become variational lists (VL; variation in values and in list structure); 2. adapt the analysis, e.g. analysis of sort becomes analysis of VL sort; 3. correctness proof (tedious); 4. performance analysis.
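
A rough reconstruction of what a variational list might look like (my own sketch in choice-calculus style, not the paper's exact definitions):

    type Dim = String

    -- Variational values: plain, or a named choice between two variants.
    data V a = Plain a | Choice Dim (V a) (V a)

    -- Variational lists: variation in values and in the list structure.
    data VList a = VNil
                 | VCons (V a) (VList a)
                 | VChoice Dim (VList a) (VList a)

    -- Selecting a configuration resolves every choice to a plain list.
    select :: (Dim -> Bool) -> VList a -> [a]
    select _   VNil            = []
    select cfg (VCons v vs)    = sel v : select cfg vs
      where sel (Plain x)      = x
            sel (Choice d l r) = sel (if cfg d then l else r)
    select cfg (VChoice d l r) = select cfg (if cfg d then l else r)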

Example: variational typing rules shown.

This framework consists of annotations (sets), types and constraints. All three have extension points. Application and Choice typing rules are shown.

A constraint solving example is shown involving merging and splitting.

Instantiating the framework for variational 0CFA: no extensions required for annotations, types and constraints but rules need to be extended.

Correctness evaluation of the framework: implemented a Featherweight Java version of the framework and proved it correct.

Performance evaluation shows that the framework performs almost as well as a manually lifted 0CFA analysis.

Q: Is the aggressive (faster) approach powerful enough to check types?
A: Yes.

Q: Have you tried to do effect analysis in this framework?
A: The framework should be expressive enough to do this but it hasn't been done yet.

Session 3: Binding Structure (Chair: Tarmo Uustalu)

Romeo: A System for More Flexible Binding-Safe Programming

Paul Stansifer (presenting), Mitchell Wand (Northeastern University)

Introduction: metaprogramming allows us to automatically generate programs. Name binding is a problem -- example given. Recent survey found that 8 out of 9 metaprogramming systems had name binding problems.

Binding safety: alpha-equivalent inputs should produce alpha-equivalent outputs. In theory this works: FreshML, MetaML. But these only support simple binding forms like λ. Real languages have more than λ, e.g. let* in Racket, "local" construct in SML.

Contributions: a binding-safe PL, Romeo, with a notation to express complex binding, clear big-step semantics, proof of correctness.

Explains the concepts of "imports" and "exports" in the context of name binding.
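
For instance (my own illustration): in (let* ([x 1] [y (+ x 1)]) (+ x y)), the first clause exports its binder x to the second clause and to the body, while the second clause exports y to the body only; a plain λ, by contrast, merely imports its one binder into its body.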

In Romeo, binding information is in the types.

What makes two let*s alpha-equivalent? Follow-up: when are two Romeo declarations alpha-equivalent? -- must have exactly the same behavior. Thus their exported binders must be the same.

Romeo's runtime enforces safety. It automatically avoids name clashes. Accidentally unbound names are a dynamic error ... which Romeo's optional deduction system can statically rule out.

Conclusion: Romeo is a binding-safe language that supports term languages with complex binding structure.

Q: What about de Bruijn indices? Effects in the metalanguage?
A: Romeo is side-effect free.

Q: Can I play with Romeo?
A: I'll try to convince my supervisor to release the source code. Currently the implementation is out of sync with the formalism.

Q: This looks similar to λm.
A: λm is not powerful enough. We give you syntax-case.

C: Thanks for starting with a concrete example.
A: Thanks to my peers for the criticism!

Q: How does hygienic macro expansion relate to this?
A: If your macro gets larger and larger, there is less safety -- hygienic macros are all about protecting macros from each other.

Q: Complexity of expanding Romeo code?
A: Quadratic, so that's not good. Macro systems have the same problem though. Trick: perform the actual substitution lazily; we think the same technique could be used with Romeo too.

Q: How much expressiveness does Romeo have?
A: Can we express all Racket macros? Probably -- we think most are just simple extensions of what we have right now.

Maximal Sharing in the Lambda Calculus with letrec

Clemens Grabmayer (presenting, VU University Amsterdam), Jan Rochel (Universiteit Utrecht)

Motivation: increasing sharing is desirable -- compact code, avoid duplicated work at run-time, enables us to check equality of unfolding semantics of programs.

The method: 1. interpretation of λ-letrec terms as term graphs; 2. bisimilarity and bisimulation collapse of λ term graphs; 3. readback of λ term graphs.

This method extends common subexpression elimination and is targeted at maximizing sharing statically.

Example of unfolding equivalence of fixpoint operator.
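
The concrete example wasn't captured; a standard instance of unfolding equivalence, in Haskell terms:

    -- Two fixed-point definitions with the same infinite unfolding
    -- f (f (f ...)); a maximal-sharing analysis should identify them.
    fix1, fix2 :: (a -> a) -> a
    fix1 f = f (fix1 f)        -- unfolds one step at a time
    fix2 f = let x = f x in x  -- letrec-style: recursion through sharing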

Example of graph interpretation given. First we generate a higher-order λ term graph, transform it into a first-order term graph with scope vertices and backlinks, and then into a λ term graph.

Then we do a bisimulation check between λ term graphs.

The class of eager-scope λ term graphs is closed under functional bisimilarity.

Implementation: "maxsharing" on Hackage, uses Utrecht University Attribute Grammar Compiler.

Time complexity analysis of the individual steps follows.

Possible extensions: support for full functional languages (would require work on a core language with constructors and case statements); prevent space leaks caused by disadvantageous sharing.

Applications: use maximal sharing at run-time, possibly directly on supercombinator graphs; can be coupled with GC; code improvement (statically) to detect code duplication; checking function equivalence (relevant for proof assistants, theorem provers, dependently-typed PLs).

Q: Explicit weakening. Have you tried to implement reduction of the first order term graph?
A: It's possible. Used port graphs to implement reduction because that seemed easier.

Q: Does this subsume Kleene algebra?
A: Maybe? Interesting idea.

Session 4: Program Optimisation (Chair: John Launchbury)

Practical and Effective Higher-Order Optimizations

Lars Bergstrom (presenting, Mozilla Research), Matthew Fluet (Rochester Institute of Technology), John Reppy (University of Chicago), Nora Sandler (University of Chicago), Matthew Le (Rochester Institute of Technology)

Control flow analysis (CFA) enables higher-order inlining. In practice, high-performance functional programs avoid higher-order functions.

Two subproblems: 1. what is the concrete function called from some computed function call? -- already handled by 0CFA; 2. does the potential inlining point have a compatible environment? -- "reflow", JITs.

Contribution: determine when higher-order inlining is safe. Key insight: don't check "is the environment the same?" but "could any of the free variables have been rebound between the closure capture location and the call site / potential inlining point?". This supersedes ad-hoc higher-order inlining tricks.

Technique: 1. normalize and number source code (annotations for variable binding sites); 2. build control-flow graph; 3. add CFA-informed edges; 4. perform rebinding search on paths between closure capture location and potential inlining point.

Safe and unsafe example given; control flow graphs shown.
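
Roughly the flavour of the unsafe case (my own reconstruction, not the talk's example):

    -- k captures the outer x (= 1); before the call site, x is shadowed.
    -- Naively inlining k's body (x + 1) at the call site would pick up
    -- the inner x and compute 10 + 1 + 10 = 21 instead of the correct
    -- (1 + 1) + 10 = 12 -- hence the rebinding check on the path from
    -- capture site to call site.
    example :: Int
    example =
      let x = 1
          k = \() -> x + 1   -- capture site
      in let x = 10          -- x rebound (shadowed)
         in k () + x         -- call site / potential inlining point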

Reflow analysis is strictly stronger. But this analysis adds only about 3% to the time of the entire compilation process.

Conclusion: start using closures in benchmarks!

Higher-order inlining heuristics still missing; correctness proof still incomplete.

Q (SPJ): You showed that the analysis is cheap but what do we get?
A: Benchmarks are in the paper.

Q: If I want to use this: is it clear for me where the optimization will kick in?
A: Well, yes, if you're the compiler writer (laughs from the audience). But this is a problem indeed.

Q: I usually pass closures to "map" and "fold", so these will have many call sites. Can this still be optimized in this framework?
A: "map" and "fold" are usually handled in a special way by the compiler anyway.

Q: Is it imaginable to prove that the optimization actually makes the program run faster?
A: Next presentation will try to answer this. But it's very difficult.

Q: What's the relationship to MLton's optimizations?
A: Not entirely sure. Is defunctionalization or CFA-style analysis better? An interesting question.

Worker/Wrapper/Makes It/Faster

Jennifer Hackett (presenting), Graham Hutton (University of Nottingham)

Context: lots of research into optimization focuses on correctness (is meaning preserved?); comparatively little work on improvement -- ensuring programs become "better".

Example: list reversal function. How can we prove that the improved program is both faithful to the original version and also more efficient?

The Worker/Wrapper transformation: Original program has type A, worker has type B, wrapper has type B -> A.

Formally, given

f : A -> A, g : B -> B, abs : B -> A, rep : A -> B

and

abs . rep = id, g . rep = rep . f

then

fix f = abs (fix g)
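
Instantiated for the list-reversal example (the standard worker/wrapper derivation; my transcription): here A = [a] -> [a], B = [a] -> [a] -> [a], abs g xs = g xs [], and rep f xs acc = f xs ++ acc.

    rev :: [a] -> [a]          -- original program: quadratic
    rev []     = []
    rev (x:xs) = rev xs ++ [x]

    revW :: [a] -> [a] -> [a]  -- worker with an accumulator: linear
    revW []     acc = acc
    revW (x:xs) acc = revW xs (x:acc)

    rev' :: [a] -> [a]         -- wrapper: abs applied to the worker
    rev' xs = revW xs []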

Reasoning about efficiency. Need some way to compare the cost of two programs. Naive approach: count steps taken to evaluate terms to some normal form. But this fails in a lazy context! Need precongruence.

Precongruence: a preorder and a congruence -- this excludes negative interaction between improvements at different points in the program.

Improvement theory: counting steps, also have a notion of improvement within a constant factor.

Improvement induction principle shown. "tick" idea. It's like guarded coinduction.

Shows the Worker/Wrapper theorem for improvement theory (untyped).

Structure of improvement proofs is essentially the same as that of correctness proofs.

Conclusions: we can make rigorous arguments about the efficiency of lazy programs; make performance guarantees for a general-purpose optimization.

Further work: structured recursion; turn it into a typed theory (advantage: only need to worry about contexts where the types make sense); look at improvement of space usage and other resources; quantify improvement (how much faster? which optimization should be applied if there are alternatives?); automate it.

Q: Do you have completeness results?
A: It's not clear whether Worker/Wrapper improvement proofs can be translated from fixpoint style to improvement theory.

Q: Do you have any examples where Worker/Wrapper transformation makes things slower?
A: Of course we have come across these, but we wouldn't put that in the paper! (Laughs from the audience). [Tree reversal example follows]

Q: Problem in GHC is the cost of building closures. How is this reflected in your framework?
A: Read early papers on improvement theory.

Q: How about local improvements?
A: This work does not look at the surroundings.

Q: Would things change dramatically if you applied this to a call by value language?
A: There has been some work on that.

Session 5: Context Dependence (Chair: Yukiyoshi Kameyama)

Compositional Semantics for Composable Continuations: From Abortive to Delimited Control

Paul Downen, Zena M. Ariola (University of Oregon)

Classical control: callcc is the classic control operator, going back to Scheme. It corresponds to classical logic (the way the λ-calculus corresponds to intuitionistic logic).

Delimited control: delimit the scope of effects; continuations compose like functions. These little changes make delimited control vastly more expressive than classical control.

λ-calculus + callcc and the λμ-calculus are equivalent. Relaxing λμ syntactically gives Λμ, which allows delimited control! Λμ in turn is equivalent to λ-calculus + shift0 + reset0.

Classical control in detail. Operational semantics of callcc as an extension of the CBV λ-calculus is given. Alternatively, we can understand callcc using an equational theory -- but it's weaker than the operational semantics: some programs evaluate to a value that the equational theory for callcc cannot reach. How can we know that we have the whole context?

Calling a continuation never returns -- it "jumps". A jump is the same when inside a larger evaluation context so it delimits the usable extent of a continuation.

λμ syntactically distinguishes jumps as "commands".

Relaxing the syntax. Λμ collapses the term/command distinction. It is interpreted with the same rules, only with more expressive meta-variables: no new constructs, no new rules. As typed calculi, Λμ is considered equivalent to Parigot's λμ. What happens if you ignore types?

shift and reset: common basis for delimited control. Continuations return and can be composed like functions.
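
A quick concrete instance (my addition) using the shift and reset from Haskell's transformers library:

    import Control.Monad.Trans.Cont (evalCont, reset, shift)

    -- The captured continuation k is the delimited context (1 +), and it
    -- composes like a function: k (k 10) = 1 + (1 + 10) = 12.
    twelve :: Int
    twelve = evalCont $ reset $ do
      x <- shift (\k -> return (k (k 10)))
      return (1 + x)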

λ-calculus + shift + reset is a proper subset of Λμ. What's missing?

Difference between shift and shift0: the latter removes its surrounding delimiter.

λ-calculus + shift0 + reset0 is equivalent to Λμ.

Λμ as a framework for delimited control gives provable observational guarantees about the operators, e.g. idempotency of reset.

More in the paper: can get nice reasoning principles using call-by-value, call-by-name, call-by-need; equational correspondence with compositional transformations.

Q: What do you mean by equality in this presentation?
A: Observational equality -- this is about tools for reasoning about programs rather than expressivity of the calculi.

Q: λμ has a well known type system but to introduce delimited control you had to relax the syntax. Is there a type system for this?
A: (?)

Q: Classical logic, intuitionistic logic -- delimited logic?
A: YES! Should investigate this.

Coeffects: A Calculus of Context-Dependent Computation

Tomas Petricek (presenting), Dominic Orchard, Alan Mycroft (University of Cambridge)

Coeffects is the dual of Effects. Done!

This talk is about the context in a type judgement.

Effect systems: Γ ⊢ e : τ & σ
Coeffect systems: Γ & x ⊢ e : τ

Interesting properties of Γ ("view from the extreme left"): variable related -- liveness, bounded linear logic, data-flow programming, provenance; environment related -- implicit parameters, type classes, distributed computing, platform versioning.

Coeffect calculus for bounded reuse: annotate each variable in the environment with the number of times it is used in the expression.

Variable, abstraction and contraction rules shown.

Another possibility: coeffect calculus with historical values. Replace addition with max operation in the contraction rule.

Environment related: coeffect calculus for implicit parameters. Example: implicit parameter for time zone and time. We can calculate how many parameters are still required (i.e. not given by the current environment). The corresponding rule is non-deterministic.

What's in the paper? A unified system: a coeffect scalar structure (a generalized semiring) and shape-indexed coeffects (context splitting and merging, per-variable or whole-context). Also monadic and comonadic semantics -- not simple, as the λ-calculus is asymmetric (many inputs to one output)!

Why do coeffects matter? Generalize interesting systems (liveness, data-flow, implicit parameters, type classes) and (indexed) comonads are cool.

Q: What happens if you have (co)monads in the classical calculus?
A: Interesting question.

Program Chair's Report

Quick paper stats:

28 accepted papers out of 97 submissions.

2/9 functional pearls and 2/3 experience reports accepted.

299 reviews -- at least three per paper.

72 hour author response period (as usual).

It was decided not to allow papers from members of the program committee.

Comparison with previous years: number of papers approximately stable, share of accepted papers even more so.

Functional pearls are accepted at a lower rate than normal papers.

Authors' locations: USA, UK, France, Japan, Netherlands, Sweden, Germany, other.

UK has highest acceptance rate, then USA.

Papers submitted only a couple of hours before the deadline were less likely to be accepted -- before that there seems to be no correlation between submission time and acceptance probability.

Big thanks to the program committee!

Deputy Lord Mayor of Gothenburg: invitation for drinks, and a short history of the city.
