syslog
27Mar/18

At the LSE Workshop on Data Science, 27/28 March

Posted by Jon Crowcroft

See
http://www.lse.ac.uk/Statistics/Events/Workshop-on-Data-Science-Theory-and-Practice/Programme-at-a-glance
Lots of good speakers! I will see whether slides become available - e.g. the first speaker's blog:
http://inverseprobability.com/2018/02/06/natural-and-artificial-intelligence

19Feb/18

AAAI/AIES’18 Trip Report

Posted by Jianxin Zhao

I was recently honoured to have the opportunity to present our work "Privacy-preserving Machine Learning Based Data Analytics on Edge Devices" at the AIES'18 conference, co-located with AAAI'18, one of the top conferences in the field of AI and machine learning. Here is a brief review of some of the papers and trends from this conference that I found interesting.

Activity detection is obviously a hot topic, and new building blocks are being invented. "Action Prediction from Videos via Memorizing Hard-to-Predict Samples" aims to improve prediction accuracy, since one challenge is that different actions may share similar early-phase patterns; the proposed solution is a new CNN block combined with an augmented LSTM. "A Cascaded Inception of Inception Network with Attention Modulated Feature Fusion for Human Pose Estimation" proposes a new "Inception-of-Inception" block to address current limitations in preserving low-level features, adaptively adjusting the importance of features at different levels, and modelling the human perception process. Research also focuses on reducing computation overhead. "R-C3D: Region Convolutional 3D Network for Temporal Activity Detection" aims to reduce the time of activity detection by sharing convolutional features between the proposal and classification pipelines. "A Self-Adaptive Proposal Model for Temporal Action Detection based on Reinforcement Learning" proposes an agent that learns to find actions by continuously adjusting the temporal bounds in a self-adaptive way, reducing the required computation.

Face identification is also a widely discussed topic. "Dual-reference Face Retrieval" proposes a mechanism for recognising a face at a specific age range: take another reference image at the target age range, then search for similar faces of a similar age. Person re-identification associates images of the same person captured by different surveillance cameras; its main challenge is the large quantity of noisy video sources. In "Video-based Person Re-identification via Self Paced Weighting", the authors claim that not every frame in a video should be treated equally; their approach reduces noise and improves detection accuracy. In "Graph Correspondence Transfer for Person Re-identification", the authors tackle the spatial misalignment caused by large variations in view angles and human poses.

To improve deep neural networks, many researchers seek to transfer learned knowledge to new environments. "Region-based Quality Estimation Network for Large-scale Person Re-identification" is another paper on person re-identification; it proposes a training method that learns the lost information from other regions and thus performs well on low-quality input. "Multispectral Transfer Network: Unsupervised Depth Estimation for All-day Vision" estimates a depth image from a single thermal image. "Less-forgetful Learning for Domain Expansion in DNN" enhances a DNN so that it remembers previously learned information while learning new data from a new domain. Another line of research enhances training data generation. "Mix-and-Match Tuning for Self-Supervised Semantic Segmentation" reduces the dataset size required to train a segmentation network. "Hierarchical Nonlinear Orthogonal Adaptive-Subspace Self-Organizing Map based Feature Extraction for Human Action Recognition" addresses the problem that feature extraction needs large-scale labelled data for training; its solution is to adaptively learn effective features from data without supervision.

One common theme in these works is reducing computation overhead. "Recurrent Attentional Reinforcement Learning for Multi-label Image Recognition" achieves this by locating the redundant computation in the region proposal stage of image recognition. "Auto-Balanced Filter Pruning for Efficient Convolutional Neural Networks" compresses network modules by discarding a large fraction of the filters in a proposed two-pass training approach. Another trend is combining multiple input sources to improve accuracy. "Action Recognition with Coarse-to-Fine Deep Feature Integration and Asynchronous Fusion" combines multiple video streams to achieve more precise feature extraction at different granularities. "Multimodal Keyless Attention Fusion for Video Classification" combines multiple single-modal models (RGB, optical flow, and sound) to address the difficulty of jointly training combined CNN and RNN models end-to-end on large-scale datasets. "Hierarchical Discriminative Learning for Visible Thermal Person Re-Identification" improves person re-identification by cross-comparing normal and thermal video streams.

It is not a surprise that few systems-related papers are presented at this conference. "Computation Error Analysis of Block Floating Point Arithmetic Oriented Convolution Neural Network Accelerator Design" focuses on the overhead of floating-point arithmetic when porting CNNs to FPGAs. "AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training" proposes a gradient compression technique to reduce the communication bottleneck in distributed training.

Research from industry takes a large portion of this conference. IBM presents a series of papers and demos. For example, "Dataset Evolver" is an interactive Jupyter-notebook-based tool that supports data scientists in feature engineering for classification tasks, and "DLPaper2Code: Auto-generation of Code from Deep Learning Research Papers" proposes to automatically extract design flow diagrams and tables from existing research papers and translate them into abstract computational graphs, and then into Keras/Caffe implementations. A full list of IBM's work at AAAI can be seen here. Google presents "Modeling Individual Labelers Improves Classification" and "Learning to Attack: Adversarial Transformation Networks", and Facebook shows "Efficient Large-scale Multi-modal Classification". Both companies focus on specific application fields, compared to IBM's wide spectrum of research. Much of the industrial work is closely tied to applications, such as Alibaba's "A Multi-task Learning Approach for Improving Product Title Compression with User Search Log Data". I was curious to find, though, that financial companies were not present at the conference.

On the other hand, the top universities tend to focus on theoretical work. "Interpreting CNN Knowledge Via An Explanatory Graph" from UCLA aims to explain a CNN model and improve its transparency. The University of Tokyo presents "Constructing Hierarchical Bayesian Network with Pooling" and "Alternating Circulant Random Features for Semigroup Kernels". CMU presents "Brute-Force Facial Landmark Analysis with a 140,000-way Classifier". Together with ETH Zurich, MIT shows "Streaming Non-monotone Submodular Maximization: Personalized Video Summarization". UC Berkeley, however, seems to be absent from this conference.

Adversarial learning is a key topic across vision-related research areas; see, for example, "Adversarial Discriminative Heterogeneous Face Recognition", "Extreme Low Resolution Activity Recognition with Multi-Siamese Embedding Learning", and "End-to-End United Video Dehazing and Detection". The tutorial "Adversarial Machine Learning" gives an excellent introduction to the state of the art on this topic. Prof. Zoubin Ghahramani of Uber gives a talk on his vision of probabilistic AI, which is another trend at this conference.

The best paper of this year goes to "Memory-Augmented Monte Carlo Tree Search" from the University of Alberta, and the best student paper to "Counterfactual Multi-Agent Policy Gradients" from, ahem, the other place.

These papers only scratch the surface of AAAI'18, and mostly reflect my personal interest in computer vision. If you are interested, please refer to the full list of accepted papers.

8Nov/17

SOSP’17 Trip Report

Posted by Jianxin Zhao

I was glad to have the opportunity to present our poster at SOSP, one of the top conferences in the field of systems research. Here is a brief review of the papers I found interesting at this conference.

Debugging

One of this year's best paper awards goes to "DeepXplore", a whitebox framework for systematically testing real-world deep learning (DL) systems. The focus of this paper is, in my opinion, more on machine learning than on systems. The framework, or algorithm, tries to automatically find corner cases that invalidate a deep learning method. For example, suppose you have an image classification DL model. Even if it works perfectly in 99% of cases, there are always corner cases that make the model behave abnormally, e.g. recognising a panda as, say, a dinosaur. Obviously a user cannot find these potentially numerous cases and label them manually, one by one.

What this algorithm proposes is to use a series of models built for the same purpose, and to use gradient-based search to find an input example that maximises the disagreement among all these models. It also tries to maximise the activation of inactive neurons, pushing them above a threshold - the newly proposed "neuron coverage" idea. The whole problem is formalised as an optimisation problem, and it is shown to be more efficient than random testing and adversarial testing. I was very interested in the execution time needed to find a corner-case input - only seconds, as reported! Amazing.
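From what I understood, the joint objective has roughly this shape (my notation, a sketch rather than the paper's exact formula):

obj(x) = \sum_{i \neq j} c_i(x) - \lambda_1 \, c_j(x) + \lambda_2 \, f_n(x)

where c_i(x) is model i's confidence in the currently agreed label, model j is the one being pushed to disagree, and f_n(x) is the output of a not-yet-activated neuron; gradient steps on x (constrained to remain a valid input) increase both disagreement and neuron coverage.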

Resource Management

The paper "Monotasks" is about the performance reason-ability of data analytics systems. By "reason-ability" I mean questions like: what hardware/software configuration should a user choose, and what are the possible reasons for poor system performance? Currently, data analytics systems such as Spark provide fine-grained pipelining to parallelise the use of CPU, network, and disk within each task. But this kind of pipelining is the exact reason we cannot reason about system performance: tasks have non-uniform resource use, concurrent tasks on a machine may contend, and resource use may occur outside the control of the system.

The basic idea of monotasks is that jobs are decomposed into units of work that each use exactly one of the CPU, network, and disk resources, so that usage is uniform and predictable. These work units form a DAG, and each kind of resource has its own scheduler. The decomposition happens when the job arrives on the worker machine.
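To make the decomposition concrete, here is a minimal OCaml sketch of the idea (the types are mine, not the paper's implementation):

type resource = Cpu | Network | Disk

type monotask = {
  id : int;
  uses : resource;       (* each unit of work uses exactly one resource *)
  deps : int list;       (* DAG edges: tasks that must complete first *)
  run : unit -> unit;
}

(* One queue per resource, each drained by a dedicated scheduler, so
   per-resource usage stays uniform and predictable. *)
let queues : (resource, monotask Queue.t) Hashtbl.t = Hashtbl.create 3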

What I find most interesting in this paper is the idea of "going back": eliminating pipelining sacrifices some performance for reason-ability. The implementation also requires no modification of user code.

Security

The other best paper, "Efficient Server Audit", revisits the classic topic of execution integrity with a newly defined efficient server audit problem: since you have no visibility into AWS, how can you be sure that AWS (the executor) is executing the actual application you've written? Specifically, the verification algorithm (the verifier) is given an accurate trace of the executor's inputs and delivered outputs. The executor gives the verifier reports, but these are untrusted. The verifier must somehow use the reports to determine whether the outputs in the trace are consistent with having actually executed the program. The problem is to design the verifier and the reports.

In the proposed solution, requests that follow the same control flow through the executor are grouped, and the verifier re-executes each control flow group in a batch. Re-executed read operations are fed from the most recent write entry in the logs, and the verifier checks logged write operations opportunistically, with the assurance that the alleged operations can be ordered consistently with the observed requests and responses. Finally, the verifier compares each request's output as produced by the executor with that request's output in the trace, across all control flow groups.
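A types-only sketch of the setting, with names of my own invention rather than the paper's:

module type AUDIT = sig
  type trace_entry = { request : string; response : string }  (* trusted *)
  type report                                                 (* untrusted *)
  (* Accept iff the responses in the trace are consistent with some real
     execution of the program, using the reports only as hints. *)
  val audit : program:(string -> string) ->
              trace_entry list -> report list -> bool
end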

Data Analytics

Storage for temporal analytics is crucial to many important data analytics applications, such as recommender systems, financial analysis, smart homes, etc. "SummaryStore" targets large-scale time-series analysis: this approximate time-series store aims to provide high query accuracy while keeping storage extremely cost-effective.

Three main techniques are used here: 1) it maintains compact summaries through aggregates, samples, sketches, and other probabilistic data structures (e.g. Bloom filters) instead of raw data; 2) based on the observation that newer data often conveys more information, it defines a novel time-decayed approximation scheme; 3) some special events are stored separately, as-is, in raw format rather than being summarised. The appendix provides a comparison of different decay functions and an estimation of the query error bounds.
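The decay idea can be pictured with a toy window layout (my simplification, not the paper's actual scheme): summary windows grow geometrically with age, so recent data stays detailed while a stream of n points needs only O(log n) summaries.

(* [windows ~now] returns (lo, hi) time ranges covering [0, now):
   small windows for recent data, exponentially larger ones for old data. *)
let windows ~now =
  let rec go acc hi len =
    if hi <= 0 then acc
    else go ((max 0 (hi - len), hi) :: acc) (hi - len) (2 * len)
  in
  go [] now 1

(* windows ~now:16 = [(0, 1); (1, 9); (9, 13); (13, 15); (15, 16)] *)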

Privacy

A common theme in the Privacy session is "shuffle".

Atom is an anonymous messaging system that both scales horizontally like Tor and provides clear security properties under precise assumptions like DC-net-based systems. The threat model assumes a global adversary and malicious servers. The main techniques are onion messages and random output permutation.

Interestingly, the next paper, Stadium, focuses on the exact opposite application area: one-to-one communication, compared with the broadcast style of Atom. The main problem this work wants to solve is how to hide communication metadata while scaling horizontally. It builds on the SOSP'15 paper Vuvuzela, whose problem is that every server has to handle all messages; the challenge is how to distribute the work across untrusted servers. The main design combines collaborative noise generation, message shuffling, and a verifiable parallel mixnet processing pipeline.

Prochlo from Google proposes an ESA system architecture - encode, shuffle, analyse - for large-scale online monitoring of client software behaviour, which may entail systematic collection of user data. One interesting point is that, to strengthen the shuffle step, this work proposes Stash Shuffle, an algorithm based on Intel's SGX. One of the questions raised at the conference was: "Why is trusting Intel better than trusting Google?" :)

Scalability

Facebook's SVE system is deployed in production for uploading and processing videos at large scale, with the low latency needed to support interactive applications, as well as reliability and robustness. The key idea is very simple - process one video while uploading another in the pipeline at the same time - but it shows good performance compared to Facebook's previous MES system. This is indeed one of the advantages of "being large scale".

Kernels

Here is a paper quite related to unikernels: "My VM is Lighter (and Safer) than your Container". This paper "find[s] and eliminate[s] bottlenecks (such as image size) when launching large numbers of lightweight VMs (both unikernels and minimal Linux VMs)". The results show a 4ms boot time for a unikernel-based VM, while a Docker container takes 150ms to boot. The system provides tools to automatically build minimised images, and also replaces the Xen control plane so that front-end and back-end drivers can communicate directly through shared memory. This paper reminds me of "Mosaic" from EuroSys'17, which also tries to fit more computation units into one computer.

Other Sessions

Obviously we cannot (or, more specifically, I cannot) convey the essence of an SOSP paper in one or two paragraphs, and there are many other great papers not mentioned here. If you're interested, please visit the program page to see the full list of accepted papers.

One more thing...

SOSP this year also hosted the ACM Student Research Competition. 16 entries were selected from 42 submitted abstracts, and 6 were further selected for a 5-minute presentation. The presentations were surprisingly interesting. Please see the list of winners (actually all 6 finalists, divided into graduate and undergraduate groups) here.

Also, this year's female participation rate is 6.6% among authors and 7% on the Program Committee, both slightly lower than the previous three years' numbers.

14Jan/15

Programming Languages Mentoring Workshop (PLMW)

Posted by hh360

Good morning from POPL 2015 in Mumbai, India.

Throughout this blog post, * denotes a notation-heavy segment of a talk. These are difficult to summarise quickly without typesetting; see the author's paper online for now, and I'll try to find the speakers' slides later.

PLMW Organizers - Intro

This is the 4th PLMW workshop and we are very pleased to see you all. Mentoring workshops are becoming much more common. This year we have funded 75 students. We would like to thank the speakers, sponsors and organisers (including Annabel :)

Video from Derek

POPL proceedings are now available online


 

Nate Foster (Cornell University) - You and Your Graduate Research

Prelude:

- doing a PhD is not easy, and many do not finish. This is a survivor's story
- some of my comments may be US-specific
- acknowledgements (including Jorge Cham from PhD Comics :))

Motivation

This is useful for everyone, not just undergraduates.

Why you shouldn't do it?
- money: Nate shows a graph of university salaries; football coaching is clearly the right way to go for a big salary.
- start-ups: tech start-ups are now a hot topic (Oculus VR, for example), and you don't need a PhD for this.
- respect: education is highly valued and degrees bring respect, but no one will call you doctor except your mom.
- to stay in school: undergraduate was fun, so let's stay in school - but a PhD programme is totally different.
- being a professor: let's all become professors! - a demonstration of the number of academics versus other careers.

Why you should?
- opens opportunities
- cool jobs in industry: leadership, work on cool problems, you can work on large systems and with real users
- freedom: applies at any level; freedom to choose interesting problems, opportunities to work on the big problems

What is a PhD?

Comparing degrees at different levels:
- high school: basics for life
- undergrad: broad knowledge in a field
- professional: advanced knowledge and practical skills
- PhD: advanced knowledge and a research contribution.

A PhD works at the open edge of knowledge.

A PhD is a transformation: we start with intelligent people who aren't researchers, and an apprenticeship-based scheme turns them into researchers. It's not an easy or painless process.

PhD Success

Pick an institution
The community that you're in will shape your work.
Very important factors include advisor, opportunities and peers. Typically less important factors are finances, institution and location.

Dive into research
Don't be distracted by other things, e.g. classes and teaching.
Pick great problems.

"It is better to do the right problem the wrong way than the wrong problem the right way" - Hamming

Stay engaged
Working with others makes it easier, as you can motivate each other.
Know when it's time to switch topics - good researchers are versatile and able to switch quickly.
Independence - you need to know when the time is right to go rogue and ignore your adviser.
Reach escape velocity to graduate.


 

Peter Müller (ETH Zürich) - Building automatic program verifiers

verification: given a program P and a specification S, prove that all executions of P satisfy S
automatic verification: develop an algorithm that decides whether P satisfies S

We can't prove halting => we are finished :)

Lets start again ...

Peter demonstrates a verification tool called Viper, using the typical account balance and transfer example. The verifier gives two errors; we add preconditions and the verifier succeeds. We then use the verifier for thread safety, again fixing a concurrency bug with locking.
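To give a flavour of the kind of fix involved (an OCaml rendering of mine, not Viper syntax):

(* The verifier's errors correspond to missing preconditions like these;
   once they are stated, verification succeeds. *)
let transfer ~src ~dst amount =
  assert (amount > 0);        (* precondition: positive amount *)
  assert (!src >= amount);    (* precondition: sufficient balance *)
  src := !src - amount;
  dst := !dst + amount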

Automatic verification is not black and white: we can have semi-automatic verification, where we guide the verifier. The complexity of the code varies greatly, and most successes in the area focus on a small domain. We build verifiers up from weaker guarantees. Verification may or may not work modularly. The complexity of the properties also varies.

7 Stage process

Define the research goals
What is the state of the art?
Find examples in the problem space that are not currently handled (well) by current verifiers.

Find Reasoning Principle
How would you explain the correctness of the code to a friend? For a loop, we would use a loop invariant.
This is very different from techniques like model checking; I wouldn't say to a friend "I have checked 15 billion possible evaluation paths and they are all valid".

Break Down Arguments
Decompose the correctness argument: what do I need to prove for each program, and what can I reuse between programs? How can I modularise the program?

Design Specification Language
Design a language to express the intended behaviour; we must know who the user will be and their level of experience. Find the right compromise between expressiveness and simplicity, one that is right for the intended audience.

Design Program Logic
Determine which properties need to be checked and which properties may be assumed.

Automate Proof Search
Utilize existing infrastructure like SAT and SMT solvers. Develop decision procedures for aspects not already covered by existing infrastructure.

Evaluate the Solution
First, an experimental evaluation: does the solution work on the example code you started from? Then the meta-theory, like the soundness proof.

Research Directions
Either develop new techniques for new languages and features (e.g. the recent interest in event-based programming due to Android), or work on general infrastructure for use in many areas.


 

Frank Pfenning (Carnegie Mellon University) - Proof theory and its role in programming language research

I will show you why every PL researcher needs a bit of proof theory.

How do we write correct programs?
We don't :)

In practice, programming and informal reasoning go hand in hand; we use mental logical assertions, e.g. after calling sort, the list is sorted. It's vital to decompose the problem into parts so we can reason locally.

We need to develop programming languages with the logic of the programmer in mind. Think about your least favourite programming language and why its operational or logical model doesn't agree with you. Reasoning is an integral part of programming. We need to co-design the language and the logic for reasoning about it.

Logic is computation, so the key is to design coherent logical and operational semantics.

Examples:

Computation first...
- runtime code generation => IS4
- partial evaluation => temporal logic
- dead code elimination => modal logic
- distributed computation => IS5
- message-passing
- concurrency => linear logic
- generic effects => lax logic

... and the logics first
- lax logic => ??
- temporal logic => ??
- epistemic logic => ??
- ordered logic => ??

Key ingredients:
Understand the difference between judgements and propositions, and the basic styles of proof systems, e.g. natural deduction, sequent calculus, axiomatic proof systems and binary entailment.

Example: Hypothetical Judgement*

Example: Runtime Code Generation

We have the source expression at runtime. We distinguish ordinary variables, which are bound to values, from expression variables, which are bound to source code.*


 

Stephanie Weirich (University of Pennsylvania) - How to write a good research paper

Stephanie is giving Simon Peyton Jones's popular talk "How to write a good research paper".

Start by writing the paper: it focuses us. Write a paper about any idea, no matter how insignificant it seems, and then you develop the idea.

Identify your key idea; the paper's main goal is to convey your idea. Be explicit about the main idea of the paper - the reviewers should not need to be detectives.

The paper flow:
- Here is a problem
- It's an interesting problem
- It's an unsolved problem
- Here is my idea
- My idea works

The introduction - describe the problem and what your paper contributes towards it. Don't describe the problem too broadly too quickly. It's vital to nail the exact contributions.

The related work - it belongs after the main body of the paper, not straight after the introduction. Be nice in the related work, and be honest about the weaknesses of your approach.

Use simple, direct language, putting the reader first. Listen to readers.


Damien Pous (CNRS, LIP, ENS Lyon) - Coinductive techniques, from automata to coalgebra

Checking language equivalence of finite automata

Demonstration of the naive algorithm for checking equivalence, walking through the states and comparing. This algorithm is looking for a bisimulation, and it has quadratic complexity. Instead we can use the HK (Hopcroft and Karp) algorithm.
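For reference, the naive check looks like this in OCaml (my own sketch, not from the talk; HK improves on it by merging states with union-find rather than only memoising visited pairs):

type dfa = {
  start : int;
  accept : int -> bool;
  step : int -> char -> int;
  alphabet : char list;
}

(* Two automata (over the same alphabet) are equivalent iff no pair of
   states reachable from the pair of start states disagrees on acceptance. *)
let equivalent a b =
  let seen = Hashtbl.create 16 in
  let rec go = function
    | [] -> true
    | (p, q) :: rest ->
      if Hashtbl.mem seen (p, q) then go rest
      else if a.accept p <> b.accept q then false
      else begin
        Hashtbl.add seen (p, q) ();
        go (List.map (fun c -> (a.step p c, b.step q c)) a.alphabet @ rest)
      end
  in
  go [ (a.start, b.start) ]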

[I got a bit lost in the theory of coalgebras for the rest of the talk, sorry]*

5Sep/14

OCaml 2014

Posted by Leonhard Markert

Welcome

There were many submissions this year, so a few had to be squeezed into "short talk" slots.

Session 1: Runtime system

Multicore OCaml, by Stephen Dolan (presenting), Leo White, Anil Madhavapeddy (University of Cambridge)

Stephen Dolan presenting Multicore OCaml

Still work in progress. Concurrency vs. parallelism: concurrency is for writing programs (e.g. handling 10000 connections at once), while parallelism is for performance (e.g. making use of 8 cores).

LWT and Async give good support for concurrency (though monads are a bit awkward).

Direct-style threading libraries: vmthreads and systhreads are not very efficient.

Parallelism: a sad story. You can use multiple processes with manual copying ...

Unifying the two? Concurrent programs are easily parallelised, so should we use the same primitives? Java, C# and others do, but it's a bad idea: at scale, death by context-switching.

Fibers for concurrency (cheap! -- have millions) and domains for parallelism (expensive! -- have roughly one per core).

The proposed concurrency primitives are non-monadic. An MVar is a blocking variable; MVars can be used to define an "async" function.
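A sketch of how "async" falls out of MVars (the multicore API was still in flux at this point, so the signatures below are guesses of mine, not the real interface):

module type MVAR = sig
  type 'a t
  val create_empty : unit -> 'a t
  val put : 'a t -> 'a -> unit    (* blocks while the variable is full *)
  val take : 'a t -> 'a           (* blocks while the variable is empty *)
end

module Make (MVar : MVAR) (Fiber : sig val spawn : (unit -> unit) -> unit end) =
struct
  (* Run f on a fresh fiber; takers of the returned MVar block until
     the result is available. *)
  let async f =
    let result = MVar.create_empty () in
    Fiber.spawn (fun () -> MVar.put result (f ()));
    result

  let await = MVar.take
end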

Demo using a Fibonacci function, with one recursive call done using "async". Parallelised across 48 cores -- diminishing returns as expected, but the speedup is impressive.

Q: Do you keep spawning even when no domains are free?
A: Yes. Spawning creates a new fiber, which is extremely cheap (a minor allocation of 30 words).

Q: Can you manipulate domains from within the program?
A: Yes.

Multiple domains run in parallel; fibers are balanced between domains. Creating domains is heavyweight.

OCaml is very fast for immutable data and functional programs -- try to keep this unchanged. Mutability is more complicated in a multithreaded system! Use a descendant of Doligez-Leroy.

Each domain has a private minor heap; no pointers exist between minor heaps. The major heap is shared and can point into minor heaps. Domains can do completely independent minor collections. Shared heap: mostly-concurrent, inspired by VCGC.

C API -- some minor changes required (sed can fix most!), e.g. caml_modify(&Field(x,i), y) becomes caml_modify_field(x, i, y). The atomicity guarantees of the current GC are preserved.

Status: bootstraps, but the GC needs more testing, tuning and benchmarking. Bytecode only at the moment. Weak references, finalizers etc. are still missing.

Q: Are you exposing a raw memory model?
A: We'll probably settle on enforcing an ordering of stores by using a memory fence on platforms which don't support this natively.

Q: Your collector stops all of the mutators. Is there any way of doing collection in parallel with the mutators?
A: Collection happens in parallel, then the world is stopped to verify (?)

Q: If fiber does a blocking system call, does that block the entire domain?
A: Yes at the moment, but there are plans to change that.

Q: Can what you are doing be used in monadic libraries in LWT or Async?
A: Yes, but that would probably not be a good idea, because users of these libraries don't assume that their code is running concurrently.

Q: Is it possible to make sure that a particular fiber is in the same or is not in the same domain as other fibers?
A: Priorities, affinities to domains etc. are very interesting and we are just starting to provide these.

Ephemerons meet OCaml GC, by François Bobot (CEA)

François Bobot presenting "Ephemerons meet OCaml GC"

Memoization -- computing a function with a cache while avoiding memory leaks: if a key is not needed anymore, we want to remove its entry from the cache. In the particular case of heap-allocated keys, "not needed anymore" means not reachable (apart from the cache).

Naive solution: use a traditional dictionary data structure (hash table, balanced tree etc.). Problem: no key-value combination is ever released until the whole function is released!

Weak pointers -- a value not yet reclaimed can be accessed via a weak pointer, and the GC can release a value that is only pointed to by weak pointers.

Finalizers: one or several can be attached to a value. So, next idea: don't use K directly to index the cache. But what if K and V are the same value? That gives a circular dependency, and nothing ever gets released.

Next idea: keep the table in the key! (Haskell can do something like that with System.Mem.Weak.) But we can still get cycles.

Ephemerons (Hayes 1997): the value v can be reclaimed if its key k or the ephemeron itself can be reclaimed.

Implementation: the OCaml runtime is modified (GitHub pull request #22). It adds a new phase, cleaning, between mark and sweep.

Can use ephemerons to implement weak tables.
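Here is a sketch of the memoization pattern, written against an assumed ephemeron signature of my own (hash collisions and sweeping of stale entries are glossed over):

module type EPHEMERON = sig
  type ('k, 'd) t
  val make : 'k -> 'd -> ('k, 'd) t
  val query : ('k, 'd) t -> 'k -> 'd option  (* None once the key is dead *)
end

module Memo (E : EPHEMERON) = struct
  let memoize (type k) (type d) (f : k -> d) : k -> d =
    (* Index by hash, so the cache itself never keeps keys reachable. *)
    let cache : (int, (k, d) E.t) Hashtbl.t = Hashtbl.create 16 in
    fun key ->
      let h = Hashtbl.hash key in
      let cached =
        match Hashtbl.find_opt cache h with
        | Some e -> E.query e key
        | None -> None
      in
      match cached with
      | Some d -> d
      | None ->
        let d = f key in
        Hashtbl.replace cache h (E.make key d);  (* entry dies with the key *)
        d
end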

Session 2: Tools and libraries

Introduction to 0install, by Thomas Leonard (University of Cambridge)

Thomas Leonard presenting 0install

Thomas converted 0install from Python to OCaml. He will give an intro to 0install -- a decentralized, cross-platform package manager -- and report his experience porting the Python code to OCaml.

0install (www.0install.net) was created in 2003. Make one package that works everywhere. Packages can come directly from upstream (or from a distribution); switch to the latest version easily.

Example: 0install add opam http://tools.ocaml.org/opam.xml

Q: How does it deal with libraries?
A: Libraries are shared.

Porting to OCaml.

Python problems -- too slow (mobile platforms), too unreliable: no static type checking, Python 3 trouble, PyGTK breakage...

This led to the idea of porting to a different language -- lots of languages were compared (for 0install, only startup time matters).

OCaml: Bad -- the top-level has no readline support, syntax errors are hard to find, errors are unhelpful. Good -- fast!, stable, reliable.

Suggestion: an "OCaml for Python, Java, ... users" guide on the website.

Porting process: the old Python and new OCaml parts were used together, communicating via JSON. Literal translation first; refactor later in OCaml.

Results: similar LOC, 10x faster, reliable, great community!

Q: Can 0install deal with multiple versions of the same package?
A: For OCaml packages, use OPAM.

Q: How often did you run into type errors when porting to OCaml?
A: Porting the SAT solver was a little tricky. None and null are handled better in the OCaml code, and all kinds of network errors are handled now.

Q: Are you still using objects?
A: Currently getting rid of objects and making things more immutable.

Transport Layer Security purely in OCaml (*), by Hannes Mehnert (presenting, University of Cambridge), David Kaloper Meršinjak (University of Nottingham)

Hannes Mehnert presenting a new TLS implementation in OCaml

Current state: Mirage OS -- memory-safe, abstract, modular. But for security we call unsafe, insecure C code??

Motivation: protocol logic encapsulated in a declarative functional core; side effects isolated in frontends; a concise, useful, well-designed API (which should be easy in comparison to OpenSSL!).

TLS: a secure channel between two nodes; the most widely used security protocol family. Flexible algorithms: negotiation of key exchange, cipher and hash.

Detailed properties: authentication, secrecy, integrity, confidentiality, forward secrecy. Divided into protocol layers: handshake, change cipher spec, alert, application data, and heartbeat subprotocols.

Authentication (X.509): the client has a set of trust anchors (CA certificates); the server has a certificate signed by a CA; during the handshake the client receives the server certificate chain; the client verifies that the server certificate chain is signed by a trust anchor.
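Reduced to its skeleton, the chain walk looks something like this (pseudo-OCaml of mine, not the library's real API, and ignoring expiry, names and extensions):

module Chain (C : sig
  type cert
  val signed_by : cert -> signer:cert -> bool
end) = struct
  (* The chain is leaf-first: each certificate must be signed by its
     successor, and the last one by some trust anchor. *)
  let rec valid ~anchors = function
    | [] -> false
    | [ root ] -> List.exists (fun ca -> C.signed_by root ~signer:ca) anchors
    | c :: (issuer :: _ as rest) ->
      C.signed_by c ~signer:issuer && valid ~anchors rest
end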

Handshake demo using the OCaml TLS implementation.

Stats: 10 kloc (compare OpenSSL's 350 kloc); interoperable. Missing features: client authentication, session resumption, ECC ciphersuites. Roughly 5x slower than OpenSSL. Took ~3 months to implement.

Future: implement missing features, improve performance, test suites, integration with Mirage.

More at http://openmirage.org/blog/introducing-ocaml-tls

Q: Who will use this?
A: The University of Cambridge.

C: The team that discovered the Heartbleed bug ran their test framework against this library, and found five bugs in Linux but none in this library.

Q: What about timing and interaction with GC?
A: Best practice is not to use data-dependent branches and to use the same memory access patterns; there are no data-dependent allocations. But the preferred way is to handle this in the language.

Q: How about tests?
A: We have a test suite but we'd like to automatically generate tests and run them against other implementations to be able to do comparisons.

OCamlOScope: a New OCaml API Search (*), by Jun Furuse (Standard Chartered Bank, Singapore)

Jun Furuse presenting OCamlOScope

Haskell has type classes, purity and laziness, where OCaml differs. But what's really different is library search: Hoogle. It can search by name, by type, or both.

OCamlBrowser and OCaml API Search are very limited, so Jun built OCamlOScope.

OCamlBrowser works only for locally compiled code and uses the OCaml typing code. OCaml API Search was based on OCamlBrowser but has been discontinued.

Previously there were many problems with trying to build an OCaml API search. Now cmt/cmti files give the compiled AST with locations, OPAM gives unified installations, and compiler-libs makes it easier to access OCaml internals.

OCamlOScope: remote (hosted); search by edit distance.
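"Search by edit distance" is, at heart, the standard Levenshtein metric; for concreteness, a plain OCaml version (not OCamlOScope's actual code):

(* Minimum number of character insertions, deletions and substitutions
   needed to turn a into b, via the classic dynamic programming table. *)
let edit_distance a b =
  let la = String.length a and lb = String.length b in
  let d = Array.make_matrix (la + 1) (lb + 1) 0 in
  for i = 0 to la do d.(i).(0) <- i done;
  for j = 0 to lb do d.(0).(j) <- j done;
  for i = 1 to la do
    for j = 1 to lb do
      let cost = if a.[i - 1] = b.[j - 1] then 0 else 1 in
      d.(i).(j) <-
        min (min (d.(i - 1).(j) + 1) (d.(i).(j - 1) + 1))
            (d.(i - 1).(j - 1) + cost)
    done
  done;
  d.(la).(lb)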

525k entries (values, types, constructors) -- 115 OPAM packages, 185 OCamlFind packages. Note: two different package systems need to be dealt with! Scrape cmt/cmti files with OPAM, create the module hierarchy with OCamlFind; OPAM-OCamlFind package relationships must be detected.

Future work: real alias analysis (instead of the current ad-hoc grouping).

Find it at https://ocamloscope.herokuapp.com

Q: Do you plan to integrate this with an editor?
A: That would be nice, sure.

C: Could use something like your edit distance approach to improve error message and have the compiler give suggestions.

Session 3: OCaml News

The State of OCaml (invited), Xavier Leroy (INRIA Paris-Rocquencourt).

Xavier Leroy presenting the OCaml 4.02 release

Today: what's new in OCaml 4.02 (September 2014)? Many new features; 66 bugs fixed, 18 feature wishes granted.

1. Unified matching on values and exceptions

A classic programming problem: in let x = a in b, we want to catch exceptions raised by a but not those raised by b. In OCaml 4.02, the match .. with .. construct is extended to analyse both the value of a and any exception raised during a's evaluation.

let rec readfile ic accu =
  match input_line ic with
  | "" -> readfile ic accu
  | l -> readfile ic (l :: accu)
  | exception End_of_file -> List.rev accu

Compilation preserves tail-call optimization.

2. Module aliases

Common practice: bind an existing module to another name.

module A = struct
  module M = AnotherModule
  ...
end

This binding is traditionally treated like any other definition, which can cause subtle type errors with applicative functors; accesses sometimes go through more indirections; and there are linking problems.

In OCaml 4.02, these constructs are treated specially during type-checking, compilation and linking.

Discussion of the application of this to libraries.

3. Mutability of strings

For historical reasons, the string type is mutable in place and has two broad uses: it can represent text, or serve as a (mutable) I/O buffer. OCaml programmers usually keep these two uses distinct. But mistakes happen, and user code may be malicious.

In the Lafosec study (ANSSI), there are a few examples of OCaml code where one can mutate someone else's string literals, or make a Hashtbl key disappear.

A new solution by Damien Doligez: in 4.02, there are two base types -- string for text, and bytes for byte arrays. By default the two are synonymous; they become incompatible if the option -safe-string is given.

In the default mode, all existing code compiles, but warnings are issued when using String functions that mutate in place.

In -safe-string mode, values of type string are immutable (unless unsafe coercions are used); I/O code and imperative constructions of strings need to be rewritten to use the bytes type and the library functions from Bytes.
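For example (using today's stdlib names), mutation now goes through an explicit copy:

let capitalise s =
  if s = "" then s
  else begin
    let b = Bytes.of_string s in  (* a copy: the original string stays intact *)
    Bytes.set b 0 (Char.uppercase_ascii (Bytes.get b 0));
    Bytes.to_string b
  end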

Other novelties: explicitly generative functors; annotations over OCaml terms; external preprocessors that rewrite the typed AST (-ppx option); open datatypes, extensible a posteriori with additional constructors.

Performance improvements: more aggressive constant propagation, dead code elimination, common subexpression elimination, better pattern-matching and printf (based on GADTs); a representation of exceptions without arguments.

New toplevel directives to query the environment; a new port to 64-bit ARM; source reorganization: Camlp4 and Labltk have been split off from the core distribution and now live as independent projects.

What's next? Bug-fix release 4.02.1, then various language experiments in progress; more optimizations, ephemerons, GDB support, tweaks to support OPAM better.

Q: Safe Haskell is starting to catch on in the Haskell community.
A: "Safe strings" are a first step in this direction. Safe OCaml is definitely something we're thinking about.

The OCaml Platform v1.0, by Anil Madhavapeddy (presenting, C), Amir Chaudhry (C), Jeremie Diminio (JS), Thomas Gazagnaire (C), Louis Gesbert (OCamlPro), Thomas Leonard (C), David Sheets (C), Mark Shinwell (JS), Leo White (C), Jeremy Yallop (C); (C = University of Cambridge, JS = Jane Street)

Anil Madhavapeddy presenting the OPAM package manager and platform

The platform: tooling, quantitative metrics, agility to judge the impact of language changes. Ultimate goal: grow a sustainable open source community.

OPAM 1.2, "The Platform Release": solver errors are explained in plain English rather than boolean formulae; more expressive queries (reverse dependencies and recursive); the source code and repo file of any OPAM package can be cloned.

Steady growth in the number of released OPAM packages. Growth in number of contributors is much slower.

New workflow in OPAM 1.2 (See http://opam.ocaml.org/blog/opam-1-2-pin/) -- flexible clone, pin, configuration, sharing.

OCaml Platform? OPAM Platform! Tools built around OPAM that provide a modular workflow for developing, publishing and maintaining OCaml source code, both online and offline.

OPAM = OCaml PlAtforM!

OPAM 1.2 restructured: everything is built on OpamLib. On top of that library sit the "opam", "opam-publish" and "opam-doc" (with opam-units) tools.

OPAM documentation -- goal: documentation unified across packages that handles cross-referencing and module inclusion well. It's hard!

Only the typed AST is used; comments are transformed into attributes in the typed AST, and these are used by external tools.

Preview: http://ocaml.github.io/platform-dev a working prototype.

Timeline: September, online release; November, use it locally; December, build custom websites for other repositories.

Tooling: OCamlJS now supports complete compiler REPL in JS; GDB integration.

Polish: easier to package and install; binary releases on lots of platforms; documentation rewritten.

One More Thing: Assemblage -- a very alpha eDSL to describe OCaml projects, with OCaml as the host language and Merlin auto-completion. It can introspect the project description to generate build rules. Very lightweight; integrates easily.

Q: Do you have anything to help with ARM cross-compilation?
A: Global build "glue" is still required to support cross-compilation (Assemblage will help with this).

Q: Any platform is a (false?) promise to define a set of supported libraries. What's OPAM's take on this?
A: We, as the developers of the platform, don't want to be the people who define what the current set of "stable" packages is. We just build the tools.

Session 4: Language

A Proposal for Non-Intrusive Namespaces in OCaml, by Pierrick Couderc (I), Fabrice Le Fessant (I+O), Benjamin Canou (O), Pierre Chambart (O); (I = INRIA, O = OCamlPro)

The namespace proposal presentation

In OCaml, we cannot use multiple modules that have the same name.

Common practice: use long names, e.g. LibA (Misc, Map) becomes LibA (LibA_Misc, LibA_Map).

Packs: for the developer -- no code change, simply a matter of options; for the user -- use paths to distinguish modules. Cons: too many recompilations, extra dependencies, large executables!

In 4.02, we can use module aliases (option -no-alias-deps).

Advantages: ocamldep can be deceived into producing better dependencies, and namespaces can be used transparently.

Proposed solution: namespaces and "as" imports. One can import specific modules, all modules, or all modules except certain ones from a namespace.

Namespaces are not closed -- adding a module a posteriori is possible.

Comparison with module aliases: +extensibility, +simple build system, +better dependencies, +expressivity. But -new syntax.

Work in progress: big functors -- using packs to generate functors.

Conclusion: a mechanism of namespaces integrated into the language solves the compilation issues. Working prototype for 4.02: github.com/pcouderc/ocaml_namespaces

C: Openness vs. functorization.
A: With big functors, one idea would be to close namespaces, but that would not be ideal.

C: There are already module aliases and then there will be namespaces. Mixing the idea of big functors with namespaces does not seem like a good idea.

Q: How about qualified imports?
A: Interesting idea.

Q: Why do you need "in namespace" if you're already separating by directory structure?
A: It's not enough; we need to separate compilation units.
C: But directory names could be used at link time.

C: The benefits must be very clear because there already are lots of ways of importing "stuff".

Improving Type Error Messages in OCaml (*), by Arthur Charguéraud (INRIA & Université Paris Sud)

Arthur Charguéraud presenting his improved type error messages

Type errors: dozens of papers, no implementation. For beginners and experts alike, type errors can be very frustrating.

Result: a patch to the type-checker, providing alternative error messages for ill-typed toplevel definitions.

Example: missing unit argument to read_int -- report "You probably forgot to provide '()' somewhere"; refs and bangs; missing "rec" -- report "You are probably missing the 'rec' keyword".

Example: missing else branch -- laughs from the audience about the terrible type error message.

Other common errors where the error messages were improved: subexpressions, ill-typed applications, binary operators, incompatible branches, higher-order function application, occurs check errors.

Also works for optional and named arguments.

Not included: GADTs, module type checking.

Try it online: tinyurl.com/ocamleasy

Q: Why can't we have this today??
A: Need more testing, especially with higher order functions.

Github Pull Requests for OCaml development: a field report (*), by Gabriel Scherer (INRIA).

An experiment in using GitHub pull requests, running for 8 months.

Old way: Mantis.

- users report bugs, request features, propose patches
- sometimes a developer works on a PR
- a release each year, with some bugs fixed and some new features

Serious bugs get fixed, but features are mostly ignored. Patches got ignored too!

Results of the experiment: 18 new contributions (patches) from people who probably wouldn't have sent their patches in the old system, and 18 new reviews! So GitHub attracted a new crowd of contributors.

Some GitHub pull requests were handled very quickly (small fixes and changes). Developer meetings help in taking decisions. Pull requests are most effective during initial development, before the feature freeze (after that the team is busy with getting the release out).

Negative results: to be effective, external users should be told more about the development cycle. GitHub was used mostly by people already used to git.

Q: Reviews and contributions?
A: 0 reviews on Mantis, 18 on Github; ~40 patches on Mantis, 18 on Github.

Q: How about switching to git?
A: I don't believe that is going to happen -- there is so much information on Mantis right now! Continuous integration is nice, though.

Joint Poster Session (with ML Family workshop)

Irminsule: a branch-consistent distributed library database, by Thomas Gazagnaire (C), Amir Chaudhry (C), Anil Madhavapeddy (C), Richard Mortier (University of Nottingham), David Scott (Citrix System), David Sheets (C), Gregory Tsipenyuk (C), Jon Crowcroft (C); (C = University of Cambridge)

What if you had a distributed database with git-like semantics, e.g. a commit history?

Problem: merges!

It has an Obj and a Git backend, plus implementations of various persistent data structures with merge functions: prefix trees, mergeable queues, mergeable ropes.

A Case for Multi-Switch Constraints in OPAM, by Fabrice Le Fessant (INRIA)

Fabrice Le Fessant making the case for multi-switch constraints in OPAM

OPAM: the official way to install OCaml packages. It builds a CUDF universe with the packages available for the switch and the dependency constraints between these packages, sends the universe to the CUDF solver, then applies the solution to the switch.

Multi-switch constraints: add a switch prefix to each package name, and allow dependency constraints between packages with different switch prefixes.

Use cases: cross-compilation (build and host switches), multi-switch packages, all-switch commands, per-switch repositories, external dependencies, application-specific switches.

No implementation yet -- just an idea for discussion.

LibreS3: design, challenges, and steps toward reusable libraries, by Edwin Török (Skylable Ltd.). Slides. Poster.

Edwin Török presenting the LibreS3 library

Concepts: monads; the S3 server -- a proprietary Amazon service, for which FOSS replacements exist.

Architecture: don't choose one particular library/framework; be careful with acquiring resources (with_resource); some code in LibreS3 has been generalized (any-cache, any-http).

Found some interesting bugs in OCaml and libraries while developing LibreS3. Debugging monadic code is hard! Stack traces from monadic concurrency are incomplete.

More: goo.gl/jmFOcn

Nullable Type Inference (#), by Michel Mauny and Benoit Vaugon (ENSTA-ParisTech)

Nullable Type Inference presentation

A nullable type t? includes NULL and the values of type t. The authors provide a type inference algorithm featuring HM polymorphism that statically guarantees that NULL cannot be used as a regular value (the algorithm's soundness has been proved).

Nullable types are used in practice: Hack (Facebook), Swift (Apple)

Applications

Coq of OCaml, by Guillaume Claret (Université Paris Diderot)

OCaml: FP with imperative features, many libraries and programs.

Coq: mainly used as a proof language, purely functional (only terminating programs!), dependent types, limited number of libraries.

Coq to OCaml: extraction mechanism developed by Pierre Letouzey -- removes proof terms, complete.

OCaml to Coq: CFML is a deep embedding -- but how to do a shallow embedding? How to import imperative programs? How to keep the resulting code small?

Use a monadic translation to represent imperative features in Coq.

Usage: prove formal properties of OCaml programs; augment the number of Coq libraries for programming.

Effects descriptor: a set of atomic effects. Inference: infer types using the OCaml compiler, then a bottom-up analysis, with a fixpoint for mutually recursive definitions. Effects are then represented as monads in Coq.

There is one monad per descriptor of effects -- how do we compose them? Impossible in general, but doable when we restrict the shape of the monads.

Monads exist to model global references, exceptions, i/o, non-termination.

Supported language fragment: pure lambda-calculus kernel, mutually recursive functions, inductive types, records, ADTs, ...

Compilation passes: 1. import the syntax tree typed by the OCaml compiler; 2. infer effects; 3. do monadic transformation; 4. pretty-print to Coq syntax.

Challenges: generate human-readable code, support real OCaml programs, import basic libraries, support functors (not currently implemented).

Conclusions: it is hard to make the compiler work on real examples; functors are complex!

Q: The granularity of effects matters a lot when you use this to prove properties of your OCaml code.
A: Yes.

Q: How can I use this to prove properties about recursion -- e.g. whether they are tail-recursive?
A: (?)

Q: Do you handle local references?
A: Not at this moment.

Q: You only support part of the Pervasives library. Which parts do you not support?
A: Effectful functions can't currently be passed as arguments, so for example List.iter is not supported in its usual use case.

High Performance Client-Side Web Programming with SPOC and Js of ocaml (*), by Mathias Bourgoin and Emmanuel Chailloux (Université Pierre et Marie Curie)

Mathias Bourgoin presenting SPOC

This is the OCaml Users and Developers Workshop 2014 in Gothenburg.

Parallel OCaml? There are lots of libraries. One is SPOC = Stream Processing OCaml (with OpenCL).

GPGPU programming. Two main frameworks: Cuda and OpenCL. They use different languages to write kernels (assembly or C/C++ subsets) and to manage kernels (more or less any general-purpose language can be used here).

SPOC compiles to Cuda or OpenCL. It contains the Sarek DSL and a runtime.

WebSpoc: compile OCaml with SPOC and then to JS with js_of_ocaml.

Demo: Using SPOC from within a browser to do image manipulation.

WebSpoc helps develop complex web apps with intensive computations. Good for GPGPU courses.

Future work: add a JS memory manager dedicated to SPOC vectors; add bindings to WebGL.

Demo online: http://mathiasbourgoin.github.io/SPOC/tutos/OCaml2014.html

SPOC and Sarek are available in OPAM.

Q: Is there potential to run this code on the GPU of a mobile phone?
A: Not yet.

Using Preferences to Tame your Package Manager, Roberto Di Cosmo (presenting, D+I), Pietro Abate (D), Stefano Zacchiroli (D), Fabrice Le Fessant (I), Louis Gesbert (OCamlPro); (D = Université Paris Diderot, I = INRIA)

Ten years of research on package management -- EDOS and MANCOOSI -- bridging research communities. Used as the foundation of OPAM.

LOTS of package managers out there! Binary, source, language specific, application specific, decentralized, functional approach.

What's inside? Two sides: the people maintaining packages (server side) maintain a coherent set of packages; the "client side" fetches and authenticates metadata and packages, resolves dependencies, and applies user preferences.

Dependency solving: installability of a single package and co-installability of a set of packages are NP-complete problems.

Application: find the uninstallable packages in a repository -- just call a SAT solver on each package in the repository! (The "dose" library is specialized for this task.)
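As a toy example of the encoding (mine, not from the talk): one boolean variable per package and one clause per relationship. If package A depends on B or C, and B conflicts with C, we get

(\lnot A \lor B \lor C) \land (\lnot B \lor \lnot C)

and A is installable iff these clauses together with the unit clause (A) are satisfiable.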

ows.irill.org is the OPAM Weather Service -- shows which packages can be installed together.

Finding a solution is NP-complete, but installing and upgrading is more demanding -- how many ways are there to install a package? Exponentially many.

Towards modular package managers: 0. stop coding a petty solver for every new component-based system; 1. use a common upgrade description format; 2. provide means for expressing user choice.

User preferences are built from four ingredients: package selectors, measurements, maximisation/minimisation, and aggregation.

Example for minimisation: -count(removed) -- we want a solution where the number of removed packages is minimised.

Example profiles:

"Paranoid": -count(removed),-count(changed)
"Trendy": -count(removed),-notuptodate(solution),-count(new)

Slightly more exotic:

"Minimal system": -sum(solution,installedsize),-count(solution)
"Noah's ark": +count(solution)
"Fast bootstrap": -sum(solution,compiletime)

Repairing a broken system configuration:

"Fixup simple": -count(changed)
"Fixup trendy": -count(changed),-count(down),-notuptodate(solution)

This is all possible with OPAM 1.2! (Check opam --help and look for --criteria.)

You can help: try different profiles, test expressivity of the preference language, help debug OPAM.

Conclusion: package managers are complex, and a very hard part is dependency solving! Modern package managers should share common components, in particular dependency solvers and the user preference language.

Q: Does this keep track of explicitly installed packages?
A: Yes, OPAM knows which packages were installed directly (the "root set") and which were installed in order to satisfy dependencies.

Q: Why don't other package managers use this technology now?
A: Package managers are core parts of any system, so people tend to be very resistant to change.

Simple, efficient, sound-and-complete combinator parsing for all context-free grammars, using an oracle (*), by Tom Ridge (University of Leicester)

The P3 combinator parsing library can handle all context-free grammars (CFGs), either scannerless or with a lexer. Good theoretical basis, but slow.

Example given.

Memoized counting -- you can't do this with any other parser!

Compute actions over all good parse trees; there are exponentially many such parse trees, but this doesn't have to take exponential time!

Disambiguation: directly encode in code with using "option".

Supports modular combination, e.g. "helper parsers".

Comparison with Happy: P3 is faster, in some cases ridiculously so.

Long-term aim: a general parser, with verified correctness and performance, usable in the real world.

Q: What's the use case?
A: It's extremely flexible, but slower than deterministic parsers.