syslog
8 Nov 2017

SOSP’17 Trip Report

Posted by Jianxin Zhao

I'm glad to have had the opportunity to present our poster at SOSP, one of the top conferences in the field of systems research. Here is a brief review of the papers from this conference that I found interesting.

Debugging

One of this year's Best Paper awards went to "DeepXplore", a whitebox framework for systematically testing real-world deep learning (DL) systems. Despite the venue, the focus of this paper is, in my opinion, more on machine learning than on systems. The framework, or algorithm, tries to automatically find corner cases that invalidate your deep learning method. For example, suppose you have an image classification DL model. Even if it works perfectly in 99% of cases, there are always corner cases that make the model behave abnormally, e.g. recognising a panda as, say, a dinosaur. And obviously a user cannot find these potentially numerous cases and label them manually one by one.

What this algorithm proposes is to use a set of models built for the same purpose, and to use gradient ascent on the input to find an example that maximises the disagreement among these models. It also tries to maximise the activation of an inactive neuron to push it above a threshold (the newly proposed "neuron coverage" metric). The whole problem is formalised as a joint optimisation problem, and it is shown to be more effective than random testing and adversarial testing. I was particularly curious about the execution time needed to find a corner-case input: only seconds, as reported! Amazing.
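To make the joint objective a bit more concrete, here is a minimal numpy sketch of the idea, under heavy assumptions: the three random linear "models", the coefficients and the single-unit coverage term are all invented for illustration, and this is not the DeepXplore implementation (which works on real DNNs and adds domain constraints on the input).

```python
import numpy as np

rng = np.random.default_rng(0)

# Three toy "models": random linear classifiers over a 10-d input, 3 classes.
W = [rng.normal(size=(3, 10)) for _ in range(3)]

def scores(i, x):
    return W[i] @ x                        # class scores of model i

def objective(x, unit=0, lam1=1.0, lam2=0.1):
    # Keep models 1 and 2 on the label that model 0 gives the seed input,
    # while pushing model 0 away from it (differential behaviour), plus a
    # crude "coverage" term that rewards activating one chosen unit.
    label = int(np.argmax(scores(0, x0)))
    keep = sum(scores(i, x)[label] for i in (1, 2))
    push = scores(0, x)[label]
    coverage = np.maximum(W[0][unit] @ x, 0.0)
    return keep - lam1 * push + lam2 * coverage

def num_grad(f, x, eps=1e-4):
    # numerical gradient, to keep the sketch free of an autodiff framework
    g = np.zeros_like(x)
    for j in range(x.size):
        d = np.zeros_like(x); d[j] = eps
        g[j] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

x0 = rng.normal(size=10)                   # the seed input
x = x0.copy()
for _ in range(200):                       # gradient ascent on the input
    x += 0.05 * num_grad(objective, x)

print("models disagree on the mutated input:",
      np.argmax(scores(0, x)) != np.argmax(scores(1, x)))
```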

Resource Management

The paper "Monotasks" is about performance reason-ability of data analytics system. By "reason-ability" I mean questions like what hardware/software configuration should a user use and possible reasons to poor system performance. Currently data analytics systems such as Spark provide fin-grained pipelining to parallelise use of CPU, network, and disk within each task. But this kind of pipelining is the exact reason that we cannot reason the system performance, because tasks have non-uniform resource use, concurrent tasks on a machine may contend, and resource use may occurs outside the control of the system.

The basic idea of monotasks is that jobs are decomposed into units of work that each use exactly one of the CPU, network, and disk resources, so that resource usage is uniform and predictable. These work units form a DAG, and each kind of resource has its own scheduler. The decomposition happens when the jobs arrive on the worker machine.
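Here is a toy sketch of that decomposition, with invented names and structure (not the paper's implementation): one "map" task split into a disk, a CPU and a network monotask, one queue per resource, and dependencies tracked as a small DAG.

```python
from collections import defaultdict, deque
from dataclasses import dataclass, field

@dataclass
class Monotask:
    name: str
    resource: str                 # "disk", "cpu" or "network"
    deps: list = field(default_factory=list)

# one toy task decomposed into three monotasks, each using one resource
read    = Monotask("read block", "disk")
compute = Monotask("deserialize + map", "cpu", [read])
write   = Monotask("ship result", "network", [compute])

queues = defaultdict(deque)       # one queue (scheduler) per resource
done = set()
pending = [read, compute, write]

while pending or any(queues.values()):
    # move monotasks whose dependencies have finished onto their resource queue
    for m in [m for m in pending if all(d.name in done for d in m.deps)]:
        queues[m.resource].append(m)
        pending.remove(m)
    # each per-resource scheduler runs at most one monotask at a time
    for resource, q in queues.items():
        if q:
            m = q.popleft()
            print(f"{resource:8s} -> {m.name}")
            done.add(m.name)
```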

What I find most interesting in this paper is the idea of "going back": eliminating pipelining and sacrificing some performance in exchange for reason-ability. The implementation also doesn't require any modification of user code.

Security

The other best paper "Efficient Server Audit" revisit the classic topic of execution integrity with a newly-defined efficient server audit problem: since your have no visibility into AWS, how can you assure that AWS (executor)is executing the actual application you've written? Specifically, The verification algorithm (verifier) is given an accurate trace of the executor's inputs and delivered outputs. The executor gives the verifier reports, but these are untrusted. The verifier must somehow use the reports to determine whether the outputs in the trace are consistent with having actually executed the program. The problem is to design the verifier and the reports.

In the proposed solution, requests that follow the same control flow through the executor are grouped, and the verifier re-executes each control flow group in a batch. Re-executed read operations are fed from the most recent write entry in the logs, and the verifier checks the logged write operations opportunistically, with the assurance that the alleged operations can be ordered consistently with the observed requests and responses. Finally, the verifier compares the output each request produces during re-execution to that request's output in the trace, across all control flow groups.
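As a heavily simplified sketch of the re-execution step only: the toy program, request format and log layout below are invented, and control-flow grouping, batching and the write-ordering check are all omitted.

```python
def program(req, read):
    # the toy application: increment a counter named in the request
    old = read(req["key"])
    return old + 1                       # output == new counter value

trace = [                                # observed (request, output) pairs
    ({"key": "x"}, 1),
    ({"key": "x"}, 2),
]
write_log = {"x": [0, 1]}                # alleged value seen by each read, in order

def audit(trace, write_log):
    cursors = {k: iter(v) for k, v in write_log.items()}
    for req, observed in trace:
        def read(key):
            # feed the re-executed read from the next logged write entry
            return next(cursors[key])
        if program(req, read) != observed:
            return False                 # executor's outputs are inconsistent
    return True

print("audit passed:", audit(trace, write_log))
```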

Data Analytics

Stores for temporal analytics are crucial to many important data analytics applications, such as recommender systems, financial analysis, Smart Home, etc. "SummaryStore" is an approximate store for large-scale time-series analysis, and it aims to provide high query accuracy while keeping storage extremely cost-effective.

Three main techniques are used here: 1) it maintains compact summaries through aggregates, samples, sketches, and other probabilistic data structures (e.g. Bloom filters) instead of raw data; 2) based on the observation that newer data often conveys more information, it defines a novel time-decayed approximation scheme; 3) some special events are stored separately, as-is, in raw format rather than being summarised. The appendix provides a comparison of different decay functions and an estimation of the query error bounds.
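A small sketch of the time-decay idea, assuming a simple exponential window layout and a handful of aggregates per window; the paper's decay functions and summary structures are richer than this.

```python
# Newest data is kept at fine granularity, older data is collapsed into
# ever-larger windows that hold only aggregates instead of raw points.

def window_spans(total_time, base=1, growth=2):
    # offsets into the past, newest first: 1, 2, 4, 8, ... time units wide
    spans, t, w = [], 0, base
    while t < total_time:
        spans.append((t, min(t + w, total_time)))
        t += w
        w *= growth
    return spans

events = [(i, float(i % 7)) for i in range(100)]     # (timestamp, value)
now = 100

store = []
for start, end in window_spans(now):
    lo, hi = now - end, now - start                  # age range -> time range
    vals = [v for ts, v in events if lo <= ts < hi]
    store.append({"range": (lo, hi), "count": len(vals),
                  "sum": sum(vals), "max": max(vals, default=None)})

for w in store:
    print(w)
```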

Privacy

A common theme in the Privacy session is "shuffle".

Atom is an anonymous messaging system that both scales horizontally like Tor and provides clear security properties under precise assumptions, like DC-net-based systems. The threat model assumed here includes a global adversary and malicious servers. The main techniques are onion-encrypted messages and random permutation of the output.

Interestingly, the next paper, Stadium, focuses on the exact opposite application area: one-to-one communication, compared with the broadcast style of Atom. The main problem this work wants to solve is how to hide communication metadata while scaling horizontally. It builds on the SOSP'15 paper Vuvuzela, whose problem is that every server has to handle all messages; the challenge here is how to distribute that work across untrusted servers. The main design combines collaborative noise generation, message shuffling, and a verifiable parallel mixnet processing pipeline.

Prochlo from Google proposes an ESA system architecture (encode, shuffle, analyse) for large-scale online monitoring of client software behaviour, which may entail systematic collection of user data. One interesting point is that, to strengthen ESA, this work proposes Stash Shuffle, a shuffling algorithm based on Intel's SGX. One of the questions raised at the conference was: "Why is trusting Intel better than trusting Google?" :)
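A toy sketch of the encode-shuffle-analyse split, with invented encoding and thresholding steps; Prochlo's actual mechanisms, including the SGX-based Stash Shuffle, are far more involved.

```python
import random

reports = [("user%03d" % i, random.choice(["crash", "hang", "ok"]))
           for i in range(1000)]

def encode(user_id, value):
    # strip identity (and, in general, fragment/randomise) before data leaves the client
    return value

def shuffle(encoded):
    batch = list(encoded)
    random.shuffle(batch)                 # break linkage between origin and arrival order
    return batch

def analyse(batch, threshold=20):
    counts = {}
    for v in batch:
        counts[v] = counts.get(v, 0) + 1
    # only report values observed from many clients (crowd-blending)
    return {v: c for v, c in counts.items() if c >= threshold}

print(analyse(shuffle(encode(u, v) for u, v in reports)))
```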

Scalability

Facebook's SVE system is deployed in production for uploading and processing videos at large scale, with low latency to support interactive applications, reliability, and robustness. The key idea is very simple: process one video while another is still being uploaded, overlapping the two stages in a pipeline. Simple as it is, it shows good performance compared to Facebook's previous MES system. This is indeed one of the advantages of "being large scale".
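A toy sketch of the overlap, with invented names and delays: the processing worker picks up each video as soon as its upload lands, so uploading and processing proceed concurrently instead of back-to-back.

```python
import queue, threading, time

videos = [f"video-{i}" for i in range(5)]
uploaded = queue.Queue()

def upload():
    for v in videos:
        time.sleep(0.1)                  # pretend network transfer
        uploaded.put(v)
    uploaded.put(None)                   # end-of-stream marker

def process():
    while (v := uploaded.get()) is not None:
        time.sleep(0.1)                  # pretend re-encoding
        print("processed", v)

t0 = time.time()
up, proc = threading.Thread(target=upload), threading.Thread(target=process)
up.start(); proc.start(); up.join(); proc.join()
print(f"pipelined total: {time.time() - t0:.1f}s "
      f"(vs ~{0.2 * len(videos):.1f}s if we upload everything, then process)")
```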

Kernels

Here is a paper that is quite closely related to unikernels: "My VM is lighter than your container". This paper "find[s] and eliminate[s] bottlenecks (such as image size) when launching large numbers of lightweight VMs (both unikernels and minimal Linux VMs)". The results show a 4ms boot time for a unikernel-based VM, while a Docker container takes 150ms to boot. The system provides tools to automatically build minimised images, and also replaces the Xen control plane so that front-end and back-end drivers can communicate directly through shared memory. This paper reminds me of "Mosaic" from EuroSys'17, which also tries to fit more computation units onto one computer.

Other Sessions

Obviously we cannot (or, more specifically, I cannot) cover the essence of a SOSP paper in one or two paragraphs, and there are many other great papers not mentioned here. If you're interested, please visit the program to see the full list of accepted papers.

One more thing...

SOSP this year also hosted the ACM Student Research Competition. 16 entries were selected from 42 submitted abstracts, and 6 of those were further selected to give a 5-minute presentation. The presentations were surprisingly interesting. Please see the list of winners (actually all 6 participants, divided into graduate and undergraduate groups) here.

Also, this year's female participation rate is 6.6% for authors and 7% for the Program Committee, both slightly lower than the previous three years' numbers.

5 Nov 2013

Live-blog from SOSP 2013 – Day 2

Posted by Ionel Gog

The second day of the 24th ACM Symposium on Operating Systems Principles has just started.  Several of us are here and will be live-blogging the talks and Q&A sessions.

4 Nov 2013

Live-blog from SOSP 2013

Posted by Malte Schwarzkopf

The 24th ACM Symposium on Operating Systems Principles is about to kick off in beautiful Nemacolin Woodlands this morning. Several of us are here and will be live-blogging the talks and Q&A sessions. We've got exciting papers on multi-core scalability, fast transactions, synchronization and many other topics coming up, and that is just the first session!


26 Apr 2012

NSDI 2012 Day 3

Posted by Jon Crowcroft

10 New Architectures and Platforms

26 Apr 2012

NSDI 2012 Day 2

Posted by Jon Crowcroft

Thursday: Sessions 5, 6, 7, 8

Session 5: Privacy