syslog
27 Mar 2018

At LSE workshop on data science, Mar 27/28

Posted by Jon Crowcroft

See
http://www.lse.ac.uk/Statistics/Events/Workshop-on-Data-Science-Theory-and-Practice/Programme-at-a-glance
Lots of good speakers! Will see if slides become available - e.g. the first speaker's blog:
http://inverseprobability.com/2018/02/06/natural-and-artificial-intelligence

19 Feb 2018

AAAI/AIES’18 Trip Report

Posted by Jianxin Zhao

I was recently honoured to have the opportunity to present our work “Privacy-preserving Machine Learning Based Data Analytics on Edge Devices” at the AIES'18 conference, co-located with AAAI'18, one of the top conferences in the field of AI and machine learning. Here is a brief review of some of the papers and trends from the conference that I found interesting.

Activity detection is obviously a hot topic, and new building blocks are being invented. “Action Prediction from Videos via Memorizing Hard-to-Predict Samples” aims to improve prediction accuracy, since one challenge is that different actions may share similar early-phase patterns. The proposed solution is a new CNN plus an augmented LSTM network block. “A Cascaded Inception of Inception Network with Attention Modulated Feature Fusion for Human Pose Estimation” proposes a new “Inception-of-Inception” block to address current limitations in preserving low-level features, adaptively adjusting the importance of different levels of features, and modelling the human perception process. Research also focuses on reducing computation overhead. “R-C3D: Region Convolutional 3D Network for Temporal Activity Detection” aims to reduce the time of activity detection by sharing convolutional features between the proposal and classification pipelines. “A Self-Adaptive Proposal Model for Temporal Action Detection based on Reinforcement Learning” proposes that an agent can learn to find actions by continuously adjusting the temporal bounds in a self-adaptive way, reducing the required computation.

Face identification is also a widely discussed topic. “Dual-reference Face Retrieval” proposes a mechanism for recognising a face at a specific age range. The solution is to take another reference image from the target age range, then search for similar faces of a similar age. Person re-identification associates various person images, captured by different surveillance cameras, with the same person. Its main challenge is the large quantity of noisy video sources. In “Video-based Person Re-identification via Self Paced Weighting”, the authors claim that not every frame in a video should be treated equally. Their approach reduces noise and improves detection accuracy. In “Graph Correspondence Transfer for Person Re-identification”, the authors tackle the spatial misalignment caused by large variations in view angle and human pose.

To improve deep neural networks, many researchers seek to transfer learned knowledge to new environments. “Region-based Quality Estimation Network for Large-scale Person Re-identification” is another paper on person re-identification. It proposes a training method to learn the lost information from other regions, and thus performs well on low-quality input. “Multispectral Transfer Network: Unsupervised Depth Estimation for All-day Vision” estimates a depth image from a single thermal image. “Less-forgetful Learning for Domain Expansion in DNN” enhances a DNN so that it remembers previously learned information when learning new data from a new domain. Another line of research enhances training data generation. “Mix-and-Match Tuning for Self-Supervised Semantic Segmentation” reduces the dataset size required for training a segmentation network. “Hierarchical Nonlinear Orthogonal Adaptive-Subspace Self-Organizing Map based Feature Extraction for Human Action Recognition” aims to solve the problem that feature extraction needs large-scale labelled data for training; its solution is to adaptively learn effective features from data without supervision.

One common theme in these works is reducing computation overhead. “Recurrent Attentional Reinforcement Learning for Multi-label Image Recognition” achieves this by locating redundant computation in the region-proposal stage of image recognition. “Auto-Balanced Filter Pruning for Efficient Convolutional Neural Networks” compresses network modules by discarding a large fraction of filters in a proposed two-pass training approach. Another trend is combining multiple input sources to improve accuracy. “Action Recognition with Coarse-to-Fine Deep Feature Integration and Asynchronous Fusion” combines multiple video streams to achieve more precise feature extraction at different granularities. “Multimodal Keyless Attention Fusion for Video Classification” combines multiple single-modal models, such as RGB, flow, and sound models, to address the problem that CNN and RNN models are difficult to combine for joint end-to-end training directly on large-scale datasets. “Hierarchical Discriminative Learning for Visible Thermal Person Re-Identification” improves person re-identification by cross-comparing normal and thermal video streams.

It is not a surprise that few system-related papers are presented at this conference. “Computation Error Analysis of Block Floating Point Arithmetic Oriented Convolution Neural Network Accelerator Design” focuses on the overhead of floating-point arithmetic when transplanting CNNs onto FPGAs. “AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training” proposes a gradient compression technique to reduce the communication bottleneck in distributed training.
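
AdaComp's actual scheme adapts its compression ratio per layer; the sketch below shows only the generic residual top-k idea behind such techniques (the function and parameter names are mine, not the paper's): transmit the largest accumulated gradient entries and carry the remainder over locally to the next step.

    import numpy as np

    def residual_topk(grad, residual, k_frac=0.01):
        # Accumulate this step's gradient with the leftover residual.
        acc = grad + residual
        k = max(1, int(k_frac * acc.size))
        # Indices of the k largest-magnitude entries.
        idx = np.argsort(np.abs(acc), axis=None)[-k:]
        sent = np.zeros_like(acc)
        sent.flat[idx] = acc.flat[idx]   # sparse update actually transmitted
        return sent, acc - sent          # unsent part carried to the next step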

Research from industry takes up a large portion of the conference. IBM presents a series of papers and demos. For example, “Dataset Evolver” is an interactive Jupyter-notebook-based tool to help data scientists perform feature engineering for classification tasks, and “DLPaper2Code: Auto-generation of Code from Deep Learning Research Papers” proposes to automatically extract design flow diagrams and tables from existing research, translate them to abstract computational graphs, and then to Keras/Caffe implementations. A full list of IBM’s work at AAAI can be seen here. Google presents “Modeling Individual Labelers Improves Classification” and “Learning to Attack: Adversarial Transformation Networks”, and Facebook shows “Efficient Large-scale Multi-modal Classification”. Both companies focus on specific application fields, compared with IBM’s wide spectrum of research. Much of the industrial work is closely tied to applications, such as Alibaba’s “A Multi-task Learning Approach for Improving Product Title Compression with User Search Log Data”. Curiously, though, financial companies seem to be absent from the conference.

On the other hand, the top universities tend to focus on theoretical work. “Interpreting CNN Knowledge Via An Explanatory Graph” from UCLA aims to explain a CNN model and improve its transparency. Tokyo University presents “Constructing Hierarchical Bayesian Network with Pooling” and “Alternating circulant random features for semigroup kernels”. CMU presents “Brute-Force Facial Landmark Analysis with A 140,000-way classifier”. Together with ETH Zurich, MIT shows “Streaming Non-monotone submodular maximization: personalized video summarization”. However, the work of UC Berkeley seems to be absent from this conference.

Adversarial learning is one key topic across different vision-related research areas, for example “Adversarial Discriminative Heterogeneous Face Recognition”, “Extreme Low Resolution Activity Recognition with Multi-Siamese Embedding Learning”, and “End-to-End United Video Dehazing and Detection”. One of the tutorials, “Adversarial Machine Learning”, gives an excellent introduction to the state of the art on this topic. Prof. Zoubin Ghahramani from Uber gives a talk on his vision of probabilistic AI, another of the trends at this conference.

Best paper of this year goes to “Memory-Augmented Monte Carlo Tree Search” from University of Alberta, and best student paper to “Counterfactual Multi-Agent Policy Gradients” from, ahem, the other place.

These papers only scratch the surface of AAAI’18, and mostly cover computer vision, my personal interest. If you are interested, please refer to the full list of accepted papers.

8 Nov 2017

SOSP’17 Trip Report

Posted by Jianxin Zhao

I'm glad to have had the opportunity to present our poster at SOSP, one of the top conferences in the field of systems research. Here is a brief review of the papers from the conference that I found interesting.

Debugging

One of this year's best papers goes to "DeepXplore", a whitebox framework for systematically testing real-world deep learning (DL) systems. IMO the focus of this paper is more on machine learning than on systems. The framework, or algorithm, tries to automatically find corner cases that invalidate your deep learning method. For example, suppose you have an image-classification DL model. Even if it works perfectly in 99% of cases, there are always corner cases that make your model behave abnormally, e.g. recognising a panda as, say, a dinosaur. And obviously a user cannot find these potentially numerous cases and label them manually one by one.

What this algorithm proposes is to take a set of models built for the same purpose and use gradient ascent to find an input example that maximises the disagreement among them. It also tries to maximise the activation of an inactive neuron to push it above a threshold - the newly proposed "neuron coverage" idea. The whole problem is formalised as an optimisation problem, and is shown to be more efficient than random testing and adversarial testing. I was very interested in the execution time to find a corner-case input - only seconds, as reported! Amazing.
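
As a minimal, dependency-light sketch of that objective (the toy names and the finite-difference gradient are mine; the paper computes analytic gradients and targets a chosen inactive internal neuron): given a list of models, each a callable mapping an input array to a vector of class scores, climb a joint objective that rewards the first model disagreeing with the rest, plus a stand-in coverage term.

    import numpy as np

    def deepxplore_objective(models, x, lam=0.5):
        # Class predicted by the first model on input x.
        scores = [m(x) for m in models]
        c = int(np.argmax(scores[0]))
        # Differential term: other models keep class c, first model loses it.
        diff = sum(s[c] for s in scores[1:]) - scores[0][c]
        # Stand-in "coverage" term: mean activation of model 0's outputs.
        return diff + lam * float(np.mean(scores[0]))

    def find_corner_case(models, x0, steps=50, lr=0.05, eps=1e-4):
        # Finite-difference gradient ascent on the input itself.
        x = x0.astype(float).copy()
        for _ in range(steps):
            g = np.zeros_like(x)
            base = deepxplore_objective(models, x)
            for i in range(x.size):
                xp = x.copy()
                xp.flat[i] += eps
                g.flat[i] = (deepxplore_objective(models, xp) - base) / eps
            x += lr * g
        return x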

Resource Management

The paper "Monotasks" is about performance reason-ability of data analytics system. By "reason-ability" I mean questions like what hardware/software configuration should a user use and possible reasons to poor system performance. Currently data analytics systems such as Spark provide fin-grained pipelining to parallelise use of CPU, network, and disk within each task. But this kind of pipelining is the exact reason that we cannot reason the system performance, because tasks have non-uniform resource use, concurrent tasks on a machine may contend, and resource use may occurs outside the control of the system.

The basic idea of monotasks is that jobs are decomposed into units of work that each use exactly one of the CPU, network, and disk resources, so that usage is uniform and predictable. These work units form a DAG. Each kind of resource has its own scheduler. The decomposition happens when the job arrives at the worker machine.

What I find most interesting in this paper is the idea of "going back": eliminating pipelining and sacrificing some performance for reason-ability. Also, the implementation doesn't require modifying user code.
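
A toy sketch of the scheme (class and queue names are mine, not the paper's implementation): one dedicated scheduler per resource, with the DAG edges expressed as continuations that enqueue the next monotask.

    import queue
    import threading
    import time

    class ResourceScheduler:
        # One queue and one worker per resource: a monotask uses exactly
        # one resource, so per-resource load is uniform and predictable.
        def __init__(self, name):
            self.name, self.q = name, queue.Queue()
            threading.Thread(target=self._loop, daemon=True).start()
        def submit(self, fn):
            self.q.put(fn)
        def _loop(self):
            while True:
                self.q.get()()  # run monotasks one at a time per resource

    # A job is decomposed on arrival into a DAG of monotasks:
    # disk read -> CPU compute -> network send, each on its own scheduler.
    disk, cpu, net = (ResourceScheduler(n) for n in ("disk", "cpu", "net"))
    disk.submit(lambda: cpu.submit(lambda: net.submit(lambda: print("done"))))
    time.sleep(0.1)  # let the toy pipeline drain before exit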

Security

The other best paper, "Efficient Server Audit", revisits the classic topic of execution integrity with a newly defined efficient server audit problem: since you have no visibility into AWS, how can you be sure that AWS (the executor) is executing the actual application you've written? Specifically, the verification algorithm (the verifier) is given an accurate trace of the executor's inputs and delivered outputs. The executor gives the verifier reports, but these are untrusted. The verifier must somehow use the reports to determine whether the outputs in the trace are consistent with having actually executed the program. The problem is to design the verifier and the reports.

In the proposed solution, requests with the same executor control flow are grouped, and the verifier re-executes each control-flow group as a batch. Re-executed read operations are fed from the most recent write entry in the logs, and the verifier checks logged write operations opportunistically, with the assurance that the alleged operations can be ordered consistently with the observed requests and responses. Finally, the verifier compares each request's output as produced by the executor against the request's output in the trace, across all control-flow groups.
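
A skeletal sketch of the verifier's outer loop (the names and data layout are mine, and the opportunistic write checks and ordering checks are omitted):

    from collections import defaultdict

    def audit(trace, reports, re_execute):
        # trace: list of (request, observed_output) captured in a trusted way;
        # reports: untrusted per-request records of control-flow label and
        # logged reads; re_execute(request, reads) re-runs the program,
        # feeding each read from the most recent logged write.
        groups = defaultdict(list)
        for req, out in trace:
            groups[reports[req]["control_flow"]].append((req, out))
        for _flow, items in groups.items():      # batch by control flow
            for req, observed in items:
                produced = re_execute(req, reports[req]["reads"])
                if produced != observed:
                    return False  # trace inconsistent with claimed execution
        return True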

Data Analytics

Stores for temporal analytics are crucial to many important data analytics applications, such as recommender systems, financial analysis, smart homes, etc. "SummaryStore" is built for large-scale time-series analysis. This approximate time-series store aims to provide high query accuracy while keeping storage extremely cost-effective.

Three main techniques are used: 1) it maintains compact summaries through aggregates, samples, sketches, and other probabilistic data structures (e.g. Bloom filters) instead of raw data; 2) based on the observation that newer data often conveys more information, it defines a novel time-decayed approximation scheme; 3) some special events are stored separately in raw form rather than being summarised. The appendix provides a comparison of different decay functions and an estimation of the query error bound.
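
A toy illustration of the decay idea (not SummaryStore's actual decay functions): keep recent data in small windows and merge older data into exponentially larger windows, storing only an aggregate per window.

    def decayed_windows(points, base=2):
        # Newest data in the smallest window, older data merged into ever
        # larger ones; each window keeps only (start, end, count, sum).
        windows, size, i = [], 1, len(points)
        while i > 0:
            lo = max(0, i - size)
            chunk = points[lo:i]
            windows.append((lo, i, len(chunk), sum(chunk)))
            i, size = lo, size * base
        return list(reversed(windows))

    # e.g. 15 readings -> windows of sizes 8, 4, 2, 1 (oldest to newest)
    print(decayed_windows(list(range(15))))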

Privacy

A common theme in the Privacy session is "shuffle".

Atom is an anonymous messaging system that both scales horizontally like Tor and provides clear security properties under precise assumptions, like DC-net-based systems. The threat model here is a global adversary and malicious servers. The main techniques are onion messages and random output permutation.
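
The route-and-permute pattern at the heart of these systems can be sketched in a few lines (a conceptual toy: each server is modelled as a function stripping one onion layer, and the verifiability machinery the real systems add on top is omitted):

    import random

    def mixnet_round(ciphertexts, servers):
        # Each server strips one onion layer and randomly permutes the
        # batch, so no single server can link an input to an output.
        for strip_layer in servers:       # each server as a decrypt function
            ciphertexts = [strip_layer(c) for c in ciphertexts]
            random.shuffle(ciphertexts)   # random output permutation
        return ciphertexts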

Interestingly, the next paper, Stadium, focuses on the exact opposite application area: one-to-one communication, compared with the broadcast style of Atom. The main problem this work wants to solve is how to hide communication metadata. It scales horizontally. The work builds on the SOSP'15 paper Vuvuzela, whose problem is that every server has to handle all messages; the challenge here is how to distribute the work across untrusted servers. The main design combines collaborative noise generation, message shuffling, and a verifiable parallel mixnet processing pipeline.

Prochlo from Google proposes an ESA system architecture - encode, shuffle, analyse - for large-scale online monitoring of client software behaviour, which may entail systematic collection of user data. One interesting point is that, to strengthen the proposed ESA, this work introduces Stash Shuffle, an algorithm based on Intel's SGX. One of the questions raised at the conference was "Why is trusting Intel better than trusting Google?" :)

Scalability

Facebook's SVE system is deployed in production for uploading and processing videos at large scale, with the low latency needed to support interactive applications, plus reliability and robustness. The key idea is very simple: pipeline uploading and processing so a video is processed while it is still being uploaded, yet it shows good performance compared with Facebook's previous MES system. This is indeed one of the advantages of "being large scale".
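
The pipelining idea itself fits in a few lines (a toy sketch with invented helper names; SVE's production pipeline also parallelises processing across segments and handles failures): overlap uploading segment i+1 with processing segment i, rather than processing only after the whole upload finishes.

    from concurrent.futures import ThreadPoolExecutor

    def pipelined_ingest(segments, upload, process):
        # While segment i is being processed, segment i+1 is already
        # uploading in the background.
        with ThreadPoolExecutor(max_workers=2) as pool:
            pending = None
            for seg in segments:
                nxt = pool.submit(upload, seg)
                if pending is not None:
                    process(pending.result())
                pending = nxt
            if pending is not None:
                process(pending.result())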

Kernels

Here is a paper quite closely related to unikernels: "My VM is lighter than your container". The paper "finds and eliminates bottlenecks (such as image size) when launching large numbers of lightweight VMs (both unikernels and minimal Linux VMs)". The results show a 4ms boot time for a unikernel-based VM, while a Docker container takes 150ms to boot. The system provides tools to automatically build minimised images, and also replaces the Xen control plane so that front-end and back-end drivers can communicate directly through shared memory. This paper reminds me of "Mosaic" from EuroSys'17, which also tries to fit more computation units into one computer.

Other Sessions

Obviously we cannot (or, more specifically, I cannot) capture the essence of a SOSP paper in one or two paragraphs, and there are many other great papers not mentioned here. If you're interested, please visit the program page to see the full list of accepted papers.

One more thing...

SOSP this year also hosted the ACM Student Research Competition. 16 entries were selected from 42 submitted abstracts, and 6 were further selected to give a 5-minute presentation. The presentations were surprisingly interesting. Please see the list of winners (actually all 6 finalists, divided into graduate and undergraduate groups) here.

Also, this year's female participation rate is 6.6% for authors and 7% for the Program Committee, both slightly lower than the previous three years' numbers.

23 Apr 2016

Eurosys 2016

Posted by Jon Crowcroft

Some papers that caught my eye include:
STRADS: A Distributed Framework for Scheduled Model Parallel Machine Learning.
looks quite clever - not sure how it would work for a bayesian inferencer, but made me think

Increasing Large-Scale Data Center Capacity by Statistical Power
uses MORE servers to reduce power - clever control-theory approach - Baidu traces for the eval; it's a real, large system

A High Performance File System for Non-Volatile Main Memory.
seems solid

Crayon: Saving Power through Shape and Color Approximation on Next-Generation Displays.
neat, but niche - OLED laptop displays consume less power if you render stuff cleverly - nice bit of human factors driven algorithm design to minimise impact on perceived image quality - gets 56% power saving on tablet with little subjective impact

A Study of Modern Linux API Usage and Compatibility: What to Support When You're Supporting.
Best paper award - nice talk - fun....

BB: Booting Booster for Consumer Electronics with Modern OS.
basically, Samsung's smart TV boots a lot faster coz they hacked it a lot. (I have one, having replaced an LG with it, and it's true :)

TFC: Token Flow Control in Data Center Networks.
is basically isarhythmic flow control (an idea from Donald Davies' 1960s packet-switched networking) done right :)

JUGGLER: A Practical Reordering Resilient Network Stack for Datacenters.
uses the offload engine and other stuff to do a very solid job of putting packets back in the right order for TCP (where out-of-order delivery was caused by load balancers)

Flash Storage Disaggregation.
what it says on the tin

Shared Address Translation Revisited. Evil question about the reverse page->structure mapping in Linux - how to figure out which process to go to with shared stuff...

POSIX Abstractions in Modern Operating Systems: The Old, the New, and the Missing. - hopelessly optimistic, but an engaging speaker :)

All findable via
http://eurosys16.doc.ic.ac.uk/program/program/

28 Aug 2015

Sigcomm 2015

Posted by Jon Crowcroft

A number of us attended the ACM Sigcomm 2015 conference in London, which was a very well-managed affair - hopefully next year's (in Brazil) will be as good.

Two things of note here:
1/ Heidi Howard won the Student Research Competition.
2/ There was an interesting debate around net ethics, which George Danezis et al. blogged.