Keynote - Mobile Computing: the Next Decade and Beyond
The keynote was given by Prof. Mahadev Satyanarayanan, "Satya" (Carnegie Mellon University, recipient of the MobiSys Outstanding Contributor Award). A quick look at the abstract of his talk is enough to appreciate his merits.
He argued that research on mobile computing responds to a real societal demand: new systems and apps are motivated by the fact that sales of mobile devices overtook sales of PCs for the first time in 2011. In his opinion, mobile computing is a common ground between distributed systems, wireless networking, context awareness, energy awareness and adaptive systems. He highlighted the enduring challenges in this area over the last years:
- Weight, power and size constraints (e.g. tiny I/O devices).
- Communication uncertainty: bandwidth, latency and money. We still struggle with intermittent connectivity.
- Finite energy. Computing, sensing and transmitting data all cost energy.
- Scarce user attention: low human performance. Users are prone to errors and are becoming less patient.
- Lower privacy, security and robustness. Mobile handsets have more attack vectors and can suffer physical damage more easily.
After that, he mentioned three future emerging themes, some of them related to several ongoing projects in Cambridge:
- Mobile devices are rich sensors. They support a wide range of rich sensors and can access nearby data opportunistically (content-based search can be more energy-efficient, so it looks like there's some ground for CCN here). In fact, applications can be context- and energy-aware. He mentioned some of the applications from yesterday's first session as examples.
- Cloud-mobile convergence. Mobile computing allows freedom: it enables access to anything, anytime, anywhere. However, this increases complexity. On the other hand, cloud computing provides simplicity by centralization (one source has it all). The question is: can we combine the freedom of mobility with the simplicity of cloud computing? Cloud computing has evolved a lot since its first conception in 1986 (he mentioned the Andrew File System as the first cloud service ever). He also highlighted that the key enabling technology is virtualization, and an example is his research on Cloudlets. Virtual machines allow ubiquity of state and behavior, so they can perfectly re-create state anywhere, anytime. Moreover, moving clouds closer to the end-user can minimise the impact of network latency. He also talked about a still quite unexplored space: the importance of offloading computation from the cloud to local devices (the opposite direction has already been quite well explored).
- Resource-rich mobile apps. From my perspective, this is closely related to the first theme. He talked about applications incorporating face recognition and the role of mobile handsets in enabling applications for mobile cognitive assistance.
Session 4. When and Where
This session was mostly about indoor localisation. The first presentation was Indoor location sensing using geo-magnetism (J. Chung (MIT), M. Donahoe (MIT), I. Kim (MIT), C. Schmandt (MIT), P. Razavi (MIT), M. Wiseman (MIT)). The authors propose an interesting approach to the classic problem of indoor localisation: they use magnetic field distortion fingerprints to identify the location of the user. They built their own gadget, a rotating tower with a magnetic sensor, to obtain the magnetic fingerprint of a building (sampled every 2 feet). They showed that the magnetic field in their building hasn't changed in 6 months (they haven't checked whether it varies at different times of the day), so the fingerprint doesn't have to be updated frequently. For the evaluation they implemented a portable gadget with 4 magnetic sensors. The error is <1 m in 65% of the cases, so it's more precise (but more costly) than WiFi solutions. The main source of errors is moving objects (e.g. elevators).
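The fingerprint-matching step can be sketched as a nearest-neighbour lookup over stored magnetic vectors. This is a minimal illustration, not the paper's implementation; the location labels and field values are invented:

```python
import math

# Each location is labelled with a magnetic-field vector (x, y, z, in
# microtesla); a query reading is matched to the nearest stored fingerprint.
FINGERPRINTS = {
    "corridor-A": (22.1, -4.3, 41.0),
    "corridor-B": (18.7, 1.2, 39.5),
    "elevator-lobby": (35.0, -10.8, 30.2),
}

def locate(reading):
    """Return the label of the closest magnetic fingerprint (Euclidean)."""
    return min(FINGERPRINTS,
               key=lambda label: math.dist(reading, FINGERPRINTS[label]))

print(locate((21.5, -4.0, 40.6)))  # closest to corridor-A
```

The real system samples a dense grid and fuses four sensors, but the matching principle is the same.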
The next paper is similar but leverages audio fingerprints instead: Indoor Localization without Infrastructure using the Acoustic Background Spectrum (S. Tarzia (Northwestern Univ.), P. Dinda (Northwestern Univ.), R. Dick (Univ. of Michigan), G. Memik (Northwestern Univ.)). NOTE: this app is available in Apple's App Store as BatPhone. The benefit of this system is that it requires neither specialized hardware nor any infrastructure support: it passively listens to background sounds and then analyses their spectrum. They achieved 69% accuracy for 33 rooms using sound alone. Like many other fingerprint-based localization mechanisms, it requires supervised learning: to guess the current location, they find the "closest" fingerprint in a database of labeled fingerprints. As future work, they plan to use a Markov movement model and to add other sensors to increase accuracy, as in SurroundSense.
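The spectrum-fingerprint idea can be sketched with a toy DFT: fingerprint a room by the magnitudes of its low-frequency bins and match queries against a labelled database. The bin count, frame size and matching metric here are assumptions, not the paper's parameters:

```python
import cmath
import math

def spectrum_fingerprint(samples, bins=8):
    """Crude DFT-magnitude fingerprint of a short audio frame."""
    n = len(samples)
    return [abs(sum(s * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, s in enumerate(samples)))
            for k in range(bins)]

def closest_room(query_fp, labelled):
    """Nearest labelled fingerprint by Euclidean distance."""
    return min(labelled, key=lambda room: math.dist(query_fp, labelled[room]))

# A pure tone at 2 cycles per frame should dominate DFT bin 2.
tone = [math.cos(2 * math.pi * 2 * t / 32) for t in range(32)]
fp = spectrum_fingerprint(tone)
print(max(range(len(fp)), key=fp.__getitem__))  # 2
```

The actual system averages many frames and normalises the spectrum, but the database lookup works the same way.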
Exploiting FM Radio Data System for Adaptive Clock Calibration in Sensor Networks was a quite impressive and neat piece of work. Time synchronization is important for various applications (event ordering, coordination, and new wireless interfaces such as Qualcomm's FlashLinQ take advantage of a central clock to synchronise devices), and it's usually based on message passing between devices. Instead, they exploit the FM Radio Data System (RDS) for clock calibration. Among its advantages are its excellent coverage and its availability all over the world; it also solves some of the coverage limitations of GSM networks. They implemented their own FM hardware receiver, integrated with sensor network platforms on TinyOS. Their results show that the RDS clock is highly stable and city-wide available, and the power consumption is very low (as is the cost, $2-3). The calibration error is also ridiculously low even when the calibration period is in the order of hours. Very neat.
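The core of calibrating a cheap local oscillator against a stable broadcast reference can be sketched as a linear skew estimate from two (local, reference) timestamp pairs. The numbers below are illustrative, not from the paper:

```python
def estimate_skew(t_local0, t_ref0, t_local1, t_ref1):
    """Reference seconds elapsed per local-clock second."""
    return (t_ref1 - t_ref0) / (t_local1 - t_local0)

def calibrated(t_local, t_local0, t_ref0, skew):
    """Map a local-clock reading onto the reference timeline."""
    return t_ref0 + skew * (t_local - t_local0)

# A local oscillator running 100 ppm fast: after 3600 "local" seconds,
# only 3599.64 reference seconds have actually elapsed.
skew = estimate_skew(0.0, 0.0, 3600.0, 3599.64)
print(round(calibrated(7200.0, 0.0, 0.0, skew), 2))  # 7199.28
```

Because the skew is stable, a node can sleep for hours between RDS calibration points and still keep the error small.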
The last presentation was a joint work between the University of Michigan and AT&T Labs: AccuLoc: Practical Localization of Performance Measurements in 3G Networks. Cellular operators need to distinguish the performance of each geographic area in their 3G networks to detect and resolve local network problems. They claim that the "last mile" radio link between 3G base stations and end-user devices is essential for the user experience. Building on previous papers demonstrating that users' mobility is predictable, they exploit this fact to cluster cell sectors that accurately report network performance at the IP level. These techniques allow them to characterize and identify problems in network performance: clustering cells captures RTT spikes better.
Session 5. Security and Privacy
Caché: Caching Location-Enhanced Content to Improve User Privacy
S. Amini (CMU), J. Lindqvist (CMU), J. Hong (CMU), J. Lin (CMU), E. Toch (Tel Aviv Univ.), N. Sadeh (CMU). The idea is to periodically pre-fetch potentially useful location-based content so that applications can retrieve it from a local cache on the mobile device when it's needed. Only a coarse location such as "a region", instead of a precise one, is revealed to third-party providers. Somehow similar to SpotME.
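The "reveal only a region" idea amounts to snapping coordinates to a coarse grid before they leave the device. This is a hedged sketch of that principle only; the grid size and rounding are my own assumptions, not Caché's design:

```python
import math

def to_region(lat, lon, cell=0.01):
    """Snap coordinates to the corner of a ~0.01-degree grid cell,
    so only the cell (not the exact position) is ever disclosed.
    Rounding to 2 decimals matches the default cell size."""
    snap = lambda v: round(math.floor(v / cell) * cell, 2)
    return (snap(lat), snap(lon))

print(to_region(51.5246, -0.1340))  # (51.52, -0.14)
```

Content for the whole region is then pre-fetched into the cache, so precise queries never reach the provider.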
The second presentation was ProxiMate: Proximity-based Secure Pairing using Ambient Wireless Signals by S. Mathur (AT&T Labs), R. Miller (Rutgers Univ.), A. Varshavsky (AT&T Labs), W. Trappe (Rutgers Univ.), N. Mandayam (Rutgers Univ.). This is about enabling security, based on proximity, between wireless devices that don't have a trusted relationship. It tries to reduce the security issues of low-power communications (susceptible to eavesdropping; Bluetooth can even be sniffed from a mile away). It takes advantage of the code-offset construction to generate a common cryptographic key directly from the shared temporal wireless environment of the two devices. Quite complex to understand from the presentation. It provides security against a computationally unbounded adversary, and its complexity is O(n) while Diffie-Hellman's is O(n^3).
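The code-offset idea can be illustrated with a toy version: Alice and Bob hold correlated bit-strings from their shared radio environment; Alice publishes her string XORed with an encoded random key, and Bob's slightly different string still decodes to the same key. The 3x repetition code and all bit values here are invented for illustration; ProxiMate's actual coding is more sophisticated:

```python
import secrets

REP = 3  # toy repetition code: each key bit is sent three times

def encode(bits):
    return [bit for bit in bits for _ in range(REP)]

def decode(bits):
    # Majority vote over each group of REP bits corrects single flips.
    return [int(sum(bits[i:i + REP]) >= 2) for i in range(0, len(bits), REP)]

def xor(x, y):
    return [p ^ q for p, q in zip(x, y)]

key = [secrets.randbelow(2) for _ in range(4)]      # Alice's random key
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]            # Alice's channel bits
b = a[:]; b[5] ^= 1                                  # Bob's copy, one flip
delta = xor(a, encode(key))                          # public "offset"
recovered = decode(xor(delta, b))                    # Bob's decoded key
print(recovered == key)  # True
```

The public offset reveals nothing useful without a correlated channel observation, which is why an eavesdropper outside the shared environment (even an unbounded one) learns nothing.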
Security versus Energy Tradeoffs in Host-Based Mobile Malware Detection
J. Bickford (Rutgers Univ.), H. Lagar-Cavilla (AT&T Labs), A. Varshavsky (AT&T Labs), V. Ganapathy (Rutgers Univ.), L. Iftode (Rutgers Univ.). This interesting paper explores the security-energy tradeoffs in mobile malware detection: periodically scanning the attack targets can make the battery drain twice as fast. This work is an energy-optimized version of two security tools. It conserves energy by adapting the frequency of checks and by defining what to check (scanning fewer code/data objects), trying to provide high security with low power consumption. They are especially looking at rootkits (sophisticated malware requiring complex detection algorithms): to detect them, it's necessary to run the user OS on a hypervisor and check all the kernel data changes. This technique can provide 100% detection but poor energy efficiency. To find the tradeoff, they target what they call the sweet spot to balance both; with this technique they can detect 96% of the rootkit attacks.
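The "adapt the frequency of checks" knob boils down to simple arithmetic: given an energy budget for scanning, the check interval follows directly. All numbers below are hypothetical, just to make the tradeoff concrete:

```python
def min_interval_s(budget_j_per_day, cost_j_per_scan):
    """Smallest scan interval (seconds) that fits a daily energy budget.
    A longer interval saves energy but widens the window in which a
    rootkit can act undetected."""
    scans_per_day = budget_j_per_day / cost_j_per_scan
    return 86400 / scans_per_day

# e.g. 432 J/day budget, 1.5 J per kernel-integrity scan:
print(min_interval_s(432.0, 1.5))  # 300.0 -> one scan every 5 minutes
```

Shrinking the set of scanned objects lowers the per-scan cost, which is the paper's second knob for moving along the same curve.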
Analyzing Inter-Application Communication in Android by E. Chin (UC Berkeley), A. Felt (UC Berkeley), K. Greenwood (UC Berkeley), D. Wagner (UC Berkeley). Malicious apps can take advantage of Android's resources by registering a listener for a specific kind of message (this abstraction is called an Intent in Android). An application can register for implicit intents, which are not addressed to a specific receiver (i.e. application or service). They described several attacks that are possible because sending implicit intents in Android makes the communication public: both the intent and the receiver can be visible to an attacker. Consequently, several attacks such as spoofing or man-in-the-middle become possible; a malicious app can also inject fake data into applications or collect information about the system. They evaluated their tool, called ComDroid, with 20 applications. They claim that this can be fixed either by developers or by the platform.
Session 6. Wireless Protocols
This session covered some optimisations for wireless protocols. The first presentation was Avoiding the Rush Hours: WiFi Energy Management via Traffic Isolation by J. Manweiler (Duke Univ.), R. Choudhury (Duke Univ.). They measured the power consumption of WiFi interfaces on Nexus One handsets and found that the WiFi energy cost grows linearly with the number of access points available (dense neighborhoods). Their system, called SleepWell, forces APs to collaborate and coordinate their beacons, and only requires changing the APs' firmware; mobile clients can then reduce the energy wasted in idle/overhearing mode. Each AP maintains a map of its neighboring APs in order to efficiently re-schedule its beacon timings. However, clients are synchronized to AP clocks; to solve this, the AP notifies the client that a beacon is going to be deferred, so the client knows when it must wake up. As a result, the client can extend the period of time it remains in deep sleep mode.
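The re-scheduling step can be pictured as staggering neighbouring APs' beacon offsets evenly across the beacon interval, so their clients' wake-ups stop colliding. SleepWell's actual scheduling is distributed and traffic-aware; this is only a centralised toy version with invented AP names:

```python
BEACON_INTERVAL_MS = 100  # typical 802.11 beacon interval

def stagger(ap_ids):
    """Assign evenly spaced beacon offsets (ms) to a set of neighbouring
    APs, so no two beacons (and hence client wake-ups) coincide."""
    step = BEACON_INTERVAL_MS / len(ap_ids)
    return {ap: round(i * step, 2) for i, ap in enumerate(sorted(ap_ids))}

print(stagger(["ap-a", "ap-b", "ap-c", "ap-d"]))
# {'ap-a': 0.0, 'ap-b': 25.0, 'ap-c': 50.0, 'ap-d': 75.0}
```

With distinct offsets, a client only needs to be awake around its own AP's slot instead of overhearing everyone else's traffic.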
The next paper was Opportunistic Alignment of Advertisement Delivery with Cellular Basestation Overloads, by R. Kokku (NEC Labs), R. Mahindra (NEC Labs), S. Rangarajan (NEC Labs) and H. Zhang (NEC Labs). This paper tries to align cellular base-station overloads with the delivery of advertising content to the clients. The goal is to make cellular network operations profitable with advertisements (e.g. embedded in videos) without compromising the user-perceived quality of experience; overloads can reduce the available bandwidth per user. Their assumption is that cellular operators can control advertisement delivery, so it's possible to adapt the quality (lower rate) of some advertisements for a specific set of users. Their system, called Opal, considers two groups of users: regular users, who receive their traffic share, and targeted users, who receive advertisements during base station overloads. Opal initially maps all users to the regular group and dynamically decides which users migrate between groups based on a long-term fairness metric. The system is evaluated on WiMAX and with simulations. As future work, they plan to target location-based advertising.
The final presentation was Revisiting Partial Packet Recovery in 802.11 Wireless LANs by J. Xie (Florida State Univ.), W. Hu (Florida State Univ.), Z. Zhang (Florida State Univ.). Packets in WiFi links can be partially received, but to recover them the whole packet has to be retransmitted, which incurs energy and computational overhead. One solution is to divide packets into smaller blocks so only the missed ones are retransmitted (like keeping a TCP window); another technique is based on error correction (e.g. ZipTx). These techniques can impose a significant CPU overhead, and they can be complementary. The novelty of their approach is including Target Error Correction and dynamically selecting the optimal repair method, minimizing both the number of bytes sent and the CPU overhead.
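The block-based baseline they improve on can be sketched in a few lines: split the payload into fixed-size blocks, checksum each, and retransmit only the blocks that arrive corrupted. Block size and checksum choice here are illustrative, not from the paper:

```python
import zlib

BLOCK = 4  # bytes per block (real systems use much larger blocks)

def split(payload):
    return [payload[i:i + BLOCK] for i in range(0, len(payload), BLOCK)]

def bad_blocks(sent, received):
    """Indices of blocks whose CRC32 doesn't match: only these need to
    be retransmitted, instead of the whole packet."""
    return [i for i, (s, r) in enumerate(zip(split(sent), split(received)))
            if zlib.crc32(s) != zlib.crc32(r)]

sent = b"hello wireless world"
recv = bytearray(sent); recv[6] ^= 0xFF          # corrupt one byte in flight
print(bad_blocks(sent, bytes(recv)))  # [1] -> retransmit block 1 only
```

Their contribution is deciding at runtime between this kind of retransmission and error-correcting repair, depending on which costs fewer bytes and CPU cycles.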
.... and now the conference banquet :-)
Last week at ICDS and today at Eurecom, I presented our work on location privacy. Here is the basic idea -
By sharing their location on mobile social-networking services, mobile phone users benefit from a variety of new services working on *aggregate* location data, such as receiving road traffic estimations and finding the best nightlife "hotspots" in a city. However, location sharing has caused outcries over privacy issues - you cannot really trust private companies with your private location data ;) That's why we have recently proposed a piece of software for privacy-conscious individuals and called it SpotME (here is the paper). This software can run directly on a mobile phone and reports, in addition to actual locations, a very large number of erroneous (fake) locations. Fake locations are carefully chosen by a so-called randomised algorithm; they guarantee that individuals cannot be localised with high probability, yet they have little effect on services offered to car drivers in Zurich and to subway passengers in London. For technical details, please have a go at the paper ;)
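To give a flavour of why fake reports can coexist with accurate aggregates, here is a classic randomised-response sketch *in the spirit of* SpotME (the actual algorithm in the paper differs): each user reports their true region only with probability p, yet the aggregate count per region can still be estimated without bias. Region names, p and counts are invented:

```python
import random

def report(true_region, regions, p, rng):
    """Report the true region with probability p, a random one otherwise."""
    return true_region if rng.random() < p else rng.choice(regions)

def estimate_count(reports, region, regions, p):
    """Unbiased estimate of how many users are truly in `region`."""
    n, k = len(reports), len(regions)
    observed = sum(r == region for r in reports)
    return (observed - n * (1 - p) / k) / p

regions = ["soho", "camden", "brixton", "hackney"]
rng = random.Random(7)                      # fixed seed for reproducibility
reports = [report("soho", regions, 0.2, rng) for _ in range(10000)]
print(round(estimate_count(reports, "soho", regions, 0.2)))  # close to 10000
```

Any single report is deniable (it is a fake 80% of the time), which is exactly the property that protects individuals while leaving traffic-style aggregates usable.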
MobiSys started this morning with 3 sessions about mobile applications and services, energy-efficient management of displays and crowd-sourcing apps. Researchers affiliated with 26 different institutions were among the co-authors of the papers; the best-represented ones are Duke University (4 papers), AT&T (4 papers), Univ. of Michigan (3 papers) and Univ. of Southern California (3 papers). The keynote was given by Edward W. Felten from the Federal Trade Commission about how the FTC works.
Session 1. Services and Use Cases
The first presentation was a quite cool idea from Duke University: TagSense: A Smartphone-based Approach to Automatic Image Tagging. They proposed a system for automatically tagging pictures by exploiting all the sensors and contextual information available on modern smartphones: WiFi ad-hoc networking, compass, light sensors (to identify whether the handset is indoors or outdoors), microphone, accelerometer (movement of the user), gyroscope and GPS (location). When the camera application is launched, it creates a WiFi ad-hoc network with all the nearby devices and they exchange contextual information to add rich metadata to the captured picture. One of the challenges they tackled was discerning whether the user was moving, posing, facing the camera, etc. They implemented a prototype on Android and evaluated it with more than 200 pictures, comparing the accuracy of the automatic tags against the metadata manually added in Picasa and iPhoto. With this system, the number of missed tags is reduced considerably. Nevertheless, the system leaves some open research challenges such as user authentication and a system performance evaluation.
A second paper, also by Duke University researchers, was Using Mobile Phones to Write in Air (an extension of a HotMobile 2009 paper). The idea is to use accelerometers to allow writing in the air, using the phone as a pen. The accelerometer records the movement and the text is displayed on the screen after being processed on a server running Matlab. Some of the research challenges they had to face were filtering high-frequency components caused by human hand vibrations (removed with a low-pass filter), recognizing the symbols (pre-loaded pattern recognition; it reminds me of how MS Kinect works), identifying pen-lifting gestures and dealing with hand rotation while writing (accelerometers only measure linear acceleration; the Wii uses a gyroscope to solve this issue). The system seems to work nicely and they said it has been tested with patients unable to write manually.
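The tremor-filtering step they mention can be sketched with the simplest possible low-pass filter, an exponential moving average; the paper's actual filter design may well differ, and the sample values are made up:

```python
def low_pass(samples, alpha=0.2):
    """Exponential moving average: damps high-frequency jitter while
    letting the slow pen-stroke motion through. Smaller alpha = stronger
    smoothing."""
    out, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

noisy = [0, 5, -4, 6, -3, 5, -4, 6]   # jittery accelerometer trace
smooth = low_pass(noisy)
print(all(abs(v) < 4 for v in smooth))  # True: the +-6 spikes are damped
```

After smoothing, the remaining trajectory is what gets fed to the symbol-recognition stage.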
The following presentation was Finding MiMo: Tracing a Missing Mobile Phone using Daily Observations, from Yonsei University. This system allows finding lost/stolen mobile handsets in indoor environments. The authors claim that it solves some of the limitations of services such as Apple's MobileMe, which can be constrained by the availability of network coverage and by battery capacity. They support an adaptive algorithm for sensing and they also leverage several indoor localisation techniques.
Odessa: Enabling Interactive Perception Applications on Mobile Devices by M. Ra (Univ. of Southern California), A. Sheth (Intel Labs), L. Mummert (Intel Labs), P. Pillai (Intel Labs), D. Wetherall (Univ. of Washington) and R. Govindan (Univ. of Southern California) is about off-loading computation to the cloud to solve face, object, pose and gesture recognition problems. Their system adapts at runtime, deciding when and how to offload computation efficiently to the server based on the availability of resources (mainly the network). They found that off-loading and parallelism choices should be dynamic, even for a given application, as performance depends on scene complexity as well as environmental factors such as the network and device capabilities. This piece of work is related to previous projects such as Spectra, NWSLite and Maui.
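The basic offloading decision behind systems like this reduces to comparing local compute time against transfer-plus-remote time. Odessa's real policy is richer (it also adapts pipeline parallelism), so treat this as a back-of-the-envelope sketch with invented numbers:

```python
def should_offload(local_s, remote_s, input_bytes, bandwidth_bps):
    """Offload a stage only if shipping the input and computing remotely
    beats computing locally."""
    transfer_s = 8 * input_bytes / bandwidth_bps
    return transfer_s + remote_s < local_s

# 2 MB frame over an 8 Mbit/s uplink: 2 s transfer + 0.5 s remote,
# versus 1.8 s locally -> stay local.
print(should_offload(1.8, 0.5, 2_000_000, 8_000_000))  # False
```

Because bandwidth and scene complexity change at runtime, the same stage can flip between local and remote execution, which is exactly why the choice must be dynamic.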
Session 2. Games and Displays
The first paper, entitled Adaptive Display Power Management for Mobile Games, was a piece of work by Balan's group at the Singapore Management University. It tries to minimise the energy impact of interactive apps such as games, which keep a power-hungry resource like the display active for long periods of time, while trying not to hurt the user experience. As an example, they show that while playing a YouTube video, 45-50% of the energy consumption is taken by the display, the cellular network takes 35-40% and the CPU 4-15%. Their system dynamically reduces screen brightness to cut energy consumption, combined with non-linear gamma correction per frame to compensate for the negative effect of the brightness reduction. They also conducted a user study with 5 students to understand human thresholds for brightness compensation.
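The brightness-plus-gamma trick can be illustrated with a toy luminance model: dim the backlight, then apply a gamma below 1 to lift mid-tone pixel values so perceived brightness is partly restored. The backlight level and gamma value below are illustrative, not the paper's tuned parameters:

```python
def perceived(pixel, backlight=1.0, gamma=1.0):
    """Pixel value and backlight level in [0, 1]; returns the resulting
    luminance under a simple power-law (gamma) model."""
    return (pixel ** gamma) * backlight

mid = 0.5
dimmed = perceived(mid, backlight=0.6)                  # visibly darker
compensated = perceived(mid, backlight=0.6, gamma=0.55) # mid-tones lifted
print(round(dimmed, 2), round(compensated, 2))
```

The backlight saving is kept (it scales with the 0.6 factor), while the gamma correction recovers part of the perceived brightness per frame; the user study is what bounds how far this can be pushed unnoticed.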
Switchboard: A Matchmaking System for Multiplayer Mobile Games by J. Manweiler (Duke Univ.), S. Agarwal (Microsoft Research), M. Zhang (Microsoft Research), R. Choudhury (Duke Univ.), P. Bahl (Microsoft Research) tries to predict the network conditions of mobile users in order to provide a good mobile gaming experience. They presented a centralised service that monitors the latency between game players to matchmake them in mobile games. They tackled some scalability issues, such as grouping users into viable game sessions based on their network properties.
Chameleon: A Color-Adaptive Web Browser for Mobile OLED Displays by M. Dong (Rice Univ.) and L. Zhong (Rice Univ.) takes advantage of the well-known observation about the impact of the displayed colors on OLED screens. The energy consumption can vary from 0.5 W (almost black screen) to 2 W (white screen). The power consumption of an OLED display increases linearly with the number of lit pixels, while the energy consumption per pixel depends on which LEDs are active. In fact, 65% of the pixels on most common websites are white, and this unnecessarily imposes a higher energy consumption on mobile handsets. Generally, green and red pixels are more energy-efficient than blue ones on most handsets, so they propose transforming the colour of GUI objects on the display to make it more energy-efficient, in a similar fashion to Google Black. The 3 phases of their transformation are "color counting" (building a histogram of the GUI components), "color mapping" and "color painting". They also allow the user to apply different color transformations to different websites.
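A linear per-pixel OLED power model makes the motivation concrete: screen power is the sum of per-channel contributions, with blue sub-pixels the most expensive. The coefficients below are invented (only their ordering reflects the talk), so this is a sketch of the model class, not Chameleon's measured numbers:

```python
# Relative energy per fully-lit sub-pixel; only the ordering
# (blue > red > green) reflects the observation in the talk.
W_R, W_G, W_B = 1.0, 0.8, 1.9

def pixel_power(r, g, b):
    """Linear OLED power model; r, g, b in [0, 1]."""
    return W_R * r + W_G * g + W_B * b

white = pixel_power(1, 1, 1)
dark_green = pixel_power(0.0, 0.4, 0.0)
print(round(white / dark_green, 1))  # 11.6: a white pixel costs ~11.6x more
```

Under such a model, "color mapping" is an optimisation problem: pick a readable palette whose total pixel_power over the page histogram is minimal.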
Session 3. Crowdsourcing
In this session, some interesting applications were proposed, such as Real-Time Trip Information Service for a Large Taxi Fleet by Balan (Singapore Mgmt Univ.). This application collects information about taxi availability and finds routes/similar trips for customers based on starting point, ending point, distance and time. They described how they had to find and eliminate sources of errors (e.g. weather) and how they used dynamic clustering (KD-trees) to solve the problem. The second application was AppJoy: Personalized Mobile Application Discovery by B. Yan (Univ. of Massachusetts, Lowell) and G. Chen (Univ. of Massachusetts, Lowell). This is basically a recommendation engine for mobile apps based on user download history, ratings and passive information about how often users run those applications. They claim that users who installed apps via AppJoy interacted with those apps more, and they want to extend it into a context-aware recommendation engine. Finally, SignalGuru: Leveraging Mobile Phones for Collaborative Traffic Signal Schedule Advisory by E. Koukoumidis (Princeton Univ.), L. Peh (MIT) and M. Martonosi (Princeton Univ.) is a traffic signal advisory system. It identifies traffic lights using the camera and tries to predict when they will turn red/green; they claim this can save drivers a considerable amount of fuel (20%), reducing the carbon footprint. The predictions are achieved by leveraging crowd-sourcing: cars collaborate and share information to identify those transitions. The system also uses sensors such as the accelerometer and gyro-based image detection.