syslog
30Jun/11

MobiSys’11. Day 2

Posted by Narseo

Keynote - Mobile Computing: the Next Decade and Beyond

The keynote was given by Prof. Mahadev Satyanarayanan, "Satya" (Carnegie Mellon University, MobiSys Outstanding Contributor Award). A quick look at the abstract of his talk is enough to see his merits.

He argued that research on mobile computing responds to a real social demand. New systems and apps are motivated by the fact that sales of mobile devices overtook sales of PCs for the first time in 2011. In his opinion, mobile computing is a common ground between distributed systems, wireless networking, context awareness, energy awareness and adaptive systems. He highlighted the enduring challenges the area has faced over the years:

    - Weight, power, size constraints (e.g. tiny I/O devices).
    - Communication uncertainty: bandwidth, latency and money. We still struggle with intermittent connectivity.
    - Finite energy. Computing, sensing and transmitting data cost energy.
    - Scarce user attention: low human performance. Users are prone to make errors and they are becoming less patient.
    - Lower privacy, security and robustness. Mobile handsets have more attack vectors and can suffer physical damage more easily.

After that, he mentioned three emerging themes for the next decade, some of them related to several ongoing projects in Cambridge:

    Mobile devices are rich sensors. They support a wide range of rich sensors and can access nearby data opportunistically (content-based search can be more energy-efficient, so it looks like there's some ground for CCN here). In fact, applications can be context- and energy-aware. He mentioned some of the applications from yesterday's first session as examples.
    Cloud-mobile convergence. Mobile computing allows freedom: it enables access to anything, anytime, anywhere. However, this increases complexity. Cloud computing, on the other hand, provides simplicity by centralization (one source has it all). The question is: can we combine the freedom of mobility with the simplicity of cloud computing? Cloud computing has evolved a lot since its first conception in 1986 (he mentioned the Andrew File System as the first cloud service ever). He also highlighted that the key enabling technology is virtualization, with his research on Cloudlets as an example. Virtual machines make state and behavior ubiquitous, so they can perfectly re-create state anywhere, anytime. Moreover, moving clouds closer to the end-user can minimise the impact of network latency. He also talked about a still quite unexplored space: the importance of offloading computation from the cloud to local devices (the opposite direction has been quite well explored already).
    Resource-rich mobile apps. From my perspective, this is closely related to the first theme. He talked about applications incorporating face recognition, and the role of mobile handsets in enabling applications for mobile cognitive assistance.

Session 4. When and Where

This session was mostly about indoor localisation. The first presentation was: Indoor location sensing using geo-magnetism (J. Chung (MIT), M. Donahoe (MIT), I. Kim (MIT), C. Schmandt (MIT), P. Razavi (MIT), M. Wiseman (MIT)). The authors offer an interesting approach to the classic problem of indoor location: they use magnetic field distortion fingerprints to identify the location of the user. They built their own gadget, a rotating tower with a magnetic sensor, to obtain the magnetic fingerprint of a building (sampled every 2 feet). They showed that the magnetic field in their building hadn't changed in 6 months (they haven't checked for changes at different times of the day), so the fingerprint doesn't have to be updated frequently. For the evaluation they implemented a portable gadget with 4 magnetic sensors. The error is <1m in 65% of the cases, so it's more precise (but more costly) than WiFi solutions. The main source of error is moving objects (e.g. elevators).
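
The matching step common to this kind of system can be sketched in a few lines: each surveyed spot is a vector of magnetic readings, and a query is assigned to the nearest stored fingerprint. A minimal sketch; the database, the values and the distance metric are my assumptions, not the authors' code:

```python
import math

# Hypothetical fingerprint database: location label -> magnetic readings
# (e.g. field magnitudes from the 4 sensors), collected during the survey.
fingerprint_db = {
    "corridor-2F-a": [48.1, 51.3, 47.9, 50.2],
    "corridor-2F-b": [44.7, 46.0, 45.2, 43.8],
    "lab-2F":        [52.6, 49.4, 55.1, 53.0],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def locate(query):
    """Return the label of the stored fingerprint closest to the query."""
    return min(fingerprint_db, key=lambda loc: euclidean(fingerprint_db[loc], query))

print(locate([47.5, 50.9, 48.3, 49.7]))  # -> "corridor-2F-a"
```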

The next paper is similar but leverages audio fingerprints: Indoor Localization without Infrastructure using the Acoustic Background Spectrum (S. Tarzia (Northwestern Univ.), P. Dinda (Northwestern Univ.), R. Dick (Univ. of Michigan), G. Memik (Northwestern Univ.)). NOTE: this app is available in Apple's App Store as BatPhone. The benefit of this system is that it does not require specialized hardware or any infrastructure support: it passively listens to background sounds and then analyses their spectrum. They achieved 69% accuracy across 33 rooms using sound alone. Like many other fingerprint-based localization mechanisms, it requires supervised learning: to guess the current location, they find the "closest" fingerprint in a database of labeled fingerprints. As future work, they plan to use a Markov movement model to improve accuracy, and to add other sensors, as in SurroundSense.
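
A rough sketch of how a room's acoustic fingerprint might be computed (my assumption of the pipeline, not BatPhone's actual code): average the log power spectrum of background sound over many short windows, then match the resulting vector against labeled fingerprints exactly as in the magnetic example above.

```python
import numpy as np

def acoustic_fingerprint(samples, window=1024):
    """Average log power spectrum over short windows of a recording.
    `samples` is a 1-D numpy array of audio samples."""
    spectra = []
    for i in range(len(samples) // window):
        chunk = samples[i * window:(i + 1) * window]
        power = np.abs(np.fft.rfft(chunk * np.hanning(window))) ** 2
        spectra.append(np.log1p(power))
    return np.mean(spectra, axis=0)  # one vector per room

# Toy demo: two seconds of synthetic "room noise" at 44.1 kHz.
rng = np.random.default_rng(0)
print(acoustic_fingerprint(rng.normal(size=2 * 44100))[:5])
```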

Exploiting FM Radio Data System for Adaptive Clock Calibration in Sensor Networks was a quite impressive and neat piece of work. Time synchronization is important for various applications (event ordering, coordination, and new wireless interfaces such as Qualcomm's FlashLinq take advantage of a central clock to synchronise devices), and it is usually based on message passing between devices. Instead, they exploit the FM Radio Data System (RDS) for clock calibration. Among its advantages are its excellent coverage and its availability all over the world; it also avoids some of the coverage limitations of GSM networks. They implemented their own FM hardware receiver, integrated with sensor network platforms on TinyOS. Their results show that the RDS clock is highly stable and city-wide available, and both the power consumption and the cost ($2-3) are very low. The calibration error is also ridiculously low, even when the calibration period is on the order of hours. Very neat.
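
Calibration itself can be pictured as fitting the local clock against the RDS tick stream and correcting for the estimated skew; a toy sketch under my own assumptions (the numbers are made up):

```python
import numpy as np

# Pairs of (rds_time, local_clock_time) collected at calibration points.
rds_t   = np.array([0.0, 3600.0, 7200.0, 10800.0])
local_t = np.array([0.0, 3600.9, 7201.7, 10802.6])

# Fit local = skew * rds + offset; the slope captures the drift rate.
skew, offset = np.polyfit(rds_t, local_t, 1)

def corrected(local_reading):
    """Map a raw local clock reading back onto the RDS timebase."""
    return (local_reading - offset) / skew

print(f"skew: {skew:.8f}, offset: {offset:.3f}s")
```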

The last presentation was a joint work between the University of Michigan and AT&T Labs: AccuLoc: Practical Localization of Performance Measurements in 3G Networks. Cellular operators need to distinguish the performance of each geographic area in their 3G networks to detect and resolve local network problems. The authors claim that the "last mile" radio link between 3G base stations and end-user devices is essential for the user experience. Taking advantage of previous papers demonstrating that users' mobility is predictable, they cluster cell sectors so that network performance can be accurately reported at the IP level. These techniques allow them to characterize and identify network performance problems: clustering cells captures RTT spikes better.

Session 5. Security and Privacy

Caché: Caching Location-Enhanced Content to Improve User Privacy
S. Amini (CMU), J. Lindqvist (CMU), J. Hong (CMU), J. Lin (CMU), E. Toch (Tel Aviv Univ.), N. Sadeh (CMU). The idea is to periodically pre-fetch potentially useful location-enhanced content so that applications can retrieve it from a local cache on the mobile device when it is needed. Location is then only revealed to third-party providers at a coarse granularity (a region) rather than as a precise position. Somewhat similar to SpotMe.
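
The privacy mechanism can be pictured as snapping the precise position to a coarse grid cell and pre-fetching content for the whole cell; a minimal sketch, where the grid size and the function names are my assumptions:

```python
def coarsen(lat, lon, cell_deg=0.05):
    """Snap a precise position to a coarse grid cell, so a provider
    only learns the region (~5 km here), not the exact location."""
    return (round(round(lat / cell_deg) * cell_deg, 4),
            round(round(lon / cell_deg) * cell_deg, 4))

# Content is pre-fetched for the whole region; later queries hit the
# local cache instead of revealing the user's exact position.
print(coarsen(40.4416, -79.9450))  # precise coordinates -> grid cell
```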

The second presentation was ProxiMate: Proximity-based Secure Pairing using Ambient Wireless Signals by S. Mathur (AT&T Labs), R. Miller (Rutgers Univ.), A. Varshavsky (AT&T Labs), W. Trappe (Rutgers Univ.), N. Mandayam (Rutgers Univ.). This is about enabling security, based on proximity, between wireless devices that do not have a trusted relationship. It tries to reduce the security issues of low-power communications, which are susceptible to eavesdropping (Bluetooth can even be sniffed from a mile away). ProxiMate takes advantage of code offsets to generate a common cryptographic key directly from the wireless environment the devices share over time. Quite complex to understand from the presentation, but it provides security against a computationally unbounded adversary, and its complexity is O(n) while Diffie-Hellman's is O(n^3).

Security versus Energy Tradeoffs in Host-Based Mobile Malware Detection
J. Bickford (Rutgers Univ.), H. Lagar-Cavilla (AT&T Labs), A. Varshavsky (AT&T Labs), V. Ganapathy (Rutgers Univ.), L. Iftode (Rutgers Univ.). This interesting paper explores the security-energy tradeoffs in host-based mobile malware detection, which requires periodically scanning the attack targets and can drain the battery up to twice as fast. The work is an energy-optimized version of two security tools. It conserves energy by adapting the frequency of checks and by narrowing what to check (scanning fewer code/data objects), aiming at high security with low power consumption. They look especially at rootkits, sophisticated malware requiring complex detection algorithms: to detect them, the user OS has to run on a hypervisor that checks all kernel data changes. This technique can provide 100% detection but poor energy efficiency. To balance the tradeoff, they target what they call the sweet spot between security and energy; with this technique they can still detect 96% of the rootkit attacks.
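
Their sweet-spot search can be caricatured as choosing, from measured (detection rate, energy) configurations, the most secure one that fits an energy budget. A toy sketch; the configurations, numbers and names are entirely my illustration, not the authors' algorithm:

```python
def pick_config(configs, energy_budget):
    """Among (detection_rate, joules_per_hour) configurations, pick the
    most secure one that still fits within the energy budget."""
    feasible = [c for c in configs if c["joules_per_hour"] <= energy_budget]
    return max(feasible, key=lambda c: c["detection_rate"]) if feasible else None

configs = [  # hypothetical measurements of scan scope/frequency trade-offs
    {"name": "all objects, every write", "detection_rate": 1.00, "joules_per_hour": 900},
    {"name": "kernel data, every 5 s",   "detection_rate": 0.96, "joules_per_hour": 310},
    {"name": "code pages, every 30 s",   "detection_rate": 0.80, "joules_per_hour": 120},
]
print(pick_config(configs, energy_budget=400)["name"])  # -> "kernel data, every 5 s"
```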

Analyzing Inter-Application Communication in Android by E. Chin (UC Berkeley), A. Felt (UC Berkeley), K. Greenwood (UC Berkeley), D. Wagner (UC Berkeley). Android applications communicate through messages called Intents, and malicious apps can abuse this mechanism by registering to receive implicit intents, i.e. those not addressed to a specific receiver (application or service). Sending implicit intents effectively makes the communication public: both the intent and the receiver can be visible to an attacker. Consequently, several attacks are possible, such as spoofing and man-in-the-middle; a malicious app can also inject fake data into applications or collect information about the system. They evaluated their tool, ComDroid, on 20 applications. They claim these problems can be fixed either by developers or by the platform.

Session 6. Wireless Protocols

This session covered optimisations for wireless protocols. The first presentation was Avoiding the Rush Hours: WiFi Energy Management via Traffic Isolation by J. Manweiler (Duke Univ.), R. Choudhury (Duke Univ.). They measured the power consumption of WiFi interfaces on Nexus One handsets and found that the WiFi energy cost grows linearly with the number of access points around (dense neighborhoods). Their system, SleepWell, makes APs collaborate and coordinate their beacons, and only requires changing the AP firmware; mobile clients can then reduce the energy wasted in idle/overhearing mode. Each AP maintains a map of its neighboring APs in order to re-schedule its beacon timings efficiently. However, clients are synchronized to AP clocks; to solve this, the AP notifies the client that a beacon is going to be deferred, so the client knows when it must wake up. As a result, the client can extend the period of time it remains in deep sleep (see the sketch below).
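
The beacon re-scheduling can be pictured as neighboring APs spreading their beacon offsets evenly over the beacon interval, so that clients of different APs don't all wake at the same "rush hour" instant. A toy sketch; the interval and the function names are my assumptions:

```python
def stagger_beacons(ap_ids, beacon_interval_ms=100):
    """Assign each neighboring AP an evenly spaced beacon offset so their
    clients' wake-ups don't pile up on the same instant."""
    step = beacon_interval_ms / len(ap_ids)
    return {ap: round(i * step, 1) for i, ap in enumerate(sorted(ap_ids))}

# Each AP announces its deferred beacon to its clients before moving it,
# so sleeping clients still know exactly when to wake up.
print(stagger_beacons(["ap-red", "ap-blue", "ap-green", "ap-home"]))
```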

The next paper was Opportunistic Alignment of Advertisement Delivery with Cellular Basestation Overloads, by R. Kokku (NEC Labs), R. Mahindra (NEC Labs), S. Rangarajan (NEC Labs) and H. Zhang (NEC Labs). This paper aligns cellular base-station overloads with the delivery of advertising content to clients. The goal is to avoid compromising the user-perceived quality of experience while making cellular network operations profitable through advertisements (e.g. embedded in videos). Overload reduces the available bandwidth per user; their assumption is that cellular operators can control advertisement delivery, so it's possible to lower the quality (rate) of some advertisements for a specific set of users. Their system, Opal, considers two groups of users: regular users, who receive their traffic share, and targeted users, who receive advertisements during base station overloads. Opal initially maps all users to the regular group and dynamically decides which users to migrate between groups based on a long-term fairness metric. The system is evaluated on WiMAX and with simulations. In future work they plan to target location-based advertising.

The final presentation was Revisiting Partial Packet Recovery in 802.11 Wireless LANs by J. Xie (Florida State Univ.), W. Hu (Florida State Univ.), Z. Zhang (Florida State Univ.). Packets on WiFi links can be partially received, but to recover them the whole packet has to be retransmitted, which carries energy and computational overhead. One existing solution divides packets into smaller blocks so that only the missed ones are retransmitted (similar to keeping a TCP window); another is based on error correction (e.g. ZipTx). These techniques can impose significant CPU overhead, and they are complementary. The novelty of their approach is including Target Error Correction and dynamically selecting the optimal repair method, minimizing both the number of bytes sent and the CPU overhead.
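
The block-based recovery idea in a few lines; this is a sketch of the general technique (the block size, names and CRC choice are my assumptions), not the authors' system: checksum the packet per block, and retransmit only the blocks that fail.

```python
import zlib

BLOCK = 64  # bytes per block; a design parameter, assumed here

def blocks(payload):
    return [payload[i:i + BLOCK] for i in range(0, len(payload), BLOCK)]

def sender_digests(payload):
    return [zlib.crc32(b) for b in blocks(payload)]

def blocks_to_retransmit(received, digests):
    """Compare per-block CRCs and list only the corrupted block indices."""
    return [i for i, b in enumerate(blocks(received))
            if zlib.crc32(b) != digests[i]]

good = b"hello wireless world! " * 10        # 220-byte payload, 4 blocks
bad = bytearray(good); bad[100] ^= 0xFF      # corrupt one byte in block 1
print(blocks_to_retransmit(bytes(bad), sender_digests(good)))  # -> [1]
```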

.... and now the conference banquet :-)

30Jun/11

MobiSys’11. Day 1

Posted by Narseo

MobiSys started this morning with 3 sessions about mobile applications and services, energy-efficient management of displays and crowd-sourcing apps. Researchers affiliated with 26 different institutions were among the co-authors of the papers; the best-represented are Duke University (4 papers), AT&T (4 papers), Univ. of Michigan (3 papers) and Univ. of Southern California (3 papers). The keynote was given by Edward W. Felten from the Federal Trade Commission about how the FTC works.

Session 1. Services and Use Cases

The first presentation was a quite cool idea from Duke University called TagSense: A Smartphone-based Approach to Automatic Image Tagging. They proposed a system for automatically tagging pictures by exploiting all the sensors and contextual information available on modern smartphones: WiFi ad-hoc networking, compass, light sensors (to identify whether the handset is indoors or outdoors), microphone, accelerometer (movement of the user), gyroscope and GPS (location). When the camera application is launched, it creates a WiFi ad-hoc network with all the nearby devices, and they exchange contextual information to add rich metadata to the captured picture. One of the challenges they tackled was discerning whether the user was moving, posing, facing the camera, etc. They implemented a prototype on Android and evaluated it with more than 200 pictures. The paper compares the accuracy of the automatic tags with the metadata manually added in Picasa and iPhoto; with this system, the number of missed tags is reduced considerably. Nevertheless, the work leaves some open research challenges, such as user authentication and a full system performance evaluation.

A second paper, also by Duke University researchers, was Using Mobile Phones to Write in Air (an extension of a HotMobile 2009 paper). The idea is to use accelerometers to allow writing in the air, using the phone as a pen. The accelerometer records the movement, and the text is displayed on the screen after being processed on a server running Matlab. Some of the research challenges they faced were filtering high-frequency components caused by human hand vibration (removed with a low-pass filter), recognizing the symbols (pre-loaded pattern recognition; it reminds me of how MS Kinect works), identifying pen-lifting gestures, and dealing with hand rotation while writing (accelerometers only measure linear acceleration; the Wii uses a gyroscope to solve this issue). The system seems to work nicely, and they said it has been tested with patients unable to write manually.

The following presentation was Finding MiMo: Tracing a Missing Mobile Phone using Daily Observations, from Yonsei University. This system allows finding lost/stolen mobile handsets in indoor environments. The authors claim that it solves some of the limitations of services such as Apple's MobileMe, which can be constrained by the availability of network coverage and by battery capacity. They use an adaptive algorithm for sensing and leverage several indoor location techniques.

Odessa: Enabling Interactive Perception Applications on Mobile Devices by M. Ra (Univ. of Southern California), A. Sheth (Intel Labs), L. Mummert (Intel Labs), P. Pillai (Intel Labs), D. Wetherall (Univ. of Washington) and R. Govindan (Univ. of Southern California), is about offloading computation to the cloud to solve face, object, pose and gesture recognition problems. Their system adapts at runtime, deciding when and how to offload computation efficiently to the server based on the availability of resources (mainly the network). They found that offloading and parallelism choices should be dynamic, even for a given application, as performance depends on scene complexity as well as environmental factors such as the network and device capabilities. This piece of work is related to previous projects such as Spectra, NWSLite and Maui.

Session 2. Games and Displays

The first paper, entitled Adaptive Display Power Management for Mobile Games, was a piece of work by Balan's group at Singapore Management University. It tries to minimise the energy impact of interactive apps such as games, which keep a power-hungry resource like the display active for long periods of time, while trying not to hurt the user experience. As an example, they show that while playing a YouTube video, 45-50% of the energy consumption goes to the display, 35-40% to the cellular network and 4-15% to the CPU. Their system dynamically reduces screen brightness and applies non-linear gamma correction per frame to compensate for the negative effect of the brightness reduction. They also conducted a user study with 5 students to understand human thresholds for brightness compensation.
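
The core trick, dimming the display but applying non-linear gamma correction so the frame still looks acceptable, can be sketched per pixel. A simplification under my own assumptions (the parameter values are illustrative; real systems do this on the GPU, per frame):

```python
def compensate(pixel, brightness=0.6, gamma=0.7):
    """Scale brightness down, then apply gamma correction (out = in^gamma;
    gamma < 1 boosts mid-tones) to mask the dimming."""
    channels = (c / 255.0 for c in pixel)
    return tuple(int(255 * (brightness * c) ** gamma) for c in channels)

print(compensate((200, 120, 40)))  # mid-tones stay visible despite dimming
```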

Switchboard: A Matchmaking System for Multiplayer Mobile Games by J. Manweiler (Duke Univ.), S. Agarwal (Microsoft Research), M. Zhang (Microsoft Research), R. Choudhury (Duke Univ.), P. Bahl (Microsoft Research), predicts the network conditions of mobile users in order to provide a good mobile gaming experience. They presented a centralised service that monitors the latency between game players and matches them accordingly, tackling scalability issues such as grouping users into viable game sessions based on their network properties.

Chameleon: A Color-Adaptive Web Browser for Mobile OLED Displays by M. Dong (Rice Univ.) and L. Zhong (Rice Univ.) takes advantage of the well-known observation that the colors displayed on an OLED screen determine its power draw: consumption can vary from 0.5 W (almost black screen) to 2 W (white screen). The power consumption of an OLED display increases linearly with the number of lit pixels, while the energy consumption per pixel depends on which LEDs are active. In fact, 65% of the pixels on most common websites are white, which unnecessarily imposes a higher energy consumption on mobile handsets. Green and red pixels are generally more energy-efficient than blue ones, so they propose transforming the colour of GUI objects on the display to make it more energy-efficient, in a similar fashion to Google Black. The 3 phases of their transformation are "color counting" (building a histogram of the GUI components), "color mapping" and "color painting". They also let the user apply different color transformations to different websites.
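
A sketch of the color-mapping intuition, with the per-channel power weights and the threshold as my own assumptions: estimate per-pixel OLED power (blue typically being the most expensive channel) and remap bright, costly pixels such as white backgrounds to dark equivalents.

```python
# Hypothetical per-channel power weights for an OLED panel; on real
# panels blue sub-pixels are typically the least efficient.
W_R, W_G, W_B = 0.4, 0.3, 0.8

def pixel_power(r, g, b):
    """Relative OLED power for one pixel (linear in channel intensity)."""
    return W_R * r / 255 + W_G * g / 255 + W_B * b / 255

def remap(pixel):
    """Invert near-white pixels (e.g. page backgrounds) to near-black."""
    r, g, b = pixel
    if pixel_power(r, g, b) > 1.2:          # bright, expensive pixel
        return (255 - r, 255 - g, 255 - b)  # dark equivalent
    return pixel

print(pixel_power(255, 255, 255), remap((255, 255, 255)))  # 1.5 (0, 0, 0)
```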

Session 3. Crowdsourcing

In this session, some interesting applications were proposed, such as Real-Time Trip Information Service for a Large Taxi Fleet by Balan (Singapore Mgmt Univ.). This application collects information about taxi availability and finds routes/similar trips for customers based on starting point, ending point, distance and time. They described how they had to find and eliminate sources of error (e.g. weather) and how they used dynamic clustering (KD-trees) to solve the problem. The second application was AppJoy: Personalized Mobile Application Discovery by B. Yan (Univ. of Massachusetts, Lowell) and G. Chen (Univ. of Massachusetts, Lowell). This is basically a recommendation engine for mobile apps based on users' download history, ratings and passive information about how often they run those applications. They claim that users who installed apps via AppJoy interacted with those apps more, and they want to extend it into a context-aware recommendation engine. Finally, SignalGuru: Leveraging Mobile Phones for Collaborative Traffic Signal Schedule Advisory by E. Koukoumidis (Princeton Univ.), L. Peh (MIT) and M. Martonosi (Princeton Univ.) is a traffic signal advisory system. It identifies traffic lights using the camera and tries to predict when they will turn red/green; the predictions are achieved through crowd-sourcing, with cars collaborating and sharing information to identify the transitions. They claim this can save drivers a considerable amount of fuel (20%), reducing their carbon footprint. The system also uses sensors such as the accelerometer, and gyro-based image detection.

29Jun/11

MobiArch’11

Posted by Narseo

I just attended MobiArch'11, a workshop co-located with MobiSys this year, and presented a paper about the ErdOS architecture. There were 7 presentations covering topics such as Multipath TCP, energy efficiency at different layers of a mobile handset, and MANETs.

The first session was completely focused on Multipath TCP. Christoph Pluntke from UCL showed how Multipath TCP can reduce the energy consumption of mobile devices by trying to combine the best of the cellular and WiFi worlds, in particular their complementary power modes. The main component of their system is a scheduler (a Markov Decision Process) that exploits a fairly accurate power model of both WiFi and cellular interfaces to switch between them so as to minimise the energy cost. As the author highlighted, this energy model is hardware-dependent. Nevertheless, my feeling is that this system tries to make optimal and efficient use of all the available wireless interfaces without necessarily taking advantage of them simultaneously. In his own words, both the scheduler and the model can be extended to consider other aspects, such as network latency and the SNR of the link, in order to take better decisions (a toy version of the idea is sketched after this paragraph). The second presentation in this session was by Costin Raiciu (University Politehnica of Bucharest). In his paper Opportunistic Mobility with Multipath TCP, he argues that the best layer to handle mobility is the transport layer, and that Multipath TCP can play an important role in solving some of the issues related to mobility. One of the arguments of the paper is that, in addition to achieving better throughput and supporting both IPv4 and IPv6 links, it can also provide energy savings. They carefully evaluated the overhead of supporting MPTCP on the mobile handset in terms of CPU, network and memory.
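
The intuition behind Pluntke's scheduler can be shown with a much simpler energy-cost comparison than their MDP; every number and name below is an illustrative assumption, not their model. The point is that fixed costs (waking/associating) and tail energy make the cheapest interface depend on the transfer size:

```python
IFACES = {
    # illustrative models: fixed wake/associate cost (J), active power (W),
    # throughput (Mbit/s), and post-transfer tail energy (J)
    "wifi":     {"fixed_j": 10.0, "active_w": 0.9, "mbps": 20.0, "tail_j": 0.1},
    "cellular": {"fixed_j": 0.0,  "active_w": 1.3, "mbps": 3.0,  "tail_j": 6.4},
}

def transfer_energy(mbits, iface):
    p = IFACES[iface]
    return p["fixed_j"] + p["active_w"] * (mbits / p["mbps"]) + p["tail_j"]

def pick_iface(mbits):
    """Choose the interface with the lowest total energy for this transfer."""
    return min(IFACES, key=lambda i: transfer_energy(mbits, i))

print(pick_iface(1.0))    # -> "cellular": WiFi wake-up cost dominates
print(pick_iface(200.0))  # -> "wifi": throughput advantage dominates
```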

The second session was mainly about MANETs and how they can be integrated into a future mobile Internet architecture. The first talk, by David Bild (University of Michigan), was about Using Predictable Mobility Patterns to Support Scalable and Secure MANETs of Handheld Devices. It reads more like a position paper: in his opinion, location-centric networking can provide secure communications, especially for personal communications between friends and family. His argument is supported by Barabasi's study on the predictability of human mobility, so location can solve some of the open security issues in MANETs. The second paper was GSTAR: Generalized Storage-Aware Routing for MobilityFirst in the Future Mobile Internet, presented by Samuel Nelson (Rutgers University). This paper is part of the MobilityFirst project, and it clearly reminded me of a combination of the Haggle project with Content-Centric Networking. The authors consider that the fixed-host/server model that has dominated the Internet since its conception needs to evolve and must consider mobility as a core component; consequently, they suggest that supporting DTNs/ad-hoc networks and content-centric networks will be necessary. Some of the problems they aim to address are host and network mobility (how can entities stay reachable?), varying levels of wireless link quality (higher-level protocols respond), varying levels of connectivity (can disconnection be handled within the network itself?) and multi-homing. They propose that the naming system (also for content) must be human-readable and context-based. They also aim to put intelligence in the network by providing it with resources and by increasing the possible routing options, e.g. seamless routing protocols for local-scale routing.

The last paper in this session was not completely within the scope of MANETs. Hossein Falaki (UCLA) presented SystemSens, an interesting tool for monitoring usage in smartphone research deployments. Despite its similarities with Device Analyzer (the work by Daniel Wagner and Andrew Rice, DTG group), SystemSens is mainly intended as a debugging tool. The application (which runs as a background service and does not require rooting the handset) can help developers better understand the impact of their applications on real systems in the wild before making a final deployment. It monitors variables ranging from battery usage to CPU load and network state. The traces are initially logged locally in a SQL DB and then uploaded to a server. Developers can thus identify unexpected behavior from users and applications, and also identify bugs.

The final session was about energy efficiency. Hao Liu from Tsinghua University presented TailTheft, a paper related to TailEnder and TOP. Unlike those, this approach is not exclusively based on reducing the number of tails (i.e. the periods a cellular interface lingers in high-power states, such as FACH, after a transfer) or their duration. Instead, the authors propose creating virtual tails that allow the system to prefetch and defer transfers via a dual-queue scheduling algorithm. Applications can, in fact, predict their future transmissions with reasonable accuracy, so this solution is conceived as an application-layer optimisation.
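
The dual-queue idea in miniature (my sketch, not the authors' implementation): urgent traffic is sent immediately, and delay-tolerant transfers wait in a deferred queue until they can be "stolen" into a tail that an urgent transfer has already paid for.

```python
import collections
import time

def clock():
    return time.monotonic()

def transmit(item):
    print("sending", item)

urgent = collections.deque(["search query"])                  # user-facing
deferred = collections.deque(["map tile prefetch", "log upload"])  # can wait

TAIL_S = 8.0  # assumed length of the high-power tail after a transmission

def on_send():
    """Send urgent data, then piggyback deferred transfers on the tail
    the urgent transfer has already paid for."""
    while urgent:
        transmit(urgent.popleft())
    tail_end = clock() + TAIL_S
    # The radio stays in the high-power state until tail_end anyway, so
    # deferred transfers sent now add (almost) no extra tail energy.
    while deferred and clock() < tail_end:
        transmit(deferred.popleft())

on_send()
```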

Finally, the last presentation of the workshop was our ErdOS project. We propose a different approach to saving energy at the operating system level, extending the time that resources can remain in low-power modes with two techniques:
a) predicting when resources are going to be accessed by applications, and predicting the state of the computing resources based on contextual information;
b) enabling opportunistic access to resources available in co-located devices using low-power interfaces such as Bluetooth.

7Jun/11

Temporal Complex Network Measures for Mobile Malware Containment

Posted by John Tang

Picture the scene: you've bought a shiny new smartphone and have been customising it all weekend by installing various apps from the app store. However, the following week you encounter a run of bad luck...

...first your house is burgled while you're at work; next your credit card is maxed out; your friends have been receiving spam text messages from you; and to top it off, weeks later, some of your colleagues have had the same experience. What is going on?

Unbeknownst to you, within one of these seemingly innocuous apps lurks a piece of mobile malware (mobware) with access to a wealth of personal information that an attacker can reach remotely.

18May/11

IMDEA Workshop on Internet Science

Posted by Jon Crowcroft

See here for titles/speakers and slides...

So far, Pablo's talk had some very interesting stuff about scaling the Twitter service - clever work on solving hotspots and overloads in the memcache/mysql setup - which reminded me of previous work on trying to get the IMDB system to scale. It seems these inverted databases are a pain in general, so a fundamental solution would be welcome... For those of you working on social network analysis: worry about (particularly un-self-declared) spambots in Twitter - see Mowbray's talk - and Vattay's stats talk is also worth a look.
Anyhow, I was reading this Future Internet Roadmap and decided that Private Green Clouds are definitely the way to go (and we aren't there yet, so that is good :-)
here's the barking bit: why not put a data center in every car?

rationale:
1. future cars will be electric.
2. it's proposed that future electricity generation will incorporate a lot of micro-generation (certainly solar here in Spain, and wind in the UK, etc.)
3. the power distribution net is not fit for "uplink" electricity in large amounts, so...
4. micro-generation is largely intermittent (esp. wind, but obviously solar is at least on/off day/night)
5. hence we need to do local distribution of micro-generated power
6. or else we need to store micro-generated electricity
power solution => use electric cars as storage; to get an idea of scale (see MacKay's book), cars could store about 30% of UK generated power - when we get to 100% of the UK car pool being electric...
so then, where we plug cars in, why not also have a data port too? Then, instead of using the meagre compute resource in someone's house, have a big fat data center in the car(s) in a street - they can run off stored power when local production exceeds demand (or when predicted (say night-time) production/storage exceeds local and car demand)...

the numbers should work very well... you can easily smooth day/night variation, and also short-term wind variation...
before you all shout: one problem is that the batteries are really designed for a relatively small number of discharge cycles - however, some technologies (hydrogen fuel cells etc.) would fix that
so this needs 2 things:
1. a smaller unit for the data center
2. a plan to do fiber-to-the-charging-point...