
MobiSys’11. Day 2

Keynote - Mobile Computing: the Next Decade and Beyond

The keynote was given by Prof. Mahadev Satyanarayanan, "Satya" (Carnegie Mellon University, MobiSys Outstanding Contributor Award). A quick look at the abstract of his talk is enough to see his merits.

He thinks that research on mobile computing responds to a social demand. New systems and apps are motivated by the fact that sales of mobile devices overtook sales of PCs in 2011 for the first time. In his opinion, mobile computing is a common ground between distributed systems, wireless networking, context awareness, energy awareness and adaptive systems. He highlighted the enduring challenges in this area over the last years:

    - Weight, power, size constraints (e.g. tiny I/O devices).
    - Communication uncertainty: bandwidth, latency and money. We still struggle with intermittent connectivity.
    - Finite energy. Computing, sensing and transmitting data cost energy.
    - Scarce user attention: low human performance. Users are prone to making errors and they are becoming less patient.
    - Lower privacy, security and robustness. Mobile handsets have more attack vectors and can suffer physical damage more easily.

After that, he mentioned three emerging themes for the future, some of them related to several ongoing projects in Cambridge:

    Mobile devices are rich sensors. They support a wide range of sensors and can access nearby data opportunistically (content-based search can be more energy-efficient, so it looks like there's some ground for CCN here). In fact, applications can be context- and energy-aware. He mentioned some of the applications from yesterday's first session as examples.
    Cloud-mobile convergence. Mobile computing allows freedom: it enables access to anything, anytime, anywhere. However, this increases complexity. Cloud computing, on the other hand, provides simplicity by centralization (one source has it all). The question is: can we combine the freedom of mobility with the simplicity of cloud computing? Cloud computing has evolved a lot since its first conception in 1986 (he mentioned the Andrew File System as the first cloud service ever). He also highlighted that the key enabling technology is virtualization, an example being his research on Cloudlets. Virtual Machines allow ubiquity of state and behavior, so they can perfectly re-create the state anywhere, anytime. Moreover, moving clouds closer to the end-user can minimise the impact of network latency. He also talked about a still quite unexplored space: offloading computation from the cloud to local devices (the opposite direction has been explored quite well already).
    Resource-rich mobile apps. From my perspective, this is closely related to the first theme. He talked about applications incorporating face recognition and the role of mobile handsets in enabling applications for mobile cognitive assistance.

Session 4. When and Where

This session was mostly about indoor localisation. The first presentation was: Indoor location sensing using geo-magnetism (J. Chung (MIT), M. Donahoe (MIT), I. Kim (MIT), C. Schmandt (MIT), P. Razavi (MIT), M. Wiseman (MIT)). In this paper, the authors present an interesting approach to the classic problem of indoor location. In their project, they use magnetic field distortion fingerprints to identify the location of the user. They used their own gadget, a rotating tower with a magnetic sensor, to obtain the magnetic fingerprint of a building (sampled every 2 feet). They showed that the magnetic field in their building hasn't changed in 6 months (they haven't checked whether it varies at different times of the day), so the fingerprint doesn't have to be updated frequently. They implemented their own portable gadget with 4 magnetic sensors for the evaluation. The error is <1m in 65% of the cases, so it's more precise (but more costly) than WiFi solutions. The main source of error is moving objects (e.g. elevators).
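
To make the fingerprint-matching step concrete, here is a minimal nearest-neighbour sketch in Python; the field values and locations are invented, and the real system works on a much denser grid:

    import math

    # Toy fingerprint database: location -> magnetic field vector (x, y, z),
    # e.g. in microtesla. Values invented; the paper samples every 2 feet.
    fingerprints = {
        ("corridor", 0): (21.4, -3.2, 44.1),
        ("corridor", 2): (19.8, -1.0, 47.5),
        ("lab", 0):      (25.3,  4.7, 39.9),
        ("lab", 2):      (24.1,  6.2, 41.0),
    }

    def locate(reading):
        """Return the location whose stored fingerprint is closest (Euclidean)."""
        def dist(fp):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp, reading)))
        return min(fingerprints, key=lambda loc: dist(fingerprints[loc]))

    print(locate((20.1, -2.0, 45.8)))  # -> ('corridor', 2)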

The next paper is similar, but in this case it leverages audio fingerprints: Indoor Localization without Infrastructure using the Acoustic Background Spectrum (S. Tarzia (Northwestern Univ.), P. Dinda (Northwestern Univ.), R. Dick (Univ. of Michigan), G. Memik (Northwestern Univ.)). NOTE: This app is available in Apple's App Store as BatPhone. The benefit of this system is that it does not require specialized hardware or any infrastructure support: it passively listens to background sounds and then analyses their spectrum. They achieved 69% accuracy for 33 rooms using sound alone. Like many other fingerprint-based localization mechanisms, it requires supervised learning. To guess the current location, they find the "closest" fingerprint in a database of labeled fingerprints. As future work, they plan to use a Markov movement model to improve accuracy and to add other sensors, as in SurroundSense.
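
My reading of the fingerprint itself (an assumption on my part, not spelled out in the talk) is that taking a low percentile per frequency bin over a spectrogram keeps a room's persistent background while discarding transient foreground sounds. A toy version, with synthetic noise standing in for real recordings:

    import numpy as np

    def abs_fingerprint(samples, frame=2048, hop=1024, pct=5):
        """Spectrogram, then a low percentile per frequency bin, so that
        transient foreground sounds are filtered out and the room's
        persistent background spectrum remains."""
        frames = [samples[i:i + frame] for i in range(0, len(samples) - frame, hop)]
        spec = np.abs(np.fft.rfft(np.array(frames) * np.hanning(frame), axis=1))
        return np.percentile(spec, pct, axis=0)

    def classify(fp, labelled):
        # Nearest neighbour over labelled fingerprints.
        return min(labelled, key=lambda room: np.linalg.norm(labelled[room] - fp))

    rng = np.random.default_rng(0)
    rooms = {"office": abs_fingerprint(rng.normal(0, 1.0, 48000)),
             "lobby":  abs_fingerprint(rng.normal(0, 2.0, 48000))}
    print(classify(abs_fingerprint(rng.normal(0, 1.05, 48000)), rooms))  # office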

Exploiting FM Radio Data System for Adaptive Clock Calibration in Sensor Networks was a quite impressive and neat piece of work. Time synchronization is important for various applications (event ordering, coordination, and new wireless interfaces such as Qualcomm's FlashLinQ take advantage of a central clock to synchronise devices), and it is usually based on message passing between devices. Instead, they exploit the FM Radio Data System (RDS) for clock calibration. Among its advantages are its excellent coverage and its availability all over the world; it also avoids some of the coverage limitations of GSM networks. They implemented their own FM hardware receiver, integrated with sensor network platforms on TinyOS. Their results show that the RDS clock is highly stable and city-wide available, and both the power consumption and the cost ($2-3) are very low. The calibration error is also ridiculously low even when the calibration period is on the order of hours. Very neat.
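
The calibration itself boils down to estimating the local oscillator's skew against the broadcast ticks. A minimal sketch of that idea (a simple least-squares fit on invented data, not necessarily their algorithm):

    # The node logs its local clock each time a broadcast tick with a known
    # true period arrives, fits local-vs-true time, and inverts the fit.
    def fit_skew(local_times, true_times):
        n = len(true_times)
        mx = sum(true_times) / n
        my = sum(local_times) / n
        sxx = sum((x - mx) ** 2 for x in true_times)
        sxy = sum((x - mx) * (y - my) for x, y in zip(true_times, local_times))
        slope = sxy / sxx            # local seconds per true second (1 + skew)
        return slope, my - slope * mx

    def to_true_time(local_t, slope, offset):
        return (local_t - offset) / slope

    # A local crystal running 50 ppm fast, calibrated against ticks every 60 s.
    true_ticks = list(range(0, 3600, 60))
    local_ticks = [t * 1.00005 + 0.2 for t in true_ticks]
    slope, offset = fit_skew(local_ticks, true_ticks)
    print(to_true_time(7200 * 1.00005 + 0.2, slope, offset))  # ~7200.0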

The last presentation was a joint work between the University of Michigan and AT&T Labs: AccuLoc: Practical Localization of Performance Measurements in 3G Networks. Cellular operators need to distinguish the performance of each geographic area in their 3G networks to detect and resolve local network problems. They claim that the "last mile" radio link between 3G base stations and end-user devices is essential for the user experience. They take advantage of previous papers demonstrating that users' mobility is predictable, and exploit this fact to cluster cell sectors that accurately report network performance at the IP level. These techniques allow them to characterize and identify problems in network performance: clustering cells captures RTT spikes better.
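
A toy version of the clustering intuition, grouping sectors whose RTT time series correlate strongly (the greedy scheme, threshold and data are mine, for illustration only):

    import numpy as np

    def cluster_sectors(rtt_series, threshold=0.9):
        clusters = []
        for name in rtt_series:
            for cluster in clusters:
                corr = np.corrcoef(rtt_series[name], rtt_series[cluster[0]])[0, 1]
                if corr >= threshold:    # behaves like the cluster's representative
                    cluster.append(name)
                    break
            else:
                clusters.append([name])  # start a new cluster
        return clusters

    rng = np.random.default_rng(1)
    base = rng.normal(100, 10, 48)                    # shared hourly RTT pattern
    rtt = {"sector_a": base + rng.normal(0, 1, 48),
           "sector_b": base + rng.normal(0, 1, 48),   # co-located with sector_a
           "sector_c": rng.normal(140, 10, 48)}       # independent behaviour
    print(cluster_sectors(rtt))  # -> [['sector_a', 'sector_b'], ['sector_c']]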

Session 5. Security and Privacy

Caché: Caching Location-Enhanced Content to Improve User Privacy
S. Amini (CMU), J. Lindqvist (CMU), J. Hong (CMU), J. Lin (CMU), E. Toch (Tel Aviv Univ.), N. Sadeh (CMU). The idea is to periodically pre-fetch potentially useful location-based content so applications can retrieve it from a local cache on the mobile device when needed. Location is then revealed to third-party providers only at the granularity of a region instead of a precise position. Somewhat similar to SpotMe.
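
A minimal sketch of the mechanism, assuming a simple grid-based notion of "region" (the grid size and all names are mine):

    GRID = 0.05  # degrees; coarser grid => more privacy, bigger prefetch

    def region_of(lat, lon):
        return (round(lat // GRID), round(lon // GRID))

    class LocationCache:
        def __init__(self, provider):
            self.provider = provider      # callable: region -> list of items
            self.cache = {}

        def prefetch(self, lat, lon):
            region = region_of(lat, lon)  # only the region is revealed upstream
            self.cache[region] = self.provider(region)

        def query(self, lat, lon):
            # Precise coordinates never leave the device.
            return self.cache.get(region_of(lat, lon), [])

    c = LocationCache(lambda region: [f"poi-{region}-{i}" for i in range(3)])
    c.prefetch(52.205, 0.119)        # periodic, e.g. overnight on WiFi
    print(c.query(52.2061, 0.1201))  # answered from the local cache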

The second presentation was ProxiMate: Proximity-based Secure Pairing using Ambient Wireless Signals by S. Mathur (AT&T Labs), R. Miller (Rutgers Univ.), A. Varshavsky (AT&T Labs), W. Trappe (Rutgers Univ.), N. Mandayam (Rutgers Univ.). This is about enabling security, based on proximity, between wireless devices that do not have a trusted relationship. It tries to reduce the security issues of low-power communications (susceptible to eavesdropping; Bluetooth can even be sniffed from a mile away). It takes advantage of code-offsets to generate a common cryptographic key directly from the wireless environment the devices share over time. Quite complex to understand from the presentation. It provides security against a computationally unbounded adversary, and its complexity is O(n) while Diffie-Hellman is O(n^3).
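
The code-offset construction is easier to see in code than it was on the slides. A toy sketch with a 5x repetition code (the real system uses proper error-correcting codes over wireless channel measurements, not this):

    import secrets

    R = 5  # repetition factor; majority vote corrects up to 2 flips per key bit

    def encode(bits): return [b for b in bits for _ in range(R)]
    def decode(bits): return [int(sum(bits[i:i+R]) > R // 2)
                              for i in range(0, len(bits), R)]
    def xor(a, b):    return [x ^ y for x, y in zip(a, b)]

    # Alice and Bob each measure the shared wireless channel; proximity means
    # the measurements agree except for a few noisy bits.
    alice = [secrets.randbits(1) for _ in range(40)]
    bob = list(alice); bob[3] ^= 1; bob[17] ^= 1      # channel noise

    key = [secrets.randbits(1) for _ in range(8)]
    offset = xor(encode(key), alice)  # public; useless without the measurement
    print(decode(xor(offset, bob)) == key)            # -> True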

Security versus Energy Tradeoffs in Host-Based Mobile Malware Detection
J. Bickford (Rutgers Univ.), H. Lagar-Cavilla (AT&T Labs), A. Varshavsky (AT&T Labs), V. Ganapathy (Rutgers Univ.), L. Iftode (Rutgers Univ.). This interesting paper explores the security-energy tradeoffs in mobile malware detection: periodically scanning the attack targets can drain the battery up to twice as fast. This work presents energy-optimized versions of two security tools. They conserve energy by adapting the frequency of checks and by defining what to check (scanning fewer code/data objects), trying to provide a high level of security with low power consumption. They look especially at rootkits (sophisticated malware requiring complex detection algorithms). To detect them, it's necessary to run the user OS on a hypervisor and check all the kernel data changes; this can provide 100% detection coverage but poor energy efficiency. To find the tradeoff, they target what they call the sweet spot of balanced security. With this technique they can detect 96% of the rootkit attacks.
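
The knob they tune is essentially which objects to scan and how often. A toy planner under an invented energy model:

    objects = [  # (name, scan cost in mJ, risk weight) -- numbers invented
        ("syscall_table",   5.0, 10),
        ("kernel_text",    40.0,  8),
        ("fs_hooks",       12.0,  6),
        ("driver_structs", 30.0,  3),
    ]

    def plan_scans(budget_mj_per_hour, interval_s=60):
        per_scan_budget = budget_mj_per_hour / (3600 / interval_s)
        chosen, cost = [], 0.0
        # Greedy by risk per unit energy: most "security per joule" first.
        for name, c, risk in sorted(objects, key=lambda o: o[2] / o[1],
                                    reverse=True):
            if cost + c <= per_scan_budget:
                chosen.append(name)
                cost += c
        return chosen, cost

    print(plan_scans(budget_mj_per_hour=1800))  # fits ~30 mJ per scan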

Analyzing Inter-Application Communication in Android by E. Chin (UC Berkeley), A. Felt (UC Berkeley), K. Greenwood (UC Berkeley), D. Wagner (UC Berkeley). Android applications communicate through messages called Intents, and malicious apps can take advantage of this mechanism by registering listeners for them. An application can register for implicit intents, which are not addressed to a specific receiver (i.e. application or service). They described several attacks that are possible because sending implicit intents in Android makes the communication public: both the intent and the receiver can be visible to an attacker. Consequently, several attacks such as spoofing, man-in-the-middle, etc. become possible. A malicious app can also inject fake data into applications or collect information about the system. They evaluated their system, called ComDroid, with 20 applications. They claim that this can be fixed either by developers or by the platform.
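
A toy model (deliberately not real Android code) of why implicit intents leak:

    class ToyIntentBus:
        def __init__(self):
            self.receivers = {}       # action string -> list of callbacks

        def register(self, action, callback):
            self.receivers.setdefault(action, []).append(callback)

        def send_implicit(self, action, data):
            for cb in self.receivers.get(action, []):  # delivered to everyone
                cb(data)

    bus = ToyIntentBus()
    bus.register("app.SHOW_BALANCE", lambda d: print("legit UI shows:", d))
    bus.register("app.SHOW_BALANCE", lambda d: print("malware logs:", d))
    bus.send_implicit("app.SHOW_BALANCE", {"balance": 1234})  # both receive it

In real Android terms the fix would be to address the Intent to an explicit component or protect it with a permission, so that only the intended receiver gets it.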

Session 6. Wireless Protocols

This session covered some optimisations for wireless protocols. The first presentation was Avoiding the Rush Hours: WiFi Energy Management via Traffic Isolation by J. Manweiler (Duke Univ.), R. Choudhury (Duke Univ.). They measured the power consumption of WiFi interfaces on Nexus One handsets and found that the WiFi energy cost grows linearly with the number of access points visible (dense neighborhoods). Their system (called SleepWell) forces APs to collaborate and coordinate their beacons, so that mobile clients can reduce the energy wasted in idle/overhearing mode; this approach only requires changing the AP firmware. Each AP maintains a map of its neighboring peers (APs) to re-schedule its beacon timings efficiently. However, clients are synchronized to AP clocks. To solve this issue, the AP notifies the client that a beacon is going to be deferred, so the client knows when it must wake up. As a result, the client can extend the period of time it remains in deep sleep mode.
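
The rescheduling intuition in toy form (SleepWell itself coordinates in a decentralized way; this just shows beacons being staggered evenly across the beacon period):

    BEACON_PERIOD_MS = 100

    def stagger(neighbouring_aps):
        """Spread beacon offsets evenly instead of letting them cluster."""
        n = len(neighbouring_aps)
        return {ap: i * BEACON_PERIOD_MS / n
                for i, ap in enumerate(sorted(neighbouring_aps))}

    print(stagger(["ap_home", "ap_cafe", "ap_office"]))
    # -> {'ap_cafe': 0.0, 'ap_home': 33.3.., 'ap_office': 66.6..}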

The next paper was Opportunistic Alignment of Advertisement Delivery with Cellular Basestation Overloads, by R. Kokku (NEC Labs), R. Mahindra (NEC Labs), S. Rangarajan (NEC Labs) and H. Zhang (NEC Labs). This paper tries to align cellular base-station overloads with the delivery of advertising content to clients. The goal is to avoid compromising the user-perceived quality of experience while making cellular network operations profitable with advertisements (e.g. embedded in videos). Overload can reduce the available bandwidth per user. Their assumption is that cellular operators can control advertisement delivery, so it's possible to adapt the quality (lower rate) of some advertisements for a specific set of users. Their system, called Opal, considers two groups of users: regular users that receive their traffic share, and targeted users that receive advertisements during base station overloads. Opal initially maps all users to the regular group and dynamically decides which users to migrate between groups based on a long-term fairness metric. The system is evaluated on WiMax and with simulations. In the future, they plan to target location-based advertising.
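
A sketch of the migration decision under a deliberately simplified fairness metric (mine, not theirs): during an overload, the users who have received the most service recently are moved to the targeted group first:

    def pick_targeted(served_mb, k):
        """served_mb: user -> MB delivered within the fairness window."""
        ranked = sorted(served_mb, key=served_mb.get, reverse=True)
        return ranked[:k]   # best-served users absorb the overload first

    usage = {"u1": 420, "u2": 95, "u3": 310, "u4": 150}
    print(pick_targeted(usage, k=2))  # -> ['u1', 'u3']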

The final presentation was Revisiting Partial Packet Recovery in 802.11 Wireless LANs by J. Xie (Florida State Univ.), W. Hu (Florida State Univ.), Z. Zhang (Florida State Univ.). Packets on WiFi links can be partially received, and recovering them traditionally means retransmitting the whole packet, with the corresponding energy and computational overhead. One solution divides packets into smaller blocks so that only the missed ones are retransmitted (like keeping a TCP window). Another technique is based on error correction (e.g. ZipTx). These techniques can impose a significant CPU overhead, and they can be complementary. The novelty of their approach is including Target Error Correction and dynamically selecting the optimal repair method, the one that minimizes the number of bytes sent and the CPU overhead.
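
The selection step can be pictured as a small cost comparison; the cost model below is invented for illustration:

    def choose_repair(pkt_len, block_len, bad_blocks,
                      fec_bytes, cpu_cost_per_fec_byte, cpu_weight=0.1):
        """Compare full retransmission, block retransmission and FEC repair,
        charging FEC for its decode CPU as well as its bytes."""
        candidates = [
            ("retransmit_all",    pkt_len),
            ("retransmit_blocks", bad_blocks * block_len),
            ("fec_repair",        fec_bytes * (1 + cpu_weight * cpu_cost_per_fec_byte)),
        ]
        return min(candidates, key=lambda m: m[1])

    # Lightly damaged 1500-byte packet: block retransmission wins.
    print(choose_repair(1500, 100, bad_blocks=2,
                        fec_bytes=300, cpu_cost_per_fec_byte=4))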

.... and now the conference banquet :-)
