syslog
23Sep/11

Mobicom. Day 3

Posted by Narseo

3rd and final day... mainly about the PHY/MAC layer and theory work

The day started with a keynote by Farnam Jahanian (University of Michigan, NSF). Jahanian talked about some of the opportunities behind cloud computing research. In his opinion, cloud computing can enable new solutions in fields such as health-care and environmental issues. As an example, it can help to enforce a greener and more sustainable world and to predict natural disasters (e.g. the recent Japanese tsunami) with the support of a wider sensor network. His talk concluded with a discussion of some of the challenges facing computer science research in the US (which seem to be shared by other countries): he highlighted that despite the fact that the market demands more computer science graduates, few students are joining related programs at any level, including high school.

Session 7. MAC/PHY Advances.

No Time to Countdown: Migrating Backoff to the Frequency Domain, Souvik Sen and Romit Roy Choudhury (Duke University, USA); and Srihari Nelakuditi (University of South Carolina, USA)

Conventional WiFi networks perform channel contention in the time domain. This approach wastes channel capacity on backoff time. Back2F is a new way of performing channel contention in the frequency domain by treating OFDM subcarriers as randomised integers (instead of picking a random backoff duration, each transmitter picks a random subcarrier). The technique requires an additional listening antenna so that a WiFi AP can learn the backoff values chosen by nearby access points and decide whether its own value is the smallest among those of close-proximity APs. Each AP uses this knowledge individually to schedule transmissions after every round of contention. Moreover, by adding a second round of contention, the APs that collided in the first round can compete again, together with a few more APs. The performance evaluation was done in a real environment. The results show that the collision probability decreases considerably with Back2F using two contention rounds. Real-time traffic such as Skype experiences a throughput gain, but Back2F is more sensitive to channel fluctuation.
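
To make the idea concrete, here is a minimal sketch (my own, not the authors' code) of frequency-domain contention: each AP picks a random subcarrier instead of a random backoff duration, the lowest occupied subcarrier wins the round, and the survivors contend once more in a second round. The subcarrier count and the exact second-round rule are illustrative assumptions.

    import random

    NUM_SUBCARRIERS = 16  # assumed number of contention subcarriers (illustrative)

    def contention_round(ap_ids):
        """Each AP signals on one randomly chosen subcarrier; with a listening
        antenna every AP observes the occupied subcarriers, and the lowest wins."""
        choices = {ap: random.randrange(NUM_SUBCARRIERS) for ap in ap_ids}
        lowest = min(choices.values())
        # Two APs picking the same lowest subcarrier is the analogue of two
        # stations picking the same time backoff: a potential collision.
        return [ap for ap, sc in choices.items() if sc == lowest]

    def back2f_two_rounds(ap_ids):
        """Back2F-style contention: survivors of round one contend again."""
        survivors = contention_round(ap_ids)
        if len(survivors) == 1:
            return survivors[0], False          # clean winner after one round
        final = contention_round(survivors)
        return (final[0], False) if len(final) == 1 else (None, True)

    if __name__ == "__main__":
        winner, collided = back2f_two_rounds(["AP%d" % i for i in range(8)])
        print("winner:", winner, "collision:", collided)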

Harnessing Frequency Diversity in Multicarrier Wireless Networks, Apurv Bhartia, Yi-Chao Chen, Swati Rallapalli, and Lili Qiu (University of Texas at Austin, USA)

Wireless multicarrier communication systems spread data over multiple subcarriers, but SNR varies per subcarrier. In this presentation, the authors propose a joint integration of three solutions to mitigate the side-effects:

  1. Map symbols to subcarriers according to their importance.
  2. Effectively recover partially corrupted FEC groups and facilitate FEC decoding.
  3. MAC-layer FEC to offer different degrees of protection to the symbols according to their error rates at the PHY layer.

Their simulation and testbed results corroborate that a joint combination of these techniques can increase throughput by 1.6x to 6.6x.
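
As a rough illustration of the first item (mapping symbols to subcarriers according to their importance), here is a toy sketch, not the authors' algorithm: it simply pairs the most important symbols with the highest-SNR subcarriers.

    def map_symbols_to_subcarriers(symbols, subcarrier_snr):
        """symbols: list of (symbol, importance) pairs, higher importance = more critical.
        subcarrier_snr: per-subcarrier SNR estimates (one per subcarrier).
        Returns assignment[k] = symbol carried on subcarrier k."""
        assert len(symbols) == len(subcarrier_snr)
        symbols_by_importance = sorted(symbols, key=lambda s: s[1], reverse=True)
        carriers_by_snr = sorted(range(len(subcarrier_snr)),
                                 key=lambda k: subcarrier_snr[k], reverse=True)
        assignment = [None] * len(subcarrier_snr)
        for (sym, _), k in zip(symbols_by_importance, carriers_by_snr):
            assignment[k] = sym
        return assignment

    # Example: "hdr" is the most important symbol, so it lands on the 12 dB subcarrier.
    print(map_symbols_to_subcarriers(
        [("hdr", 3), ("a", 2), ("b", 1), ("pad", 0)],
        [5.0, 12.0, 9.0, 3.0]))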

Beamforming on Mobile Devices: A First Study, Hang Yu, Lin Zhong, Ashutosh Sabharwal, and David Kao (Rice University, USA)

Wireless links present two invariants: spectrum is scarce while hardware is cheap. A fundamental source of waste in cellular links comes from the antenna design. Lin Zhong proposed passive directional antennas to mitigate this issue: several directional antennas are used to generate a very narrow beam with larger spatial coverage. They showed that this solution is practical despite the small form factor of a smartphone's antennas, is resilient to node rotation (only 2-3 dB lost compared to a static node), and does not hurt handset battery life, especially on the uplink, as the antenna beam is narrower. The technique also allows calculating the number of antennas needed for efficiency. The system was evaluated both indoors and outdoors in stationary and mobile scenarios. The results show that client power consumption drops considerably as the number of antennas increases.

SESSION 8. Physical Layer

FlexCast: Graceful Wireless Video Streaming, S T Aditya and Sachin Katti (Stanford University, USA)

This is a scheme to adapt video streaming to wireless conditions. Mobile video traffic is growing exponentially, and users' experience is often poor because of channel conditions. MPEG-4 estimates quality over long timescales, but channel conditions change rapidly, which hurts video quality; current video codecs are not equipped to handle such variations since they exhibit an all-or-nothing behavior. They propose making quality proportional to the instantaneous wireless quality, so that a receiver can reconstruct a video encoded at a constant bit rate by taking into account information about the instantaneous channel quality.

A Cross-Layer Design for Scalable Mobile Video, Szymon Jakubczak and Dina Katabi (Massachusetts Institute of Technology, USA)

One of the best papers in Mobicom'11. Mobile video is limited by the bandwidth available in cellular networks and by the lack of robustness to changing channel conditions. As a result, video quality must be adapted to the channel conditions of different receivers. They propose a cross-layer design for video that addresses both limitations. In their view, the problem is that compression and error protection convert real-valued pixels to bits and, as a consequence, destroy the numerical properties of the original pixels. In analog TV this was not a problem, since there is a linear relationship between the transmitted values and the pixels, so a small perturbation in the channel translated into a small perturbation in the pixel value (although this was not efficient, as it did not compress the data).

SoftCast is as efficient as digital TV whilst compressing data linearly (current compression schemes are not linear, which is why the numerical properties are lost). SoftCast transforms the video into the frequency domain with a 3D DCT. In the frequency domain, most temporal and spatial frequencies are zero, so the compression sends only the non-zero frequencies. As it is a linear transform, the output preserves the numerical properties of the original pixels. They ended the presentation with a demo showing the real gains of SoftCast over MPEG-4 when the SNR of the channel drops.
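
Here is a toy sketch of that linear pipeline, using scipy's n-dimensional DCT as a stand-in for the 3D DCT over a group of frames. The coefficient selection, power scaling and packetisation of the real SoftCast are omitted, and all parameters are made up.

    import numpy as np
    from scipy.fft import dctn, idctn

    def softcast_like_encode(video, keep_fraction=0.2):
        """video: (frames, height, width) array of real-valued pixels.
        Take a 3D DCT and keep only the largest-magnitude coefficients; apart
        from the selection step everything is linear, so channel noise on the
        transmitted values maps to proportional noise on the decoded pixels."""
        coeffs = dctn(video, norm="ortho")
        threshold = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
        mask = np.abs(coeffs) >= threshold
        return coeffs * mask, mask

    def softcast_like_decode(coeffs):
        return idctn(coeffs, norm="ortho")

    if __name__ == "__main__":
        video = np.random.rand(8, 32, 32)            # stand-in for a group of frames
        coeffs, mask = softcast_like_encode(video)
        noisy = coeffs + mask * np.random.normal(0.0, 0.01, coeffs.shape)  # channel noise
        recon = softcast_like_decode(noisy)
        print("kept coefficients:", int(mask.sum()),
              "mean reconstruction error:", float(np.abs(recon - video).mean()))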

Practical, Real-time Full Duplex Wireless, Mayank Jain, Jung Il Choi, Tae Min Kim, Dinesh Bharadia, Kannan Srinivasan, Philip Levis and Sachin Katti (Stanford University, USA); Prasun Sinha (Ohio State University, USA); and Siddharth Seth (Stanford University, USA)

This paper presents a full duplex radio design using signal inversion (based on a balanced/unbalanced (balun) transformer) and adaptive cancellation. The state of the art in RF full-duplex solutions relies on techniques such as antenna cancellation, which have several limitations (e.g. manual tuning, channel dependence). The new design supports wideband and high-power systems without imposing any limitation on bandwidth or power. The authors also presented a full duplex medium access control (MAC) design and evaluated the system using a testbed of 5 prototype full duplex nodes. The results look promising so... now it's time to re-design the protocol stack!

Session 9. Theory

Understanding Stateful vs Stateless Communication Strategies for Ad hoc Networks, Victoria Manfredi and Mark Crovella (Boston University, USA); and Jim Kurose (University of Massachusetts Amherst, USA)

There are many communication strategies, and which one is appropriate depends on the network's properties. This paper explores an adaptive forwarding strategy that decides when, and with how much state, communication should be performed, based on network unpredictability and connectivity. Three network properties (connectivity, unpredictability and resource contention) determine when state is useful. Data state is information about data packets; it is valuable when the network is not well connected, whilst control state is preferred when the network is well connected. Their analytic results (based on simulations on Haggle traces and DieselNet) show that routing is the right strategy when control state is useful, DTN forwarding when data state is useful (e.g. the Haggle Cambridge traces), and packet forwarding when both data and control state apply simultaneously (e.g. the Haggle Infocom traces).

Optimal Gateway Selection in Multi-domain Wireless Networks: A Potential Game Perspective, Yang Song, H. Y. Wong, and Kang-Won Lee (IBM Research, USA)

This paper considers a coalition network in which multiple heterogeneous domains are interconnected via wireless links. Gateway nodes are designated by each domain to achieve network-wide interoperability. The challenge is minimising the sum of the intra-domain and backbone costs. They formulate the problem from a potential-game perspective and analyse the inefficiency of the resulting equilibria. They consider that this approach can also be used in other applications such as power control, channel allocation, spectrum sharing or even content distribution.

Fundamental Relationship between Node Density and Delay in Wireless Ad Hoc Networks with Unreliable Links, Shizhen Zhao, Luoyi Fu, and Xinbing Wang (Shanghai JiaoTong University, China); and Qian Zhang (Hong Kong University of Science and Technology, China)

Maths, percolation theory ... quite complex to put into words

21Sep/11

Mobicom. Day 1

Posted by Narseo

Mobicom'11 is being held in the (always interesting) city of Las Vegas. On this first day, the talks were mainly about wireless technologies, and several techniques to avoid congestion were proposed.

Plenary Session
Keynote Speaker: Rajit Gadh (Henry Samueli School of Engineering and Applied Science at UCLA)

Prof. Gadh talked about the UCLA project “SmartGrid”, a topic which is gaining momentum in California. The project is motivated by the fact that electricity comes from a grid that spreads across the whole country, and we are still using technology deployed 100 years ago. The grid is rigid, fixed and large. In fact, Rajit Gadh sees a clear parallelism between data networks and power networks. Based on that observation, they aim to create a Smart Grid infrastructure with the following characteristics: self-healing, active participation of consumers, the ability to accommodate all energy sources and storage options, eco-friendliness, etc. More information can be found on the project website.

SESSION 1. Enterprise Wireless
FLUID: Improving Throughputs in Enterprise Wireless LANs through Flexible Channelization, Shravan Rayanchu (University of Wisconsin-Madison, USA); Vivek Shrivastava (Nokia Research Center, Palo Alto); Suman Banerjee (University of Wisconsin-Madison, USA); and Ranveer Chandra (Microsoft, USA)

One of the problems in current 802.11 technologies is that channel width is fixed. However, many advantages arise from replacing fixed-width channels with flexible-width ones. The goal of this paper is to build a model that can capture flexible channel conflicts, and then use this model to improve the overall throughput of a WLAN.

One of the problems in wireless channels is that, depending on the interference, there are different approaches to avoid conflicts. The interference, in turn, depends on the configuration of the channel. As an example, narrowing the channel width helps to reduce interference; they also tried to better understand the impact of power levels.

They showed that, given an SNR, nodes can predict the delivery ratio for a specific channel width. As a result, the receiver can compute the SNR and predict the delivery ratio as a function of the SNR autonomously. Given that, the problem of channel assignment and scheduling becomes a flexible channel assignment and scheduling problem.

SmartVNC: An Effective Remote Computing Solution for Smartphones, Cheng-Lin Tsao, Sandeep Kakumanu, and Raghupathy Sivakumar (Georgia Tech University, USA)

In our opinion, this paper was a great example of how to improve the user experience with certain applications; in this case, the UX of mobile VNC. This kind of service was designed for desktops and laptops, so it does not take into account the nature of smartphones. The goal is to allow users to access a remote PC (in this case Windows) from a smartphone (Android) in a friendly way. They evaluated the UX of 22 users (experienced users, students between 20-30 years old) and 9 applications running over VNC. They defined different metrics such as the mean opinion score (the higher the complexity, the lower the mean opinion score) and the task effort (number of operations required for a task, such as mouse clicks, key strokes, etc.). They then correlated both metrics for users running apps over VNC, and the results showed that when the task effort is high, the UX is poorer.

They proposed aggregating repetitive sequences of operations in user activity to remove redundancy without losing functionality. One of the main problems was that application macros (as in Excel) are extensible but not application-agnostic, whilst raw macros (e.g. AutoHotkey) are exactly the opposite.

Their answer is smart macros: they record events, build macros from them, and provide a tailored interface on the remote computing client with collapsible overlays, macros grouped by app, automatic zooming, etc. For the applications they tested with those 22 users, the task effort dropped from 100 to 3, whilst the time to perform a task was also greatly reduced. In the subjective evaluation, all the users were satisfied with the new VNC. The talk was completed with a video-recorded demo of the system.

FERMI: A FEmtocell Resource Management System for Interference Mitigation in OFDMA Networks, Mustafa Yasir Arslan (University of California Riverside, USA); Jongwon Yoon (University of Wisconsin-Madison, USA); Karthikeyan Sundaresan (NEC Laboratories America, USA); Srikanth V. Krishnamurthy (University of California Riverside, USA); and Suman Banerjee (University of Wisconsin-Madison, USA)

Femtocells are small cellular base stations that use a cable backhaul and can extend network coverage. In this scenario interference can be a problem, but the problem differs from those found in the WiFi literature. OFDMA (WiMAX, LTE) uses sub-channels at the PHY and multiple users are scheduled in the same frame, whilst WiFi uses OFDM (sequential units of symbols transmitted at a specific frequency in time). Moreover, OFDMA has a synchronous MAC (there is no carrier sensing as in WiFi). As a consequence, WiFi solutions cannot be applied to femtocells: interference leads to throughput loss, and many clients coexist in the same frame.

As a consequence, the solution must take into account both the time domain and the frequency domain. FERMI gathers load- and interference-related information. It operates at a coarse granularity (in the order of minutes), but this is not a drawback, as interference does not change much at this time scale. Moreover, a per-frame solution is not feasible, since interference patterns change on each retransmission, whereas aggregate interference and load change only at coarse time scales.

The system evaluation was done on a WiMax testbed and also on simulations. In both cases, they obtained a 50% throughput gain over pure sub-channel isolation solutions. The core results can be applicable to LTE as well.

SESSION 2. Wireless Access
WiFi-Nano: Reclaiming WiFi Efficiency through 800ns Slots, Eugenio Magistretti (Rice University, USA); Krishna Kant Chintalapudi (Microsoft Research, India); Bozidar Radunovic (Microsoft Research, U.K.); and Ramachandran Ramjee (Microsoft Research, India)

WiFi data rates have increased, but throughput has not seen a similar level of growth. Throughput is much lower than the data rate because of high per-frame overhead: the overhead is about 45% at 54 Mbps, and it dominates at higher rates, around 80% at 300 Mbps. This gets worse when multiple links come into play.

This observation motivated WiFi-Nano, a technology that can double the throughput of WiFi networks by reducing slot overhead by roughly 10x. Their solution uses nano slots to shrink the slot duration well below the standard 9 microseconds of 802.11a/n (which is already close to the minimum achievable with conventional carrier sensing), down to 800 ns. In addition, they exploit speculative preambles, since preamble detection and transmission occur in parallel: as soon as the backoff expires, a node transmits its preamble, but while transmitting it, it continues to detect incoming preambles even with self-interference. Their empirical results show that slightly longer preambles improve throughput by up to 100%, and frame aggregation increases those figures even more: with aggregation, efficiency grows from 17% to almost 80%.
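
The overhead problem is easy to see with a back-of-the-envelope calculation: the per-frame overhead (backoff slots, preamble, SIFS, ACK) stays roughly constant in time, while the payload airtime shrinks as the data rate grows. The 120 microseconds of fixed overhead below is an illustrative guess, not the exact 802.11 figure.

    def efficiency(data_rate_mbps, payload_bytes=1500, overhead_us=120):
        """Rough airtime efficiency: payload time over payload time plus fixed overhead."""
        payload_us = payload_bytes * 8 / data_rate_mbps   # microseconds to send the payload
        return payload_us / (payload_us + overhead_us)

    for rate in (54, 150, 300):
        print("%d Mbps -> efficiency ~%.0f%%" % (rate, 100 * efficiency(rate)))
    # The payload airtime drops from ~222 us at 54 Mbps to ~40 us at 300 Mbps,
    # so the constant overhead ends up dominating; shrinking the slots (and hence
    # the average backoff) attacks exactly that constant term.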

XPRESS: A Cross-Layer Backpressure Architecture for Wireless Multi-Hop Networks, Rafael Laufer (University of California at Los Angeles, USA); Theodoros Salonidis, Henrik Lundgren and Pascal Leguyadec (Technicolor, Corporate Research Lab, France)

Multihop networks operate below capacity due to poor coordination across layers and among transmitting nodes. They propose backpressure scheduling with cross-layer optimisations: at each slot, the system selects the optimal link set for transmission. In their opinion, multihop networks pose several challenges:

  1. Time slots.
  2. Link sets (e.g. knowing non-interfering links).
  3. Protocol overhead.
  4. Computation overhead.
  5. Link scheduling.
  6. Hardware constraints (e.g. memory limitations in wireless cards).

XPRESS addresses all of these challenges. It has two main components: the mesh controller (MC) and the mesh access point (MAP). The MC receives flow queue information, computes the schedule and disseminates it, while the MAP executes schedules and processes queues. The key challenge is computing the optimal schedule per slot, which takes a lot of time.

The MAP nodes use a cross-layer protocol stack to compute the schedules. Traffic from apps running on the node goes into the kernel, which classifies the flows and allocates each to its own queue, followed by a congestion controller. The pipeline then has a flow queue followed by a packet scheduler, which puts each packet into the proper link queue. Somehow this reminds me of the work on Active Networks, as they dynamically change the behaviour of the network, in this case in a mesh scenario. The proposed scheme achieves 63% and 128% gains over 802.11 at 24 Mbps and auto-rate schemes, respectively. They also performed a scalability evaluation.

CRMA: Collision-Resistant Multiple Access, Tianji Li, Mi Kyung Han, Apurva Bhartia, Lili Qiu, Eric Rozner, and Ying Zhang (University of Texas at Austin, USA); Brad Zarikoff (Hamilton Institute, Ireland)

FDMA, TDMA, FTDMA and CSMA are the traditional MAC approaches to avoid collisions. These techniques incur significant overhead, so the authors move from collision avoidance to collision resistance, based on a new encoding/decoding scheme that allows multiple signals to be transmitted simultaneously.

In CRMA, every transmitter views the OFDM physical layer as multiple orthogonal but sharable channels, and randomly selects a subset of the channels for transmission. When multiple transmissions overlap on a channel, the signals naturally add up in the wireless medium.

In this system, ACKs are sent as data frames. There is, however, a problem with misaligned collisions; these are handled with cyclic prefixes (CP), which force the collided symbols to fall within the same FFT window. The number of overlapping transmissions is limited using exponential back-off.
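
A toy sketch of the channel-selection side of the idea: each transmitter spreads its signal over a random subset of OFDM channels, and overlapping transmissions simply add up on shared channels. The encoding/decoding that untangles the summed signals is the paper's contribution and is not reproduced here; the channel counts are made up.

    import random
    import numpy as np

    NUM_CHANNELS = 64      # assumed number of orthogonal OFDM channels (illustrative)
    CHANNELS_PER_TX = 8    # assumed subset size per transmitter (illustrative)

    def transmit(symbol_value):
        """Place one transmitter's symbol value on a random subset of channels."""
        signal = np.zeros(NUM_CHANNELS)
        chosen = random.sample(range(NUM_CHANNELS), CHANNELS_PER_TX)
        signal[chosen] = symbol_value
        return signal, chosen

    if __name__ == "__main__":
        s1, c1 = transmit(1.0)
        s2, c2 = transmit(2.0)
        medium = s1 + s2            # overlapping signals add up in the wireless medium
        overlap = sorted(set(c1) & set(c2))
        print("channels where the two transmissions overlapped:", overlap)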

The evaluation was done on a testbed running CRMA on top of a default OFDM implementation on USRPs. They also used Qualnet simulations to evaluate the efficiency of the network.

1Jul/11

MobiSys’11. Day 3

Posted by Narseo

Tracking and Saving Energy

Today there was only a morning session in MobiSys, about location tracking and energy efficiency. The first presentation was Energy-Efficient Positioning for Smartphones using Cell-ID Sequence Matching by J. Paek (Univ. of Southern California), K. Kim (Deutsche Telekom), J. Singh (Deutsche Telekom) and R. Govindan (Univ. of Southern California). This paper is about energy-efficient location techniques and seems to be an extension of a previous paper presented at MobiSys'10. They combine the complementary features of GPS and Cell-ID: GPS is more accurate than Cell-ID but more energy-costly. During the talk they showed the inaccuracy and inconsistency of network-based location in urban environments: it has a mean error in the order of 300 m, and for a given position it can report different locations. Their system uses cell-ID sequence matching along with a history of cells and GPS coordinates, and also uses time of day as a hint. It opportunistically builds a history of users' routes and the transitions between cells. It then uses the Smith-Waterman algorithm for sequence matching against this historic data: it looks for the sub-sequence in the database that matches best, and turns GPS on only when there is no good match. This approach can save more than 90% of the GPS energy, since GPS usage goes down as learning progresses. The only limitation of the system is that it cannot detect small detours, but the authors mention that this is not a big issue.
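
The matching step is classic Smith-Waterman local alignment applied to cell-ID sequences. Below is a small sketch with illustrative scoring parameters and a hypothetical GPS duty-cycling rule, not the authors' implementation.

    def smith_waterman_score(seq_a, seq_b, match=2, mismatch=-1, gap=-1):
        """Local-alignment score between two cell-ID sequences (lists of cell IDs).
        A high score means the recent observations match a stretch of a stored route."""
        rows, cols = len(seq_a) + 1, len(seq_b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i-1][j-1] + (match if seq_a[i-1] == seq_b[j-1] else mismatch)
                H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    # Hypothetical usage: keep GPS off while some stored route matches well enough.
    recent_cells = [101, 102, 105, 107]
    stored_route = [99, 101, 102, 105, 107, 110]
    SCORE_THRESHOLD = 6   # illustrative threshold
    turn_gps_on = smith_waterman_score(recent_cells, stored_route) < SCORE_THRESHOLD
    print("turn GPS on?", turn_gps_on)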

Energy-efficient Trajectory Tracking for Mobile Devices by M. Kjærgaard (Aarhus Univ.), S. Bhattacharya (Univ. of Helsinki), H. Blunck (Aarhus Univ.), P. Nurmi (Univ. of Helsinki). It is possible to retrieve location from WiFi, GPS or GSM, and many location-aware services require trajectory tracking. This paper proposes new sensor management strategies and is built on top of a previous paper published at MobiSys'09, EnTracked. The system minimizes the use of GPS by predicting how long it can sleep before the next position sensing, using sensors such as the radio, accelerometer and compass (it is not clear to me what happens, both in terms of energy consumption and usability, if the GPS returns to the cold-start phase or has to remain in higher power modes, since they use an energy model for that). The system requires the collaboration of a server. They also performed a comparative analysis with previous systems presented at MobiSys; this study shows the lowest energy consumption across all of them.

Profiling Resource Usage for Mobile Applications: a Cross-layer Approach by F. Qian (Univ. of Michigan), Z. Wang (Univ. of Michigan), A. Gerber (AT&T Labs), Z. Mao (Univ. of Michigan), S. Sen (AT&T Labs), O. Spatscheck (AT&T Labs). The idea is to give developers a good understanding of how their apps can impact energy consumption on mobile handsets through their use of cellular networks. They look at the different power states of UMTS, the time required to move between states, and how an app can make the cellular interface move between them. Their system collects packet traces, user input and the packet-process correspondence. They associate each state with a constant power value measured with a power meter (signal strength is not taken into account), and this is used to perform a detailed analysis of TCP/HTTP behaviour and of how traffic bursts incur energy overheads. They presented several case studies based on popular apps and web services.

Self-Constructive, High-Rate System Energy Modeling for Battery-Powered Mobile Systems by M. Dong (Rice Univ.), L. Zhong (Rice Univ.). This paper aims to build a high-rate virtual power meter without requiring external tools, by looking at how fast the battery level decreases. Classic power models are based on linear regression, usually require an external multimeter and good knowledge of the hardware to build, and are hardware-dependent; these factors limit their accuracy. Sesame is a self-constructive, personalized power model that looks only at the battery interface and uses statistical learning (principal component analysis). They characterised the error of the low-rate battery interface (non-Gaussian) in order to increase accuracy. The computational overhead of the measurements can be high, but Sesame is able to generate energy models at 100 Hz. It takes 15 hours to generate a model that achieves an average error of 15%, and it is more accurate than PowerTutor and other tools available in the market.
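
A very rough sketch of the self-modelling idea: log coarse readings from the battery interface alongside system usage counters, then fit a model mapping usage to power. Sesame's actual machinery (PCA, error handling, model refinement) is omitted; this is just ordinary least squares on synthetic data.

    import numpy as np

    def fit_power_model(usage_counters, battery_power):
        """usage_counters: (n_samples, n_features) matrix, e.g. CPU load, screen
        brightness, bytes sent, sampled alongside the battery interface.
        battery_power: (n_samples,) coarse power readings from the battery interface."""
        X = np.column_stack([usage_counters, np.ones(len(usage_counters))])  # intercept
        coef, *_ = np.linalg.lstsq(X, battery_power, rcond=None)
        return coef

    def predict_power(coef, usage_sample):
        return float(np.dot(np.append(usage_sample, 1.0), coef))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        usage = rng.random((200, 3))                     # cpu, screen, network (synthetic)
        power = usage @ np.array([1.2, 0.8, 0.5]) + 0.3 + rng.normal(0, 0.05, 200)
        model = fit_power_model(usage, power)
        print("predicted power (W):", predict_power(model, [0.5, 0.7, 0.1]))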

... and that's all from DC!

30Jun/11

MobiSys’11. Day 2

Posted by Narseo

Keynote - Mobile Computing: the Next Decade and Beyond

The keynote was given by Prof. Mahadev Satyanarayanan, "Satya" (Carnegie Mellon University, MobiSys Outstanding Contributor Award). A quick look at the abstract of his talk is enough to see his merits.

He thinks that research on mobile computing answers a real demand from society. New systems and apps are motivated by the fact that sales of mobile devices overtook sales of PCs for the first time in 2011. In his opinion, mobile computing is a common ground between distributed systems, wireless networking, context awareness, energy awareness and adaptive systems. He highlighted the enduring challenges in this area over the last years:

    - Weight, power, size constraints (e.g. tiny I/O devices).
    - Communication uncertainty: bandwidth, latency and money. We still struggle with intermittent connectivity.
    - Finite energy. Computing, sensing and transmitting data cost energy.
    - Scarce user attention: low human performance. Users are prone to make errors and they are becoming less patient.
    - Lower privacy, security and robustness. Mobile handsets have more attack vectors and can suffer physical damage more easily.

After that, he mentioned three future emerging themes, some of them related to several ongoing projects in Cambridge:

    Mobile devices are rich sensors. They support a wide range of rich sensors and can access nearby data opportunistically (content-based search can be more energy-efficient, so it looks like there is some ground for CCN here). In fact, applications can be context- and energy-aware. He mentioned some of the applications from yesterday's first session as examples.
    Cloud-mobile convergence. Mobile computing allows freedom: it enables access to anything, anytime, anywhere. However, this increases complexity. On the other hand, cloud computing provides simplicity through centralization (one source has it all). The question is: can we combine the freedom of mobility with the simplicity of cloud computing? Cloud computing has evolved a lot since its first conception in 1986 (he mentioned the Andrew File System as the first cloud service ever). He also highlighted that the key enabling technology is virtualization, an example being his research on Cloudlets. Virtual machines allow ubiquity of state and behavior, so they can perfectly re-create the state anywhere, anytime. Moreover, moving clouds closer to the end user can minimise the impact of network latency. He also talked about a still quite unexplored space: offloading computation from the cloud to local devices (the other direction has been quite well explored already).
    Resource-rich mobile apps. From my perspective, this is very related to the first theme. He talked about applications incorporating face recognition, and the role of mobile handsets in enabling applications for mobile cognitive assistance.

Session 4. When and Where

This session was mostly about indoor localisation. The first presentation was Indoor location sensing using geo-magnetism (J. Chung (MIT), M. Donahoe (MIT), I. Kim (MIT), C. Schmandt (MIT), P. Razavi (MIT), M. Wiseman (MIT)). In this paper, the authors offer an interesting approach to the classic problem of indoor location: they use magnetic field distortion fingerprints to identify the location of the user. They built their own gadget, a rotating tower with a magnetic sensor, to obtain the magnetic fingerprint of a building (sampled every 2 feet). They showed that the magnetic field in their building hasn't changed in 6 months (they haven't checked whether it changes at different times of the day), so the fingerprint doesn't have to be updated frequently. They implemented their own portable gadget with 4 magnetic sensors for the evaluation. The error is <1 m in 65% of the cases, so it is more precise (but more costly) than WiFi solutions. The main source of error is moving objects (e.g. elevators).

The next paper is similar but leverages audio fingerprints: Indoor Localization without Infrastructure using the Acoustic Background Spectrum (S. Tarzia (Northwestern Univ.), P. Dinda (Northwestern Univ.), R. Dick (Univ. of Michigan), G. Memik (Northwestern Univ.)). NOTE: this app is available in Apple's App Store as BatPhone. The benefit of this system is that it does not require specialized hardware or any infrastructure support: it passively listens to background sounds and then analyses their spectrum. They achieved 69% accuracy over 33 rooms using sound alone. As with many other fingerprint-based localization mechanisms, it requires supervised learning. To guess the current location, they find the "closest" fingerprint in a database of labeled fingerprints. As future work, they plan to use a Markov movement model and to add other sensors to increase accuracy, as in SurroundSense.
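
Both this paper and the previous one ultimately reduce to nearest-neighbour search over labelled fingerprints. Here is a generic sketch, with Euclidean distance standing in for whichever similarity metric each system actually uses.

    import numpy as np

    def locate(query_fp, labelled_fps):
        """query_fp: 1-D feature vector (e.g. an acoustic spectrum or magnetic signature).
        labelled_fps: dict mapping room name -> list of stored fingerprint vectors.
        Returns the room whose closest stored fingerprint is nearest to the query."""
        best_room, best_dist = None, float("inf")
        for room, fps in labelled_fps.items():
            for fp in fps:
                d = float(np.linalg.norm(query_fp - fp))
                if d < best_dist:
                    best_room, best_dist = room, d
        return best_room

    db = {"office":   [np.array([0.1, 0.9, 0.3])],
          "corridor": [np.array([0.8, 0.2, 0.4])]}
    print(locate(np.array([0.15, 0.85, 0.35]), db))   # -> office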

Exploiting FM Radio Data System for Adaptive Clock Calibration in Sensor Networks was a quite impressive and neat piece of work. Time synchronization is important for various applications (event ordering, coordination, and new wireless interfaces such as Qualcomm's FlashLinq take advantage of a central clock to synchronise devices), and it is usually based on message passing between devices. They instead exploit the FM radio data system (RDS) for clock calibration. Among its advantages are its excellent coverage and its availability all over the world; it also avoids some of the coverage limitations of GSM networks. They implemented their own FM hardware receiver, integrated with sensor network platforms on TinyOS. Their results show that the RDS clock is highly stable and available city-wide, and the power consumption is very low, as is the cost (2-3$). The calibration error is also ridiculously low, even when the calibration period is in the order of hours. Very neat.

The last presentation was joint work between the University of Michigan and AT&T Labs: AccuLoc: Practical Localization of Performance Measurements in 3G Networks. Cellular operators need to distinguish the performance of each geographic area in their 3G networks to detect and resolve local network problems. They claim that the "last mile" radio link between 3G base stations and end-user devices is essential for the user experience. They take advantage of previous papers showing that users' mobility is predictable, and exploit this fact to cluster cell sectors that accurately report network performance at the IP level. These techniques allow them to characterize and identify problems in network performance: clustering cells captures RTT spikes better.

Session 5. Security and Privacy

Caché: Caching Location-Enhanced Content to Improve User Privacy
S. Amini (CMU), J. Lindqvist (CMU), J. Hong (CMU), J. Lin (CMU), E. Toch (Tel Aviv Univ.), N. Sadeh (CMU). The idea is to periodically pre-fetch potentially useful location-based content, so applications can retrieve it from a local cache on the mobile device when it is needed. Location is then revealed to third-party providers only at a coarse granularity (a region) instead of a precise position. Somehow similar to SpotMe.

The second presentation was ProxiMate: Proximity-based Secure Pairing using Ambient Wireless Signals by S. Mathur (AT&T Labs), R. Miller (Rutgers Univ.), A. Varshavsky (AT&T Labs), W. Trappe (Rutgers Univ.), N. Mandayam (Rutgers Univ.). This is about enabling secure pairing, based on proximity, between wireless devices that have no prior trust relationship. It tries to reduce the security issues of low-power communications (susceptible to eavesdropping, and even to being sniffed from a mile away, as with Bluetooth). It takes advantage of code offsets to generate a common cryptographic key directly from the wireless environment the devices share over time. It was quite complex to follow in the presentation, but it provides security against a computationally unbounded adversary, and its complexity is O(n) whereas Diffie-Hellman is O(n^3).

Security versus Energy Tradeoffs in Host-Based Mobile Malware Detection
J. Bickford (Rutgers Univ.), H. Lagar-Cavilla (AT&T Labs), A. Varshavsky (AT&T Labs), V. Ganapathy (Rutgers Univ), L. Iftode (Rutgers Univ.). This interesting paper explores the security-energy tradeoffs in mobile malware detection: detection requires periodically scanning the attack targets, but doing so can drain the battery up to two times faster. The work presents energy-optimized versions of two security tools. Energy is conserved by adapting the frequency of checks and by defining what to check (scanning fewer code/data objects), trying to provide a high level of security at low power consumption. They are especially looking at rootkits (sophisticated malware requiring complex detection algorithms). To detect them, it is necessary to run the user OS on a hypervisor and check all kernel data changes; this can provide 100% coverage but poor energy efficiency. To balance the tradeoff, they target what they call the sweet spot for balanced security, and with this technique they can detect 96% of the rootkit attacks.

Analyzing Inter-Application Communication in Android by E. Chin (UC Berkeley), A. Felt (UC Berkeley), K. Greenwood (UC Berkeley), D. Wagner (UC Berkeley). Malicious apps can take advantage of Android's inter-application messaging (the abstraction is called an Intent in Android) by registering a listener for a specific kind of message. An application can send implicit intents, which are not addressed to a specific receiver (i.e. an application or service). They described several attacks that become possible because sending implicit intents in Android makes the communication public: both the intent and the receiver can be visible to an attacker. Consequently, there are several attacks such as spoofing, man-in-the-middle, etc. A malicious app can also inject fake data into applications or collect information about the system. They evaluated their tool, called ComDroid, with 20 applications. They claim that this can be fixed either by developers or by the platform.

Session 6. Wireless Protocols

This session covered some optimisations for wireless protocols. The first presentation was Avoiding the Rush Hours: WiFi Energy Management via Traffic Isolation by J. Manweiler (Duke Univ.), R. Choudhury (Duke Univ.). They measured the power consumption of WiFi interfaces on Nexus One handsets and found that the WiFi energy cost grows linearly with the number of access points available (dense neighborhoods). Their system (called SleepWell) makes APs collaborate and coordinate their beacons, and only requires changing the APs' firmware. Each AP maintains a map of its neighboring peers (APs) in order to efficiently re-schedule its beacon timings. However, clients are synchronized to AP clocks; to solve this, the AP notifies the client that a beacon is going to be deferred, so the client knows when it must wake up. As a result, mobile clients reduce the energy wasted in idle/overhear mode and can extend the time they remain in deep sleep.

The next paper was Opportunistic Alignment of Advertisement Delivery with Cellular Basestation Overloads, by R. Kokku (NEC Labs), R. Mahindra (NEC Labs), S. Rangarajan (NEC Labs) and H. Zhang (NEC Labs). This paper tries to align cellular base-station overloads with the delivery of advertising content to clients. The goal is to avoid compromising the user-perceived quality of experience while making cellular network operations profitable through advertisements (e.g. embedded in videos). Overload reduces the available bandwidth per user. Their assumption is that cellular operators can control advertisement delivery, so it is possible to lower the quality (rate) of some advertisements for a specific set of users. Their system, Opal, considers two groups of users: regular users, who receive their traffic share, and targeted users, who receive advertisements during base station overloads. Opal initially maps all users to the regular group and dynamically decides which users to migrate between groups based on a long-term fairness metric. The system is evaluated on WiMAX and with simulations. In the future they plan to target location-based advertising.

The final presentation was Revisiting Partial Packet Recovery in 802.11 Wireless LANs by J. Xie (Florida State Univ.), W. Hu (Florida State Univ.), Z. Zhang (Florida State Univ.). Packets on WiFi links can be partially received, but to recover them the whole packet has to be retransmitted, which incurs energy and computational overhead. One solution divides packets into smaller blocks so that only the missed ones are retransmitted (similar to keeping a TCP window). Another technique is based on error correction (e.g. ZipTx). These techniques can impose a significant CPU overhead, and they can be complementary. The novelty of their approach is including target error correction and dynamically selecting the repair method that minimizes the number of bytes sent and the CPU overhead.
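
A minimal sketch of the block-based recovery idea (not ZipTx or the authors' scheme): checksum each block of the payload so that only the corrupted blocks need to be retransmitted. The block size and the CRC choice are illustrative.

    import zlib

    BLOCK_SIZE = 64  # bytes per block (illustrative)

    def make_blocks(payload):
        blocks = [payload[i:i + BLOCK_SIZE] for i in range(0, len(payload), BLOCK_SIZE)]
        checksums = [zlib.crc32(b) for b in blocks]
        return blocks, checksums

    def blocks_to_retransmit(received_blocks, expected_checksums):
        """Indices of blocks whose CRC doesn't match; only these are re-sent."""
        return [i for i, (blk, crc) in enumerate(zip(received_blocks, expected_checksums))
                if zlib.crc32(blk) != crc]

    if __name__ == "__main__":
        blocks, crcs = make_blocks(b"x" * 300)
        received = list(blocks)
        received[2] = b"garbled" + received[2][7:]   # simulate bit errors in one block
        print("retransmit blocks:", blocks_to_retransmit(received, crcs))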

.... and now the conference banquet :-)

30Jun/11

Mobisys’11. Day 1

Posted by Narseo

MobiSys started this morning with 3 sessions about mobile applications and services, energy-efficient management of displays, and crowd-sourcing apps. Researchers affiliated with 26 different institutions were among the co-authors of the papers; the best-represented were Duke University (4 papers), AT&T (4 papers), Univ. of Michigan (3 papers) and Univ. of Southern California (3 papers). The keynote was given by Edward W. Felten from the Federal Trade Commission, about how the FTC works.

Session 1. Services and Use Cases

The first presentation was a quite cool idea from Duke University called TagSense: A Smartphone-based Approach to Automatic Image Tagging. They proposed a system for automatically tagging pictures by exploiting the sensors and contextual information available on modern smartphones: WiFi ad-hoc networking, compass, light sensors (to identify whether the handset is indoors or outdoors), microphone, accelerometer (movement of the user), gyroscope and GPS (location). When the camera application is launched, it creates a WiFi ad-hoc network with all the nearby devices, and they exchange contextual information to add rich metadata to the captured picture. One of the challenges they tackled was discerning whether a user was moving, posing, facing the camera, etc. They implemented a prototype on Android and evaluated it with more than 200 pictures. The paper compares the accuracy of the automatic tags with the metadata manually added in Picasa and iPhoto; with this system, the number of missed tags is reduced considerably. Nevertheless, some research challenges remain open, such as user authentication and a full system performance evaluation.

A second paper, also by Duke University researchers, was Using Mobile Phones to Write in Air (an extension of a HotMobile 2009 paper). The idea is to use the accelerometer to let users write in the air using the phone as a pen. The accelerometer records the movement, and the text is displayed on the screen after being processed on a server running Matlab. Some of the research challenges they had to face were filtering the high-frequency components of human hand vibration (removed with a low-pass filter), recognizing the symbols (pre-loaded pattern recognition, which reminds me of how MS Kinect works), identifying pen-lifting gestures, and dealing with hand rotation while writing (accelerometers only measure linear acceleration; the Wii uses a gyroscope to solve this issue). The system seems to work nicely, and they said it has been tested with patients unable to write by hand.

The following presentation was Finding MiMo: Tracing a Missing Mobile Phone using Daily Observations, from Yonsei University. This system allows finding lost or stolen mobile handsets in indoor environments. The authors claim that it addresses some of the limitations of services such as Apple's MobileMe, which are constrained by the availability of network coverage and by battery capacity. They use an adaptive sensing algorithm and leverage several indoor location techniques.

Odessa: Enabling Interactive Perception Applications on Mobile Devices by M. Ra (Univ. of Southern California), A. Sheth (Intel Labs), L. Mummert (Intel Labs), P. Pillai (Intel Labs), D. Wetherall (Univ. of Washington) and R. Govindan (Univ. of Southern California) is about offloading computation to the cloud to solve face, object, pose and gesture recognition problems. Their system adapts at runtime, deciding when and how to offload computation to the server efficiently based on the availability of resources (mainly the network). They found that offloading and parallelism choices should be dynamic, even for a given application, as performance depends on scene complexity as well as environmental factors such as the network and device capabilities. This piece of work is related to previous projects such as Spectra, NWSLite and Maui.

Session 2. Games and Displays

The first paper, entitled Adaptive Display Power Management for Mobile Games, was a piece of work by Balan's group at Singapore Management University. It tries to minimise the energy impact of interactive apps such as games, which keep a power-hungry resource like the display active for long periods, without hurting the user experience. As an example, they show that while playing a YouTube video, 45-50% of the energy consumption is taken by the display, the cellular network takes 35-40%, and the CPU 4-15%. The system dynamically reduces screen brightness to save energy and applies non-linear gamma correction per frame to compensate for the negative effect of the brightness reduction. They also conducted a user study with 5 students to understand human thresholds for brightness compensation.

Switchboard: A Matchmaking System for Multiplayer Mobile Games by J. Manweiler (Duke Univ.), S. Agarwal (Microsoft Research), M. Zhang (Microsoft Research), R. Choudhury (Duke Univ.), P. Bahl (Microsoft Research) tries to predict the network conditions of mobile users in order to provide a good mobile gaming experience. They presented a centralised service that monitors the latency between game players in order to match them in mobile games, and they tackled scalability issues such as grouping users into viable game sessions based on their network properties.

Chameleon: A Color-Adaptive Web Browser for Mobile OLED Displays by M. Dong (Rice Univ.) and L. Zhong (Rice Univ.) takes advantage of the well-known observation that the colors displayed on an OLED screen determine its power draw. The energy consumption can vary from 0.5 W (almost black screen) to 2 W (white screen). The power consumption of an OLED display increases linearly with the number of pixels, while the energy consumption per pixel depends on which LEDs are active. In fact, 65% of the pixels on most common websites are white, and this unnecessarily imposes a higher energy consumption on mobile handsets. Generally, green and red pixels are more energy-efficient than blue ones on most handsets, so they propose transforming the colour of GUI objects on the display to make it more energy-efficient, in a similar fashion to Google Black. The 3 phases of their transformation are "color counting" (building a histogram of the GUI components), "color mapping" and "color painting". They also allow the user to apply different color transformations to different websites.
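
A toy version of that linear power model, together with a naive colour remap, is sketched below. The per-channel weights are made up (blue assumed the most expensive), and the remap just darkens near-white backgrounds, which is far cruder than Chameleon's actual colour mapping.

    import numpy as np

    # Illustrative per-channel energy weights (R, G, B): values are invented.
    CHANNEL_WEIGHT = np.array([0.6, 0.5, 1.0])

    def display_power(image_rgb):
        """image_rgb: (H, W, 3) array in [0, 1]. Model OLED power as the sum over
        pixels of a weighted sum of channel intensities (constant term ignored)."""
        return float((image_rgb * CHANNEL_WEIGHT).sum())

    def darken_background(image_rgb, threshold=0.9):
        """Naive remap: make near-white background pixels black, leave the rest alone."""
        out = image_rgb.copy()
        out[(image_rgb > threshold).all(axis=-1)] = 0.0
        return out

    page = np.ones((100, 100, 3))              # a mostly-white page
    page[40:60, 40:60] = [0.2, 0.4, 0.8]       # a colourful widget
    print("power before:", display_power(page),
          "after remap:", display_power(darken_background(page)))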

Session 3. Crowdsourcing

In this session, some interesting applications were proposed, such as Real-Time Trip Information Service for a Large Taxi Fleet by Balan (Singapore Mgmt Univ.). This application collects information about taxi availability and finds routes/similar trips for customers based on starting point, ending point, distance and time. They described how they had to find and eliminate sources of error (e.g. weather) and how they used dynamic clustering (KD-trees) to solve the problem. The second application was AppJoy: Personalized Mobile Application Discovery by B. Yan (Univ. of Massachusetts, Lowell) and G. Chen (Univ. of Massachusetts, Lowell). This is basically a recommendation engine for mobile apps based on user download history, ratings, and passive information about how often users run those applications. They claim that users who installed apps via AppJoy interacted with those apps more, and they want to extend it into a context-aware recommendation engine. Finally, SignalGuru: Leveraging Mobile Phones for Collaborative Traffic Signal Schedule Advisory by E. Koukoumidis (Princeton Univ.), L. Peh (MIT) and M. Martonosi (Princeton Univ.) is a traffic signal advisory system. It identifies traffic lights using the camera and tries to predict when they will turn red/green; they claim this can save drivers a considerable amount of fuel (20%), reducing the carbon footprint. The predictions are achieved by leveraging crowd-sourcing: cars collaborate and share information to identify those transitions. The system also uses sensors such as the accelerometer and gyro-based image detection.