syslog
12 Nov 2013

Live-blog from SenSys 2013 – Day 1

Posted by Aaron

The 11th ACM Conference on Embedded Networked Sensor Systems (SenSys 2013) has just kicked off at the University of Rome 'La Sapienza'. From Cambridge, Cecilia is chairing the session "Sensing People" and the SenSys 2013 Doctoral Colloquium.

Some facts about SenSys 2013:

  • 250+ attendees
  • 1/3 are students
  • 40% have registered for both the conference and the workshops

On submissions:

  • 21 papers got accepted out of 123 submissions (17%)
  • most of them received 3 to 8 reviews
  • 70 presentations in the poster and demo track - each one also got a 1-min madness talk during the main conference

The chairs gave an interesting opening talk with some stats about this year's submissions (as far as I can recall):

  • register the paper early, but submit it late (close to the deadline) - this holds for most of the accepted papers
  • most of the accepted papers are from America, around 50%
  • certain keywords in the title tend to lead to rejection: nodes, networking, vehicle, etc. (I should have taken a photo of that slide while laughing.. :( )

Since there are no power outlets in the conference room, most of my notes are on paper. I will find time to move them here.

Keynote by Shahram Izadi from MSR Cambridge: "The New Face of the User Interface"

He introduced the work carried out at MSR Cambridge by the Human-Computer Interaction (HCI) group, including 3D interaction between physical and digital objects and 3D model reconstruction using smartphones (with Queens' College and the Mathematical Bridge in the demo). He highlighted how to build such systems using low-cost components and smartphones.

Session 1 - Communication Systems

Chaos: Versatile and Efficient All-to-All Data Sharing and In-Network Processing at Scale

Olaf Landsiedel (Chalmers University of Technology, Sweden), Federico Ferrari (ETH Zurich, Switzerland), Marco Zimmerling (ETH Zurich, Switzerland)

Q: The evaluation was mainly by simulation - will there be a gap when the design is applied to a real environment?

A: It has not been tested in a real environment yet, so we don't know.

Let the Tree Bloom: Scalable Opportunistic Routing with ORPL

Simon Duquennoy (SICS Swedish ICT AB, Sweden), Olaf Landsiedel (Chalmers University of Technology, Sweden), Thiemo Voigt (SICS Swedish ICT AB and Uppsala University, Sweden)

Practical Error Correction for Resource-Constrained Wireless Networks: Unlocking the Full Power of the CRC

Travis Mandel (University of Washington), Jens Mache (Lewis & Clark College)

They use the CRC to correct wireless transmission errors, addressing reliability with their design TVA (Transmit-Verify-Ack).
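
The underlying trick - using a checksum to correct rather than merely detect errors - can be sketched with a brute-force bit-flip search. This is a minimal, hypothetical illustration of the general idea in Python, not the authors' TVA protocol, which is considerably more careful:

```python
import zlib
from typing import Optional

def crc_correct(frame: bytes, received_crc: int) -> Optional[bytes]:
    """Brute-force single-bit error correction using the CRC: flip each
    bit in turn and check whether the CRC then matches. An illustrative
    sketch of the general approach, not the paper's TVA design."""
    if zlib.crc32(frame) == received_crc:
        return frame                        # frame is already intact
    data = bytearray(frame)
    for byte in range(len(data)):
        for bit in range(8):
            data[byte] ^= 1 << bit          # flip one candidate bit
            if zlib.crc32(bytes(data)) == received_crc:
                return bytes(data)          # repaired
            data[byte] ^= 1 << bit          # undo the flip
    return None  # multi-bit error: fall back to retransmission

# usage: corrupt one bit and recover the original frame
original = b"sensor reading: 42"
crc = zlib.crc32(original)
corrupted = bytearray(original)
corrupted[3] ^= 0x10
assert crc_correct(bytes(corrupted), crc) == original
```

The search is quadratic in frame length per flipped bit, which is why this only pays off on the short frames typical of sensor networks.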

Very good presentation - the presenter ended his talk with a song about the CRC :)

Session 2 - Sensing People

iSleep: Unobtrusive Sleep Quality Monitoring using Smartphones

Tian Hao (Michigan State University), Guoliang Xing (Michigan State University), Gang Zhou (College of William and Mary)

Uses sound monitoring to infer sleep quality.

Lifestreams: a modular sense-making toolset for identifying important patterns from everyday life

Cheng-Kang Hsieh (UCLA CSD), Hongsuda Tangmunarunkit (UCLA CSD), Faisal Alquaddoomi (UCLA CSD), John Jenkins (Cornell Tech), Jinha Kang (UCLA CSD), Cameron Ketcham (Cornell Tech), Brent Longstaff (UCLA CSD), Joshua Selsky (Cornell Tech), Betta Dawson (UCLA CSD), Dallas Swendeman (UCLA David Geffen School of Medicine Department of Psychiatry and Biobehavioral Sciences), Deborah Estrin (Cornell Tech), Nithya Ramanathan (UCLA CSD)

Probably the longest author list in SenSys history.

14 GB of data collected from 44 young mothers.

Q: How do you recruit patients?

A: Done by the partners in hospital.

Q: Any new findings from the data?

A: We detected some hidden behaviours not noticed by the patients themselves, e.g., unusually long walks outdoors due to anxiety.

Diary: From GPS Signals to a Text-Searchable Diary

Dan Feldman (MIT), Andrew Sugaya (MIT), Cynthia Sung (MIT), Daniela Rus (MIT)

22 Sep 2011

Mobicom. Day 2

Posted by Kiran Rachuri

Day 2 of MobiCom 2011 started with my talk on SociableSense. Fourteen papers were presented over four sessions, including two best papers.

SESSION: Applications

SociableSense: Exploring the Trade-offs of Adaptive Sampling and Computation Offloading for Social Sensing, Kiran K. Rachuri, Cecilia Mascolo, Mirco Musolesi, and Peter J. Rentfrow (University of Cambridge, United Kingdom)

Our work. Details at:

http://www.syslog.cl.cam.ac.uk/2011/07/15/efficient-social-sensing-based-on-smart-phones/

Overlapping Communities in Dynamic Networks: Their Detection and how they can help Mobile Applications, Nam P. Nguyen, Thang N. Dinh, Sindhura Tokala, and My T. Thai (University of Florida, USA)

A better understanding of mobile networks in terms of overlapping communities and their underlying structure and organisation helps in developing efficient applications such as routing in MANETs, worm containment, and sensor reprogramming in WSNs. Detecting network communities is therefore important; however, the networks are large and dynamic, and the communities overlap. Can community detection be performed quickly and efficiently?

They propose a two-phase, limited-input-dependent framework to address this. Phase 1: detect basic communities (dense parts of the network). Phase 2: update the communities when changes occur, i.e., handle adding a node/edge and removing a node/edge. The evaluation is based on the MIT Reality Mining data. They evaluate the proposed scheme with respect to two applications: routing in MANETs and worm containment.

Detecting Driver Phone Use Leveraging Car Speakers, Jie Yang and Simon Sidhom (Stevens Institute of Technology, USA); Gayathri Chandrasekaran and Tam Vu (Rutgers University, USA); Hongbo Liu (Stevens Institute of Technology, USA); Nicolae Cecan (Rutgers University, USA); Yingying Chen (Stevens Institute of Technology, USA); Marco Gruteser and Richard P. Martin (Rutgers University, USA)

(Joint Best Paper Award)

80% of people talk on the phone while driving. The consequences can be dangerous (18% of accidents). They claim that hands-free devices do not help because of the cognitive load they still place on the driver. Several mobile apps on the market try to solve this (ZoomSafer, iZup, CellSafety). Recent measures:

-hard blocking: jammers, blocking calls etc

-soft interaction: delay calls, route to voice mail, automatic reply

Current apps that actively prevent phone use in vehicles only detect whether the phone is in a vehicle, through GPS, handover, signal strength, the speedometer, etc. None of them can tell whether the phone is being used by the driver or a passenger. They use an acoustic ranging approach to solve this: they identify the position of the phone using the car speakers, which emit different sounds at different times. The phone microphone has a wider frequency range than human hearing, so the beep frequency is set outside the hearing range. Evaluation shows that the detection accuracy is over 90%.
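
The position inference can be sketched as a time-difference-of-arrival calculation. The snippet below is a simplified, hypothetical illustration assuming a left-hand-drive car, one speaker per side, and perfectly timestamped beeps; the paper's actual scheme deals with multipath, clock offsets, and more speaker positions:

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def side_from_beeps(t_left: float, t_right: float, gap: float) -> str:
    """Classify the phone's side from beep arrival times (toy sketch).
    The left and right speakers emit beeps `gap` seconds apart; the
    phone timestamps both arrivals. Removing the scheduled gap leaves
    the time-difference-of-arrival, i.e. the distance difference.
    Assumes a left-hand-drive car: closer to the left speaker means
    the driver's side."""
    tdoa = (t_right - t_left) - gap       # seconds
    dist_diff = tdoa * SPEED_OF_SOUND     # metres; >0: right speaker farther
    return "driver" if dist_diff > 0 else "passenger"

# usage: right beep arrives 1 ms later than scheduled -> phone sits left
assert side_from_beeps(0.000, 0.501, gap=0.5) == "driver"
```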

I Am the Antenna: Accurate Outdoor AP Location Using Smartphones, Zengbin Zhang, Xia Zhou, Weile Zhang, Yuanyang Zhang, Gang Wang, Ben Y. Zhao, and Haitao Zheng (University of California at Santa Barbara, USA)

The density of APs in the environment is very high. How do you find the location of an AP? Conventional AP location methods:

- Directional antenna: Fast, very accurate but expensive

- Signal map: Simple but time consuming

- RSS gradient: low measurement overhead but low accuracy

Their solution is based on the effect of the user's orientation relative to an AP on RSS. The user's body can affect the SNR (they observed around a 13 dB difference). They also tested the generality of the effect with multiple phones, protocols, users, and environments, and the RSS profiles all followed the same trend.

Evaluation is on a campus, with three scenarios: 1. simple line of sight (no obstructions), 2. complex line of sight (vehicles etc.), 3. non-line-of-sight (line of sight completely blocked). Metric: absolute angular error (detected direction - actual direction). Results: error < 30 degrees in 80% of cases for simple LOS (line of sight); error < 65 degrees in 80% of cases for non-LOS.
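
The core observation - the user's body attenuates the signal, so RSS roughly peaks when the user faces the AP - suggests a very simple direction estimator. A toy sketch of that idea (the paper matches full rotation profiles rather than just taking the maximum):

```python
def estimate_ap_direction(rss_by_heading):
    """rss_by_heading: compass heading (degrees) -> averaged RSS (dBm).
    The user's body attenuates the signal when it sits between phone
    and AP, so RSS roughly peaks when the user faces the AP. Toy
    estimator: return the heading with the strongest signal."""
    return max(rss_by_heading, key=rss_by_heading.get)

# usage: strongest reading at 90 degrees -> AP is roughly due east
readings = {0: -62.0, 90: -55.0, 180: -68.0, 270: -64.0}
assert estimate_ap_direction(readings) == 90
```

Averaging several rotations before taking the peak would smooth out fast fading, which is roughly what profile matching buys over this one-liner.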

SESSION: Cellular Networks

Traffic-Driven Power Saving in Operational 3G Networks,  Chunyi Peng, Suk-Bok Lee, Songwu Lu, and Haiyun Luo (University of California at Los Angeles, USA)

Transmission power of base stations increases linearly with traffic load. The cooling power stays constant and is comparable to the transmission power. As a result, substantial energy is consumed even at zero traffic. Existing solutions do not address practical issues and follow a purely theoretical analysis. In this work, they propose a traffic-driven approach that exploits traffic dynamics to turn off under-utilised BSs for system-wide energy efficiency. They claim that traffic at a base station is quite predictable. There is a lot of potential to save energy in quiet hours, but also in peak hours. Their solution also tries to be compatible with the current 3G standard/deployment. Issues addressed: (1) how to satisfy location-dependent coverage and capacity constraints; (2) how to estimate traffic load?

Solution: estimate the traffic envelope via profiling and leverage its near-term stability. The set of BSs active in idle hours should be a subset of those active in peak hours, a BS should not be switched more than once per day, and location-dependent capacity must still be provided. Their estimate is a moving average over 24 daily intervals. Frequent on/off switching is undesirable - it takes several minutes - so switching should be driven by traffic characteristics.
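
The profiling step can be sketched as an exponential moving average over hourly slots plus a conservative switch-off test. Both functions below are illustrative assumptions with made-up parameters, not the paper's exact estimator or constraint set:

```python
def update_envelope(envelope, today, alpha=0.3):
    """Exponential moving average of per-hour traffic over 24 slots
    (an assumed estimator; the paper profiles 24 daily intervals)."""
    return [(1 - alpha) * e + alpha * t for e, t in zip(envelope, today)]

def can_switch_off(bs_load, neighbour_loads, neighbour_caps, margin=0.2):
    """A BS may power down only if neighbours can absorb its predicted
    load with a safety margin. Illustrative only: a real scheme must
    also satisfy location-dependent coverage constraints."""
    spare = sum(c - l for c, l in zip(neighbour_caps, neighbour_loads))
    return bs_load <= (1 - margin) * spare

# usage: smooth yesterday's profile, then test one quiet-hour decision
env = update_envelope([10.0] * 24, [20.0] * 24, alpha=0.5)
assert env[0] == 15.0
assert can_switch_off(5.0, [3.0, 4.0], [10.0, 10.0])  # spare 13 covers 5
```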

MOTA: Engineering an Operator Agnostic Mobile Service, Supratim Deb, Kanthi Nagaraj, and Vikram Srinivasan (Bell Labs Research, India)

Cellular coverage varies with location. Users may not be happy with a single service provider, and there is a case for users choosing services from multiple providers. Dual-SIM phones are already popular in Asia, and users choose services based on the providers' costs. Goal of this work: the ability for users to join the network of their choice at will, based on location, pricing, and applications.

Solution: let the user side drive the choice of operator. They consider several options. Option 1: a centralised approach making the decisions, but operators are unlikely to share network planning information. Option 2: users rely on signal strength from different base stations; this is insufficient and can result in poor user experience.

They propose MOTA, which introduces a service aggregator: a new intermediary between users and operators that maintains customer relationships and handles all control-plane operations that cannot be handled by a single operator. They also use a utility function that incorporates fairness. Evaluation is based on data from one of the largest cellular operators in India.

Anonymization of Location Data Does Not Work: A Large-Scale Measurement Study, Hui Zang and Jean Bolot (Sprint Applied Research, USA)

Call Detail Records (CDRs) keep a lot of information about users' phone calls, and they can be linked to a location. They can be used for marketing, security, LBS, and mobility modelling; however, privacy may be breached if such data is released. The traditional approach to protecting user privacy is anonymisation, but this work shows that it does not work. A CDR contains: mobile ID, time of call, call duration, start cell ID, start sector ID, end sector ID, call direction, and caller ID. If the mobile ID and caller ID are anonymised, can the user still be identified? It has been shown that with gender, zipcode, and birthdate, 87% of the US population can be identified.

Their dataset consists of more than 30 billion call records made by 25 million cell phone users across the USA. Their approach is to infer the top N locations for each user and correlate these with publicly available information such as census data. They show that the top 1 location does not yield small anonymity sets, but the top 2 and 3 locations do, at the sector- or cell-level granularity. They also propose possible solutions, based on spatial- and time-domain approaches, for publishing location data without compromising privacy.
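
The measurement methodology boils down to grouping users by their top-N locations and checking how small the resulting anonymity sets are. A minimal sketch of that idea with made-up cell IDs:

```python
from collections import Counter

def anonymity_sets(user_top_locs):
    """Group users by their top-location set; each group's size is the
    anonymity set. A size of 1 means the user is re-identifiable from
    location alone (a sketch of the paper's measurement idea)."""
    return Counter(tuple(sorted(locs)) for locs in user_top_locs.values())

# usage: two users share a top-2 pair, one user has a unique pair
users = {"u1": ["cellA", "cellB"],
         "u2": ["cellB", "cellA"],
         "u3": ["cellC", "cellD"]}
sizes = anonymity_sets(users)
assert sizes[("cellA", "cellB")] == 2
assert sizes[("cellC", "cellD")] == 1  # anonymity set of 1: identifiable
```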

SESSION: Infrastructureless Networking

Enhance & Explore: An Adaptive Algorithm to Maximize the Utility of Wireless Networks, Adel Aziz and Julien Herzen (École Polytechnique Fédérale de Lausanne, Switzerland); Ruben Merz (Deutsche Telekom Laboratories, Germany); Seva Shneer (Heriot-Watt University, UK); and Patrick Thiran (École Polytechnique Fédérale de Lausanne, Switzerland)

This work addresses the problem of providing efficiency and fairness in wireless networks. Their approach is based on maximising a utility function, and they propose an algorithm called Enhance and Explore to do so. The challenges in designing this scheme: it must work on the existing MAC, without network-wide message passing, and the wireless capacity is unknown a priori.

They consider two scenarios: WLAN setting: inter-flow problem and optimally allocate resources. Multi-hop setting: intra-flow problem and avoid congestion. They show analytically that the proposed algorithm converges to a point of optimal utility. Evaluation is through experiments in a testbed and simulations in ns-3.

Scoop: Decentralized and Opportunistic Multicasting of Information Streams, Dinan Gunawardena, Thomas Karagiannis, and Alexandre Proutiere (Microsoft Research Europe, UK); Elizeu Santos-Neto (University of British Columbia, Canada); and Milan Vojnovic (Microsoft Research Europe, UK)

This work aims to leverage mobility for content delivery in networks of devices with intermittent connectivity. Main challenge: routing/relaying strategies. Existing solutions include epidemic routing; their drawbacks are simplifying assumptions on mobility, such as exponentially distributed inter-contact times. This work proposes SCOOP, which:

  • maximizes a global system objective
  • accounts for storage and transmission costs
  • supports multi-point to multi-point communication
  • is decentralized
  • is model-free (allows general node mobility)

They argue for a mobility-model-free system. They used classic traces: UCSD, Infocom, DieselNet, and SF taxis. They show that two hops are enough to reach a large percentage of nodes, and that the delays on paths between a source and a destination are positively correlated. They aim to identify the strategy that optimally exploits mobility, buffer constraints, and relays. However, this is a hard problem; they use a sub-gradient algorithm to solve it efficiently. Evaluation is through numerical experiments. They compared SCOOP with R-OPT, an idealized version of the RAPID algorithm (which assumes full global knowledge). Delivery-ratio performance is very close to R-OPT.

R3: Robust Replication Routing in Wireless Networks with Diverse Connectivity, Xiaozheng Tie, Arun Venkataramani (University of Massachusetts Amherst, USA) and Aruna Balasubramanian (University of Washington)

Wireless routing protocols are designed for specific target environments, like well-connected meshes or intermittently connected MANETs. The problem is that such protocols are fragile and perform poorly outside their target environment. Wireless networks exhibit spatio-temporal diversity, so a compartmentalized design is not efficient. Can we design a protocol that ensures robust performance across networks?

They propose replication routing and present a model to quantify the replication gain, which depends on the path delay distributions, not just their expected values. They study the average replication gain with respect to the number of paths using the DieselNet-DTN and Haggle traces. They then propose R3, a link-state protocol that selects replication paths using the proposed model and adapts replication to load.

Evaluation is on both the DieselNet DTN testbed and a mesh testbed, with simulation validation using the DieselNet deployment and comparisons against several protocols. Simulation on the Haggle trace shows that R3 reduces delay by up to 60% and increases goodput by up to 30% over SWITCH. Simulations on DieselNet-Hybrid show that R3 improves median delay over SWITCH by 2.1x.

Flooding-Resilient Broadcast Authentication for VANETs, Hsu-Chun Hsiao, Ahren Studer, Chen Chen, and Adrian Perrig (Carnegie Mellon University, USA); and Fan Bai, Bhargav Bellur, and Aravind Iyer (General Motors Research)

Each vehicle possesses an On-Board Unit (OBU) and broadcasts information for safety and convenience. This information has to be secured. The IEEE 1609.2 standard suggests using ECDSA signatures for these messages; however, verification is expensive - around 22 ms per message - which is a problem when many messages arrive in a short time. Can we reduce this verification delay? Core idea of this work: entropy-aware authentication.

They propose two methods: (1) FastAuth exploits the predictability of future messages, using hashes to verify location updates instead of ECDSA; the result is 1 us instead of 22,000 us in the ideal case. (2) SelAuth performs selective verification before forwarding. They also reduce the communication overhead. Evaluation is based on real vehicle traces (4 traces), each generated by driving a car along a 2-mile path for 2 hours. Results show that signature generation is 20x faster and verification 50x faster compared to ECDSA.
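
Replacing per-message signatures with hashes typically relies on a one-way hash chain: sign the chain anchor once, then authenticate each later message with a single cheap hash. A generic sketch of that pattern (FastAuth's actual construction, built around predicted location updates, is more involved):

```python
import hashlib

def make_chain(seed: bytes, n: int):
    """Build a one-way hash chain; chain[-1] is the anchor, which would
    be signed once (e.g. with ECDSA). A generic sketch of the pattern,
    not FastAuth's exact construction."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(element: bytes, last_verified: bytes) -> bool:
    """One cheap hash instead of an expensive signature verification."""
    return hashlib.sha256(element).digest() == last_verified

# usage: receiver knows the signed anchor; sender reveals chain[-2] next
chain = make_chain(b"per-trip secret seed", 100)
anchor = chain[-1]
assert verify(chain[-2], anchor)
assert not verify(b"forged element", anchor)
```

Each revealed element becomes the reference for the next one, so verification stays one hash per message regardless of chain length.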

SESSION: Protocols

E-MiLi: energy-Minimizing Idle Listening in Wireless Networks, Xinyu Zhang and Kang G. Shin (University of Michigan-Ann Arbor, USA)

(Joint Best Paper Award)

Wi-Fi is a popular means of wireless Internet connection. However, Wi-Fi is a main energy consumer in mobile devices - 14x higher than GSM on a phone - due to the cost of idle listening: idle-listening power is comparable to TX/RX power. Existing solutions are variants of PSM, but is this good enough? No, because of carrier sensing time. To overcome this, they propose E-MiLi, which reduces the power consumption of idle listening by down-clocking the radio in idle mode; down-clocking by 1/4 saves 47.5% of the power. The key challenge is how to decode a packet, given that the receiver's sampling rate should be no less than the sender's clock rate. The proposed solution is to separate detection from decoding: they add a preamble to the 802.11 packet that can be detected at low clock rates.

One issue with this is false triggering: packets intended for one client may trigger all other clients, wasting energy. The second problem is the energy overhead of large preambles. The solution is minimum-cost address sharing, which allows multiple nodes to be assigned the same address, allocated according to channel usage. There is also a delay caused by clock-rate switching; to reduce it they use opportunistic down-clocking. Evaluation covers packet detection (software-radio experiments), energy consumption (Wi-Fi traces), and simulations in ns-2. Results: when SNR is above 8 dB, the miss-detection probability is almost zero, and they achieve close to 40% energy savings.

Refactoring Content Overhearing to Improve Wireless Performance, Shan-Hsiang Shen, Aaron Gember, Ashok Anand, and Aditya Akella (University of Wisconsin-Madison, USA)

The main aim is to improve wireless performance by leveraging overheard packets. Several techniques exist, but none of them leverage duplicate data. This work takes a content-based overhearing approach and suppresses duplicate data transmission. Ditto was the first work to use content-based overhearing, but it works at the granularity of objects, does not remove sub-packet redundancy, and only works for some applications. This work presents REfactor content overhearing:

(1) The scheme puts content overhearing at the network layer, which yields savings across applications. A transport-layer approach (used in Ditto) ties data to an application or object chunk and requires payload reassembly; the network-layer approach reduces redundancy across all flows.

(2) The scheme identifies sub-packet redundancy, which saves transmission time. Ditto only works on 8-32 KB object chunks, whereas the proposed scheme operates at a much finer granularity: it captures redundancy as small as 64 bytes and can leverage any overhearing, even of a single packet.
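
Sub-packet redundancy elimination can be sketched by fingerprinting small chunks of each payload and replacing chunks the receiver has already overheard with short tokens. The sketch below uses fixed 64-byte chunks for simplicity; real redundancy-elimination systems typically use content-defined boundaries (e.g. Rabin fingerprints), and this is not REfactor's exact design:

```python
import hashlib

CHUNK = 64  # bytes: the finest redundancy granularity mentioned in the talk

def fingerprint_chunks(payload: bytes):
    """Fingerprint fixed-size chunks of an overheard payload."""
    return {hashlib.sha1(payload[i:i + CHUNK]).digest(): i
            for i in range(0, len(payload), CHUNK)}

def strip_duplicates(payload: bytes, overheard):
    """Replace already-overheard chunks with short reference tokens."""
    out = []
    for i in range(0, len(payload), CHUNK):
        chunk = payload[i:i + CHUNK]
        digest = hashlib.sha1(chunk).digest()
        out.append(("ref", digest) if digest in overheard else ("raw", chunk))
    return out

# usage: the "A" chunk was overheard earlier, so only "C" is sent in full
overheard = fingerprint_chunks(b"A" * 64 + b"B" * 64)
encoded = strip_duplicates(b"A" * 64 + b"C" * 64, overheard)
assert encoded[0][0] == "ref" and encoded[1][0] == "raw"
```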

Evaluation through testbed experiments shows a 6-20% improvement in goodput. Simulation results also show a 20% improvement in goodput.

Distributed Spectrum Management and Relay Selection in Interference-Limited Cooperative Wireless Networks, Zhangyu Guan (Shandong University, P. R. China); Tommaso Melodia (State University of New York at Buffalo, USA); Dongfeng Yuan (Shandong University, P. R. China); and Dimitris A. Pados (State University of New York at Buffalo, USA)

Emerging multimedia services require high data rates. This work aims to maximize the capacity of wireless networks by leveraging frequency and spatial diversity. Frequency: via dynamic spectrum access, which improves spectral efficiency. Spatial: via cooperative communication, which enhances link connectivity. Problem: maximize the sum utility (capacity, log-capacity) of multiple concurrent traffic sessions by jointly optimizing relay selection (whether to cooperate or not) and direct transmission. The problem is formulated as a mixed-integer non-convex program, which is NP-hard. They propose a solution based on branch-and-bound that can find a globally optimal solution; polynomial time is not guaranteed, but it works well in practice. Evaluation is based on simulations. Results show that the proposed schemes converge very fast: the centralized algorithm achieves at least 95% of the global optimum, and the distributed schemes are very close to optimal.

 

15 Jul 2011

Efficient Social Sensing based on Smart Phones

Posted by Kiran Rachuri

Mobile smartphones are a perfect platform for building systems that capture user behaviour in the workplace, as they are ubiquitous, unobtrusive, and sensor-rich. However, there are many challenges in building such systems: phones are battery powered, and the energy consumption of sensor sampling, data transmission, and resource-intensive local computation is high; phone sensors are inaccurate and not specifically designed for capturing user behaviour; and finally, local and cloud resources should be used efficiently, taking the phone's changing resources into account.

We address the above technical challenges for supporting social sensing applications in a paper to be presented at the upcoming ACM MobiCom '11 conference.

In the paper we describe the design, implementation, and evaluation of SociableSense, an efficient and adaptive platform based on off-the-shelf mobile phones that supports social applications aiming to provide real-time feedback to users or collect data about their behaviour.

The key components of the system are:

- A sensor sampling component adaptively controls the sampling rate of the accelerometer, Bluetooth, and microphone sensors while balancing energy-accuracy-latency trade-offs using reinforcement-learning mechanisms. The learning mechanism adjusts the sampling rate based on the user's context in terms of observed events: the sensors are sampled at a high rate when interesting events are observed and at a low rate when there are no events of interest.

- A computation distribution component based on multi-criteria decision theory dynamically decides where to perform computation tasks, considering the importance given to each dimension: energy consumption, latency, and data sent over the network. For each classification task, this scheme evaluates a utility function to decide how to distribute the subtasks of the classification between local and cloud resources.
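
The adaptive sampling idea can be caricatured in a few lines: speed up after interesting events, back off otherwise. This is a crude multiplicative stand-in for the paper's reinforcement-learning mechanism, with made-up bounds and factors:

```python
def next_interval(interval, event_seen, lo=1.0, hi=60.0,
                  speedup=0.5, backoff=1.5):
    """Adapt a sensor's sampling interval in seconds: sample faster after
    an interesting event, back off otherwise. Bounds and factors are
    invented; the paper uses a learned policy instead of fixed ones."""
    interval *= speedup if event_seen else backoff
    return max(lo, min(hi, interval))

# usage
iv = next_interval(10.0, event_seen=True)    # interesting event: 5.0 s
assert iv == 5.0
iv = next_interval(iv, event_seen=False)     # quiet period: 7.5 s
assert iv == 7.5
```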

We show through several micro-benchmark tests that the adaptive sampling scheme adjusts the sampling rate of sensors dynamically based on the user's context and balances energy-accuracy-latency trade-offs. We also evaluate the computation distribution scheme in terms of selecting the best configuration given the importance assigned to each performance dimension, and show that the computation distribution scheme efficiently utilises the local and the cloud resources and balances energy-latency-traffic trade-offs by considering the requirements of the experiment designers.
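The multi-criteria placement decision can be sketched as a weighted cost over normalised energy, latency, and traffic figures. All numbers and weights below are hypothetical, and the paper's utility function is more elaborate:

```python
def cost(energy, latency, traffic, weights):
    """Weighted cost of one placement option; all inputs assumed to be
    normalised to [0, 1]. Weights encode the experiment designer's
    priorities (a hypothetical scale, not the paper's exact function)."""
    w_e, w_l, w_t = weights
    return w_e * energy + w_l * latency + w_t * traffic

def choose_placement(options, weights):
    """options: name -> (energy, latency, traffic); pick the cheapest."""
    return min(options, key=lambda name: cost(*options[name], weights))

# usage: made-up profiles for running a classifier locally vs remotely
options = {
    "local": (0.8, 0.2, 0.0),   # heavy compute on the phone, no radio
    "cloud": (0.3, 0.6, 0.7),   # cheap compute, but radio and latency
}
assert choose_placement(options, (0.8, 0.1, 0.1)) == "cloud"  # energy first
assert choose_placement(options, (0.1, 0.1, 0.8)) == "local"  # traffic first
```

Flipping the weights flips the decision, which is exactly the designer-driven trade-off the platform exposes.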

To further demonstrate the effectiveness of the SociableSense platform, we also conduct a social experiment using an application that determines the sociability of users based on colocation and interaction patterns. The computation distribution scheme leads to approximately 28% more battery life, 6% less latency per task, and 3% less data transmitted over the network per task, compared to a model where all classification tasks are computed remotely.

Kiran K. Rachuri, Cecilia Mascolo, Mirco Musolesi, Peter J. Rentfrow.  SociableSense: Exploring the Trade-offs of Adaptive Sampling and Computation Offloading for Social Sensing. In Proceedings of the 17th ACM International Conference on Mobile Computing and Networking (MobiCom '11), Las Vegas, USA. [PDF]

30 Jun 2011

MobiSys’11. Day 2

Posted by Narseo

Keynote - Mobile Computing: the Next Decade and Beyond

The keynote was given by Prof. Mahadev Satyanarayanan, "Satya" (Carnegie Mellon University, MobiSys Outstanding Contributor Award). A quick look at the abstract of his talk is enough to see his merits.

He thinks research on mobile computing is in strong social demand. New systems and apps are motivated by the fact that sales of mobile devices overtook sales of PCs for the first time in 2011. In his opinion, mobile computing is common ground between distributed systems, wireless networking, context awareness, energy awareness, and adaptive systems. He highlighted the enduring challenges in this area over the years:

    - Weight, power, size constraints (e.g. tiny I/O devices).
    - Communication uncertainty: bandwidth, latency and money. We still struggle with intermittent connectivity.
    - Finite energy. Computing, sensing and transmitting data cost energy.
    - Scarce user attention: low human performance. Users are prone to make errors and they are becoming less patient.
    - Lower privacy, security and robustness. Mobile handsets have more attack vectors and can suffer physical damage more easily.

After that, he mentioned three future emerging themes, some of them related to several ongoing projects in Cambridge:

    Mobile devices are rich sensors. They support a wide range of rich sensors and can access nearby data opportunistically (content-based search can be more energy-efficient, so there seems to be some ground for CCN here). In fact, applications can be context- and energy-aware. He mentioned some of the applications from yesterday's first session as examples.
    Cloud-mobile convergence. Mobile computing allows freedom: it enables access to anything, anytime, anywhere. However, this increases complexity. Cloud computing, on the other hand, provides simplicity through centralization (one source has it all). The question is: can we combine the freedom of mobility with the simplicity of cloud computing? Cloud computing has evolved a lot since its first conception in 1986 (he mentioned the Andrew File System as the first cloud service ever). He also highlighted that the key enabling technology is virtualization, an example being his research on Cloudlets. Virtual machines allow ubiquity of state and behavior, so they can re-create state anywhere, anytime. Moreover, moving clouds closer to the end user can minimise the impact of network latency. He also talked about a still largely unexplored space: offloading computation from the cloud to local devices (the other direction has been quite well explored already).
    Resource-rich mobile apps. From my perspective, this is closely related to the first theme. He talked about applications incorporating face recognition and the role of mobile handsets in enabling applications for mobile cognitive assistance.

Session 4. When and Where

This session was mostly about indoor localisation. The first presentation was Indoor location sensing using geo-magnetism (J. Chung (MIT), M. Donahoe (MIT), I. Kim (MIT), C. Schmandt (MIT), P. Razavi (MIT), M. Wiseman (MIT)). The authors take an interesting approach to the classic problem of indoor location: they use magnetic-field distortion fingerprints to identify the user's location. They used their own gadget - a rotating tower with a magnetic sensor - to obtain the magnetic fingerprint of a building (sampled every 2 feet). They showed that the magnetic field in their building hadn't changed in 6 months (they haven't checked for changes at different times of the day), so the fingerprint doesn't have to be updated frequently. They implemented their own portable gadget with 4 magnetic sensors for the evaluation. The error is <1 m in 65% of cases, so it is more precise (but more costly) than WiFi solutions. The main source of error is moving objects (e.g. an elevator).
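
Fingerprint localisation of this kind reduces to nearest-neighbour matching against a survey database. A toy sketch with made-up 3-axis magnetometer values (the authors' matcher is more sophisticated):

```python
import math

def nearest_location(sample, fingerprints):
    """Match a 3-axis magnetometer reading against a survey database by
    Euclidean distance. A generic nearest-neighbour sketch, not the
    authors' exact matching algorithm."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(fingerprints, key=lambda loc: dist(sample, fingerprints[loc]))

# usage: invented survey values (one reading per surveyed spot)
db = {"room-101": (22.0, -5.0, 40.0), "corridor": (18.0, 3.0, 35.0)}
assert nearest_location((21.5, -4.0, 39.0), db) == "room-101"
```

The same skeleton applies to the acoustic and WiFi fingerprinting systems discussed nearby; only the feature vector changes.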

The next paper is similar but leverages audio fingerprints: Indoor Localization without Infrastructure using the Acoustic Background Spectrum (S. Tarzia (Northwestern Univ.), P. Dinda (Northwestern Univ.), R. Dick (Univ. of Michigan), G. Memik (Northwestern Univ.)). NOTE: this app is available in Apple's App Store as BatPhone. The benefit of this system is that it requires neither specialized hardware nor infrastructure support: it passively listens to background sounds and then analyses their spectrum. They achieved 69% accuracy over 33 rooms using sound alone. Like many other fingerprint-based localization mechanisms, it requires supervised learning: to guess the current location, they find the "closest" fingerprint in a database of labeled fingerprints. For future work, they plan to use a Markov movement model to improve accuracy, and also to add other sensors, as in SurroundSense.

Exploiting FM Radio Data System for Adaptive Clock Calibration in Sensor Networks was a quite impressive and neat piece of work. Time synchronization is important for various applications (event ordering, coordination; new wireless interfaces such as Qualcomm's FlashLinQ take advantage of a central clock to synchronise devices), and it is usually based on message passing between devices. They instead exploit the FM Radio Data System (RDS) for clock calibration; among its advantages are excellent coverage and availability all over the world. They implemented their own FM hardware receiver, integrated with sensor-network platforms on TinyOS. It also overcomes some of the coverage limitations of GSM networks. Their results show that the RDS clock is highly stable and city-wide available, and the power consumption is very low (as is the cost, $2-3). The calibration error is also remarkably low, even when the calibration period is on the order of hours. Very neat.

The last presentation was joint work between the University of Michigan and AT&T Labs: AccuLoc: Practical Localization of Performance Measurements in 3G Networks. Cellular operators need to distinguish the performance of each geographic area in their 3G networks to detect and resolve local network problems. They claim that the "last mile" radio link between 3G base stations and end-user devices is essential to the user experience. They build on previous papers demonstrating that users' mobility is predictable, and exploit this to cluster cell sectors that accurately report network performance at the IP level. These techniques let them characterize and identify network performance problems: clustering cells captures RTT spikes better.

Session 5. Security and Privacy

Caché: Caching Location-Enhanced Content to Improve User Privacy
S. Amini (CMU), J. Lindqvist (CMU), J. Hong (CMU), J. Lin (CMU), E. Toch (Tel Aviv Univ.), N. Sadeh (CMU). The idea is to periodically pre-fetch potentially useful location-based content so that applications can retrieve it from a local cache on the mobile device when it is needed. Location is then revealed to third-party providers only as "a region" instead of a precise position. Somewhat similar to SpotMe.

The second presentation was ProxiMate: Proximity-based Secure Pairing using Ambient Wireless Signals by S. Mathur (AT&T Labs), R. Miller (Rutgers Univ.), A. Varshavsky (AT&T Labs), W. Trappe (Rutgers Univ.), N. Mandayam (Rutgers Univ.). This is about enabling security between wireless devices that do not have a trusted relationship, based on proximity. It tries to reduce the security issues of low-power communications (susceptible to eavesdropping; Bluetooth can even be sniffed from a mile away). It uses code offsets to generate a common cryptographic key directly from the wireless environment the devices share over time. Quite complex to follow in the presentation. It provides security against a computationally unbounded adversary, and its complexity is O(n) while Diffie-Hellman's is O(n^3).
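The code-offset idea can be illustrated with a toy repetition code (the paper's actual code and signal quantization are more sophisticated). Both devices quantize the shared ambient signal into bit strings that agree except for a few flips; one device broadcasts the XOR of a random key's codeword with its bits, and the other can cancel its own bits and decode away the disagreements:

```python
import secrets

R = 5  # repetition factor; majority vote corrects up to 2 flips per key bit

def encode(key_bits):
    return [b for b in key_bits for _ in range(R)]

def decode(codeword):
    # Majority vote over each group of R bits
    return [int(sum(codeword[i:i + R]) > R // 2) for i in range(0, len(codeword), R)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# Alice and Bob each quantize the shared ambient signal into bits;
# their readings agree except for a couple of flips.
alice = [secrets.randbits(1) for _ in range(40)]
bob = list(alice); bob[3] ^= 1; bob[17] ^= 1

key = [secrets.randbits(1) for _ in range(8)]
offset = xor(encode(key), alice)       # Alice broadcasts this in the clear
recovered = decode(xor(offset, bob))   # = decode(encode(key) ^ (alice ^ bob))
print(recovered == key)  # True
```

An eavesdropper sees only `offset`; without bits correlated with Alice's, the key stays hidden, which is where the information-theoretic (rather than computational) security comes from.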

Security versus Energy Tradeoffs in Host-Based Mobile Malware Detection
J. Bickford (Rutgers Univ.), H. Lagar-Cavilla (AT&T Labs), A. Varshavsky (AT&T Labs), V. Ganapathy (Rutgers Univ.), L. Iftode (Rutgers Univ.). This interesting paper explores the security-energy tradeoffs in host-based mobile malware detection. Periodically scanning the attack targets can drain the battery up to twice as fast. This work is an energy-optimized version of two security tools. It conserves energy by adapting the frequency of checks and by limiting what is checked (scanning fewer code/data objects), aiming at high security with low power consumption. They especially look at rootkits (sophisticated malware requiring complex detection algorithms). To detect them, it is necessary to run the user OS on a hypervisor and check all kernel data changes. This technique can provide 100% detection but poor energy efficiency. To find the tradeoff, they target what they call the sweet spot, balancing security against energy. With this technique they can detect 96% of the rootkit attacks.
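The two knobs they tune (how often to check, and how much to check) can be captured in a toy cost model; the per-object energy cost and object names below are invented for illustration, not the paper's measurements.

```python
def scan_plan(objects, interval_s, cost_per_object_mj=2.0):
    # Energy spent per hour and worst-case detection latency for a
    # given check interval and set of monitored kernel objects.
    scans_per_hour = 3600 / interval_s
    energy_mj = scans_per_hour * len(objects) * cost_per_object_mj
    return {"energy_mJ_per_h": energy_mj, "max_latency_s": interval_s}

# Hypothetical kernel objects a rootkit detector might watch
critical = ["syscall_table", "idt", "proc_list"]
for interval in (10, 60, 300):
    print(interval, scan_plan(critical, interval))
```

Scanning every 10 s costs 30x the energy of scanning every 300 s, but an attack can only go unnoticed for 10 s; the "sweet spot" is picking the point on this curve where detection probability stays high.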

Analyzing Inter-Application Communication in Android by E. Chin (UC Berkeley), A. Felt (UC Berkeley), K. Greenwood (UC Berkeley), D. Wagner (UC Berkeley). Malicious apps can take advantage of Android's messaging abstraction, the Intent, by registering a listener for a specific provider. An application can also register for implicit intents, which are not addressed to a specific receiver (i.e. application or service). They described several attacks that become possible because sending implicit intents makes the communication public: both the intent and the receiver can be visible to an attacker. Consequently, attacks such as spoofing and man-in-the-middle are possible, and a malicious app can inject fake data into applications or collect information about the system. They evaluated their tool, called ComDroid, with 20 applications. They claim this can be fixed either by developers or by the platform.

Session 6. Wireless Protocols

This session covered optimisations for wireless protocols. The first presentation was Avoiding the Rush Hours: WiFi Energy Management via Traffic Isolation by J. Manweiler (Duke Univ.), R. Choudhury (Duke Univ.). They measured the power consumption of WiFi interfaces on Nexus One handsets and found that the WiFi energy cost grows linearly with the number of access points around (dense neighborhoods). Their system, called SleepWell, forces APs to collaborate and coordinate their beacons, and only requires changing AP firmware; mobile clients can then reduce the energy wasted in idle/overhearing mode. Each AP maintains a map of its neighboring APs in order to re-schedule its beacon timings efficiently. However, clients are synchronized to AP clocks; to solve this, the AP notifies the client that a beacon is going to be deferred, so the client knows when it must wake up. As a result, the client can extend the time it remains in deep-sleep mode.
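The beacon-coordination idea amounts to staggering beacon offsets so that neighbouring APs (and their clients) wake at non-overlapping times. A minimal sketch, assuming a standard 100 ms beacon interval and an even spread (the paper's actual scheduler is adaptive, not this naive round-robin):

```python
def stagger_beacons(aps, interval_ms=100):
    # Spread each AP's beacon offset evenly across the beacon interval
    # so clients of different APs wake at non-overlapping times.
    step = interval_ms / len(aps)
    return {ap: round(i * step, 2) for i, ap in enumerate(aps)}

print(stagger_beacons(["ap1", "ap2", "ap3", "ap4"]))
# {'ap1': 0.0, 'ap2': 25.0, 'ap3': 50.0, 'ap4': 75.0}
```

With offsets like these, a client of `ap3` no longer overhears (and burns energy on) the beacons and traffic bursts of `ap1`, `ap2` and `ap4`.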

The next paper was Opportunistic Alignment of Advertisement Delivery with Cellular Basestation Overloads, by R. Kokku (NEC Labs), R. Mahindra (NEC Labs), S. Rangarajan (NEC Labs) and H. Zhang (NEC Labs). This paper tries to align cellular base-station overloads with the delivery of advertising content to clients. The goal is to make cellular network operations profitable through advertisements (e.g. embedded in videos) without compromising the user-perceived quality of experience; overload reduces the available bandwidth per user. Their assumption is that cellular operators can control advertisement delivery, so it is possible to lower the quality (and rate) of some advertisements for a specific set of users. Their system, called Opal, considers two groups of users: regular users, who receive their fair traffic share, and targeted users, who receive advertisements during base-station overloads. Opal initially maps all users to the regular group and dynamically decides which users to migrate between groups based on a long-term fairness metric. The system is evaluated on WiMAX and with simulations. In the future they plan to target location-based advertising.
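One plausible reading of the long-term fairness metric is "move to the targeted group whoever is furthest ahead of their fair share". This is my guess at the shape of the policy, not Opal's actual algorithm; the numbers are invented.

```python
def pick_targeted(users, share, n_targets):
    # Migrate to the "targeted" (ad-receiving) group the users who are
    # furthest ahead of their long-term fair share of throughput.
    surplus = {u: got - share for u, got in users.items()}
    ranked = sorted(surplus, key=surplus.get, reverse=True)
    return ranked[:n_targets]

# Hypothetical long-term average throughputs (Mbps) vs. a 1.0 Mbps fair share
throughput = {"u1": 1.4, "u2": 0.9, "u3": 1.8, "u4": 1.0}
print(pick_targeted(throughput, share=1.0, n_targets=2))  # ['u3', 'u1']
```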

The final presentation was Revisiting Partial Packet Recovery in 802.11 Wireless LANs by J. Xie (Florida State Univ.), W. Hu (Florida State Univ.), Z. Zhang (Florida State Univ.). Packets on WiFi links can be partially received, but to recover them the whole packet has to be retransmitted, which carries an energy and computational overhead. One solution divides packets into smaller blocks so that only the missed ones are retransmitted (similar to keeping a TCP window). Another technique is based on error correction (e.g. ZipTx). These techniques can impose a significant CPU overhead, and they can be complementary. The novelty of their approach is including Target Error Correction and dynamically selecting the optimal repair method, minimizing both the number of bytes sent and the CPU overhead.
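The block-based retransmission idea can be sketched with per-block checksums: the receiver compares each block's CRC against the sender's and asks only for the corrupted ones. A toy version (block size and CRC choice are illustrative, not from the paper):

```python
import zlib

BLOCK = 64  # bytes per block

def checksums(payload):
    return [zlib.crc32(payload[i:i + BLOCK]) for i in range(0, len(payload), BLOCK)]

def missing_blocks(received, expected_crcs):
    # Compare per-block CRCs; only corrupted blocks need retransmission
    return [i for i, crc in enumerate(checksums(received)) if crc != expected_crcs[i]]

sent = bytes(range(256)) * 2                      # 512-byte packet -> 8 blocks
corrupt = bytearray(sent); corrupt[100] ^= 0xFF   # flip one byte in block 1
print(missing_blocks(bytes(corrupt), checksums(sent)))  # [1]
```

Here one flipped byte costs a 64-byte retransmission instead of 512 bytes; the tradeoff against error-correction coding is exactly the CPU-vs-bytes balance the paper's dynamic selection targets.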

.... and now the conference banquet :-)

30Jun/110

Mobisys’11. Day 1

Posted by Narseo

MobiSys started this morning with 3 sessions about mobile applications and services, energy-efficient management of displays, and crowd-sourcing apps. Researchers affiliated with 26 different institutions were among the co-authors of the papers. The most prolific were Duke University (4 papers), AT&T (4 papers), Univ. of Michigan (3 papers) and Univ. of Southern California (3 papers). The keynote was given by Edward W. Felten of the Federal Trade Commission, about how the FTC works.

Session 1. Services and Use Cases

The first presentation was a quite cool idea from Duke University: TagSense: A Smartphone-based Approach to Automatic Image Tagging. They proposed a system for automatically tagging pictures by exploiting the sensors and contextual information available on modern smartphones: WiFi ad-hoc networking, compass, light sensor (to identify whether the handset is indoors or outdoors), microphone, accelerometer (movement of the user), gyroscope and GPS (location). When the camera application is launched, it creates a WiFi ad-hoc network with all the nearby devices, and they exchange contextual information to add rich metadata to the captured picture. One of the challenges they tackled was discerning whether a person was moving, posing, facing the camera, etc. They implemented a prototype on Android and evaluated it with more than 200 pictures. The paper compares the accuracy of automatic tagging with the metadata manually added in Picasa and iPhoto; with this system, the number of missed tags is reduced considerably. Nevertheless, some open research challenges remain, such as user authentication and a full system performance evaluation.
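The tagging step is essentially mapping raw sensor context to human-readable tags. A minimal sketch with invented thresholds and field names (the paper's classifiers are of course more involved than these if/else rules):

```python
def build_tags(sensors):
    # Derive human-readable tags from raw sensor context captured
    # at shutter time (thresholds here are illustrative).
    tags = []
    tags.append("outdoor" if sensors["light_lux"] > 1000 else "indoor")
    tags.append("moving" if sensors["accel_var"] > 0.5 else "posing")
    if sensors["nearby_devices"]:   # peers on the ad-hoc WiFi network
        tags += [f"with:{name}" for name in sensors["nearby_devices"]]
    return tags

context = {"light_lux": 4200, "accel_var": 0.1, "nearby_devices": ["alice", "bob"]}
print(build_tags(context))  # ['outdoor', 'posing', 'with:alice', 'with:bob']
```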

A second paper, also by Duke University researchers, was Using Mobile Phones to Write in Air (an extension of a HotMobile 2009 paper). The idea is to use accelerometers to allow writing in the air, using the phone as a pen. The accelerometer records the movement and the text is displayed on the screen after being processed on a server running Matlab. Some of the research challenges they had to face were filtering high-frequency components caused by hand vibration (removed with a low-pass filter), recognizing the symbols (pre-loaded pattern recognition; it reminds me of how MS Kinect works), identifying pen-lifting gestures, and dealing with hand rotation while writing (accelerometers only measure linear acceleration; the Wii uses a gyroscope to solve this issue). The system seems to work nicely, and they said it has been tested with patients unable to write manually.
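The low-pass filtering step can be illustrated with a simple exponential moving average over the accelerometer trace (the paper likely uses a properly designed filter; this is just the idea):

```python
def low_pass(samples, alpha=0.2):
    # Exponential moving average: attenuates high-frequency hand
    # tremor while keeping the slower writing strokes.
    out, prev = [], samples[0]
    for s in samples:
        prev = alpha * s + (1 - alpha) * prev
        out.append(prev)
    return out

noisy = [0.0, 2.0, -1.5, 2.2, -1.0, 2.5]   # jittery accelerometer trace
print([round(v, 3) for v in low_pass(noisy)])
```

A smaller `alpha` smooths more aggressively but also lags the true stroke, which matters for a real-time pen.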

The following presentation was Finding MiMo: Tracing a Missing Mobile Phone using Daily Observations, from Yonsei University. This system allows finding lost/stolen handsets in indoor environments. The authors claim that it overcomes some of the limitations of services such as Apple MobileMe, which can be constrained by the availability of network coverage and by battery capacity. They use an adaptive sensing algorithm and leverage several indoor localization techniques.

Odessa: Enabling Interactive Perception Applications on Mobile Devices by M. Ra (Univ. of Southern California), A. Sheth (Intel Labs), L. Mummert (Intel Labs), P. Pillai (Intel Labs), D. Wetherall (Univ. of Washington) and R. Govindan (Univ. of Southern California) is about off-loading computation to the cloud for face, object, pose and gesture recognition. Their system adapts at runtime, deciding when and how to offload computation to the server based on the availability of resources (mainly the network). They found that off-loading and parallelism choices should be dynamic, even for a given application, since performance depends on scene complexity as well as environmental factors such as the network and device capabilities. This work is related to previous projects such as Spectra, NWSLite and Maui.
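The core offloading decision is a simple break-even test: ship the input and compute remotely only when transfer plus remote compute beats local compute. A minimal sketch with invented parameters (Odessa's runtime profiler estimates these quantities dynamically rather than taking them as constants):

```python
def should_offload(input_bytes, net_bps, remote_speedup, local_time_s):
    # Offload pays off only when transfer + faster remote compute
    # beats running the stage locally.
    transfer = input_bytes * 8 / net_bps
    remote = local_time_s / remote_speedup
    return transfer + remote < local_time_s

# 500 kB frame over a 2 Mbps link, server assumed 10x faster than the phone
print(should_offload(500_000, 2_000_000, 10, local_time_s=4.0))  # True
```

Note how the answer flips as conditions change: the same stage is worth offloading at 4 s of local work but not at 1 s, which is why the choice must be dynamic.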

Session 2. Games and Displays

The first paper, entitled Adaptive Display Power Management for Mobile Games, was a piece of work by Balan's group at Singapore Management University. This work tries to minimise the energy impact of interactive apps, such as games, that keep a power-hungry resource like the display active for long periods, while trying not to hurt the user experience. As an example, they show that while playing a YouTube video, 45-50% of the energy consumption goes to the display, 35-40% to the cellular network and 4-15% to the CPU. Their system dynamically reduces screen brightness to save energy and applies non-linear gamma correction per frame to compensate for the negative effect of the brightness reduction. They also conducted a user study with 5 students to understand human thresholds for brightness compensation.
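The compensation idea is that boosting pixel values through the gamma curve can partially offset a dimmer backlight. A toy per-pixel version, assuming a standard gamma of 2.2 (the paper's per-frame transform is more sophisticated than this):

```python
def compensate(pixel, brightness_scale, gamma=2.2):
    # Boost the pixel value so that the boosted pixel at reduced backlight
    # approximates the original perceived luminance:
    #   (p'/255)^gamma * scale ~= (p/255)^gamma
    boosted = 255 * (((pixel / 255) ** gamma) / brightness_scale) ** (1 / gamma)
    return min(255, round(boosted))

# Dim the backlight to 60% and compensate a mid-grey pixel
print(compensate(128, 0.6))
```

The `min(255, ...)` clamp shows the limit of the trick: already-bright pixels cannot be boosted further, so some loss of fidelity is unavoidable, which is why they ran a user study on acceptable thresholds.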

Switchboard: A Matchmaking System for Multiplayer Mobile Games by J. Manweiler (Duke Univ.), S. Agarwal (Microsoft Research), M. Zhang (Microsoft Research), R. Choudhury (Duke Univ.), P. Bahl (Microsoft Research) tries to predict the network conditions of mobile users in order to provide a good mobile gaming experience. They presented a centralised service that monitors the latency between players to match them in mobile games. They tackled scalability issues such as grouping users into viable game sessions based on their network properties.

Chameleon: A Color-Adaptive Web Browser for Mobile OLED Displays by M. Dong (Rice Univ.) and L. Zhong (Rice Univ.) takes advantage of the well-known observation that the colors displayed on an OLED screen strongly affect its power draw: consumption can vary from 0.5 W (almost black screen) to 2 W (white screen). The power consumption of an OLED display increases linearly with the number of lit pixels, while the energy per pixel depends on which LEDs are active. In fact, 65% of the pixels on most common websites are white, which unnecessarily imposes a higher energy consumption on mobile handsets. Green and red pixels are generally more energy-efficient than blue ones on most handsets, so they propose transforming the colour of GUI objects on the display to make it more energy-efficient, in a similar fashion to Google Black. The 3 phases of their transformation are "color counting" (building a histogram of the GUI components), "color mapping" and "color painting". They also allow the user to apply different color transformations to different websites.
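The linear per-pixel power model they rely on can be sketched as a weighted sum over the frame. The channel coefficients below are invented for illustration (only the *shape* of the model, with blue as the most expensive channel, follows the talk):

```python
# Per-channel power coefficients (mW per full-intensity pixel);
# illustrative numbers only -- blue is the most expensive channel.
COEFF = {"r": 0.5, "g": 0.4, "b": 0.9}

def display_power_mw(pixels, idle_mw=100):
    # Sum a linear per-pixel model over the frame
    total = idle_mw
    for r, g, b in pixels:
        total += (COEFF["r"] * r + COEFF["g"] * g + COEFF["b"] * b) / 255
    return total

white = [(255, 255, 255)] * 1000
black = [(0, 0, 0)] * 1000
print(round(display_power_mw(white), 1), display_power_mw(black))
```

Under such a model, remapping a white background to black (or blue accents to green) directly shrinks the weighted sum, which is what the color-mapping phase optimizes.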

Session 3. Crowdsourcing

In this session, some interesting applications were proposed, such as Real-Time Trip Information Service for a Large Taxi Fleet by Balan (Singapore Mgmt Univ.). This application collects information about taxi availability and finds routes/similar trips for customers based on starting point, ending point, distance and time. They described how they had to find and eliminate sources of error (e.g. weather) and how they used dynamic clustering (KD-trees) to solve the problem. The second application was AppJoy: Personalized Mobile Application Discovery by B. Yan (Univ. of Massachusetts, Lowell) and G. Chen (Univ. of Massachusetts, Lowell). This is basically a recommendation engine for mobile apps based on users' download history, ratings and passive information about how often they run those applications. They claim that users who installed apps via AppJoy interacted with them more, and they want to extend it into a context-aware recommendation engine. Finally, SignalGuru: Leveraging Mobile Phones for Collaborative Traffic Signal Schedule Advisory by E. Koukoumidis (Princeton Univ.), L. Peh (MIT) and M. Martonosi (Princeton Univ.) is a traffic-signal advisory system. It identifies traffic lights using the camera and tries to predict when they will turn red/green. They claim that this can save drivers a significant amount of fuel (20%), reducing the carbon footprint. The predictions are achieved through crowd-sourcing: cars collaborate and share information to identify the transitions. The system also uses sensors such as the accelerometer and gyro-based image detection.
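The "similar trips" lookup in the taxi-fleet paper can be sketched as matching a request's endpoints against historical trips. A toy linear scan, assuming coordinates are already projected into km (the paper uses KD-trees precisely to avoid scanning every trip):

```python
def match_trips(request, trips, max_km=1.0):
    # Return past trips whose start and end both lie near the
    # requested endpoints (L1 distance as a cheap proximity proxy).
    def close(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1]) <= max_km
    return [t for t in trips
            if close(t["start"], request["start"]) and close(t["end"], request["end"])]

# Hypothetical historical trips, coordinates in km on a local grid
trips = [{"id": 1, "start": (0.0, 0.0), "end": (5.0, 5.0)},
         {"id": 2, "start": (9.0, 9.0), "end": (1.0, 1.0)}]
req = {"start": (0.2, 0.1), "end": (5.1, 5.2)}
print([t["id"] for t in match_trips(req, trips)])  # [1]
```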