syslog
18 Oct 2011

Open Hardware Workshop 2011 (Grenoble, FR) (mk2)

Posted by Andrew Moore

Open Hardware Workshop, October 2011 Grenoble

(mk2 with some corrections from Javier)

I had reason to attend the Open Hardware Workshop 2011 in Grenoble (waving the flag for NetFPGA and Raspberry Pi, and because open hardware appeals to my inner maker and is very cool). This post comes across a little dour - it shouldn't - I think this is hugely exciting.

Motivated by the ideals of Open Hardware - now complete with a definition (OSHW) and exemplified recently by the CERN Open Hardware License - this workshop was held coincident with ICALEPCS, the big physics instrumentation conference, so lots of high-energy physics people were present.

While not the first license for open hardware, the CERN OHL has seeded some interest and contributes to the area being taken more seriously. The event was organized by Javier Serrano, and this, coupled with an open-hardware success story in Arduino, is how the workshop came about.

An exciting gathering: a bunch of high-energy physics and related people, a bunch of companies (some trying open hardware, some not) and a tonne of others like me (keen enthusiasts, regular enthusiasts, or people who see 'this is important').

Javier Serrano gave an introduction to (his version of) the 'open hardware community', starting with a nod to the OSHW definition. There was also a nod to specific goals held by part of the open hardware community, with a heavy taste of GPL-derived, OSS-scented ideas. While it did seem a little naive at times, there were clear best intentions; in particular the community (drawing on experience from OSS) recognizes that there has to be flexibility in the specifics, and a community that fosters support companies, otherwise no one is going to have a commercial route.

Another aspect was the Open Hardware Repository - while not intended to be Sauron's eye, it is a fantastic resource for combining and sharing projects. It is a little unclear what license things are/must/should/could be under, but a great idea.

Tomasz Wlostowski gave a quick summary of the Open Hardware Summit held a few weeks earlier in NYC, and the best (hoped-for) take-away was "The best is yet to come". An interesting justification was given relating to the slow-down in Moore's law as it relates to simple manufacturing speed-ups: the assertion was that this moves the leading edge away from simply making things faster, and gives making things different/cheaper/flexible/interesting the chance to flourish - an open door for the Open Hardware movement. In contrast to the 'engineers' of the Workshop, the Summit had been all about community-raising, a much more consumer/social event. Notable cool things were Instructables (what happens when you mix 13-year-olds, construction kits and many rubber bands) and the iFixit (free manuals) website.

Myriam Ayass (legal advisor of CERN's Knowledge Transfer group) talked about the CERN OHL, in particular that it is going to version 1.2 (with a version 1.3 in the planning stages), which attends to many issues with earlier versions. There is a mailing list for discussing the license; if you have a heartfelt opinion, first make sure you have the latest copy, and second, join the discussion. It is clear CERN has its heart in the right place over this, and doesn't mind making the legal investment most of us cannot.

My notes are that this is currently a PCB-focused license, that the definition of hardware is a source of confusion, and that a kicking question is 'so what is RTL/VHDL/Verilog/etc.?' - is that hardware covered by the OHL, or something else?

This talk generated a huge bunch of questions, ranging over "what does the CERN OHL want to be: GNU-like (holding openness as the priority) or BSD/Apache-like (holding dissemination as the priority)?" There was certainly less understanding among those present about what happens when licenses are combined or unclear, and far too much "well, don't use that tool then" in response to tool-chain lock-in. However, none of these people were idiots, and now is the time to have an impact; mailing list details are on the CERN OHL site.

This was followed by a talk by Arduino co-founder David Cuartielles. They make rather nice, very very cheap do-dads (the project started life as a "how do I get my kid interested in CompSci/CompEng"), and work in collaboration with Telefonica (including Pablo Rodriguez), among others. Very neat ideas, but they face a real trauma: because they are signed up to "open", they are put at economic risk by the fact that once a board's PCB artwork is released, the company has less than four weeks before clone boards appear. The Arduino boards are considered (perhaps fairly so) a potentially useful blob in the "Internet of Things" (yeah, I know - no working definition): sensor boards, various neat flashing-light things, robot boards, all that sort of stuff, neat and nice. I hope Raspberry Pi can recycle the community of connectables this project has fathered.

Creotech (the second of four companies that gave talks), whose founders include an ex-CERNer, are working on instrumentation that plugs into a bus called FMC (FPGA Mezzanine Card, part of a wider standard called VITA 57) intended to allow compatible instrumentation packages. What this means for you and me is unclear, but the motivating project for a lot of these people was either to stop having their lives/work ruled by huge transfers of money to National Instruments (and other vested interests), or to stop the needless repetition of rebuilding things that already exist - one particular high-density Msamples-per-second do-dad appeared to be something many organizations had started, and perhaps even completed at least 90% of, as their own in-house designs. Creotech was open-hardware thinking, seemed fairly successful, and had some respect.

In contrast National Instruments, an ICALEPCS (but not OHW) sponsor, did not. A talk that largely consisted of "Yay National Instruments" did contain one seed of useful insight: the problem almost any organization has (and the one National Instruments can solve) is providing a 25-year guarantee of replacement/operation/etc. It was clear the speaker was having a pot-shot at the open hardware startups, but he also made courting noises, and the question "what does your talk have to do with open hardware?" summed up the chasm nicely (in short: not much, or "we are still thinking"). The problems for NI are interesting though. An example: apparently (don't quote me on how good the toolchain is) the NI tool chain can target special NI devices that are both programmable and include a range of flexible bits, from hardware to firmware (FPGA) to whatever. The toolchain is assured, and tightly (conservatively) bounds what the programming can actually do - all rather critical if you (as NI) need to give assurances that not only will your kit work for 25 years, but that the 'motor control unit that closes the small door preventing escape of the nasty gas' will actually do its job. The idea that someone can knock together some new code to run on the NI device brings out the NI lawyers in cold sweats.

The issue of liability is interesting too, as it is a strong theme in the CERN OHL.

Seven Solutions, a little more Creotech-like, are dabbling and sell an open board or two. They run an interesting and active hybrid model (proprietary + OH).

And Instrumentation Technologies, another instrumentation group that is flirting with open hardware, talked; time passed; talk finished.

Facebook (John Kenevey) talked about the Open Compute Project. Don't get too excited: it's about building a design for machine rooms that is more universal and wide-ranging than simply "fits in 19-inch racks" (the current definition).
Notable soundbites: Open Compute is a white-box channel to the market that challenges the supplier base and allows new entrants. A conclusion was that the dance between silicon vendors (CPU makers) and box-benders means the vendors are screwed and the customers are worse than screwed. People pitched OCP as a mechanism to get out from under vendor lock-in.
When you see the in-house machines of Google, Facebook, and others, this makes a lot of sense.

Modularization is key, and Facebook seem to be enjoying not doing anything more than 'motivating' the actions and encouraging the open-source hardware community. It is clear they are sick of being held over a barrel by the people that assemble machines (metal benders), and they hope for some nice innovations... Facebook consider this part of their "grid to gates" initiative (grid here meaning the power grid). The open problems seem to be: what does a standard smell like? Do we have fans in the rack or in the units? What are the form and nature of power in the rack or the exchangeable units? Etc., etc. Sadly the impact for the man in the street (or the machine-room fitter-outer in the room) is probably 12 months away (my wild guess). The slides of this talk were not made available (nor the recording), as there was some discussion (funny stories) about specific metal benders.

Following lunch we had several speakers talking about tool chains for (PCB) design. Two tools were discussed: gEDA and KiCad; KiCad looked very nice indeed, and certainly better than some of the common approaches. Problems discussed included importing artwork, plus general mumbles of agreement about libraries of packages and pinouts. From this writer's perspective, a public definition of pinouts and packages seems obvious and in the interests of the manufacturers - although probably not in the interests of various 'big package' authors (Cadence, etc.). Time will tell.

Projects discussed in the remainder of the day included

hdlmake, a concept to get away from the GUIs commonplace in build tool-chains: it adds manifests to permit dependency trees, and seems for the most part like a good idea. (It also makes me appreciate the effort Jad Naous and others put into NetFPGAv2 to make the build as clean as it was.)
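For a flavour of what those manifests look like, here is a minimal Manifest.py sketch in the style hdlmake uses (the variable names follow hdlmake's documented convention as best I recall; the file names and module locations are made up):

    # Manifest.py -- a minimal hdlmake-style manifest (illustrative only;
    # the file names and module paths below are hypothetical)
    action = "synthesis"          # drive the synthesis tool-chain
    syn_device = "xc6slx45t"      # example target FPGA part

    files = [                     # HDL sources owned by this module
        "top.vhd",
        "uart_tx.vhd",
    ]

    modules = {                   # dependencies, fetched/resolved by hdlmake
        "local": ["../common"],
        "git": ["git://ohwr.org/misc/example-core.git"],
    }

hdlmake walks the modules entries recursively, which is what gives you the dependency tree without a GUI in the loop.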

Icarus synthesis engine - considered critical OSS for Open Hardware; obvious problems include proprietary core handling.

Open FPGA toolchain (Sébastien Bourdeauducq, who did the Milkymist open video (effects) hardware). Neat stuff, trying to hack his way around obfuscated FPGA details (with a lot of grumbles about how mean Altera and Xilinx are), though seemingly oblivious to the idea that vendors don't release details of their FPGAs because some knock-off company would start making 10-cent copies of the FPGAs themselves. OK, I'm being unfair: Sébastien's position is "let's get started nevertheless and see what happens". I think this would appeal strongly to academics who want to redesign/modify/mess with the RTL -> FPGA process.

Other things presented included: SOLEIL synchrotron instrumentation and the RHINO project.

RHINO is interesting: an open-source radio thing from the radar remote sensing group at the University of Cape Town, born from CASPER (a project at Berkeley) and South Africa's interest in the SKA project (SKA is at this moment a competition to build the next serious astronomy platform, either in Australia or South Africa). Neat stuff. The project incorporates BORPH and a number of other technologies to make it easier to use and consume.

From the discussion slides for the Open Hardware Community: (some questions without answers)

  • Can clients change mindset from build-in-house (a not-invented-here variant) and pay for support?
  • How can we deal with Tech Transfer departments that argue against OSH (even if the hardware is not core business)?
  • How can we involve universities?
  • (How) can we pool resources?
  • (How) can we pool manpower for projects?
  • (How) can we pool money to pay companies (for the dull stuff)?
  • Who are the communities?

Lots of talk, not many answers - this is a very young community: lots of idealism, lots of potential.

Most all of the presentations, and videos of the presentations and Q&A, are available from the workshop site - all under Creative Commons, of course.

General comment: this community is very interesting, but right now there are considerable lazy-language defaults that conflate 'commercial' with 'proprietary', when the real contrast with open-source is proprietary, not commercial (it may partly be an "English as a second language" issue). I know this wounds commercial organizations (cast as the bad guys) when in fact the intention is something else.

It was great - I will go again, if Javier lets me.

 

30 Sep 2011

Photonics UK and Cyber Defense UK

Posted by Jon Crowcroft

The last couple of days I was at these two events:

 

1. EPSRC Network of Networking 2-day workshop on Photonics - see

http://www.commnet.ac.uk/node/34

 

Very interesting to see how coherent the UK's academic and industrial photonics community is - they have a pretty clear roadmap for the next 5 years and then some nice challenges - not a lot for CS (still) until they can do something cool in a) integrating optical links onto processors and b) building more viable (in scale/integration/power terms) gates... but in terms of what they are doing for price/performance, they pretty much match Moore's law (terminating a 10GigE for 10 bucks is an amazing achievement!)

 

2. Rustat conference on UK Cybersecurity

http://www.cybersecurityforum2011.com/

 

This will almost certainly be blogged by Ross or someone else in the security group, as they were there en masse. I chaired a session on UK skills; a couple of good outcomes were support from research councils for more PhDs (whether this leads to money remains to be seen), and the idea that CS graduates who end up on the board as CIOs should make sure they have good business skills so they aren't looked down on by other board members as just a sort of uber "IT guy"...

 

Lots of very interesting corridor conversations. The UK gov budget in this space is 600M quid, so many SMEs are scampering after it :) In general, we seem to be in OK shape (a government policy doc on cybersecurity is out soon; the recent Chatham House report (can't find the link right now) is apparently less rosy, but still very useful). Expect to see more details here soon:

http://www.lightbluetouchpaper.org/

 

We're having a NATO workshop on this in 10 days at Wolfson in Cambridge... Rex Hughes is coordinating it there with the Cambridge Science and Policy group.

Finally, I suggested a homeopathic remedy for cyberattacks might be to dilute the Stuxnet virus, say, 10^11 times in some random bits (e.g. Windows Vista kernel code) and add it to your site.

 

Oh yeah, and can someone tell me just what the ICT KTN does?? :)

28 Sep 2011

IBM TCG Visit and Cambridge Networks Network

Posted by Jon Crowcroft

The last couple of days were busy - IBM visited en masse, their Technical Consulting Group of around 50 people showing up (in CMS) to talk about various interesting topics. For me, the best was a talk about financial service industry regulatory controls through risk-data sharing (via a third party - a sort of nuclear-test-ban-treaty assurance service) - very neat - lots of other good topics. Rolls Royce were also there - amusingly, IBM complimented Rolls on their reliability record (compared with the software industry) - I didn't feel it fair to mention the RB211 or the recent A380 shattered turbine :)

 

More locally crucial was the kickoff meeting of the Cambridge Networks Network - see http://www.cnn.group.cam.ac.uk/ for more info.

This kickoff was to set up a cross-group, grass-roots movement to join up various people in systems biology, brain mapping, economics, epidemiology (including plant sciences) and others, to share common knowledge and methods/techniques for studying complex networked systems with interesting (e.g. emergent) phenomena. The kickoff was ambitious, with talks from 5 people supposed to be 10 mins each (averaging 20 mins :)

 

Some ideas I thought of while listening:

 

1. Weak ties (long links) in modular systems (social nets, the brain, the internet) serve the same purpose as random perturbations (like mutation) do in optimisation tools (like genetic algorithms or simulated annealing): to get you out of local minima. Most GAs work by cross-over, which implements parallel search in local areas of a fitness landscape (since similar genes share/cross over/breed, and are successful or not similarly). I wonder if there is any literature on whether graphs that have a small (but non-zero) fraction of "escape routes" out of the highly interconnected/modular/cliqueish structure of a small world are slightly more robust than purely hierarchical modular ones?
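A quick toy experiment of that intuition (my own sketch, not anything presented at the kickoff; needs networkx): build a purely modular chain of cliques, then add a handful of random long links and watch the average shortest path collapse.

    import random
    import networkx as nx

    def modular_graph(n_cliques=10, clique_size=10):
        # a chain of cliques: highly modular, no long-range 'escape routes'
        g = nx.Graph()
        for c in range(n_cliques):
            members = range(c * clique_size, (c + 1) * clique_size)
            g.add_edges_from((a, b) for a in members for b in members if a < b)
            if c > 0:  # one bridge to the previous clique
                g.add_edge((c - 1) * clique_size, c * clique_size)
        return g

    random.seed(0)
    g = modular_graph()
    print("no weak ties:", nx.average_shortest_path_length(g))

    nodes = list(g)
    for _ in range(5):  # a small (but non-zero) fraction of random long links
        g.add_edge(*random.sample(nodes, 2))
    print("5 weak ties :", nx.average_shortest_path_length(g))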

 

2. My second thought was about epidemics (and economics). The Vickers report on the banking sector basically quarantines domestic banks (building societies) from the high-risk (prostitution and drug-use/gambling/casino) banking sector. On the other hand, sharing information properly (see Efficient Markets) would also work (see the IBM work above).

 

The difference is that structural regulation is much easier to implement than a big-bang transparent information regime. Maybe we do one now, the other later - who knows?

 

The talk on citrus blight in Miami lemon trees was fun - it reminded me that plants are (genetically) a lot easier than animals (c.f. FluPhone :)

 

The map of the spread of the blight looked really like the map of the nuclear tests recently shown on YouTube (esp. for Anil :)

 

One nice name-check was the work on neural structures and VLSI showing that Rent's Law applies to both - cute (but should we add weak ties to our multicore systems? One for Steve Furber, maybe?)

 

Anyhow, this looks like a very good (young, active, enthusiastic, smart) initiative - they will be having a bi-weekly seminar series starting pretty soon, probably coordinated with the Statslab's networking series....

(For people too young to recall: Rolls Royce actually went bankrupt in the 1970s trying to make carbon-fibre turbine blades work - in the end a government bailout fixed it, and they are OK. The problem they hit was that the fibres in the original blades weren't knit in enough different directions - a problem shared with the fibreglass bodywork on the Reliant Scimitar (and Robin), which would shatter under fairly light impact into lots of dangerous shards. The solution is to sew three dimensions of fibre into the matrix - much more expensive/complex, but immensely strong, and tunable for different flexibility in any given dimension. The recent A380 engine problem wasn't design, but manufacturing process...)

23 Sep 2011

Mobicom. Day 3

Posted by Narseo

Third and final day... mainly about the PHY/MAC layer and theory work.

The day started with a keynote by Farnam Jahanian (University of Michigan, NSF). Jahanian talked about some opportunities behind cloud computing research. In his opinion, cloud computing can enable new solutions in fields such as health care and environmental issues. As an example, it can help enforce a greener and more sustainable world and predict natural disasters (e.g. the recent Japanese tsunami) with the support of a wider sensor network. His talk concluded with a discussion of some of the challenges for computer science research in the US (which seem to be endemic in other countries too). He highlighted that despite the market demanding more computer science graduates, few students are joining related programs at any level, including high school.

Session 7. MAC/PHY Advances.

No Time to Countdown: Migrating Backoff to the Frequency Domain, Souvik Sen and Romit Roy Choudhury (Duke University, USA); and Srihari Nelakuditi (University of South Carolina, USA)

Conventional WiFi networks perform channel contention in the time domain, an approach that wastes channel time on backoff. Back2F is a new way of performing channel contention in the frequency domain, treating OFDM subcarriers as randomised integers: instead of picking a random backoff length, each AP picks a random subcarrier. This requires an additional listening antenna, so that WiFi APs can learn the backoff values chosen by nearby APs and decide whether their own value is the smallest among those generated by APs in close proximity; each AP uses this knowledge to schedule transmissions after every round of contention. With a second round of contention, the APs that collided in the first round can compete again, along with a few more APs. The performance evaluation was done in a real environment: the results show that collision probability decreases considerably with two contention rounds, and real-time traffic such as Skype sees a throughput gain, though Back2F is more sensitive to channel fluctuation.
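A toy Monte Carlo of the core idea (my sketch; the AP count, subcarrier count and the rule that only round-1 winners re-contend are simplifying assumptions, not the paper's exact design):

    import random

    def contend(n_aps=20, n_subcarriers=16, rounds=2):
        # each AP signals on a randomly chosen OFDM subcarrier; the lowest
        # index wins, standing in for the smallest time-domain backoff
        winners = list(range(n_aps))
        for _ in range(rounds):
            picks = {ap: random.randrange(n_subcarriers) for ap in winners}
            best = min(picks.values())
            winners = [ap for ap, p in picks.items() if p == best]
            if len(winners) == 1:
                return False              # a clean winner: no collision
        return True                       # several APs still tied: collision

    random.seed(1)
    trials = 100_000
    for r in (1, 2):
        p = sum(contend(rounds=r) for _ in range(trials)) / trials
        print(f"{r} contention round(s): collision probability ~ {p:.3f}")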

Harnessing Frequency Diversity in Multicarrier Wireless Networks, Apurv Bhartia, Yi-Chao Chen, Swati Rallapalli, and Lili Qiu (University of Texas at Austin, USA)

Wireless multicarrier communication systems spread data over multiple subcarriers, but SNR varies per subcarrier. In this presentation the authors propose a joint integration of three solutions to reduce the side-effects:

  1. Map symbols to subcarriers according to their importance.
  2. Effectively recover partially corrupted FEC groups and facilitate FEC decoding.
  3. MAC-layer FEC to offer different degrees of protection to the symbols according to their error rates at the PHY layer

Their simulation and testbed results corroborate that a joint combination of these techniques can increase throughput by 1.6x to 6.6x.
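The first of those three ideas is easy to picture: sort the subcarriers by measured SNR and hand the best ones to the most important symbols. A sketch (the SNR values and importance weights here are synthetic):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 48                                # data subcarriers (an 802.11a-like count)
    snr = rng.uniform(3, 25, n)           # per-subcarrier SNR in dB (made up)
    importance = rng.uniform(0, 1, n)     # per-symbol importance weights (made up)

    # map the most important symbols to the highest-SNR subcarriers
    sym_order = np.argsort(-importance)   # symbols, most important first
    sc_order = np.argsort(-snr)           # subcarriers, best first
    mapping = np.empty(n, dtype=int)
    mapping[sym_order] = sc_order         # symbol i -> subcarrier mapping[i]

    print("symbol 0 (weight %.2f) -> subcarrier %d (SNR %.1f dB)"
          % (importance[0], mapping[0], snr[mapping[0]]))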

Beamforming on Mobile Devices: A first Study, Hang Yu, Lin Zhong, Ashutosh Sabharwal, David Kao (Rice University, USA)

Wireless links present two invariants: spectrum is scarce while hardware is cheap. The fundamental waste in cellular base stations comes from the antenna design. Lin Zhong proposed passive directional antennas to minimize this issue: directional antennas generate a very narrow beam with larger spatial coverage. They showed that this solution is practical despite the small form factor of a smartphone antenna, is resilient to node rotation (only 2-3 dB lost compared to a static node), and does not hurt handset battery life, especially on the uplink, as the antenna's beam is narrower. The technique allows calculating the optimal number of antennas for efficiency. The system was evaluated both indoors and outdoors, in stationary and mobile scenarios. The results show that a lot of client power can be saved, with power consumption dropping as the number of antennas increases.

SESSION 8. Physical Layer

FlexCast: Graceful Wireless Video Streaming, S T Aditya and Sachin Katti (Stanford University, USA)

This is a scheme to adapt video streaming to wireless communications. Mobile video traffic is growing exponentially, and users' experience is often poor because of channel conditions. MPEG-4 estimates quality over long timescales, but channel conditions change rapidly, which hurts video quality; current video codecs are not equipped to handle such variations, since they exhibit an all-or-nothing behavior. They propose that quality be proportional to instantaneous wireless quality, so a receiver can reconstruct a video encoded at a constant bit rate by taking into account the instantaneous network quality.

A Cross-Layer Design for Scalable Mobile Video, Szymon Jakubczak and Dina Katabi (Massachusetts Institute of Technology, USA)

One of the best papers in MobiCom'11. Mobile video is limited by the bandwidth available in cellular networks and by a lack of robustness to changing channel conditions; as a result, video quality must be adapted to the channel conditions of different receivers. They propose a cross-layer design for video that addresses both limitations. In their view, the problem is that compression and error protection convert real-valued pixels to bits and, as a consequence, destroy the numerical properties of the original pixels. In analog TV this was not a problem, since there is a linear relationship between the transmitted values and the pixels, so a small perturbation in the channel translated to a small perturbation in the pixel value (though this was not efficient, as nothing was compressed).

SoftCast is as efficient as digital TV while also compressing data linearly (current compression schemes are not linear, which is why the numerical properties are lost). SoftCast transforms the video into the frequency domain with a 3D DCT. In the frequency domain, most temporal and spatial frequencies are zero, so the compression sends only the non-zero frequencies; and as the transform is linear, the output preserves the same properties. They ended the presentation with a demo showing the real gains of SoftCast compared to MPEG-4 when the SNR of the channel drops.
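A sketch of that linearity argument on synthetic data (my toy, using scipy's 3D DCT; it is not SoftCast's actual pipeline, which also does linear scaling and packetisation): a smooth 'video' cube compresses down to a few DCT coefficients, and small channel noise on those coefficients comes back out as small pixel error.

    import numpy as np
    from scipy.fft import dctn, idctn

    # smooth synthetic 'video': 8 frames of 32x32 pixels
    t, y, x = np.meshgrid(np.arange(8), np.arange(32), np.arange(32),
                          indexing="ij")
    gop = np.sin(0.2 * x + 0.1 * y + 0.3 * t)

    coeffs = dctn(gop, norm="ortho")         # 3D DCT over time and space
    # most temporal/spatial frequencies are ~zero: keep only the largest 10%
    thresh = np.quantile(np.abs(coeffs), 0.90)
    kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

    # linearity: small channel noise on coefficients -> small pixel error
    noisy = kept + np.random.default_rng(0).normal(0, 0.01, kept.shape)
    recon = idctn(noisy, norm="ortho")
    print("mean abs pixel error: %.4f" % np.abs(recon - gop).mean())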

Practical, Real-time Full Duplex Wireless, Mayank Jain, Jung Il Choi, Tae Min Kim, Dinesh Bharadia, Kannan Srinivasan, Philip Levis and Sachin Katti (Stanford University, USA); Prasun Sinha (Ohio State University, USA); and Siddharth Seth (Stanford University, USA)

This paper presents a full-duplex radio design using signal inversion (based on a balanced/unbalanced (balun) transformer) and adaptive cancellation. The state of the art in RF full-duplex relies on techniques such as antenna cancellation, which have several limitations (e.g. manual tuning, channel dependence). This new design supports wideband and high-power systems without imposing limits on bandwidth or power. The authors also presented a full-duplex medium access control (MAC) design, and evaluated the system using a testbed of 5 prototype full-duplex nodes. The results look promising, so... now it's time to re-design the protocol stack!
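The digital half of the cancellation story fits in a few lines. This sketch assumes a toy single-tap self-interference channel; the real design does the bulk of the work in analog, with the balun inversion, before anything like this least-squares clean-up:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    self_tx = rng.normal(size=n)          # our own transmission (known to us)
    remote = 0.01 * rng.normal(size=n)    # the weak far-away signal we want

    h = 0.9                               # toy single-tap self-interference gain
    rx = h * self_tx + remote             # what the antenna hears

    # adaptive cancellation, folded into one least-squares channel estimate:
    # subtract our best guess of our own signal as it leaks back in
    h_est = np.dot(rx, self_tx) / np.dot(self_tx, self_tx)
    cleaned = rx - h_est * self_tx

    print("self-interference suppressed by %.0f dB"
          % (20 * np.log10(np.std(h * self_tx) / np.std(cleaned - remote))))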

Session 9. Theory

Understanding Stateful vs Stateless Communication Strategies for Ad hoc Networks, Victoria Manfredi and Mark Crovella (Boston University, USA); and Jim Kurose (University of Massachusetts Amherst, USA)

There are many communication strategies, depending on the network properties. This paper explores an adaptive forwarding strategy that decides which communication-state strategy should be used based on network unpredictability and connectivity. Three network properties (connectivity, unpredictability and resource contention) determine when state is useful. Data-state is information about data packets and is valuable when the network is not well connected, whilst control-state is preferred when the network is well connected. Their analytic results (based on simulations on Haggle traces and DieselNet) show that routing is the right strategy for control-state, DTN forwarding for data-state (e.g. Haggle Cambridge traces), and packet forwarding when data- and control-state hold simultaneously (e.g. Haggle Infocom traces).

Optimal Gateway Selection in Multi-domain Wireless Networks: A Potential Game Perspective, Yang Song, H. Y. Wong, and Kang-Won Lee (IBM Research, USA)

This paper tries to leverage a coalition network in which multiple domains with heterogeneous groups are interconnected via wireless links. Gateway nodes are designated by each domain to achieve network-wide interoperability. The challenge is minimising the intra-domain cost plus the sum of the backbone costs. They take a potential-game perspective on the problem and analyse the equilibrium inefficiency. They consider that this solution could also be used in other applications such as power control, channel allocation, spectrum sharing or even content distribution.

Fundamental Relationship between Node Density and Delay in Wireless Ad Hoc Networks with Unreliable Links, Shizhen Zhao, Luoyi Fu, and Xinbing Wang (Shanghai JiaoTong University, China); and Qian Zhang (Hong Kong University of Science and Technology, China)

Maths, percolation theory ... quite complex to put into words

22 Sep 2011

Mobicom. Day 2

Posted by Kiran Rachuri

Day 2 of MobiCom 2011 started with my talk on SociableSense. Fourteen papers were presented over four sessions, including two best papers.

SESSION: Applications

SociableSense: Exploring the Trade-offs of Adaptive Sampling and Computation Offloading for Social Sensing, Kiran K. Rachuri, Cecilia Mascolo, Mirco Musolesi, and Peter J. Rentfrow (University of Cambridge, United Kingdom)

Our work. Details at:

http://www.syslog.cl.cam.ac.uk/2011/07/15/efficient-social-sensing-based-on-smart-phones/

Overlapping Communities in Dynamic Networks: Their Detection and how they can help Mobile Applications, Nam P. Nguyen, Thang N. Dinh, Sindhura Tokala, and My T. Thai (University of Florida, USA)

A better understanding of mobile networks in terms of overlapping communities, underlying structure and organisation helps in developing efficient applications such as routing in MANETs, worm containment, and sensor reprogramming in WSNs. So the detection of network communities is important; however, networks are large and dynamic, and communities overlap. Can community detection be performed in a quick and efficient way?

They propose a two-phase, limited-input-dependent framework to address this. Phase 1: detect basic communities (the dense parts of the network). Phase 2: update the communities as changes are introduced, i.e., handle adding or removing a node or edge. The evaluation is based on the MIT Reality Mining data, and assesses the proposed scheme with respect to two applications: routing in MANETs and worm containment.

Detecting Driver Phone Use Leveraging Car Speakers, Jie Yang and Simon Sidhom (Stevens Institute of Technology, USA); Gayathri Chandrasekaran and Tam Vu (Rutgers University, USA); Hongbo Liu (Stevens Institute of Technology, USA); Nicolae Cecan (Rutgers University, USA); Yingying Chen (Stevens Institute of Technology, USA); Marco Gruteser and Richard P. Martin (Rutgers University, USA)

(Joint Best Paper Award)

80% of people talk on a cell phone while driving, and the consequences can be dangerous (18% of accidents). They claim that hands-free devices do not help, because of the cognitive load on the driver. Several mobile apps on the market try to address this (ZoomSafer, iZup, CellSafety). Recent measures:

- hard blocking: jammers, blocking calls, etc.

- soft interaction: delay calls, route to voicemail, automatic reply

Current apps that actively prevent cell phone use in a vehicle only detect whether the phone is in a vehicle at all, through GPS, handover, signal strength, speedometer, etc. None of them can tell whether the phone is being used by the driver or by a passenger. The authors use an acoustic ranging approach to solve this: they identify the position of the phone relative to the car speakers, which are made to emit different sounds at different times. The phone's mic has a wider frequency range than human hearing, so the beeps can be placed outside the audible range. Evaluation shows that detection accuracy is over 90%.
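The ranging geometry reduces to a time-difference-of-arrival test. A minimal sketch with made-up distances (the real system has to handle multipath, the beep schedule, and more than a left/right decision):

    # Toy TDOA sketch: two speakers (left/right of the cabin) beep at known
    # offsets; the difference in arrival times tells which side the phone
    # is on. Geometry and numbers are illustrative, not from the paper.
    C = 343.0                            # speed of sound, m/s

    def side_of_car(t_left, t_right):
        # t_*: measured arrival times of each speaker's beep (seconds),
        # already corrected for the known emission schedule
        ddist = C * (t_left - t_right)   # path-length difference in metres
        return "driver (left) seat" if ddist < 0 else "passenger (right) seat"

    # phone 0.4 m from the left speaker and 1.2 m from the right one
    print(side_of_car(0.4 / C, 1.2 / C))   # -> driver (left) seat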

I Am the Antenna: Accurate Outdoor AP Location Using Smartphones, Zengbin Zhang, Xia Zhou, Weile Zhang, Yuanyang Zhang, Gang Wang, Ben Y. Zhao, and Haitao Zheng (University of California at Santa Barbara, USA)

The density of APs in the environment is very high. How to find the location of an AP?  Conventional AP location methods:

- Directional antenna: Fast, very accurate but expensive

- Signal map: Simple but time consuming

- RSS gradient: low measurement overhead but low accuracy

Their solution is based on the effect of user orientation relative to the AP on RSS: the user's body attenuates the signal (they observed around a 13 dB difference). They also tested the generality of the effect with multiple phones, protocols, users and environments, and the RSS profiles all followed the same trend.

Evaluation was on a campus, with three scenarios: 1. simple line of sight (no obstructions); 2. complex line of sight (vehicles etc.); 3. non-line-of-sight (the line of sight is completely blocked). Metric: absolute angular error (detected direction minus actual direction). Results: error < 30 degrees in 80% of cases in simple LOS; error < 65 degrees in 80% of cases in non-LOS.
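A toy version of the estimator (my sketch: the cosine-shaped blockage profile is a made-up smooth model; only the ~13 dB swing comes from the talk):

    import numpy as np

    rng = np.random.default_rng(0)
    true_ap_bearing = 140.0                   # degrees (unknown to the user)

    angles = np.arange(0, 360, 10.0)          # the user turns in 10-degree steps
    # body blockage: RSS dips when the body sits between phone and AP,
    # i.e. when facing away; ~13 dB swing, as observed in the paper
    rss = -60 - 13 * (1 - np.cos(np.radians(angles - true_ap_bearing))) / 2
    rss += rng.normal(0, 1.0, angles.size)    # measurement noise

    est = angles[np.argmax(rss)]              # facing the AP = strongest RSS
    err = abs((est - true_ap_bearing + 180) % 360 - 180)
    print(f"estimated bearing {est:.0f} deg, error {err:.0f} deg")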

SESSION: Cellular Networks

Traffic-Driven Power Saving in Operational 3G Networks,  Chunyi Peng, Suk-Bok Lee, Songwu Lu, and Haiyun Luo (University of California at Los Angeles, USA)

Transmission power of base stations increases linearly with traffic load, while cooling power stays constant and is comparable to the transmission power. As a result, substantial energy is consumed even at zero traffic. Existing solutions follow theoretical analyses and do not address practical issues. In this work, they propose a traffic-driven approach that exploits traffic dynamics to turn off under-utilised BSs for system-wide energy efficiency. They claim traffic at a base station is quite predictable, and there is a lot of potential to save energy in quiet hours, but also in peak hours. Their solution also tries to remain compatible with current 3G standards/deployments. Issues addressed: (1) how to satisfy location-dependent coverage and capacity constraints; (2) how to estimate traffic load.

Solution: profiling. Estimate the traffic envelope via profiling and leverage its near-term stability; the estimate is a moving average over 24 daily intervals, providing location-dependent capacity. The set of BSs active in idle hours should be a subset of those active in peak hours, and a BS should not be switched more than once per day, since frequent on/off switching is undesirable (it takes several minutes) and should be driven by traffic characteristics.
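A sketch of the envelope-profiling step on synthetic load (mine, not theirs; it omits the subset constraint and the switch-once-per-day rule described above):

    import numpy as np

    rng = np.random.default_rng(0)
    # one week (7 days x 24 hours) of normalized load at 5 co-located BSs
    daily = 0.5 + 0.4 * np.sin(np.linspace(0, 2 * np.pi, 24))
    load = np.clip(daily * rng.uniform(0.8, 1.2, (5, 7, 24)), 0, 1)

    # per-hour traffic envelope: average over the past days, per BS
    # (a moving average with 24 daily intervals, as in the talk)
    envelope = load.mean(axis=1)                # shape (5, 24)

    capacity = 1.0                              # per-BS capacity, normalized
    for hour in (3, 9, 15, 21):
        demand = envelope[:, hour].sum()
        n_on = max(1, int(np.ceil(demand / capacity)))
        print(f"hour {hour:2d}: offered load {demand:.2f} -> keep {n_on}/5 BSs on")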

MOTA: Engineering an Operator Agnostic Mobile Service, Supratim Deb, Kanthi Nagaraj, and Vikram Srinivasan (Bell Labs Research, India)

Cellular coverage varies with location, so users may not be happy with a single service provider, and there is a case for users choosing services from multiple providers. Dual-SIM phones are already popular in Asia, with users choosing services based on the providers' costs. Goal of this work: the ability for users to join the network of their choice at will, based on location, pricing and applications.

Their solution is to drive operator selection from the user side. They consider several options. Option 1: a centralised approach making the decisions, but operators are unlikely to share network planning information. Option 2: users use signal strength from different base stations, but this is insufficient and can result in poor user experience.

They propose MOTA, in which a service aggregator is introduced: a new intermediary between users and operators, responsible for maintaining customer relationships and handling all control-plane operations that cannot be handled by a single operator. They also use a utility function that incorporates fairness. Evaluation is based on data from one of the largest cellular operators in India.

Anonymization of Location Data Does Not Work: A Large-Scale Measurement Study, Hui Zang and Jean Bolot (Sprint Applied Research, USA)

Call Detail Records (CDRs) keep a lot of information about users' phone calls, and they can be linked to a location. They can be used for marketing, security, LBS and mobility modelling; however, privacy may be breached if such data is released. The traditional approach to protecting user privacy is anonymisation; this work shows that it does not work. A CDR contains: mobile id, time of call, call duration, start cell id, start sector id, end sector id, call direction, caller id. If the mobile id and caller id are anonymised, can we still identify the user? It has been shown that with gender, zipcode and birthdate, 87% of the USA population can be identified.

Their dataset consists of more than 30 billion call records made by 25 million cell phone users across the USA. Their approach is to infer the top N locations for each user and correlate these with publicly available information such as census data. They show that the top 1 location does not yield small anonymity sets, but the top 2 and 3 locations do, at the sector- or cell-level granularity. They also propose solutions, based on spatial- and time-domain approaches, for publishing location data without compromising privacy.
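The core of the attack is just counting how many users share the same top-N locations. A tiny sketch with made-up sector names:

    from collections import Counter

    # toy CDR-derived 'top two sectors' per (anonymized) user
    top2 = {
        "u1": ("sector17", "sector42"),
        "u2": ("sector17", "sector42"),
        "u3": ("sector17", "sector99"),
        "u4": ("sector08", "sector42"),
    }

    # anonymity set: how many users share the same top-2 pair?
    sets = Counter(top2.values())
    for user, pair in top2.items():
        k = sets[pair]
        print(f"{user}: anonymity set size {k}"
              + ("  <- uniquely re-identifiable" if k == 1 else ""))

Cross-referencing a unique pair with public data (census, work/home addresses) is then enough to put a name to the "anonymous" id.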

SESSION: Infrastructureless Networking.

Enhance & Explore: An Adaptive Algorithm to Maximize the Utility of Wireless Networks, Adel Aziz and Julien Herzen (École Polytechnique Fédérale de Lausanne, Switzerland); Ruben Merz (Deutsche Telekom Laboratories, Germany); Seva Shneer (Heriot-Watt University, UK); and Patrick Thiran (École Polytechnique Fédérale de Lausanne, Switzerland)

This work addresses the problem of providing efficiency and fairness in wireless networks. Their approach maximises a utility function, via an algorithm called Enhance and Explore. The challenges in designing the scheme: it must work on the existing MAC, without network-wide message passing, and the wireless capacity is unknown a priori.

They consider two scenarios: WLAN setting: inter-flow problem and optimally allocate resources. Multi-hop setting: intra-flow problem and avoid congestion. They show analytically that the proposed algorithm converges to a point of optimal utility. Evaluation is through experiments in a testbed and simulations in ns-3.

Scoop: Decentralized and Opportunistic Multicasting of Information Streams, Dinan Gunawardena, Thomas Karagiannis, and Alexandre Proutiere (Microsoft Research Europe, UK); Elizeu Santos-Neto (University of British Columbia, Canada); and Milan Vojnovic (Microsoft Research Europe, UK)

This work aims to leverage mobility for content delivery in networks of devices with intermittent connectivity. Main challenge: routing/relaying strategies. Existing solutions include epidemic routing; their drawbacks are simplifying assumptions on mobility, such as exponentially distributed inter-contact times. This work proposes SCOOP, which

  • maximizes some global system objective
  • accounts for storage and transmission costs
  • multi-point to multi-point communications
  • decentralized
  • model-free (allows general node mobility)

There is a necessity for a mobility-model-free system. They used classic traces: UCSD, Infocom, DieselNet and SF Taxis. They show that two hops are enough to reach a large percentage of nodes, and that the delays on paths between a source and a destination are positively correlated. They aim to identify the strategy that optimally exploits mobility under buffer constraints and relays; however, this is a hard problem, so they use a sub-gradient algorithm to solve it efficiently. Evaluation is through numerical experiments, comparing SCOOP with R-OPT, an idealized version of the RAPID algorithm (which assumes full global knowledge). Performance with respect to delivery ratio is very close to R-OPT.

R3: Robust Replication Routing in Wireless Networks with Diverse Connectivity, Xiaozheng Tie, Arun Venkataramani (University of Massachusetts Amherst, USA) and Aruna Balasubramanian (University of Washington)

Wireless routing protocols are designed for specific target environments, like well-connected meshes or intermittently connected MANETs. The problem is that such protocols are fragile, and perform poorly outside their target environment. Wireless networks exhibit spatio-temporal diversity, so a compartmentalized design is not efficient. Can we design a protocol that ensures robust performance across networks?

They propose replication routing, and present a model to quantify replication gain. Replication gain depends on the path delay distributions, not just their expected values. They study the average replication gain as a function of the number of paths using DieselNet-DTN and Haggle traces. They propose R3, a link-state protocol that selects replication paths using the proposed model and adapts replication to load.
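The distributions-not-means point is easy to demonstrate: two path-delay distributions with the same mean give very different replication gains. A quick numerical sketch (my numbers, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    def gain(sampler):
        # delays on two candidate paths; replication delivers via the faster
        a, b = sampler(), sampler()
        return a.mean() / np.minimum(a, b).mean()

    # same mean delay (10s), very different spread:
    low_var = lambda: np.clip(rng.normal(10, 0.5, n), 0.1, None)
    heavy = lambda: rng.exponential(10, n)

    print("low-variance paths: replication gain %.2fx" % gain(low_var))
    print("heavy-tailed paths: replication gain %.2fx" % gain(heavy))

With low variance, the second copy buys almost nothing; with heavy tails, it roughly halves the expected delay.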

Evaluation is on both the DieselNet DTN testbed and a mesh testbed, with simulation validation using the DieselNet deployment, compared against several protocols. Simulation based on the Haggle trace shows that R3 reduces delay by up to 60% and increases goodput by up to 30% over SWITCH. Simulations on DieselNet-Hybrid show that R3 improves median delay over SWITCH by 2.1x.

Flooding-Resilient Broadcast Authentication for VANETs, Hsu-Chun Hsiao, Ahren Studer, Chen Chen, and Adrian Perrig (Carnegie Mellon University, USA); and Fan Bai, Bhargav Bellur, and Aravind Iyer (General Motors Research)

Each vehicle possesses an On-Board Unit (OBU) and broadcasts info for safety and convenience. This information has to be secured. The IEEE 1609.2 standard suggests ECDSA signatures for these messages; however, verification is expensive, taking around 22 ms, which is a problem when many messages arrive in a short time. Can we reduce this verification delay? The core idea of this work: entropy-aware authentication.

They propose two methods. (1) FastAuth exploits the predictability of future messages, using hashes to verify location updates instead of ECDSA; the result is 1 µs instead of 22,000 µs in the ideal case. (2) SelAuth does selective verification before forwarding. They also reduce the communication overhead. Evaluation is based on real vehicle traces (4 traces), each generated by driving a car along a 2-mile path for 2 hours. Results show signature generation is 20x faster and verification is 50x faster compared to ECDSA.
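The flavour of the hash-for-signature trade is easy to show with a plain one-way chain (a generic sketch of the amortisation idea; FastAuth itself builds hash structures over predicted location updates, which this does not reproduce):

    import hashlib

    def h(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    # Sender: build a one-way chain and ECDSA-sign ONLY the anchor (signing
    # not shown); then release chain values backwards, one per beacon.
    seed = b"vehicle-secret"
    chain = [h(seed)]
    for _ in range(10):                 # 10 future beacons
        chain.append(h(chain[-1]))
    anchor = chain[-1]                  # the one value that gets a signature

    # Receiver: verifying beacon i costs a few hashes, not an ECDSA verify
    def verify(value: bytes, hops: int) -> bool:
        for _ in range(hops):
            value = h(value)
        return value == anchor

    print(verify(chain[7], 10 - 7))     # True: 3 hashes reach the anchor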

SESSION: Protocols.

E-MiLi: energy-Minimizing Idle Listening in Wireless Networks, Xinyu Zhang and Kang G. Shin (University of Michigan-Ann Arbor, USA)

(Joint Best Paper Award)

Wi-Fi is a popular means of wireless Internet connection, but it is a main energy consumer in mobile devices, 14x higher than GSM on a phone, due to the cost of idle listening. Moreover, idle-listening power is comparable to TX/RX power. Existing solutions are variants of PSM, but is that good enough? No, because of the carrier-sensing time. To overcome this, they propose E-MiLi, which reduces the power consumption of idle listening by down-clocking the radio in idle mode; down-clocking to 1/4 saves 47.5% of the power. The key challenge is how to detect a packet, given that the receiver's sampling rate should be no less than the sender's clock rate for decoding. The solution is to separate detection from decoding: they add a preamble to the 802.11 packet that can be detected at low clock rates.

One issue with this is false triggering: packets intended for one client may trigger all other clients, wasting energy. The second problem is the energy overhead of large preambles. The solution is minimum-cost address sharing, allowing multiple nodes to be assigned the same address, with addresses allocated according to channel usage. There is also a delay caused by clock-rate switching; to reduce it, they use opportunistic downclocking. Evaluation covers packet detection (software-radio experiments), energy consumption (Wi-Fi traces), and ns-2 simulations. Results: when SNR is above 8 dB, the missed-detection probability is almost zero, and they achieve close to 40% energy saving.
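The headline numbers hang together on the back of an envelope: the 47.5% saving applies only while the radio idles, so a mostly-idle radio lands near the reported ~40% overall (the 80% idle fraction below is my illustrative assumption, not from the paper):

    # Back-of-envelope for E-MiLi-style downclocked idle listening.
    def idle_saving(idle_fraction, saving_while_idle):
        # savings accrue only over the time the radio sits idle-listening
        return idle_fraction * saving_while_idle

    # a radio idle-listening 80% of the time, downclocked to 1/4 (47.5% saving):
    print(f"{idle_saving(0.80, 0.475):.1%} of radio energy saved")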

Refactoring Content Overhearing to Improve Wireless Performance, Shan-Hsiang Shen, Aaron Gember, Ashok Anand, and Aditya Akella (University of Wisconsin-Madison, USA)

The main aim is to improve wireless performance by leveraging overheard packets. Several techniques exist, but none of them leverage duplicate data. This work takes a content-based overhearing approach and suppresses duplicate data transmission. Ditto was the first work to use content-based overhearing, but it works at the granularity of objects, does not remove sub-packet redundancy, and only works for some applications. This work presents REfactor content overhearing:

(1) This scheme puts content overhearing at the network layer, which yields savings across applications. The transport-layer approach (used in Ditto) ties data to an application or object chunk and requires payload reassembly; the network-layer approach reduces redundancy across all flows.

(2) This scheme identifies sub-packet redundancy, which saves transmission time. Ditto only works on 8-32 KB object chunks, whereas the proposed scheme operates at a finer granularity: it captures redundancy as small as 64 bytes, and can leverage any overhearing, even of a single packet.
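A cartoon of sub-packet redundancy suppression (my sketch: fixed 64-byte chunks and an 8-byte SHA-1 fingerprint stand in for whatever encoding the paper actually uses):

    import hashlib

    CHUNK = 64                                   # finest redundancy unit (bytes)

    def chunks(payload: bytes):
        return [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]

    def encode(payload: bytes, overheard: set) -> list:
        # replace chunks the receiver is believed to hold already (because
        # it overheard them) with a short fingerprint instead of raw bytes
        out = []
        for c in chunks(payload):
            fp = hashlib.sha1(c).digest()[:8]
            out.append(("ref", fp) if fp in overheard else ("raw", c))
            overheard.add(fp)                    # receiver caches what it hears
        return out

    cache = set()
    p1 = encode(b"A" * 64 + b"B" * 64, cache)    # all raw, nothing cached yet
    p2 = encode(b"A" * 64 + b"C" * 64, cache)    # first chunk shrinks to a ref
    print([kind for kind, _ in p1], [kind for kind, _ in p2])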

Evaluation through test-bed experiments shows a 6-20% improvement in goodput; simulation results also show a 20% improvement in goodput.

Distributed Spectrum Management and Relay Selection in Interference-Limited Cooperative Wireless Networks, Zhangyu Guan (Shandong University, P. R. China); Tommaso Melodia (State University of New York at Buffalo, USA); Dongfeng Yuan (Shandong University, P. R. China); and Dimitris A. Pados (State University of New York at Buffalo, USA)

Emerging multimedia services require high data rates. This work aims to maximize the capacity of wireless networks by leveraging frequency and spatial diversity. Frequency: dynamic spectrum access, which improves spectral efficiency. Spatial: cooperative communication, which enhances link connectivity. Problem: maximize the sum utility (capacity, log-capacity) of multiple concurrent traffic sessions by jointly optimizing relay selection (whether to cooperate or not) and direct transmission. The problem is formulated as a mixed-integer non-convex program, which is NP-hard. They propose a branch-and-bound solution that can find a globally optimal solution; polynomial time is not guaranteed, but in practice it works well. Evaluation is based on simulations. Results show that the proposed schemes converge very fast: the centralized algorithm achieves at least 95% of the global optimum, and the distributed schemes are very close to optimal.