On the glorious quatorze juillet in the Computer Lab, we had >250 people in attendance at the Raspberry Jam and TeachMeet, to bring together Raspberry Pi owners, wannabes, hackers and observers, and then to discuss specifics (e.g. lesson plans) for using Raspberry Pis and related tech for teaching the Computing at School curriculum.
For me, highlights included
1. Demo of RiscOS on Pi
2. School governor showing how to democratize ICT/CS in the school by embedding it in everything (using free and/or open-source s/w only, and no geek/operator/ICT technicians at all)
3. A teacher who created over a dozen Digital Leaders to teach computing out of her own 12-year-old pupils - these kids stand up classes and tutorials for parents - just awesome.
4. Two talks on the literally hundreds of projects out there to carry out in D&T or other not-directly-CS classes
5. How to teach healthcare through computers
6. Plenty of hints on first steps in programming
7. Finally, Pi Foundation folks showed up with 200 devices for people at the event. They also announced various new things (e.g. the camera board should be ready Real Soon Now)...
A lot of fun, I thought. Judge for yourself from the video:
One of Leon's vids of the Zoo talk - excellent!
This event: http://www.cnn.group.cam.ac.uk/news/scientific-meeting-on-social-networks-and-social-media-18th-january
Organised by: http://www.assystcomplexity.eu/
Bernardo Huberman (HP) - Getting Attention
1. In the last decade, the web moved from download to 2-way
2. Everyone has a megaphone (the interweb is one big ethernet? :-)
3. Attention is therefore the scarce resource - attention is a coordination/rendezvous/synch mechanism; anything except attention can be commoditized; defn: attention in social terms (pagerank)
4. Looking at this, we can build predictors and make money...
5. Looked at attention-seeking behaviours... doesn't matter if it's your friends who applaud
Sanjeev Goyal - Contagion
1. random v. smart attacks
2. central v. distributed net designer
3. Design for resilience
Q (BH): can we filter spam on this basis/cost? (c.f. Ross's work - can't afford email anymore)
Maxi San Miguel (IFISC) What can we learn from simple social behaviour models
Isolate interaction mechanisms and find collective effects; find causal relationships...
One example: random imitation - role of topology, co-evolution, heterogeneity in timing of interactions.
Example: voters - imitation dynamics; absorbing states (all coloured red/green); when & how do we get there (what about the metastable/oscillatory case?); analytical results for regular nets show ordering... in complex nets (small world or BA etc.) we can get non-stable solutions; long-range ties don't actually help in reaching agreement, which is counter-intuitive; critical behaviour is determined by mean node degree (c.f. haggle).
Dynamics of the net (formation) & on the net (usage) - but can we have co-evolution of agents and net? The right-wing view is that the net determines individual choices; the left-wing view is that individuals are constrained by the social net :-) Co-evolution has a model of agent/link changing/selected with some distribution; re-wiring rate v. use rate has critical phase changes.
Heterogeneity in timing of individual activities... empirically, we don't work at a constant rate... so include a notion of node-specific internal time.
Keys: dimensionality, coevolution, timing.
Takehomes: strong messages don't homogenize, but polarize; social interaction can lead to consensus different from the external message.
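The imitation dynamics above are easy to simulate. Here is a minimal sketch in Python - a toy ring topology and made-up parameters of my own, not the speaker's model - showing pure voter-model imitation reaching the absorbing (consensus) state:

```python
import random

def voter_model(n, neighbours, steps, seed=0):
    """Basic voter model: at each step a random node copies the
    opinion (0 or 1) of a random neighbour."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        state[i] = state[rng.choice(neighbours[i])]  # pure imitation
        if len(set(state)) == 1:  # absorbing state: full consensus
            break
    return state

# Toy regular net: a ring where each node talks to its two neighbours.
n = 20
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
final = voter_model(n, ring, steps=100000)
print("consensus reached:", len(set(final)) == 1)
```

On a small regular net like this, consensus is essentially certain; the interesting behaviour mentioned in the talk (metastable states, non-stable solutions) shows up when you swap the ring for a small-world or BA topology.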
Martin Everett - Manchester - dual project for 2 mode data
2 distinct groups - interaction between the groups, not between users in each group, e.g. wikipedia & posters. Example dataset: women & southern events... Problem: projection can lose data - can we work on the 2-mode data directly? Not necessarily needed... it turns out (the math says) there isn't much loss... can do SVD, which makes it easy to find core/key events and key people in the women/southern fried chicken example, and various other clustering things... can use this to find centrality - discover key agents among the women getting other women to events.
Takehome - ignore the incorrect folklore about losing info in this approach...
Ross Anderson (CL) - Temporal Node Centrality - work with Hyoungshick Kim
starting with attacks/defense on scale free net hubs etc....
Results in papers by Hyoungshick et al...
Cecilia Mascolo (CL) Geo-spatial
Case study - add in geo-info (like foursquare) - with geo-tagging of posts, can see where messages/interactions occur.
questions of interest -
1. relation twixt geo-distance and social distance - for example
2. distance and degree
Applications - can we exploit geo-spatial info to build better social apps and systems?
1. link prediction....
2. movement model/prediction
Q&A for morning session chaired by Y.T. (for readers of snow crash:)
Someone more sociable than I can blog the pm:)
Salvatore Scellato@srg.cl $
Last week I was in (the other) Cambridge, attending the "Second conference on the Analysis of Mobile Phone Datasets and Networks", or NetMob, held at the MIT Media Lab together with SocialCom 2011. NetMob has an interesting format: there is only one track of short contributed talks, with the possibility of presenting recent results or results submitted elsewhere. Speakers have about 10-12 minutes to present their work, and then there is plenty of time to discuss ideas and network with other people over the 2 days. I gave two talks: one on our research on the effect of geographic distance on online social networks and another on our recent work on universal patterns in urban human mobility.
The unifying theme of the workshop is the analysis of mobile phone datasets: as people use mobile devices more and for more things, these datasets help us to understand complex processes such as the spread of information, human mobility, the usage of urban geography and so on. Indeed, the range of talks presented at the workshop was impressive and fascinating, spanning two main areas: the first day focused more on studying user mobility, while the second day featured work on social behaviour.
Among the most innovative works on the first day was a talk by people at MIT & Berkeley on using mobile phone CDRs to make sense of urban roads, proposing to use the Gini coefficient to measure the diversity of individual traffic carried by each street. Individual user mobility was the main theme of several talks: I particularly liked one on the seasonal patterns of user movements, presented by Northeastern University researchers, and one by a large team led by Vincent Blondel on exploring the spatio-temporal properties of human mobility and the regular home-work routine of many users. Laszlo Barabasi gave an invited talk on mobility and predictability, presenting much of his latest work and trying to connect the statistical properties of human mobility to the performance limits of many related applications that rely on user regularity. Finally, AT&T Labs presented their results on why it is impossible to anonymize location data.
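As an aside, the Gini coefficient used in the MIT & Berkeley talk is simple to compute; here is a small sketch with illustrative numbers of my own, not the authors' data:

```python
def gini(values):
    """Gini coefficient of non-negative values:
    0 = traffic spread perfectly evenly, ->1 = concentrated in few items."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula via the cumulative rank-weighted sum.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))          # 0.0 -> traffic evenly spread
print(round(gini([0, 0, 0, 40]), 2))   # 0.75 -> dominated by one source
```

Applied per street to the traffic contributed by each individual user, a high Gini would flag roads dominated by a few commuters and a low one roads carrying genuinely diverse traffic.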
The second day featured work on the social properties of mobile phone communication between users. Researchers at CMU presented their results on quantifying, via randomization techniques, how social influence might compel users to adopt some products. Another interesting talk by a joint UC3M and Telefonica team presented how time allocation in social networks has strong constraints that are likely to affect, and be affected by, the social structure itself: well-connected hubs have a lower importance in information transmission than less-connected users, with important consequences for many dynamic social processes. Sandy Pentland gave another invited talk, offering a wide overview of how mobile devices are changing the technological landscape with their ubiquitous sensing capabilities. Another interesting talk discussed the economic value of mobile location data, presenting scenarios where user actions can be monetized and the profit shared among different service providers.
Overall NetMob provided an insightful venue for discussions and potential collaborations, always revolving around the idea that as mobile devices become more and more ubiquitous they will offer new fascinating research opportunities.
Many more details about all the talks are in the book of abstracts.
I presented a couple of papers at this year's SocialCom. While I was presenting, the twittersphere was offering encouraging and puzzling feedback:
- I love the way @danielequercia introduces a book to read in each of his talk :D
- I really like @danielequercia style in making slides and presenting! minimal, cool and fun :D
The irony is that, during the coffee break right before my talk, I received a few pieces of constructive feedback on how to structure my presentations and avoid having, as I often do, superficial and high-level slides for a *scientific* talk. Well, that's not the first time I've got this feedback, and I accept it. However, I feel that many talks at conferences suffer from powerpoint karaoke syndrome - to look "right" (like a proper scientist/professional dude), one needs to recast a paper into slide format. Bad mistake, as The Great Simon L Peyton Jones would tell us. Since I apparently like to suggest books, let me say that, despite the title, "Presenting to Win" is the best book on how to prepare and deliver presentations (it's for a business audience, but you can easily adapt it to your needs). Ideally, one should be able to give a talk without any slides - this way, I bet that karaoke presenters will be more likely to reach enlightenment and enter nirvana (provided that they spend 3 days preparing a 15-minute presentation). If a smooth transition between powerpoint karaoke and nirvana is needed, then karaoke presenters might well try the "Takahashi Method" - Lawrence Lessig has used it successfully (link to one of his talks) and Steve Jobs did something similar for his keynotes.
Anyhooow :) this post isn't about presentation styles but about the two papers I presented :) Here is a quick abstract that summarizes them. Enjoy ;)
In the first paper [pdf paper slides], we tested whether Twitter users can be reduced to look-alike nodes (as most spreading models would assume) or whether, instead, they show individual differences that impact their popularity and influence. One aspect that may differentiate users is their character and personality. The problem is that personality is difficult to observe and quantify on Twitter. It has been shown, however, that personality is linked to something unobtrusively observable in tweets: the use of language. We thus carried out a study of tweets and showed that popular and influential users linguistically structure their tweets in specific ways. This suggests that the popularity and influence of a Twitter account cannot simply be traced back to the graph properties of the network within which it is embedded, but also depends on the personality and emotions of the human being behind it. In the second paper [pdf paper slides], for a limited sample of 335 users, we were able to gather personality data, analyze it, and find that both popular users and influentials are extroverts and emotionally stable (low in the trait of Neuroticism). Interestingly, we also found that popular users are "imaginative" (high in Openness), while influentials tend to be "organised" (high in Conscientiousness). We then showed a way of accurately predicting a user's personality simply based on three counts publicly available on profiles: following, followers, and listed counts. Knowing these three quantities for an active user, one can predict the user's five personality traits with a root-mean-squared error below 0.88 on a [1,5] scale. Based on these promising results, we argue that being able to predict user personality goes well beyond our initial goal of informing the design of new personalized applications as it, for example, expands current studies on privacy in social media.
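To illustrate the flavour of that prediction task - this is a toy sketch with made-up numbers, not the papers' actual model, features, or data - here is a least-squares fit of one personality score from a log-scaled follower count, with the root-mean-squared error computed on the same [1,5] scale:

```python
import math

# Hypothetical training data: log10 follower counts vs. a 1-5
# Extraversion score. Entirely made up for illustration.
x = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
y = [2.4, 2.9, 3.1, 3.6, 4.0, 4.3]

# Closed-form simple linear regression: y ~ slope * x + inter.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
inter = my - slope * mx

pred = [slope * a + inter for a in x]
rmse = math.sqrt(sum((p - b) ** 2 for p, b in zip(pred, y)) / n)
print(f"RMSE on the [1,5] scale: {rmse:.3f}")
```

The real result uses three profile counts (following, followers, listed) and predicts all five traits, but the error metric is this same root-mean-squared error.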
The last couple of days were busy - IBM visited en masse: their technical consulting group of around 50 people showed up (in CMS) to talk about various interesting topics. For me, the best one was a talk about financial service industry regulatory controls through risk data sharing (via a third party - a sort of nuclear test ban treaty assurance service) - very neat - lots of other good topics. Rolls Royce were also there - amusingly, IBM complimented Rolls on their reliable history (compared with the software industry) - I didn't feel it fair to mention the RB211 or the recent A380 shattered turbine :)
More locally crucial was the kickoff meeting of the Cambridge Networks Network - see http://www.cnn.group.cam.ac.uk/ for more info.
This kickoff was to set up a cross-group, grass-roots movement to join up various people in systems biology, brain mapping, economics, epidemiology (including plant sciences) and others, to share common knowledge and methods/techniques for studying complex networked systems with interesting (e.g. emergent) phenomena - the kickoff was ambitious, with talks from 5 people supposed to be 10 mins each (averaging 20 mins :)
Some ideas I thought of while listening:
1. Weak ties (long links) in modular systems (social nets, the brain, the internet) serve the same purpose as random perturbations (like mutation) do in optimisation tools (like Genetic Algorithms or Simulated Annealing) - to get you out of local minima. Most GAs work by cross-over, which implements parallel search in local areas of a fitness landscape (since similar genes share/cross over/breed and are successful or not similarly). I wonder if there is any literature on whether graphs that have a small (but non-zero) fraction of "escape routes" from the highly interconnected/modular/cliqueish structure of a small world are slightly more robust than purely hierarchical modular ones?
2. Second thought was about epidemics (and economics) - the Vickers report on the banking sector is basically quarantining domestic banks (building societies) from the high-risk (prostitution and drug user/gambling/casino) banking sector. On the other hand, sharing information properly (see Efficient Markets) would also work (see the IBM work above).
The difference is that a structural regulation is much easier to implement than a big bang transparent information regime. maybe we do one now, the other later - who knows?
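The "escape routes" intuition in thought 1 is easy to see numerically: a handful of random long-range edges sharply shrinks average path lengths on an otherwise locally-clustered graph. A pure-Python sketch (toy ring lattice and parameters of my own choosing):

```python
import random
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all reachable pairs (BFS)."""
    total, pairs = 0, 0
    for s in range(len(adj)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring(n, k=2):
    """Ring lattice: each node linked to its k nearest neighbours each side."""
    return {i: {(i + d) % n for d in range(-k, k + 1) if d} for i in range(n)}

rng = random.Random(42)
n = 100
g = ring(n)
before = avg_path_length(g)

added = 0                      # add 5 random long-range "weak ties"
while added < 5:
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b and b not in g[a]:
        g[a].add(b)
        g[b].add(a)
        added += 1

after = avg_path_length(g)
print(before, after)           # the shortcuts strictly shrink path lengths
```

This is the classic small-world effect; whether such shortcuts also buy robustness over purely hierarchical modular graphs is exactly the open question above.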
The talk on citrus blight in Miami lemon trees was fun - it reminded me that plants are (genetically) a lot easier than animals (c.f. fluphone :)
The map of the spread of the blight looked really like the map of the nuclear tests recently shown on youtube (esp. for Anil :)
One nice name check was the work on neural structures and VLSI that showed Rent's Law applies to both - cute (but should we add weak ties to our multicore systems - one for Steve Furber maybe?)
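For the curious, Rent's Law says the terminal count T of a block of G gates follows a power law, T = t·G^p, and the exponent is easily recovered with a log-log fit. A small sketch on synthetic data (values made up purely for illustration):

```python
import math

# Hypothetical block sizes (gates) and external connections (terminals),
# generated from Rent's rule T = t * G**p with t=3, p=0.6.
G = [4, 16, 64, 256, 1024]
t, p_true = 3.0, 0.6
T = [t * g ** p_true for g in G]

# Least-squares slope of log T against log G recovers the Rent exponent.
logG = [math.log(g) for g in G]
logT = [math.log(x) for x in T]
n = len(G)
mg, mt = sum(logG) / n, sum(logT) / n
p_est = (sum((a - mg) * (b - mt) for a, b in zip(logG, logT))
         / sum((a - mg) ** 2 for a in logG))
print(round(p_est, 3))  # recovers 0.6
```

The cited work's point is that both neural wiring and VLSI layouts exhibit this same power-law scaling of external connections against module size.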
Anyhow, this looks like a very good (young, active, enthusiastic, smart) initiative - they will be having a bi-weekly seminar series starting pretty soon - probably coordinated with the statslab's networking series....
(For people too young to recall, Rolls Royce actually went bankrupt in the 1970s trying to make carbon fiber turbine blades work - in the end, a government bailout fixed it, and they are OK. The problem they hit was that the fibers in the original blades weren't knit in enough different directions - a problem shared with the fiberglass bodywork on the Reliant Scimitar (and Robin), which would shatter under fairly light impact into lots of dangerous shards. The solution is to sew 3 dimensions of fiber (much more expensive/complex, but immensely strong, and also tunable for different flexibility in any given dimension) into the matrix - the recent A380 engine problem wasn't design, but manufacturing process...)
Kiran Rachuri@srg.cl $
Day 2 of MobiCom 2011 started with my talk on SociableSense. Fourteen papers were presented over four sessions, including two best papers.
SociableSense: Exploring the Trade-offs of Adaptive Sampling and Computation Offloading for Social Sensing, Kiran K. Rachuri, Cecilia Mascolo, Mirco Musolesi, and Peter J. Rentfrow (University of Cambridge, United Kingdom)
Our work. Details at:
Overlapping Communities in Dynamic Networks: Their Detection and how they can help Mobile Applications, Nam P. Nguyen, Thang N. Dinh, Sindhura Tokala, and My T. Thai (University of Florida, USA)
A better understanding of mobile networks in terms of overlapping communities, underlying structure and organisation helps in developing efficient applications such as routing in MANETs, worm containment, and sensor reprogramming in WSNs. So the detection of network communities is important; however, these networks are large and dynamic, and communities overlap. Can community detection be performed in a quick and efficient way?
They propose a two phase limited input dependent framework to address this. Phase 1: basic communities detection (basic communities are dense parts of the networks). Phase 2: update network communities when changes are introduced, i.e., handle: adding a node/edge, and removing a node/edge. The evaluation is based on MIT reality mining data. They evaluate the proposed scheme with respect to two applications: routing in MANETs and worm containment.
Detecting Driver Phone Use Leveraging Car Speakers, Jie Yang and Simon Sidhom (Stevens Institute of Technology, USA); Gayathri Chandrasekaran and Tam Vu (Rutgers University, USA); Hongbo Liu (Stevens Institute of Technology, USA); Nicolae Cecan (Rutgers University, USA); Yingying Chen (Stevens Institute of Technology, USA); Marco Gruteser and Richard P. Martin (Rutgers University, USA)
(Joint Best Paper Award)
80% of people talk on the cell phone while driving. The consequences can be dangerous (18% of accidents). They claim that hands-free devices do not help because of the cognitive load on the driver. Several mobile apps in the market try to solve this (ZoomSafer, iZup, CellSafety). Recent measures:
-hard blocking: jammers, blocking calls etc
-soft interaction: delay calls, route to voice mail, automatic reply
Current apps that actively prevent cell phone use in a vehicle only detect whether the phone is in a vehicle or not, through GPS, handover, signal strength, speedometer etc. None of them can tell whether the phone is used by the driver or a passenger. The authors use an acoustic ranging approach to solve this problem: they identify the position of the cell phone relative to the car's speakers, with the speakers emitting different sounds at different times. The cell phone mic captures a wider frequency range than human hearing, so the beep frequency is set outside the user's hearing range. Evaluation shows that the detection accuracy is over 90%.
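The ranging step itself is simple once beep emission and arrival times are known; a toy sketch (made-up timings, and ignoring clock synchronisation and multipath, which the real system must handle):

```python
SPEED_OF_SOUND = 343.0  # metres/second at room temperature

def distances_from_beeps(emit_times, arrival_times):
    """Distance to each speaker, given when its (inaudible) beep was
    emitted and when it arrived at the phone's microphone."""
    return [(arrive - emit) * SPEED_OF_SOUND
            for emit, arrive in zip(emit_times, arrival_times)]

# Toy numbers: two front speakers beep 0.5 s apart (values made up).
emit = [0.0, 0.5]
arrive = [0.0029, 0.5058]
d_left, d_right = distances_from_beeps(emit, arrive)
print(d_left < d_right)  # phone nearer the left (driver-side) speaker
```

Comparing the distances to the left and right speakers is what lets the system place the phone on the driver's or passenger's side of the car.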
I Am the Antenna: Accurate Outdoor AP Location Using Smartphones, Zengbin Zhang, Xia Zhou, Weile Zhang, Yuanyang Zhang, Gang Wang, Ben Y. Zhao, and Haitao Zheng (University of California at Santa Barbara, USA)
The density of APs in the environment is very high. How to find the location of an AP? Conventional AP location methods:
- Directional antenna: Fast, very accurate but expensive
- Signal map: Simple but time consuming
- RSS gradient: low measurement overhead but low accuracy
Their solution is based on the effect of the user's orientation relative to an AP on RSS. The body of the user can affect the signal (they observed around a 13 dB difference). They also tested the generality of the effect with multiple phones, protocols, different users, and environments, and the RSS profiles all followed the same trend.
Evaluation is in a campus setting, with three scenarios: 1. simple line of sight (no blocks); 2. complex line of sight (vehicles etc.); 3. non-line of sight (line of sight completely blocked). Metric: absolute angular error, i.e. detected direction minus actual direction. Results: error < 30 degrees in 80% of cases in simple LOS; error < 65 degrees in 80% of cases in non-LOS.
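The angular-error metric needs a wrap-around at 360 degrees to be meaningful; a tiny helper (my formulation, not the authors' code):

```python
def angular_error(detected, actual):
    """Absolute angular error in degrees, accounting for wrap-around,
    so 350 vs. 10 degrees counts as 20 degrees apart, not 340."""
    diff = abs(detected - actual) % 360
    return min(diff, 360 - diff)

print(angular_error(350, 10))  # 20
print(angular_error(90, 60))   # 30
```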
SESSION: Cellular Networks
Traffic-Driven Power Saving in Operational 3G Networks, Chunyi Peng, Suk-Bok Lee, Songwu Lu, and Haiyun Luo (University of California at Los Angeles, USA)
Transmission power of base stations increases linearly with traffic load. The cooling power stays constant and is comparable to the transmission power. As a result, high energy is consumed even at zero traffic. Existing solutions do not address practical issues and follow a theoretical analysis. In this work, they propose a traffic-driven approach that exploits traffic dynamics to turn off under-utilised BSs for system-wide energy efficiency. They claim that traffic is quite predictable at the base station. There's a lot of potential to save energy in quiet hours but also in peak hours. Their solution also tries to be compatible with the current 3G standard/deployment. Issues addressed: Issue 1: how to satisfy location-dependent coverage and capacity constraints? Issue 2: how to estimate traffic load?
Solution: based on profiling - estimate the traffic envelope via profiling and leverage its near-term stability. The set of BSs active in idle hours should be a subset of the ones active in peak hours, their condition being that a BS is not switched more than once per day, while still providing location-dependent capacity. Their estimator is a moving average over 24 daily intervals. Frequent on/off switching is undesirable - it takes several minutes - so switching should be driven by traffic characteristics.
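A minimal sketch of such a profiled envelope - an exponentially-weighted moving average over 24 hourly slots, with a made-up per-BS capacity; this is my simplification, not their exact estimator:

```python
import math

def update_envelope(envelope, day_profile, alpha=0.25):
    """Exponentially-weighted moving average over 24 hourly load slots."""
    return [(1 - alpha) * e + alpha * d
            for e, d in zip(envelope, day_profile)]

BS_CAPACITY = 5.0  # hypothetical per-BS capacity, arbitrary units

env = [10.0] * 24                     # yesterday's estimated envelope
today = [20.0] * 12 + [4.0] * 12      # busy morning, quiet evening (made up)
env = update_envelope(env, today)

# BSs needed per hour to cover the envelope (at least one stays on):
active = [max(1, math.ceil(load / BS_CAPACITY)) for load in env]
print(active[0], active[23])  # 3 2
```

Deciding the daily on/off set from `active` (rather than reacting hour by hour) is what keeps each BS from being switched more than once per day.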
MOTA: Engineering an Operator Agnostic Mobile Service, Supratim Deb, Kanthi Nagaraj, and Vikram Srinivasan (Bell Labs Research, India)
Cellular coverage varies with location. Users may not be happy with a single service provider, and there is a case for users choosing services from multiple providers. Dual-SIM phones are already popular in Asia, with users choosing services based on the cost from each provider. Goal of this work: the ability for users to join the network of their choice at will, based on location, pricing, and applications.
Solution: change operator from the user side. They consider several options. Option 1: a centralised approach making the decisions, but operators are unlikely to share network planning information. Option 2: users choose based on signal strength from different base stations; this is insufficient and can result in poor user experience.
They propose MOTA, in which a service aggregator is introduced: a new intermediary between users and operators, responsible for maintaining customer relationships and handling all control-plane operations that cannot be handled by a single operator. They also use a utility function that incorporates fairness. Evaluation is based on data from one of the largest cellular operators in India.
Anonymization of Location Data Does Not Work: A Large-Scale Measurement Study, Hui Zang and Jean Bolot (Sprint Applied Research, USA)
Call Detail Records (CDRs) keep a lot of information about users' phone calls and can be linked to a location. They can be used for marketing, security, LBS and mobility modelling; however, privacy might be breached if such data is released. The traditional approach to protecting users' privacy is anonymisation; however, this work shows that it does not work. A CDR contains: mobile id, time of call, call duration, start cell id, start sector id, end sector id, call direction, caller id. If the mobile id and caller id are anonymised, can we still identify the user? It has been shown that with gender, zipcode and birthdate, 87% of the USA population can be identified.
Their dataset consists of more than 30 billion call records made by 25 million cell phone users across the USA. Their approach is to infer top N locations for each user and correlate this with publicly available information such as census data. They show that the top 1 location does not yield small anonymity sets, but top 2 and 3 locations do at the sector or cell-level granularity. They also provide possible solutions based on spatial and time domain approaches for publishing location data without compromising on privacy.
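The "anonymity set" idea here is easy to make concrete: group users by their tuple of top-N locations and count how many share each tuple (toy data and my own formulation, not the authors' pipeline):

```python
from collections import Counter

def anonymity_sets(users_top_locations):
    """Group users by their tuple of top-N locations; the size of each
    group is that user's anonymity set under this quasi-identifier."""
    counts = Counter(tuple(locs) for locs in users_top_locations.values())
    return {u: counts[tuple(locs)]
            for u, locs in users_top_locations.items()}

# Toy data: top-2 cell sectors per (already "anonymised") user.
tops = {"u1": ["A", "B"], "u2": ["A", "B"], "u3": ["A", "C"]}
sets_ = anonymity_sets(tops)
print(sets_["u3"])  # 1 -> u3 is uniquely re-identifiable from top-2 locations
```

An anonymity set of size 1 means the location tuple, joined with public data like the census, pins down a single person - which is why top-2/top-3 locations at sector granularity break the anonymisation.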
SESSION: Infrastructureless Networking.
Enhance & Explore: An Adaptive Algorithm to Maximize the Utility of Wireless Networks, Adel Aziz and Julien Herzen (École Polytechnique Fédérale de Lausanne, Switzerland); Ruben Merz (Deutsche Telekom Laboratories, Germany); Seva Shneer (Heriot-Watt University, UK); andPatrick Thiran (École Polytechnique Fédérale de Lausanne, Switzerland)
This work addresses the problem of providing efficiency and fairness in wireless networks. Their approach is based on maximising a utility function; they propose an algorithm called Enhance and Explore that does this. The challenges in designing the scheme are: it must work on the existing MAC, without network-wide message passing, and the wireless capacity is unknown a priori.
They consider two scenarios: WLAN setting: inter-flow problem and optimally allocate resources. Multi-hop setting: intra-flow problem and avoid congestion. They show analytically that the proposed algorithm converges to a point of optimal utility. Evaluation is through experiments in a testbed and simulations in ns-3.
Scoop: Decentralized and Opportunistic Multicasting of Information Streams, Dinan Gunawardena, Thomas Karagiannis, and Alexandre Proutiere (Microsoft Research Europe, UK); Elizeu Santos-Neto (University of British Columbia, Canada); and Milan Vojnovic (Microsoft Research Europe, UK)
This work aims at leveraging mobility for content delivery in networks of devices experiencing intermittent connectivity. Main challenge: routing/relaying strategies. Existing solutions include epidemic routing. The drawbacks of existing works are simplifying assumptions on mobility, e.g. that inter-contact times are exponentially distributed. This work proposes SCOOP, which
- maximizes some global system objective
- accounts for storage and transmission costs
- multi-point to multi-point communications
- model-free (allows general node mobility)
There is a necessity for a mobility-model-free system. They used classic traces: UCSD, Infocom, DieselNet and SF Taxis. They show that two hops are enough to reach a large percentage of nodes. They also show that the delays on paths between a source and a destination are positively correlated. They aim to identify the strategy that optimally exploits mobility, buffer constraints and relays. However, this is a hard problem; they use a sub-gradient algorithm to solve it efficiently. Evaluation is through numerical experiments. They compared SCOOP with an idealized version, R-OPT, of the RAPID algorithm (which assumes full global knowledge). Performance with respect to delivery ratio is very close to R-OPT.
R3: Robust Replication Routing in Wireless Networks with Diverse Connectivity, Xiaozheng Tie, Arun Venkataramani (University of Massachusetts Amherst, USA) and Aruna Balasubramanian (University of Washington).
Wireless routing protocols are designed for specific target environments, like well-connected meshes or intermittently connected MANETs. The problem with this is that such routing protocols are fragile and perform poorly outside their target environment. Wireless networks exhibit spatio-temporal diversity, so a compartmentalized design is not efficient. Can we design a protocol that ensures robust performance across networks?
They propose to use replication routing. They present a model to quantify replication gain, which depends on the path delay distributions and not just their expected value. They study the average replication gain with respect to the number of paths using DieselNet-DTN and Haggle traces. They propose R3: a link-state protocol that selects replication paths using the proposed model. The scheme also adapts the replication to load.
Evaluation is both on the DieselNet DTN testbed and a mesh testbed. Simulation validation is also performed using the DieselNet deployment, compared with several protocols. Simulation based on the Haggle trace shows that R3 reduces delay by up to 60% and increases goodput by up to 30% over SWITCH. Simulations on DieselNet-Hybrid show that R3 improves median delay over SWITCH by 2.1x.
Flooding-Resilient Broadcast Authentication for VANETs, Hsu-Chun Hsiao, Ahren Studer, Chen Chen, and Adrian Perrig (Carnegie Mellon University, USA); and Fan Bai, Bhargav Bellur, and Aravind Iyer (General Motors Research)
Each vehicle possesses an On Board Unit (OBU) and broadcasts info for safety and convenience. This information has to be secured. The IEEE 1609.2 standard suggests using an ECDSA signature for these messages; however, verification is expensive, taking around 22 ms, which is a problem if many messages arrive in a short time. Can we reduce this verification delay? Core idea of this work: entropy-aware authentication.
They propose two methods: (1) FastAuth exploits the predictability of future messages, using hashes to verify location updates instead of ECDSA; the result is 1 us instead of 22000 us in the ideal case. (2) SelAuth does selective verification before forwarding. They also reduce the communication overhead. Evaluation is based on real vehicle traces (4 traces), each generated by driving a car along a 2-mile path for 2 hours. Results show that signature generation is 20x faster and verification 50x faster compared to ECDSA.
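FastAuth's hash-based verification is in the spirit of one-way hash chains; here is a generic sketch of that idea (not the paper's exact construction), where one cheap hash replaces a full signature verification:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def make_chain(seed, n):
    """One-way hash chain: chain[i+1] = H(chain[i]). The final element
    is published (e.g. signed once, up front) as the trusted anchor."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

chain = make_chain(b"secret-seed", 5)
anchor = chain[-1]

# Later, the sender reveals chain[-2]; the receiver verifies it with a
# single hash instead of an expensive ECDSA check:
revealed = chain[-2]
print(h(revealed) == anchor)  # True
```

The one-wayness of the hash means an attacker who sees the anchor cannot forge the next revealed value, while honest verification costs microseconds rather than milliseconds.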
E-MiLi: energy-Minimizing Idle Listening in Wireless Networks, Xinyu Zhang and Kang G. Shin (University of Michigan-Ann Arbor, USA)
(Joint Best Paper Award)
Wi-Fi is a popular means of wireless Internet connection. However, Wi-Fi is a main energy consumer in mobile devices, 14x higher than GSM on a phone. This is due to the cost of idle listening; moreover, idle-listening power is comparable to TX/RX power. Existing solutions are variants of PSM - but is this good enough? No, because of carrier sensing time. To overcome this, they propose E-MiLi, which reduces the power consumption of idle listening by down-clocking the radio in idle-listening mode. Down-clocking by 1/4 saves 47.5% of the power. The key challenge is how to decode a packet, given that the receiver's sampling rate should be no less than the sender's clock rate. The proposed solution is to separate detection from decoding: they add a preamble to the 802.11 packet that can be detected at low clock rates.
One issue with this is false triggering: packets intended for one client may trigger all other clients, wasting energy. The second problem is the energy overhead caused by large preambles. The solution is minimum-cost address sharing, allowing multiple nodes to be assigned the same address, allocated according to channel usage. There's a delay caused by clock-rate switching too; to reduce this they use opportunistic downclocking. Evaluation covers packet detection (software-radio-based experiments), energy consumption (through Wi-Fi traces), and simulations using ns-2. Results: when SNR is above 8 dB, the missed-detection probability is almost zero, and they achieve close to 40% energy savings.
Refactoring Content Overhearing to Improve Wireless Performance, Shan-Hsiang Shen, Aaron Gember, Ashok Anand, and Aditya Akella (University of Wisconsin-Madison, USA)
The main aim is to improve wireless performance by leveraging overheard packets. Several techniques are available currently, but none of them leverage duplicate data. This work takes a content-based overhearing approach and suppresses duplicate data transmission. Ditto was the first work to use content-based overhearing, but it works at the granularity of objects, does not remove sub-packet redundancy, and only works for some applications. This work presents REfactor content overhearing:
(1) the scheme puts content overhearing at the network layer, which yields savings across applications. The transport-layer approach (used in Ditto) ties data to an application or object chunk; the network-layer approach reduces redundancy across all flows. The transport approach also requires payload reassembly.
(2) the scheme identifies sub-packet redundancy, which saves transmission time. Ditto only works on 8-32 KB object chunks, whereas the proposed scheme operates at a finer granularity. This yields savings from redundancy as small as 64 bytes, and allows leveraging any overhearing, even of a single packet.
Test-bed experiments show a 6 to 20% improvement in goodput; simulation results also show an improvement of around 20%.
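The sub-packet redundancy idea can be sketched with a toy cache of overheard chunks. This is only an illustration: the fixed 64-byte chunking, truncated SHA-1 fingerprints, and single object standing in for the sender's and receiver's synchronised caches are all my simplifications, not the mechanism in the paper.

```python
import hashlib

CHUNK = 64  # minimum redundancy granularity mentioned in the talk

class OverhearingCache:
    """Toy cache of packet chunks a receiver has overheard.

    One object stands in for the (synchronised) caches kept by both
    sender and receiver in this sketch."""

    def __init__(self):
        self.chunks = {}  # 8-byte fingerprint -> 64-byte chunk

    def _fp(self, chunk):
        return hashlib.sha1(chunk).digest()[:8]

    def observe(self, payload):
        """Record every aligned CHUNK-sized piece of an overheard payload."""
        for off in range(0, len(payload) - CHUNK + 1, CHUNK):
            chunk = payload[off:off + CHUNK]
            self.chunks[self._fp(chunk)] = chunk

    def encode(self, payload):
        """Sender side: replace chunks the receiver already holds with tokens."""
        out, off = [], 0
        while off + CHUNK <= len(payload):
            chunk = payload[off:off + CHUNK]
            fp = self._fp(chunk)
            if fp in self.chunks:
                out.append(('ref', fp))         # 8 bytes on air instead of 64
            else:
                out.append(('raw', chunk))
            off += CHUNK
        if off < len(payload):
            out.append(('raw', payload[off:]))  # short tail stays raw
        return out

    def decode(self, tokens):
        """Receiver side: expand tokens back into the original payload."""
        return b''.join(self.chunks[v] if t == 'ref' else v
                        for t, v in tokens)
```

A real system would use content-defined chunk boundaries (e.g. Rabin fingerprints) rather than fixed offsets, so that redundancy is found even when data shifts within a packet.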
Distributed Spectrum Management and Relay Selection in Interference-Limited Cooperative Wireless Networks, Zhangyu Guan (Shandong University, P. R. China); Tommaso Melodia (State University of New York at Buffalo, USA); Dongfeng Yuan (Shandong University, P. R. China); and Dimitris A. Pados (State University of New York at Buffalo, USA)
Emerging multimedia services require high data rates. This work aims to maximize the capacity of wireless networks by leveraging frequency and spatial diversity: frequency via dynamic spectrum access, which improves spectral efficiency, and spatial via cooperative communication, which enhances link connectivity. Problem: maximize the sum utility (capacity, log-capacity) of multiple concurrent traffic sessions by jointly optimizing relay selection (whether to cooperate or not) and direct transmission. The problem is formulated as a mixed-integer non-convex program, which is NP-hard. They propose a solution based on branch and bound that is able to find a globally optimal solution; polynomial running time is not guaranteed, but in practice it works well. Evaluation is based on simulations. Results show that the proposed schemes converge very quickly: the centralized algorithm achieves at least 95% of the global optimum, and the distributed schemes are very close to optimal.
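To illustrate the branch-and-bound idea, here is a toy version for a drastically simplified relay-selection problem: each session either transmits directly or cooperates, at most `budget` sessions may cooperate, and we maximize the sum rate. The instance, the budget constraint, and the bounding function are all invented for illustration; the paper's actual formulation is a far richer mixed-integer non-convex program.

```python
def branch_and_bound(utility, budget):
    """Toy branch and bound: utility[i] = (direct_rate, cooperative_rate)
    for session i; at most `budget` sessions may cooperate.
    Returns (best sum rate, 0/1 cooperation vector)."""
    n = len(utility)
    best = [float('-inf'), None]

    def bound(i, acc):
        # Optimistic bound: remaining sessions take their better option,
        # ignoring the budget. Never underestimates, so pruning is safe.
        return acc + sum(max(u) for u in utility[i:])

    def recurse(i, used, acc, x):
        if i == n:
            if acc > best[0]:
                best[0], best[1] = acc, x[:]
            return
        if bound(i, acc) <= best[0]:
            return  # prune: this subtree cannot beat the incumbent
        for choice in (1, 0):  # branch: cooperate first, then direct
            if choice == 1 and used == budget:
                continue
            recurse(i + 1, used + choice,
                    acc + utility[i][choice], x + [choice])

    recurse(0, 0, 0.0, [])
    return best[0], best[1]
```

The bounding step is what keeps branch and bound usable in practice despite the exponential worst case, which matches the talk's remark that polynomial time is not guaranteed but the method works well on real instances.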
Salvatore Scellato@srg.cl $
Some social scientists have suggested that the advent of fast long-distance travel and cheap online communication tools might have caused the "death of distance": as described by Frances Cairncross, the world appears to shrink as individuals connect and interact with each other regardless of the geographic distance that separates them. Unfortunately, the lack of reliable geographic data about large-scale social networks has hampered research on this specific problem.
However, the recent growing popularity of location-based services such as Foursquare and Gowalla has unlocked large-scale access to where people live and who their friends are, making it possible to understand how distance and friendship ties relate to each other.
In a recent paper to appear at the upcoming ICWSM 2011 conference, we study the socio-spatial properties arising between users of three large-scale online location-based social networks. We discuss how distance still matters: individuals are much more likely to create social ties with people living nearby than with people farther away, even though strong heterogeneity appears across different users.
Kiran Rachuri@srg.cl $
Mobile smartphones represent a perfect platform for building systems that capture the behaviour of users in the workplace, as they are ubiquitous, unobtrusive, and sensor-rich devices. However, building such systems raises many challenges: mobile phones are battery powered, and the energy consumption of sensor sampling, data transmission, and resource-intensive local computation is high; the phone sensors are inaccurate and not specifically designed for capturing user behaviour; and, finally, local and cloud resources should be used efficiently, taking the changing availability of phone resources into account.
We address the above technical challenges for supporting social sensing applications in a paper to be presented at the upcoming ACM MobiCom '11 conference.
In the paper we describe the design, implementation, and evaluation of SociableSense, an efficient and adaptive platform based on off-the-shelf mobile phones that supports social applications aiming to provide real-time feedback to users or collect data about their behaviour.
The key components of the system are:
- A sensor sampling component that adaptively controls the sampling rate of the accelerometer, Bluetooth, and microphone sensors while balancing energy-accuracy-latency trade-offs using reinforcement learning mechanisms. The learning mechanism adjusts the sampling rate of the sensors according to the user's context, in terms of the events observed: the sensors are sampled at a high rate when interesting events are observed and at a low rate when there are no events of interest.
- A computation distribution component based on multi-criteria decision theory that dynamically decides where to perform the computation of tasks, considering the importance given to each dimension: energy consumption, latency, and data sent over the network. For each classification task that needs to be processed, the scheme evaluates a utility function to decide how to distribute the subtasks of the classification between local and cloud resources.
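The utility-based placement decision could look roughly like the following sketch. The cost numbers, weight values, and min-max normalisation here are invented for illustration; SociableSense's actual utility function may differ.

```python
def utility(costs, weights, worst):
    """Weighted sum of normalised benefits: lower cost -> higher utility."""
    return sum(weights[k] * (1.0 - costs[k] / worst[k]) for k in costs)

def place_task(local, remote, weights):
    """Decide where to run one classification task.

    `local` and `remote` map each dimension ('energy', 'latency', 'traffic')
    to an estimated per-task cost; `weights` expresses the importance the
    experiment designer assigns to each dimension."""
    worst = {k: max(local[k], remote[k]) or 1.0 for k in local}  # avoid /0
    if utility(local, weights, worst) >= utility(remote, weights, worst):
        return 'local'
    return 'cloud'
```

For example, weighting energy heavily pushes heavy classification tasks to the cloud, while weighting network traffic heavily keeps them on the phone.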
We show through several micro-benchmark tests that the adaptive sampling scheme adjusts the sampling rate of sensors dynamically based on the user's context and balances energy-accuracy-latency trade-offs. We also evaluate the computation distribution scheme in terms of selecting the best configuration given the importance assigned to each performance dimension, and show that the computation distribution scheme efficiently utilises the local and the cloud resources and balances energy-latency-traffic trade-offs by considering the requirements of the experiment designers.
To further demonstrate the effectiveness of the SociableSense platform, we also conduct a social experiment using an application that determines the sociability of users based on colocation and interaction patterns. The use of the computation distribution scheme leads to approximately 28% more battery life, 6% less latency per task, and 3% less data transmitted over the network per task, compared with a model where all classification tasks are computed remotely.
Kiran K. Rachuri, Cecilia Mascolo, Mirco Musolesi, Peter J. Rentfrow. SociableSense: Exploring the Trade-offs of Adaptive Sampling and Computation Offloading for Social Sensing. In Proceedings of the 17th ACM International Conference on Mobile Computing and Networking (MobiCom '11), Las Vegas, USA. [PDF]
a few days ago, i attended the main social networks conference/gathering in the UK.
there was an interesting discussion about the future of the elsevier journal "social networks". apparently, if you want an easy time getting in, you need to do research on 'methodology'. frankly, IMHO, this is the best way to kill the journal. alas, the journal's table of contents already reflects this decision; that is why i have rarely found interesting articles in this journal, while first monday and AJS are full of great contributions. don't get me wrong - i love methodological contributions to social networks (tom snijders and sinan aral are doing fantastic work in this area). i just think that methodological contributions are only a tiny part of a larger picture, a picture that hosts amazing work by, e.g., duncan watts, danah boyd, and michael macy (all in the US). instead, UK researchers in the area of social networks seem to be anchored to pretty "traditional" research. at least, that was my impression based on the talks at UK SNA, but i will be very happy to be proven wrong ;) and there are notable exceptions in the UK - e.g., dunbar of oxford, bernie hogan of the OII, and a few others…
here are a few notes taken during the talks.
cecile emery studied the relationship between the big five personality traits and the emergence of leaders. she considered not only leaders' personalities but also followers'. she found that leaders high in conscientiousness and extraversion tend to attract followers over time, and that followers high in openness and conscientiousness tend to follow more. leader-follower pairs tend to differ on agreeableness and to be similar on openness.
agrita kiopa of georgia tech discussed a very interesting problem - how your friendship relations affect your work output. the main idea is that, to get something, you have to ask, so friendship becomes important at work too. they ran a longitudinal national study of US academic scientists in 6 disciplines between 2006 and 2010. women are overrepresented - i.e., 54% men, 46% women. friends are obtained via 6 name generators: role-based (collaborator, mentor), function-based (advice, discussing important issues), and naming close friends. 1600 ego networks were collected as a result - so, for each person, there are 6 ego networks, with considerable overlap among them on average. full professors have more friends than assistant professors (controlling for tenure). the main results are that friendship has no effect on advice seeking but does affect receiving introductions and getting reviews of, say, your papers. i hope she will devote a bit of future work to the enemy (competitor) network; personality might also be an interesting topic to study.
bernie hogan studied the correlates to social capital on Facebook. he used a mixed-method survey methodology and downloaded Facebook ego networks. he then focused on the question of whether your social capital is related to your (objective) network structure or to the way you (subjectively) perceive your network. Very interesting work.
tore opsahl's talk revisited the idea that small-world nets are ubiquitous. by contrast, he found that "small-world networks are far from as abundant as previously thought"
i attended netsci a couple of weeks ago. here is my stream of consciousness:
lada adamic talked about how information changes as it propagates through the blogosphere, and how she effectively modelled this change as a simple urn model. more in her upcoming ICWSM paper. her future research will look at how the sentiment of memes changes/evolves (a topic recently covered by jure leskovec).
former navy officer duncan watts presented a few macro-sociological lab experiments and field experiments that he and his colleagues ran on social media sites. he showed how sites such as Twitter, Facebook, and Mechanical Turk allow researchers to measure individual-level behaviour and interactions at massive scale in real time. the good news is that there are already guides for running experiments on these platforms (see, for example, the icwsm tutorial by paolacci and mason). the experiments he mentioned are fully reported in his latest book "Everything Is Obvious", whose main idea could be summarised as follows:
our intuition for human behaviour is so rich that we can "explain" essentially anything we observe. in turn, we feel we ought to be able to predict, manage, and direct the behaviour of others. yet when we try to do these things (in government, policy, business, marketing), we often fail. that is because, paradoxically, our intuition for human behaviour may actually impede our understanding of it; perhaps a more scientific approach would help. the book is about experiments whose goal was to understand human behaviour at large scale.
olivia woolley meza of max planck presented the results of a project that measured the impact of two events (i.e., the icelandic volcanic ash cloud and 9/11) on flight fluxes. these fluxes were modelled as a network, and metrics of interest were computed on it - for example, network fragmentation (the fraction of nodes remaining in the largest connected component) and network inflation (how distances in the network grow). this study provided a few intuitive take-aways, including:
- regions geographically closer to an attack are more affected
- between-region distancing is driven by hubs
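the fragmentation metric mentioned above is easy to compute; here is a minimal sketch (plain BFS on a toy undirected graph - the airport data itself is of course not reproduced here):

```python
from collections import deque

def fragmentation(nodes, edges):
    """fraction of nodes remaining in the largest connected component"""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, largest = set(), 0
    for start in adj:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:                 # plain BFS over one component
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        largest = max(largest, size)
    return largest / len(adj)
```

an intact network scores 1.0; removing airports (nodes) or routes (edges) after a disruption pushes the score down as the giant component breaks apart.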
with a presentation titled "A universal model for mobility and migration patterns", filippo simini (supervised by marta gonzalez) turned to the question of whether it is possible to predict the number of flying/public-transport commuters between two locations. the law of gravitation for masses (the gravity model) does not work so well for people, which is why they proposed a new migration model. the main idea of this model is that an individual looks for a better job outside his home country and, as such, accepts the job in the closest country whose benefit is higher than that of his home country. each country has a benefit value - a composite measure based on income, working hours, and general employment conditions. one take-away was that population (and not distance) is the key predictor of mobility fluxes.
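the decision rule described above can be sketched in a few lines. the composite-benefit weights below are entirely made up - this illustrates the shape of the rule, not simini's actual model or its calibration:

```python
def benefit(income, hours, employment_rate, w=(0.5, 0.3, 0.2)):
    """toy composite benefit: higher income and employment are better,
    longer working hours are worse (weights entirely made up)"""
    return w[0] * income - w[1] * hours + w[2] * employment_rate

def choose_destination(home_benefit, places):
    """`places` is a list of (distance, benefit) pairs; the migrant picks
    the closest place whose benefit beats home, or stays put (None)"""
    better = [p for p in places if p[1] > home_benefit]
    return min(better, key=lambda p: p[0]) if better else None
```

note how distance only acts as a tie-breaker among sufficiently attractive destinations, which is consistent with the take-away that population, not distance, drives the fluxes.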
giovanni petti of imperial skilfully delivered a very interesting presentation about a project called freeflow, whose partners include UK universities in london, york, and kent. in this project, they collected data from sensors placed under speed bumps that count the cars that pass and measure the time each car spends on a bump. the traffic data was arranged in a graph and, to identify congested areas, they ran a community detection algorithm on it. it turns out that london behaves like one large giant cluster in terms of traffic flow, because of long-range spatial correlations.
tamás vicsek studied pigeon flocks and, more generally, the rules according to which birds fly with each other. the main finding is that each member of a flock takes a specific role in a hierarchy, and the arrangement of roles also changes over time.
marta gonzalez studied the mobility of people living in different cities (including non-US ones) and found a few regularities: for example, residents of well-off areas tend to travel nearby (maybe because their areas have plenty of internal resources). another interesting point is that trip-length distributions at city scale are well described by a weibull distribution. marta also tried to reconstruct the temporal activity of people living in chicago using SVD (eigen-decomposition): temporal activity is reconstructed using only the first 21 eigenvalues (which she called eigenactivities), and the reconstruction is also predictive of people's social demographics. the main point of this modelling exercise was to show that techniques such as principal component analysis combined with k-means are great tools for detecting clusters in human activity.
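the truncated-SVD reconstruction marta described can be sketched in a few lines of numpy. the matrix here is a toy stand-in for her (individuals x time-slots) activity data, on which she kept the top 21 components:

```python
import numpy as np

def truncated_reconstruction(X, k):
    """rebuild an (individuals x time-slots) activity matrix from its
    top-k singular components - the "eigenactivities" of the talk"""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```

if the activity matrix is approximately low-rank, a small k reproduces it almost exactly, and each person's k coordinates in the eigenactivity basis are what one would then feed to k-means to find the activity clusters.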