Over the last couple of days I was at these two events:
1. EPSRC Network of Networking 2-day workshop on Photonics - see
Very interesting to see how coherent the UK's academic and industry photonics community is - they have a pretty clear roadmap for the next 5 years and then some nice challenges - not a lot for CS (still) until they can do something cool in a) integrating optical links onto processors and b) building more viable (in scale/integration/power terms) gates... but in terms of price/performance they pretty much match Moore's law (terminating a 10GigE for 10 bucks is an amazing achievement!)
2. Rustat conference on UK Cybersecurity
This will almost certainly be blogged by Ross or someone else in the security group as they were there en masse. I chaired a session on UK skills, and a couple of good outcomes were support from research councils for more PhDs (whether this leads to money remains to be seen) and the idea that CS graduates who end up on the board as CIOs should make sure they have good business skills so they aren't looked down on by other board members as just a sort of uber "IT guy"...
Lots of very interesting corridor conversations. The UK government budget in this space is 600M quid, so many SMEs are scampering after it :) In general, we seem to be in OK shape (a government policy doc on cybersecurity is out soon; the recent Chatham House report (can't find the link right now) is apparently less rosy, but still very useful). Expect to see more details here soon:
We're having a NATO workshop on this in 10 days at Wolfson in Cambridge... Rex Hughes there is coordinating it with the Cambridge Science and Policy group.
Finally, I suggested a homeopathic remedy for cyberattacks might be to dilute the Stuxnet virus, say, 10^11 times in some random bits (e.g. Windows Vista kernel code) and add it to your site.
Oh yeah, and can someone tell me just what the ICT KTN does?? :)
3rd and final day... mainly about PHY/MAC layer and theory work
The day started with a keynote by Farnam Jahanian (University of Michigan, NSF). Jahanian talked about some of the opportunities behind cloud computing research. In his opinion, cloud computing can enable new solutions in fields such as health care and environmental issues. As an example, it can help enforce a greener and more sustainable world and predict natural disasters (e.g. the recent Japanese tsunami) with the support of a wider sensor network. His talk concluded with a discussion of some of the challenges facing computer science research in the US (which seem to be endemic in other countries too). He highlighted that despite the fact that the market demands more computer science graduates, few students are joining related programs at every level, including high school.
Session 7. MAC/PHY Advances.
No Time to Countdown: Migrating Backoff to the Frequency Domain, Souvik Sen and Romit Roy Choudhury (Duke University, USA); and Srihari Nelakuditi (University of South Carolina, USA)
Conventional WiFi networks perform channel contention in the time domain. This approach wastes a lot of channel time on backoff. Back2F is a new way of performing channel contention in the frequency domain, treating OFDM subcarriers as randomised integers (i.e. instead of picking a random backoff length, each station picks a random subcarrier). The technique requires an additional listening antenna so that WiFi APs can learn the backoff values chosen by nearby APs and decide whether their own value is the smallest among those generated in close proximity. Each AP uses this knowledge to schedule transmissions after every round of contention. Moreover, by adding a second round of contention, the APs that collided in the first round can compete again, along with a few more APs. The performance evaluation was done in a real environment. The results show that the collision probability decreases considerably with Back2F with two contention rounds. Real-time traffic such as Skype sees a throughput gain, but Back2F is more sensitive to channel fluctuation.
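The contention mechanism is easy to picture with a toy simulation (a sketch under our own assumptions, not the authors' code): each AP "transmits" on a random subcarrier, every AP hears all occupied subcarriers via its listening antenna, and only the AP(s) holding the lowest index survive each round.

```python
import random

def contend(n_aps, n_subcarriers=16, rounds=2):
    """Toy model of Back2F-style frequency-domain contention
    (illustrative only, not the paper's implementation).

    Each AP signals on one randomly chosen subcarrier; every AP can
    hear all signalled subcarriers and keeps contending only if its
    own index is the lowest.  A second round thins out first-round
    ties."""
    winners = list(range(n_aps))
    for _ in range(rounds):
        picks = {ap: random.randrange(n_subcarriers) for ap in winners}
        lowest = min(picks.values())
        winners = [ap for ap, sc in picks.items() if sc == lowest]
    return winners  # more than one winner = a real collision on air

# Rough collision probability for 8 contending APs
trials = 5000
collisions = sum(len(contend(8)) > 1 for _ in range(trials))
print(f"collision probability with 2 rounds: {collisions / trials:.3f}")
```

Running the same loop with `rounds=1` makes the tie-breaking effect of the second round visible.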
Harnessing Frequency Diversity in Multicarrier Wireless Networks, Apurv Bhartia, Yi-Chao Chen, Swati Rallapalli, and Lili Qiu (University of Texas at Austin, USA)
Wireless multicarrier communication systems spread data over multiple subcarriers, but the SNR varies across subcarriers. In this presentation, the authors propose a joint integration of three solutions to reduce the side effects:
- Map symbols to subcarriers according to their importance.
- Effectively recover partially corrupted FEC groups and facilitate FEC decoding.
- MAC-layer FEC to offer different degrees of protection to the symbols according to their error rates at the PHY layer.
Their simulation and testbed results corroborate that a joint combination of all these techniques can increase throughput by 1.6x to 6.6x.
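The first of the three ideas can be sketched as a greedy mapping (illustrative only: the symbol names and SNR values below are made up, and the paper's actual algorithm is more involved): the most important symbols go to the subcarriers with the best SNR.

```python
def map_symbols(symbols_by_importance, subcarrier_snr):
    """Pair the i-th most important symbol with the i-th best
    subcarrier (a toy sketch of importance-aware mapping)."""
    order = sorted(range(len(subcarrier_snr)),
                   key=lambda i: subcarrier_snr[i], reverse=True)
    return {sym: sc for sym, sc in zip(symbols_by_importance, order)}

snr = [3.0, 12.5, 7.1, 9.8]                       # per-subcarrier SNR in dB (made up)
symbols = ["hdr", "I-frame", "P-frame", "audio"]  # most -> least important
print(map_symbols(symbols, snr))  # "hdr" lands on subcarrier 1 (12.5 dB)
```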
Beamforming on Mobile Devices: A First Study, Hang Yu, Lin Zhong, Ashutosh Sabharwal, and David Kao (Rice University, USA)
Wireless links present two invariants: spectrum is scarce while hardware is cheap. The fundamental waste in cellular base stations comes from the antenna design. Lin Zhong proposed passive directional antennas to minimise this issue. They used directional antennas to generate a very narrow beam with larger spatial coverage. They showed that this solution is practical despite the small form factor of smartphone antennas, is resilient to node rotation (only 2-3 dB lost compared to a static node), and does not affect handset battery life, especially on the uplink where the antenna's beam is narrower. The technique allows calculating the optimal number of antennas for efficiency. The system was evaluated both indoors and outdoors in stationary/mobile scenarios. The results show that it is possible to save a lot of power on the client, as power consumption drops as the number of antennas increases.
SESSION 8. Physical Layer
FlexCast: Graceful Wireless Video Streaming, S T Aditya and Sachin Katti (Stanford University, USA)
This is a scheme to adapt video streaming to wireless communications. Mobile video traffic is growing exponentially and users' experience is very poor because of channel conditions. MPEG-4 estimates quality over long timescales, but channel conditions change rapidly, which hurts video quality. Current video codecs are not equipped to handle such variations since they exhibit an all-or-nothing behaviour. The authors propose making quality proportional to instantaneous wireless quality, so a receiver can reconstruct a video encoded at a constant bit rate by taking into account information about the instantaneous network quality.
A Cross-Layer Design for Scalable Mobile Video, Szymon Jakubczak and Dina Katabi (Massachusetts Institute of Technology, USA)
One of the best papers at Mobicom'11. Mobile video is limited by the bandwidth available in cellular networks and by a lack of robustness to changing channel conditions. As a result, video quality must be adapted to the channel conditions of different receivers. They propose a cross-layer design for video that addresses both limitations. In their view, the problem is that compression and error protection convert real-valued pixels to bits and, as a consequence, destroy the numerical properties of the original pixels. In analog TV this was not a problem: there is a linear relationship between the transmitted values and the pixels, so a small perturbation in the channel was transformed into a small perturbation in the pixel value (though this was not efficient, as the data was not compressed).
SoftCast is as efficient as digital TV whilst also compressing data linearly (current compression schemes are not linear, which is why the numerical properties are lost). SoftCast transforms the video into the frequency domain with a transform called the 3D DCT. In the frequency domain, most temporal and spatial frequencies are zero, so the compression sends only the non-zero frequencies. As it is a linear transform, the output preserves the same properties. They ended the presentation with a demo showing the real gains of SoftCast over MPEG-4 when the SNR of the channel drops.
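The linearity argument can be illustrated in one dimension (a toy sketch; SoftCast itself applies a 3D DCT across whole frames): because the DCT is linear and orthonormal, a small perturbation added to the transmitted coefficients comes back out as an equally small perturbation on the reconstructed pixels, instead of the catastrophic failure a digital codec would show.

```python
import math

def dct(x):
    """Naive orthonormal 1-D DCT-II."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def idct(X):
    """Inverse of the orthonormal DCT-II above."""
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] / math.sqrt(N)
        s += sum(math.sqrt(2 / N) * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

pixels = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of made-up pixel values
coeffs = dct(pixels)
noisy = [c + 0.1 for c in coeffs]           # small channel perturbation
recon = idct(noisy)
err = max(abs(p - r) for p, r in zip(pixels, recon))
# linearity: a small perturbation in the channel stays small in pixel space
```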
Practical, Real-time Full Duplex Wireless, Mayank Jain, Jung Il Choi, Tae Min Kim, Dinesh Bharadia, Kannan Srinivasan, Philip Levis and Sachin Katti (Stanford University, USA); Prasun Sinha (Ohio State University, USA); and Siddharth Seth (Stanford University, USA)
This paper presents a full duplex radio design using signal inversion (based on a balanced/unbalanced (balun) transformer) and adaptive cancellation. The state of the art in RF full-duplex solutions is based on techniques such as antenna cancellation, which have several limitations (e.g. manual tuning, channel dependence). The new design supports wideband and high-power systems without imposing any limitation on bandwidth or power. The authors also presented a full duplex medium access control (MAC) design and evaluated the system using a testbed of 5 prototype full duplex nodes. The results look promising, so... now it's time to re-design the protocol stack!
Session 9. Theory
Understanding Stateful vs Stateless Communication Strategies for Ad hoc Networks, Victoria Manfredi and Mark Crovella (Boston University, USA); and Jim Kurose (University of Massachusetts Amherst, USA)
There are many communication strategies, depending on the network's properties. This paper explores an adaptive forwarding strategy that decides which communication state should be used, based on network unpredictability and network connectivity. Three network properties (connectivity, unpredictability, and resource contention) determine when state is useful. Data state is information about data packets; it is valuable when the network is not well connected, whilst control state is preferred when the network is well connected. Their analytic results (based on simulations on Haggle traces and DieselNet) show that routing is the right strategy for control state, DTN forwarding for data state (e.g. the Haggle Cambridge traces), and packet forwarding when data and control state apply simultaneously (e.g. the Haggle Infocom traces).
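The high-level decision rule can be caricatured in a few lines (the thresholds below are invented for illustration; the paper derives the actual regions from its analysis of the three network properties):

```python
def choose_strategy(connectivity, predictability):
    """Pick a forwarding strategy from two network properties in [0, 1].
    Control state (routing) pays off when the network is well connected
    and predictable; data state (DTN-style custody of packets) when it
    is not; a mix in between.  Thresholds are illustrative, not taken
    from the paper."""
    if connectivity >= 0.7 and predictability >= 0.7:
        return "routing (control state)"
    if connectivity < 0.3:
        return "DTN forwarding (data state)"
    return "packet forwarding (data + control state)"

print(choose_strategy(0.9, 0.9))  # well connected, predictable -> routing
print(choose_strategy(0.1, 0.2))  # sparse, unpredictable -> DTN forwarding
```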
Optimal Gateway Selection in Multi-domain Wireless Networks: A Potential Game Perspective, Yang Song, H. Y. Wong, and Kang-Won Lee (IBM Research, USA)
This paper tries to leverage a coalition of networks with multiple domains and heterogeneous groups. They consider a coalition network where multiple groups are interconnected via wireless links. Gateway nodes are designated by each domain to achieve network-wide interoperability. The challenge is minimising the intra-domain cost plus the backbone cost. They used a game-theoretic approach to solve this problem and to analyse the inefficiency of the equilibrium. They suggest the solution can also be used in other applications such as power control, channel allocation, spectrum sharing or even content distribution.
Fundamental Relationship between Node Density and Delay in Wireless Ad Hoc Networks with Unreliable Links, Shizhen Zhao, Luoyi Fu, and Xinbing Wang (Shanghai JiaoTong University, China); and Qian Zhang (Hong Kong University of Science and Technology, China)
Maths and percolation theory... quite complex to put into words.
Mobicom'11 is being held in the (always interesting) city of Las Vegas. On this first day, the talks were mainly about wireless technologies, and different techniques to avoid congestion were proposed.
Keynote Speaker: Rajit Gadh (Henry Samueli School of Engineering and Applied Science at UCLA)
Prof. Gadh talked about UCLA's "SmartGrid" project, a topic which is gaining momentum in California. The project is motivated by the fact that electricity comes from a grid that spreads across a whole country, and we are still using technology that was deployed 100 years ago. The grid is rigid, fixed and large. In fact, Rajit Gadh sees a clear parallelism between data networks and power networks. Based on that observation, they aim to create a Smart Grid infrastructure with the following characteristics: self-healing, active participation of consumers, the ability to accommodate all energy sources and storage options, eco-friendliness, etc. More information can be found on the project website.
SESSION 1. Enterprise Wireless
FLUID: Improving Throughputs in Enterprise Wireless LANs through Flexible Channelization, Shravan Rayanchu (University of Wisconsin-Madison, USA); Vivek Shrivastava (Nokia Research Center, Palo Alto); Suman Banerjee (University of Wisconsin-Madison, USA); and Ranveer Chandra (Microsoft, USA)
One of the problems with current 802.11 technologies is that channel width is fixed. However, many advantages arise from replacing fixed-width channels with flexible-width ones. The goal of this paper is to build a model that can capture flexible channel conflicts, and then use this model to improve the overall throughput of a WLAN.
One of the problems in wireless channels is that, depending on the interference, there are different approaches to avoiding conflicts. However, the interference depends on the configuration of the channel. For example, narrowing the width helps reduce interference; they also tried to better understand the impact of power levels.
They showed that, given an SNR, nodes can predict the delivery ratio for a specific channel width. As a result, the receiver can compute the SNR and predict the delivery ratio as a function of the SNR autonomously. Given that, the problem of channel assignment and scheduling becomes a flexible channel assignment and scheduling problem.
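The SNR/width relationship behind this prediction can be sketched numerically (an illustrative model, not FLUID's: the modulation thresholds below are made up, and a real predictor would interpolate a measured SNR-vs-delivery curve). The physical intuition is that halving the channel width concentrates the same transmit power in half the bandwidth, gaining roughly 3 dB of SNR.

```python
import math

# Illustrative SNR thresholds (dB) above which delivery ratio is ~1
# for a given modulation; values are made up for this sketch.
THRESHOLD_DB = {"BPSK": 5, "QPSK": 8, "16-QAM": 15, "64-QAM": 22}

def effective_snr(snr_db_20mhz, width_mhz):
    """Each halving of the channel width gains ~3 dB of SNR."""
    return snr_db_20mhz + 10 * math.log10(20 / width_mhz)

def predict_delivery_ratio(snr_db_20mhz, width_mhz, modulation="QPSK"):
    """Step-function predictor: ~1 above the threshold, ~0 below."""
    snr = effective_snr(snr_db_20mhz, width_mhz)
    return 1.0 if snr >= THRESHOLD_DB[modulation] else 0.0

# A 10 MHz channel gains ~3 dB over 20 MHz at the same transmit power:
print(effective_snr(6, 10))        # ~9.01 dB
print(predict_delivery_ratio(6, 10), predict_delivery_ratio(6, 20))
```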
SmartVNC: An Effective Remote Computing Solution for Smartphones, Cheng-Lin Tsao, Sandeep Kakumanu, and Raghupathy Sivakumar (Georgia Tech University, USA)
In our opinion, this paper was a great example of how to improve the user experience of certain applications. In this case, they are trying to improve the UX of mobile VNC. This kind of service was designed for desktops and laptops, so it does not take the nature of smartphones into account. The goal is to allow users to access a remote PC (in this case Windows) from a smartphone (Android) in a friendly way. They evaluated the UX of 22 users (experienced users, students aged 20-30) across 9 applications running on VNC. They defined different metrics such as the mean opinion score (the higher the complexity, the lower the mean opinion score) and task effort (the number of operations required for a task, such as mouse clicks, keystrokes, etc.). They then correlated both metrics for the users running apps over VNC, and the results showed that when the task effort is high, the UX is poorer.
They proposed aggregating repetitive sequences of operations in user activity to remove redundancy without harmful side effects. One of the main problems was that application macros (as in Excel) are not application agnostic but are extensible, whilst others such as raw macros (e.g. AutoHotkey) are exactly the opposite.
Their answer is smart macros: they record events, build macros, and provide a tailored interface with collapsible overlays on the remote computing client, grouping macros by app, automatic zooming, etc. For the applications they tested with those 22 users, task effort dropped from 100 to 3, whilst the time to perform a task was also greatly reduced. In the subjective evaluation, all the users were satisfied with the new VNC. The talk ended with a video-recorded demo of the system.
FERMI: A FEmtocell Resource Management System for Interference Mitigation in OFDMA Networks, Mustafa Yasir Arslan (University of California Riverside, USA); Jongwon Yoon (University of Wisconsin-Madison, USA); Karthikeyan Sundaresan (NEC Laboratories America, USA); Srikanth V. Krishnamurthy (University of California Riverside, USA); and Suman Banerjee (University of Wisconsin-Madison, USA)
Femtocells are small cellular base stations that use a cable backhaul and can extend network coverage. In this scenario interference can be a problem, but it differs from the problems found in the WiFi literature. OFDMA (WiMAX, LTE) uses sub-channels at the PHY and schedules multiple users in the same frame, whilst WiFi uses OFDM (sequential units of symbols transmitted at a specific frequency in time). Moreover, OFDMA has a synchronous MAC (there is no carrier sensing as in WiFi). As a consequence, WiFi solutions cannot be applied to femtocells: interference leads to throughput loss, and many clients coexist in the same frame.
As a consequence, the solution must take into account both the time domain and the frequency domain. FERMI gathers load- and interference-related information. It operates at a coarse granularity (on the order of minutes), but this is not a drawback, as interference does not change much on this time scale. Moreover, a per-frame solution is not feasible, as interference patterns change on each retransmission, whereas aggregate interference and load change only at coarse time scales.
The system evaluation was done on a WiMAX testbed and also in simulations. In both cases, they obtained a 50% throughput gain over pure sub-channel isolation solutions. The core results are applicable to LTE as well.
SESSION 2. Wireless Access
WiFi-Nano: Reclaiming WiFi Efficiency through 800ns Slots, Eugenio Magistretti (Rice University, USA); Krishna Kant Chintalapudi (Microsoft Research, India);Bozidar Radunovic (Microsoft Research, U.K.); and Ramachandran Ramjee (Microsoft Research, India)
WiFi data rates have increased, but throughput has not seen a similar level of growth. Throughput is much lower than the data rate because of high frame overhead: 45% at 54 Mbps, and the overhead dominates at higher rates, around 80% at 300 Mbps. This gets worse when multiple links come into play.
This observation motivated WiFi-Nano, a technology that allows doubling the throughput of WiFi networks. Slot overhead can be reduced by 10x: their solution uses nano slots of 800 ns instead of the standard 9 microsecond slot of 802.11a/n. In addition, they exploit speculative preambles, as preamble detection and transmission occur in parallel: as soon as the backoff expires, a node transmits its preamble, but while transmitting it continues to detect incoming preambles, even with self-interference. Their empirical results show that slightly longer preambles improve throughput by up to 100%, and frame aggregation increases those figures even more; in fact, frame aggregation raises efficiency from 17% to almost 80%.
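The overhead percentages quoted above are easy to reproduce with back-of-the-envelope arithmetic (the 180 microseconds of fixed per-frame overhead is an illustrative figure chosen to match them, not a value from the paper): payload airtime shrinks as the data rate grows, while the fixed overhead does not.

```python
def efficiency(payload_bytes, data_rate_mbps, overhead_us):
    """Fraction of airtime spent on payload.  Fixed per-frame overhead
    (preamble, slots, SIFS/DIFS, ACK) dominates as data rates grow."""
    payload_us = payload_bytes * 8 / data_rate_mbps  # transmission time
    return payload_us / (payload_us + overhead_us)

# Illustrative numbers: a 1500-byte frame with ~180 us of fixed overhead
print(f"54 Mbps:  {efficiency(1500, 54, 180):.0%}")   # payload ~222 us -> 55%
print(f"300 Mbps: {efficiency(1500, 300, 180):.0%}")  # payload ~40 us  -> 18%
```

So at 54 Mbps about 45% of the airtime is overhead, and at 300 Mbps it is over 80%, matching the figures in the talk.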
XPRESS: A Cross-Layer Backpressure Architecture for Wireless Multi-Hop Networks, Rafael Laufer (University of California at Los Angeles, USA); Theodoros Salonidis; Henrik Lundgren and Pascal Leguyadec (Technicolor, Corporate Research Lab, France)
Multihop networks operate below capacity due to poor coordination across layers and among transmitting nodes. The authors propose backpressure scheduling with cross-layer optimisation: at each slot, the system selects the optimal link set for transmission. In their opinion, multihop networks pose several challenges:
1- Time slots.
2- Link sets (e.g. knowing which links do not interfere)
3- Protocol overhead
4- Computation overhead
5- Link Scheduling.
6- Hardware constraints (e.g. memory limitations in wireless cards)
XPRESS addresses all of these challenges. It has two main components: the MC (mesh controller) and the MAP (mesh access point). The MC receives flow queues, computes the schedule and disseminates it; the MAPs execute schedules and process queues. The key challenge is computing the optimal schedule per slot, as this task takes a lot of time.
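The core of backpressure scheduling can be sketched in a few lines (a textbook greedy simplification with made-up numbers; XPRESS solves the scheduling problem far more carefully): weight each link by its queue differential times its rate, then pick a conflict-free set of heavy links.

```python
def backpressure_schedule(links, queues, rates, conflicts):
    """One slot of greedy backpressure scheduling: weight each link by
    max(queue(src) - queue(dst), 0) * rate, then greedily select
    mutually non-conflicting links in order of weight."""
    weight = {l: max(queues[l[0]] - queues[l[1]], 0) * rates[l] for l in links}
    schedule = []
    for link in sorted(links, key=lambda l: weight[l], reverse=True):
        if weight[link] > 0 and all(frozenset({link, other}) not in conflicts
                                    for other in schedule):
            schedule.append(link)
    return schedule

# A 4-node chain a -> b -> c -> d; adjacent links conflict (shared node)
links = [("a", "b"), ("b", "c"), ("c", "d")]
queues = {"a": 10, "b": 4, "c": 3, "d": 0}
rates = {("a", "b"): 1.0, ("b", "c"): 1.0, ("c", "d"): 2.0}
conflicts = {frozenset({("a", "b"), ("b", "c")}),
             frozenset({("b", "c"), ("c", "d")})}
print(backpressure_schedule(links, queues, rates, conflicts))
# -> [('a', 'b'), ('c', 'd')]: the lightly-backlogged middle link loses
```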
The MAP nodes use a cross-layer protocol stack to compute the schedules. Apps running on the node go into the kernel, which classifies the flows and allocates each to its own queue, followed by a congestion controller. The pipeline then has a flow queue followed by a packet scheduler that puts each packet into the proper link queue. Somehow this reminds me of the work on Active Networks, as they dynamically change the behaviour of the network, in this case in a mesh scenario. The proposed scheme achieves 63% and 128% gains over 802.11 at 24 Mbps and auto-rate schemes, respectively. They also performed a scalability evaluation.
CRMA: Collision-Resistant Multiple Access, Tianji Li, Mi Kyung Han, Apurva Bhartia, Lili Qiu, Eric Rozner, and Ying Zhang (University of Texas at Austin, USA); Brad Zarikoff (Hamilton Institute, Ireland)
FDMA, TDMA, FTDMA and CSMA are the traditional MAC protocols for avoiding collisions. These techniques incur significant overhead, so the authors move from collision avoidance to collision resistance, based on a new encoding/decoding scheme that allows multiple signals to be transmitted simultaneously.
In CRMA, every transmitter views the OFDM physical layer as multiple orthogonal but sharable channels, and randomly selects a subset of the channels for transmission. When multiple transmissions overlap on a channel, the signals naturally add up in the wireless medium.
In this system, ACKs are sent as data frames. There is, however, a problem with misaligned collisions, which is handled with cyclic prefixes (CP) that force the collided symbols to fall in the same FFT window. Overlapping transmissions are limited using exponential back-off.
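The encoding side of the idea can be sketched numerically (an illustrative toy, not the paper's scheme: the channel count and amplitudes are made up): each transmitter picks a random subset of the OFDM channels, and overlapping signals simply add rather than being destroyed, leaving structure the receiver can disentangle.

```python
import random

N_CHANNELS = 8

def pick_channels(k=3, seed=None):
    """Each transmitter randomly selects k of the shared OFDM channels."""
    return sorted(random.Random(seed).sample(range(N_CHANNELS), k))

def transmit(senders):
    """Overlapping transmissions add up in the medium - the
    'collision resistance' part: nothing is lost, only superposed."""
    medium = [0.0] * N_CHANNELS
    for channels, amplitude in senders:
        for c in channels:
            medium[c] += amplitude
    return medium

a = pick_channels(seed=1)  # three channels for sender A
b = pick_channels(seed=2)  # three channels for sender B
medium = transmit([(a, 1.0), (b, 2.0)])
# total energy is conserved: 3 channels x 1.0 + 3 channels x 2.0 = 9.0
print(medium, sum(medium))
```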
The evaluation was done in a testbed experiment with CRMA on top of a default OFDM implementation in USRP. They also used QualNet simulations to evaluate the efficiency of the networks.
This year's SIGKDD conference returned after 12 years to San Diego, California, hosting the meeting of data mining and knowledge discovery experts from around the world. The elite of heavyweight data scientists was hosted at the largest hotel on the West Coast and, together with industry experts and government technologists, numbered more than 1100 attendees, a record in the conference's history.
The gathering kicked off with tutorials and two parallel classics: David Blei's topic models and Jure Leskovec's extensive work on social media analytics. Blei offered a refreshing talk that stretched from the very basics of text-based learning to the most up-to-date extensions of his work, with applications to streaming data and an online version of the paradigm that allows one to scale the model up to huge datasets, satisfying the requirements of modern data analysis. Leskovec elaborated on a large spectrum of his past work, covering a wide range of topics including the temporal dynamics of news articles, sentiment polarisation analysis in social networks, and information diffusion in graphs via modelling the influence of participating nodes. The first day's menu on the social front was completed by Lada Adamic's presentation on the relationship between structure and content in social networks. Her talk at the Mining and Learning with Graphs workshop provided an empirical analysis of a variety of online domains, describing how the flow of novel content in those systems reflected variations in the patterns of interaction amongst individuals. The day closed with the conference's plenary open session, which featured submission and reviewing highlights and the usual KDD award ceremonies: the latter honoured the decision-trees man, Ross Quinlan, who presented a historical overview of his work, and a data mining legion of 25 students from NTU who won this year's KDD Cup on music recommendations.
After a second night of sleep punctuated by jetlag-induced wake-ups, Monday rolled in and the conference opened with sessions on user classification and web user modelling. A follow-up in the afternoon, with the presentation of the (student) award-winning work on applying topic models to scientific article recommendation, attracted the interest of many. The conference's dedicated session on online social networks also signalled the data mining community's interest in this nowadays hot domain. It opened with an interesting work on predicting semantic annotations in location-based social networks, in particular the prediction of missing labels for venues that lacked user-generated semantic information. While the machine learning part of the work was sound, its applicability to a real problem was doubted, suggesting the need to identify the essential challenges in a relatively new application area. Nonetheless, the keyword of the day was scalability: two talks focused on an ever-classic machine learning problem, clustering, introduced in the context of the trendy MapReduce model. Alina Ene from the University of Illinois introduced the basics, whereas the Brazilian Robson Cordeiro offered novel insights with a cutting-edge algorithm for clustering huge graphs. The work, driven by the guru Christos Faloutsos, featured the elegance of simplicity with the virtues of effectiveness, showing that for some size does not matter and petabytes of data can be crunched in minutes. A poster session drew the curtains on another day. The crowd was not discouraged by the organisers' only-one-free-drink offer, and a vibrant set of interactions took place: some were discussing techniques, some were looking for new datasets, while social cliques were also forming in the corners of the hotel's huge Douglas Pavilion.
Day 3 drove the conference participants into the dark technical depths of the well-established topic of matrix factorisation, followed by the user modelling session. Yahoo!'s Bee-Chung Chen gave an intriguing presentation on user reputation in a comment-rating environment, followed by a lucid talk by Panayiotis Tsaparas on selecting a useful subset of reviews for Amazon products plagued by tons of reviews. The Boston-based Greek gang of Microsoft Research also showed how Mechanical Turk can be used to assess the effectiveness of review selection in such systems. Poster session number 2 closed the day, and the group's work on link prediction in location-based social networks was up. The three-hour, exhausting but fruitful interaction with location-based enthusiasts, agnostics and doubters was a good opportunity to get the vibe of the community on an up-and-coming hot topic. For application developers and online service providers, the work was an excellent example of how location-based data could be used to drive personalised and geo-temporally aware content to users. For data mining geeks, it presents an unexplored territory where existing techniques could be tested and novel ones devised. At the end of the poster session many of the participants headed for a taste of San Diego's downtown, whereas relaxing boat trips on the local bay were also highly preferred.
The final day of the conference was marked by Kaggle's visionary entrepreneur Jeremy Howard and a panel of experts on data mining competitions. The panel aimed to analyse the problems that arose during previous competitions and the lessons learned for the creation of new, successful ones. Howard presented radical views, suggesting that the future of data mining and problem solving will be delivered in the form of competitions. Not only could competitions attract an army of approximately 10 million data analysts around the globe, but their design could promise a sustainable economic model that would bring money to all participants (even non-winners) and would perhaps put at stake a respectable number of PhD careers. His philosophy was driven by the idea that to solve challenging problems effectively, you need to awaken the diverse pool of minds out there that can constitute an infinite source of innovation.
But KDD attracted not only the interest of scientists and corporate experts, but also that of politicians. Ahead of the 2012 elections, the Obama data mining team is here and hiring! Rayid Ghani, chief scientist at Obama for America, highlighted the important role of predictive analytics and optimisation problems in the battle for an electorate that is traditionally positioned to announce winners by only small margins. It remains to be seen whether science will beat Tea Party-style propaganda and maximise positive votes in a bumpy and complex socio-political landscape. The political world was also (quietly) represented by government data scientists and secret service analysts seeking to catch up with the state of the art in data mining and knowledge discovery, a vital survival requirement in a world overflowing with data and subsequent leaks...
The full proceedings of KDD 2011 can be found here.