Liveblog from SIGCOMM13 – Day 1

Greetings from SIGCOMM 2013, Hong Kong. I'll be live blogging as much of the event as I can, battery life permitting.


  • Warning - There may be a typhoon, stay tuned for further updates....

PC Chairs' Note

  • Inspired by last year, broadened the scope by adding more topics to the CFP and targeting 40 papers for acceptance. 
  • First round received more than 200 submissions. All got three reviews.
  • 96 to second round
  • 80 papers made it to PC meeting.
  • Highest acceptance rate ever: about 16%, almost one in six.

Best Paper

  • Ambient Backscatter: Wireless Communication Out of Thin Air - Both the best paper and best student paper! 
  • Really cool idea: communication using ambient RF as its power source. Strong evaluation.

SIGCOMM Chair (Keshav)

  • A new way to assess papers, coming up at the community feedback session


  • Test of time award: PlanetLab: An Overlay Testbed for Broad-Coverage Services, and A Delay-Tolerant Network Architecture for Challenged Internets 
  • SIGCOMM Dissertation Award: Shyamnath Gollakota (MIT)
  • SIGCOMM award: Larry Peterson - for groundbreaking advances in how networking and distributed systems research is conducted.

Keynote: Zen and the Art of Network Architecture (Larry Peterson)

  • Zen and the Art of Motorcycle Maintenance: Rejected by 121 publishers (world record)
  • Classic vs Romantic perspectives: Science vs Art
  • Classic view, systems of subsystems. Romantic view, "I just like to ride my bike"
  • Quality unifies both classic and romantic views. Whole is greater than the sum of its parts.
  • Buddhism: Life = suffering = GENI (test bed)
  • Duality - Networking vs Distributed Systems - Made GENI very hard to deal with.
  • The middle way: Involves balancing requirements, not about optimising any one dimension.
  • Path to Enlightenment:
    • Inspiration - Some new idea
    • Analysis - Simulation, back of the envelope - Does this make sense?
    • Controlled Lab Experiments - Implementation reality - Can it be built?
    • Deployment Studies - Traffic and user reality
    • Pilot Demonstrations - Customer reality
    • Commercial Adoption - Market reality
    • Change the market
  • PlanetLab & CoBlitz - a 12-year time frame. CoBlitz is a CDN system.
    • Change the market: Operators now all have their own CDN systems.
  • The path to enlightenment:
    • See reality clearly - Assumptions hide the truth.
    • Users reveal the truth faster than any other way
  • Architectures tell engineers what they can't do, but engineers don't read architecture documents.
  • Lessons on architectures:
    • Part analysis, part intuition - whole is greater than sum of the parts
    • Unify abstractions - duality is an opportunity
  • Putting Lessons to Action
    • SDN vs NFV (network function virtualisation) vs Distributed Applications
    • The lines are quite blurred. Distinctions without a difference.
      • Is a proxy an application or controller?
      • Is a scalable controller that uses NoSQL an app or a controller?
      • Is a firewall in the data plane or in the control plane?
    • Topology - We all have a network graph in our heads.
      • Network people like interesting topologies, but applications just want one big switch: everything connects to everything.
      • Basic model: Application is a function that sits on top of the network switch
      • There are Topology optimisations -
        • cut-through - don't bother sending data up to the application
        • Inline - put the function in the network and don't bother sending to the "application"
      • Really - Cut-through = SDN, inline = NFV.
    • Model all network functions as "scalable services"
    • Use SDN to bootstrap a virtualisation layer - supports the cut-through optimisation
    • NFV - tool to decide where to put functions.
  • XaaS - Everything-as-a-Service
    • Service as a unifying abstraction.
    • Unifies across resources (Compute, Network, Storage)
    • Unifies across the network (DC, Access, WAN)
    • Unifies across service levels (IaaS, PaaS, SaaS)
  • Conclusions
    • I'm indebted to lots and lots of people including the NSF.


Q: What about hardware? A: This model applies as well. Keep it simple, keep it fast and keep the interfaces static.

Q: PlanetLab: Haven't we been doing OpenCloud all along? A: Network virtualisation is coming up to speed with server virtualisation.

Q: What about publishing: A: Operational experience should be a SIGCOMM category.

Q: What about incentives? Aren't there conflicting requirements, e.g. encryption vs storage? A: Hard problem; we don't have a good answer.



Session 1 - SDN

B4: Experience with a Globally-Deployed Software Defined WAN


  • We want to run our networks at 100% utilisation, but then we need to deal with loss.
  • We are willing to tolerate small amounts of loss on low-priority traffic if it means we can get higher aggregate bandwidth.
  • Design principles
    • Separate forwarding hardware from control
    • Manage network as a single fabric
    • Use application specific knowledge.
  • Hybrid SDN Deployment
    • Quagga used to implement IBGP/ISIS to remote sites, enabling a slow transition from traditional switches.
    • At first, a huge amount of work went into just replicating existing functionality.
    • Now we can introduce new functionality, e.g. traffic engineering.
  • Traffic engineering server takes demand, topology, application priority and creates a traffic matrix to be programmed into switches.
    • Basically max-min over flow-groups. Flow groups have priorities. Different paths are used to satisfy TE requirements
    • We don't have to use the shortest path!
    • 15% increase in BW. But...
    • The real benefit is in failure scenarios. We don't have to run backup links at low utilisation because we know what will happen in a failure.
  • Challenges
    • No time to cover them. SDN is not a panacea.
  • Benefits of SDN
    • Leverage commodity forwarding hardware
    • Control software innovation decoupled from hardware.
    • Global network optimization
    • Manage the network as a single fabric rather than a collection of individual boxes. Really matters at scale
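The "max-min over flow-groups" step in the TE bullets above can be sketched as progressive filling. Treating priority as a weight on a single bottleneck link is my simplification for illustration, not B4's actual TE algorithm, which allocates over multiple paths.

```python
def weighted_max_min(capacity, groups):
    """Progressive filling: raise every unfrozen group's rate in
    proportion to its weight until it hits its demand or the link
    capacity is exhausted. groups: list of (name, demand, weight)."""
    alloc = {name: 0.0 for name, _, _ in groups}
    active = {name: (demand, weight) for name, demand, weight in groups}
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(w for _, w in active.values())
        # largest uniform increment before a group saturates or capacity runs out
        step = min(remaining / total_w,
                   min((d - alloc[n]) / w for n, (d, w) in active.items()))
        for n, (d, w) in list(active.items()):
            alloc[n] += step * w
            remaining -= step * w
            if alloc[n] >= d - 1e-9:
                del active[n]   # freeze satisfied groups at their demand
    return alloc

# Equal weights: A is satisfied at 2, B and C split the rest evenly.
print(weighted_max_min(10, [("A", 2, 1), ("B", 8, 1), ("C", 8, 1)]))
# {'A': 2.0, 'B': 4.0, 'C': 4.0}
```

A higher-weight (higher-priority) group simply fills faster: with capacity 9 and weights 2:1, the allocation comes out 6:3.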


Q: Why use legacy protocols (IBGP etc.)? A: Backwards compatibility and failure handling. But as we gain experience, less so.


Achieving High Utilization with Software-Driven WAN

  • MS, Google, Amazon etc - Large inter-DC WAN, very expensive.
  • Two key problems: 1) poor efficiency (30-50% utilisation), 2) poor sharing
  • Why?
  • Cause of inefficiency -
    • Lack of coordination
    • Greedy resource allocation. Shortest path. Suboptimal.
    • Poor sharing.  Services compete with each other. Greedy.
  • SWAN - Software driven WAN
  • Traffic demands are forwarded to the controller and rate allocations are sent back. The controller programs the switches' data planes.
  • Challenges: scalability, congestion free data plane, small amounts of switch memory.
  • #1 Computing allocations is hard. Exact max-min takes minutes, so instead we approximate max-min.
    • Only 4% of flows deviated more than 5% from their fair allocations.
  • #2: Congestion free updates are hard.
    • Leave a small amount of "scratch" capacity at each link.
    • With slack between 0-50%, a congestion-free update always exists.
    • LP based solution to find the update steps.
    • Use slack for background traffic so utilisation is still 100%, but congestion is possible.
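One way to see why slack matters: during an asynchronous update a flow may transiently sit on either its old or its new path, so a step is safe only if every link can absorb, per flow, the larger of the two contributions. A minimal checker for that condition (the data layout is my own illustration, not SWAN's):

```python
def transient_safe(old, new, capacity):
    """old/new map flow -> {link: rate}. At any instant during an
    asynchronous switch update each flow is on its old OR new path,
    so worst-case transient load on a link is the per-flow max, summed."""
    flows = set(old) | set(new)
    for link, cap in capacity.items():
        load = sum(max(old.get(f, {}).get(link, 0.0),
                       new.get(f, {}).get(link, 0.0)) for f in flows)
        if load > cap + 1e-9:
            return False
    return True

cap = {"L": 1.0, "M": 1.0}

# Swapping two 0.8-rate flows between links in one shot is unsafe:
# link L could transiently carry 0.8 + 0.8 = 1.6.
old = {"f1": {"L": 0.8}, "f2": {"M": 0.8}}
new = {"f1": {"M": 0.8}, "f2": {"L": 0.8}}
print(transient_safe(old, new, cap))    # False

# With 50% slack (rates 0.5) the same one-shot swap is safe.
old2 = {"f1": {"L": 0.5}, "f2": {"M": 0.5}}
new2 = {"f1": {"M": 0.5}, "f2": {"L": 0.5}}
print(transient_safe(old2, new2, cap))  # True
```

SWAN's LP searches for a sequence of intermediate configurations where every consecutive pair passes exactly this kind of check.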
  • #3: Switch memory is limited
    • 50 sites = 2,500 pairs
    • 3 priority classes
    • ==> >20K rules needed. Switches only have a few thousand rules.
    • Solution: Dynamic path selection.
    • Working path set << total needed paths.
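Back-of-the-envelope arithmetic for the rule-count bullets above. Three tunnels kept per site pair is my assumption; the talk only said ">20K rules".

```python
sites = 50
pairs = sites * sites          # the talk's ~2,500 site pairs
classes = 3                    # priority classes
tunnels_per_pair = 3           # assumed working paths kept per pair
rules = pairs * classes * tunnels_per_pair
print(rules)                   # 22500 -- well over the few thousand
                               # entries a commodity switch holds
```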
  • Summary of workflow
    • Compute resource allocation
    • Compute rule update plan
    • Compute congestion-free update plan
    • Notify services
    • Deploy
    • Notify applications.
  • Compared to optimal, SWAN achieves 98% utilisation.


Q: Can you compare MSR approaches to congestion free updates A: different situations, inter-DC vs intra DC

Q: Is there a concern with how long the update takes. A: Update order is arbitrary.

Q: Why do you want to achieve high utilisation? A: We want to be able to support more traffic. Saving money.


SIMPLE-fying Middlebox Policy Enforcement Using SDN


  • Middlebox management is hard.  Can SDN simplify middlebox management? 
  • Middlebox functions are beyond L2/L3 tasks.
  • We want to do this without modifying middlebox or SDN hardware.
  • Problem: Enforcing specific policies using SDN
  • Challenges:
    • Policy composition - traditional composition may introduce loops.
    • Resource constraints -> need space to allocate resources in both switches and middleboxes
    • Dynamic modifications -> Correctness
  • Design:
    • Rule generator handles policy composition.
    • Resources Manager - Split into online and offline stages. Offline stage assumes that some network attributes (eg topology) don't change rapidly.
    • Modifications Handler - Payload similarity used to track packets as they are modified.
  • Evaluation
    • 4-7x better load balancing. Close to optimal.
    • LP solver takes 1s for 252 node topology
    • 95% accuracy for inferring flows.


Q: Do you want to maintain legacy? A: Yes. We want to be backwards compatible.

Q: Why bother with middleboxes - just use SDN? A: We think there are significant barriers to modifying middleboxes.

Q: How do you do the tags? A: We use "spare bits" in the IP header.

Q: Why are middleboxes special? A: Don't understand the question.

Q: 95% accuracy - what happens in the other 5%? A: It is true. It's a problem.


Session 2 - Wireless

Ambient Backscatter: Wireless Communication Out of Thin Air (best paper)

  • Want to build a device that can interact without power.
  • Idea: leverage background wireless signals to power the device.
  • About 10 µW is available, but that's orders of magnitude lower than the power needed to run an RF device.
  • Idea: use background signals and reflect or absorb them.
  • Challenges:
    • #1 Extracting backscattered signals
    • #2 Decoding on a battery free device
    • #3 Designing a distributed MAC protocol for battery free devices.
  • #1 Extracting - Run a low-pass filter over the signal and detect the change in average signal strength.
  • #2 Decoding - Using an RC circuit you build a moving average filter.
  • #3 MAC layer - Uses CSMA. Use bit transitions as a proxy for energy change.
  • Can do 10 kbps at a range of about 12 inches with zero bit errors. Can do 100 bps at a range of 3.5 feet.
  • Demo - Tags can be used to self identify misplaced items.
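A toy software version of the averaging detector described in #1/#2 above (my own simulation, not the paper's analog hardware): the tag's reflect/absorb states nudge the average received power slightly, and per-bit averaging against a midpoint threshold recovers the bits.

```python
import random

def decode_bits(samples, bit_len):
    """Average received power over each bit period, then threshold at
    the midpoint between the highest and lowest per-bit averages -- a
    crude software stand-in for the paper's RC moving-average filter."""
    means = [sum(samples[i:i + bit_len]) / bit_len
             for i in range(0, len(samples), bit_len)]
    threshold = (max(means) + min(means)) / 2
    return [1 if m > threshold else 0 for m in means]

# Simulate the channel: reflecting ('1') raises average power ~5%.
random.seed(0)
bits = [1, 0, 1, 1, 0]
bit_len = 200
samples = []
for b in bits:
    level = 1.05 if b else 1.00
    samples += [level + random.gauss(0, 0.02) for _ in range(bit_len)]

assert decode_bits(samples, bit_len) == bits  # recovered despite noise
```

Note the midpoint threshold degenerates if a message contains only one symbol; real schemes use a preamble to calibrate the two levels.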


Q: How far can you scale? A: Right now about 2.5 feet. Expect to be able to scale to 10 Mbps at 10 feet.

Q: Aerials are large. Can it be smaller? A: Yes, probably can shrink to the size of an RFID tag

Q: Have you looked at the asymmetric case? A: Yes, maybe like an RFID without the broadcast mechanism.

Q: What about gigahertz. A: Power extraction is possible.


Dude, Where’s My Card? RFID Positioning That Works with Multipath and Non-Line of Sight


  • RFIDs are pretty cheap now. About 5 cents.
  • Reader range is about 15m.
  • What if we could do location within 10-15 cm? Could eliminate queues at shops. Could track items at home, like "I forgot my laptop charger".
  • Current accuracy is about 1m.
  • Why not? Main challenge is multi path effects.
  • PinIt - 10-15cm RFID in multi path environments.
    • Focuses on proximity to reference devices.
    • Exploits multi-path - items that are close together have similar peaks in their multipath profiles; items far apart have dissimilar peaks.
  • How to find the proximity from multi-path signals
    • Naive approach - Correlation. - Doesn't work.
    • Borrow an idea from speech recognition: DTW (dynamic time warping). How do I warp one signal to arrive at the other?
  • 200 RFIDs deployed in MIT library.
  • Can pin the book to within 4cm of the nearest locator.
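The DTW idea above, in its textbook dynamic-programming form (nothing PinIt-specific): find the cheapest monotone alignment between two profiles, so a small time shift no longer ruins the comparison the way plain correlation does.

```python
def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance: minimum
    total pointwise cost over all monotone alignments of a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # advance both
    return D[n][m]

# Two profiles with the same peaks, one slightly time-shifted:
p1 = [0, 0, 5, 1, 0, 3, 0]
p2 = [0, 5, 1, 0, 0, 3, 0]
print(dtw(p1, p2))                                  # 0.0 -- peaks align
print(sum(abs(x - y) for x, y in zip(p1, p2)))      # 10 -- naive distance
```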

Dhwani: Secure Peer-to-Peer Acoustic NFC


  • NFC principle is trust by proximity.
  • Penetration is low. 12% in the US
  • NFC does not define security. It assumes that security is inherent in the short range. 
  • Dhwani
    • Uses the phone's speaker and microphone to do acoustic coupling.
  • Challenge #1 - Frequency Selectivity
    • Imperfect electromechanical conversion.
    • Non-uniform frequency response.
  • Challenge #2 - Ringing and rise time
    • About 2 ms rise time to detect a signal.
  • Challenge #3 - Ambient noise
    • Measured in a cafe.
  • Software-defined acoustic radio
    • Carrier-less design
    • 128 sub-carriers, each 171 Hz
    • 4 ms cyclic prefix (challenge 2)
    • Operating bandwidth 1 kHz
    • Communication at 4 kbps
  • JamSecure
    • The receiver sends a jamming signal, but can (mostly) cancel it out itself
    • Key Challenge - Self Interference Cancellation
    • Frequency selectivity comes mostly from the sender's and receiver's physical characteristics - precomputed and mixed in.
    • Need to compensate
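The self-interference cancellation step, in caricature: the receiver jams loudly while the sender transmits, then subtracts its own known jamming signal. A single exact scalar gain `h` is my simplification; Dhwani works per sub-carrier with an imperfect, precomputed frequency response, so its cancellation leaves a residual.

```python
import random
random.seed(1)

# Precomputed coupling from the receiver's speaker to its own mic.
h = 0.7
sender = [random.choice([-1.0, 1.0]) for _ in range(8)]  # data symbols
jam = [10 * random.gauss(0, 1) for _ in range(8)]        # loud jamming

# Eavesdropper and receiver both hear data + jam; only the receiver
# knows `jam` and can subtract its own contribution.
received = [s + h * j for s, j in zip(sender, jam)]
recovered = [r - h * j for r, j in zip(received, jam)]

assert max(abs(x - s) for x, s in zip(recovered, sender)) < 1e-9
```

An eavesdropper without `jam` sees symbols buried an order of magnitude under the jamming floor, which is the whole security argument.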
  • Limitations
    • DoS - an eavesdropper can also emit a jamming signal, but no info is lost
    • Directional antenna - hard, as the target area is only about 10 cm


See Through Walls with Wi-Fi!


  • Wifi travels through walls. 
  • If there's a person behind the wall, a reflection will come back.
  • But ... the reflection directly from the wall is 10,000x stronger than the reflection from behind the wall.
  • Wi-Vi - Low power, eliminates the wall reflection.
  • Transmit two waves that cancel each other when they reflect off static objects but not when they reflect off moving objects.
  • Use 3 antennas - Two transmitting one receiving.
  • Send cancelling signals.
  • Would use an antenna array to detect motion - however, there is no antenna array.
  • Instead - Use the idea that the object is moving to emulate an antenna array. (cool)
  • Can detect gestures with movement.
  • Classification algo developed to detect the number of people behind a wall. Pretty good up to about 3 people.
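The nulling trick in the transmit step above can be checked with two complex channel gains (the values are arbitrary, and modelling the moving person as coupling only to the first antenna's signal is my simplification):

```python
# Channel gains for the two static (wall) paths.
h1, h2 = complex(0.8, 0.3), complex(0.5, -0.2)
a = h1 / h2   # nulling coefficient: antenna 2 transmits -a * x

def rx(x, h_moving):
    """Receiver hears both static paths plus a moving reflector,
    modelled (simplistically) as riding on antenna 1's signal."""
    return h1 * x + h2 * (-a * x) + h_moving * x

assert abs(rx(1.0, 0)) < 1e-12                    # static scene: nulled
assert abs(rx(1.0, complex(0.05, 0.02))) > 0.01   # motion: residual survives
```

The static terms cancel by construction (`h2 * a == h1`), so whatever survives the null must come from a reflector whose channel changes over time — i.e., a person moving behind the wall.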


Best of CCR 1

What We Talk About When We Talk About Cloud Network Performance


  • We want network guarantees
  • "Just give me enough bandwidth" - Not enough. How, where, and when do you measure whether the bandwidth is enough?
  • We should talk about cloud performance and latency measurements.
  • Customers want:
    • Predictable high bandwidth
    • Predictable low latency
    • Predictable low loss
    • Predictable low cost
    • Simple, flexible interface
  • Providers
    • Want happy customers
    • Efficient implementation
    • High utilization
  • High bandwidth:
    • Where do you measure bandwidth?
    • The hose model - there's one big virtual switch with bandwidth guarantees
      • You'll likely over provision
      • Simple abstraction
      • Matches the real world ("basically a top-of-rack switch")
      • Only two parameters
    • The pipe model - bandwidth guarantees between pairs of VMs.
      • N^2 parameters. Hard for customers to understand.
    • Bandwidth requirements change over time.
    • What are we measuring:
      • Mean bandwidth over a given period P
      • Peak bandwidth
      • Latency
      • Tail latency (99.999%)
    • Summary: one size does not fit all.
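Parameter-count arithmetic behind the hose-vs-pipe trade-off above (counting two hose parameters per VM — one ingress, one egress guarantee — is my reading of "only two parameters"):

```python
def hose_params(n):
    # hose model: an ingress and an egress guarantee per VM
    return 2 * n

def pipe_params(n):
    # pipe model: one guarantee per ordered VM pair (the "N^2 parameters")
    return n * (n - 1)

for n in (10, 100, 1000):
    print(n, hose_params(n), pipe_params(n))
# 10 20 90
# 100 200 9900
# 1000 2000 999000
```

The gap is why the pipe model, though more expressive, is considered hard for customers to reason about at any realistic tenant size.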
  • Cloud networking is a business
    • Does fairness matter? - It's one way to allocate resources, in the absence of other mechanisms it's ok.
    • Map-reduce likes fairness, but not everything does.
  • Implementation issues:
    • Hose model guarantees in an oversubscribed network
    • scalable pipe-model guarantees
    • Hose model guarantees + fair, work-conserving allocation
  • Summary
    • Evidence suggests that we have not entirely converged, technologists need to talk to business people.

Session 4: Fast and Scalable Network Design


Maple: Simplifying SDN Programming Using Algorithmic Policies


  • SDN promises to make network management simpler with controller software. 
  • Openflow is a key building block, but also a source of complexity
  • onPacketIn(p)
    1. Figure out what to do with the packet
    2. Program the OF hardware for subsequent packets.
  • Programmers are motivated by performance to use wildcard rules, but it's easy to get these wrong.
  • We desire the ability to automate the generation of rules.
  • Algorithmic policy -> Specified in a general purpose language such as Java/Python/Haskell
  • But, we need to generate rules.
  • Maple : New technique: Runtime tracing.
  • A trace of the execution is recorded as the packet moves through the policy, generating a trace tree.
  • Maple then compiles the trace tree into a flow table.
  • The trace-tree method converts arbitrary algorithmic policies into correct forwarding entries, with wildcards used properly.
  • Implemented in Haskell
  • To evaluate: implemented ACLs using Maple.
  • Evaluated using ClassBench with 1000 and 2000 filters.
  • Maple is reasonably efficient in terms of table entries and priorities used.
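The runtime-tracing idea can be sketched as follows. This toy only records positive reads of fields the policy touches; Maple's real trace trees also capture branch outcomes (negative tests) and assign rule priorities, which this omits.

```python
class TracedPacket:
    """Records which header fields the policy actually reads, so the
    compiled rule can wildcard every field it never looked at."""
    def __init__(self, fields):
        self._fields = fields
        self.reads = {}

    def __getitem__(self, field):
        self.reads[field] = self._fields[field]
        return self._fields[field]

def compile_rule(policy, fields):
    pkt = TracedPacket(fields)
    action = policy(pkt)
    return dict(pkt.reads), action   # match only the fields that were read

# A toy algorithmic policy: drop SSH, forward everything else on port 1.
def policy(pkt):
    if pkt["tcp_dst"] == 22:
        return "drop"
    return "fwd(1)"

rule = compile_rule(policy, {"tcp_dst": 22, "ip_src": "10.0.0.1"})
print(rule)   # ({'tcp_dst': 22}, 'drop') -- ip_src stays a wildcard
```

Because the policy never reads `ip_src`, the generated rule wildcards it and so covers every future SSH packet, not just this one — which is exactly the "use wildcards properly" point above.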

Q: Modularity is a good thing. How can they interact? A: Maple is oblivious. If it had more insight it could be better.

Q: Hardware does not all support priorities. How do you cope? A: Don't understand [ed: language barriers]. We assume that priorities work.


Forwarding Metamorphosis: Fast Programmable Match-Action Processing in Hardware for SDN


  • Would like to be able to upgrade your switch with software. 
  • Fixed-function switches use a collection of stages with fixed functions and algorithms.
  • May want to be able to extend. But how?
  • SDN accentuates the need for flexibility.
  • What are the alternatives? CPUs, GPUs, NPUs:
    • CPUs: ~100x too slow, expensive
    • GPUs: ~10x too slow, expensive
    • NPUs: ~10x too slow, expensive
  • How  do I design a flexible switch chip.
    • Big chip
    • High frequency
    • wiring intensive
    • many crossbars
    • lots of TCAM
    • interaction between physical design and architecture
  • RMT switch model.
    • parse graph - parses headers
    • table graph - looks up stuff
    • eg if you add a field, you add it to the parser, and then you put it in the table
    • But the parse and table graphs don't tell you how to build the switch.
  • For a flexible switch we need lots of CPUs in a pipeline
  • But memory bandwidth is a problem. So we replicate the CPUs.
  • Cost?
    • Many functions are the same. I/O, data buffer, queuing etc.
    •  Make extra functions optional: statistics
    • Memory dominates the area: RMT must use memory bit efficiently.
    • Overall a 15% area increase.
    • Overall a 12.4% power increase.
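The parse-graph idea in the RMT model above can be illustrated with a dict-driven parser; adding a new header really is just adding a node and an edge. The header lengths and "ethertype"-style codes here are toy values, not real wire formats.

```python
# Each node names a header, its length in bytes, and which byte offset
# selects the next node. Codes (8 -> ipv4, 6 -> tcp) are toy values.
PARSE_GRAPH = {
    "ethernet": {"len": 2, "next_on": 1, "edges": {8: "ipv4"}},
    "ipv4":     {"len": 2, "next_on": 1, "edges": {6: "tcp"}},
    "tcp":      {"len": 2, "next_on": None, "edges": {}},
}

def parse(packet):
    """Walk the parse graph: each node consumes `len` bytes and picks
    the next header from the byte at offset `next_on`."""
    headers, node, off = [], "ethernet", 0
    while node:
        spec = PARSE_GRAPH[node]
        headers.append(node)
        nxt = None
        if spec["next_on"] is not None:
            nxt = spec["edges"].get(packet[off + spec["next_on"]])
        off += spec["len"]
        node = nxt
    return headers

print(parse([0, 8, 0, 6, 0, 0]))   # ['ethernet', 'ipv4', 'tcp']
print(parse([0, 99]))              # ['ethernet'] -- unknown next header
```

Supporting a new protocol means adding one entry to `PARSE_GRAPH` (and, in the RMT model, a corresponding match table) — no change to the parsing engine itself, which is the flexibility argument the talk makes.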


Q: When can I buy one? A: It's a research prototype; can't comment on product releases.

Q: You're arguing for homogeneous CPUs; most people are talking about heterogeneous cores. A: Very different architecture.

Q: What about the compiler? A: We haven't produced either the chip or the compiler.


Compressing IP Forwarding Tables: Towards Entropy Bounds and Beyond

  • Not going to talk about the paper, talking about compressed data structures instead.


