
Liveblog from SIGCOMM13 – Day 3

I'm back again, liveblogging from SIGCOMM13. Yesterday's blog was in fact not cycloned out of existence, but a very successful poster session kept me busy until after the buses had started leaving for the banquet. As a result of that successful poster session, I've been here since 7am...

Privacy

Mosaic: Quantifying Privacy Leakage in Mobile Networks

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p279.pdf

  • How much information can be gleaned from the packet traces of a given user?
  • This work focuses on mobile because there is more private information and IPs change more frequently.
  • Lots of services use key/value pairs with user IDs.
  • We use OSN (online social network) IDs as anchors, but their coverage of the sessions is too small, about 2%.
  • Traffic attribution markers: group sessions into blocks using IPs; we know that OSN IDs will be shared within a block (see the sketch after this list).
  • Using the same traffic markers and OSN IDs we can track users.
  • What if there are no traffic markers?
  • Users can be classified by their characteristic DNS queries.
  • Traffic attribution evaluation
  • Compare to RADIUS sessions (ground truth)
  • Accuracy is quite good.
  • Mosaic of a real user:
    • 12 classes, e.g. location, device, traffic, e-commerce, etc.
  • Leakage:
    • OSN profiles provide static user information (education, interests)
    • Network analysis gives real-time activities/locations.
    • The two complement/corroborate each other.
  • How to prevent this?
    • Traffic markers should be limited/encrypted
    • Third party services should be strongly regulated
    • OSN public profiles should be carefully obfuscated.
    • HTTPS isn't really helping here, as traffic markers are still sent unencrypted.
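
To make the attribution idea concrete, here's a toy sketch (my reconstruction from the talk; the time window and block heuristic are assumptions, not the authors' code):

```python
# Toy Mosaic-style attribution: sessions from the same IP within a time gap
# form a block, and any OSN ID observed in a block labels every session in it.
from collections import defaultdict

GAP = 600  # seconds between sessions before we start a new block (assumed)

def group_into_blocks(sessions):
    """sessions: time-sorted list of (timestamp, ip, osn_id_or_None)."""
    per_ip = defaultdict(list)                      # ip -> list of blocks
    for ts, ip, osn_id in sessions:
        blocks = per_ip[ip]
        if blocks and ts - blocks[-1][-1][0] <= GAP:
            blocks[-1].append((ts, ip, osn_id))     # extend the current block
        else:
            blocks.append([(ts, ip, osn_id)])       # start a new block
    return [b for blocks in per_ip.values() for b in blocks]

def attribute(blocks):
    """Assign whole blocks to users via the OSN IDs seen inside them."""
    by_user = defaultdict(list)
    for block in blocks:
        for osn_id in {o for _, _, o in block if o is not None}:
            by_user[osn_id].extend(block)
    return by_user
```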

Expressive Privacy Control with Pseudonyms

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p291.pdf

  • Tracking is pervasive. 
  • Cons:
    • Lack of privacy
  • Pros
    • Personalisation
    • Revenue
    • Security
  • Goal is to give users control over how they are tracked
  • We do not argue for removing the ability to be tracked, just to add more control
  • Current defences, e.g.:
    • Cookie blocking
    • Do not track etc
    • Proxies
    • Tor
  • Problem: these mechanisms are coarse-grained.
  • We use pseudonyms as a way to express users' privacy demands.
  • Many IP addresses for a single end-host.
  • We use IPv6 - even a small network has more addresses than the entire IPv4 space.
  • IPv6 uses the top half of the address for the WAN, the lower half for the LAN ==> we can do whatever we want with the LAN half.
  • Split the LAN half into subnet, host and pseudonym fields; this is then encrypted to form an opaque ID (see the sketch below).
  • Web browsers are extended to request pseudonym IDs via JS.
  • Policies are quite expressive, from a single pseudonym up to a fresh one for every request.
  • Per-request pseudonyms need fewer than 10 bits.
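
A minimal sketch of how such an address could be built, assuming a 16/16/32-bit field split, an example prefix, and TripleDES purely for its 64-bit block size (the paper's actual format and cipher may differ):

```python
# Pack (subnet, host, pseudonym) into the 64-bit lower half of an IPv6
# address and encrypt that half with a 64-bit block cipher, so two
# pseudonyms of the same host are unlinkable to outside observers.
import ipaddress
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes(range(24))  # demo key; the gateway would hold the real secret
PREFIX = int(ipaddress.IPv6Address("2001:db8::"))  # example WAN half

def pseudonym_address(subnet: int, host: int, pseudonym: int) -> ipaddress.IPv6Address:
    lan = (subnet << 48) | (host << 32) | pseudonym       # assumed 16/16/32 split
    enc = Cipher(algorithms.TripleDES(KEY), modes.ECB()).encryptor()
    blob = enc.update(lan.to_bytes(8, "big"))             # 64 bits in, 64 bits out
    return ipaddress.IPv6Address(PREFIX | int.from_bytes(blob, "big"))

print(pseudonym_address(subnet=1, host=7, pseudonym=42))
```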

Questions:

Q: This breaks some functionality; how would it compare if people respected Do Not Track? A: That would be ideal, but there's no guarantee; this is more explicit and gives users control.

Q: [ can't hear] PLEASE SPEAK UP!!!! A:

 

Towards Efficient Traffic-analysis Resistant Anonymity Networks

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p303.pdf

 

  • How to defend ourselves from mass surveillance. 
  • The problem of IP anonymity:
    • Man in the middle can inspect IPs to know where data goes.
    • Proxies can be required to keep logs and reveal users.
    • Tor: decentralised onion routing. Bitwise unlinkability does not protect against traffic analysis.
  • Goal - K-anonymity (indistinguishable amongst k clients)
  • Threat model:
    • Global passive attacker - can snoop all traffic.
    • Active attacker - can alter traffic, e.g. add watermarks.
  • Strawman
    • Pad flows - defeats analysis, but the overhead is too high.
  • Use a variable but uniform rate to pad at the core (see the sketch after this list).
  • Use k-sets to pad at the edge.
  • Evaluation:
    • Trace-driven simulations with 100,000 BitTorrent users.
    • Models
      • Constant-rate onion routing (v2)
      • Broadcast: P5, DC-nets
      • P2P: Tarzan
      • Aqua
  • Overhead?
    • In the median case, Aqua uses 30% overhead vs 80% for other models.
    • Throttling is low.
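
As a toy illustration of rate-uniform padding (cell size and rate are made up; this is not Aqua's wire format):

```python
# Emit fixed-size cells at a constant rate, substituting dummy cells when no
# real payload is queued, so an observer sees the same traffic pattern
# regardless of actual activity. Aqua additionally varies this uniform rate.
import queue
import time

CELL = 512   # bytes per cell (assumed)
RATE = 100   # cells per second (assumed)

def padded_sender(q: "queue.Queue[bytes]", send) -> None:
    while True:
        try:
            payload = q.get_nowait()                   # real data if available
        except queue.Empty:
            payload = b""                              # otherwise a dummy cell
        send(payload.ljust(CELL, b"\x00")[:CELL])      # every cell the same size
        time.sleep(1 / RATE)                           # ...and the same spacing
```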

Question:

Q: Why use BitTorrent over Tor? A: Tor is slow because of an incentive mismatch.

Q: You assume that messages are large. A: [Didn't understand]

Q: Expect Aqua to have a lot of providers; how does it compare to DCent? A: It uses broadcast, so bandwidth usage is much higher.

 

SplitX: High-Performance Private Analytics

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p315.pdf

  • Data analytics is important and useful.
  • Data exposure is becoming a concern.
  • Goal: fast analytics over user-controlled, distributed data.
  • Previous work - servers aggregate data without seeing individual data.
  • Previous systems
    • scale poorly, because they require public key operations
    • Malicious users can poison the results.
  • Components and assumptions
    • Analyst (potentially malicious)
    • Clients (have data, potentially malicious)
    • Servers are honest-but-curious: they follow the protocol, but may exploit any extra information they see.
  • How to achieve high performance?
    • Don't use public-key operations
    • Split messages and use XOR (see the sketch after this list)
  • Each message is sent to a separate mixer.
  • Clients cannot arbitrarily alter data.
  • We bucket results: one answer per bucket.
  • Pub/Sub for queries.
  • The system is 2-3 orders of magnitude faster than current systems.
  • The implementation is a Chrome plugin + Jetty.
  • 400 clients. More details in the paper.
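
A minimal sketch of the XOR-splitting step, under my assumptions about how the shares are formed:

```python
# A client splits its answer into two random-looking shares and sends each
# to a different mixer; either share alone is uniformly random, but XORing
# them back together recovers the message.
import os

def split(message: bytes) -> tuple[bytes, bytes]:
    share1 = os.urandom(len(message))                        # random pad
    share2 = bytes(a ^ b for a, b in zip(message, share1))   # message XOR pad
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = split(b"\x01")            # e.g. a one-byte counter contribution
assert combine(s1, s2) == b"\x01"
```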

Questions:


Q: What is the long-term business model? A: Data analytics is important, but so is privacy, so maybe the incentives are well aligned.

Q: How much performance burden does each split add? A: XOR is very efficient, so it doesn't cost much.

 

Applications and Resource Allocation

Participatory Networking

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p327.pdf

  • Applications have preferences about the network, but we need to share access to SDN-style resources.
  • We provide an end-user API / "syscalls":
    1. Requests
    2. Hints
    3. Queries
  • Challenges:
    • How to delegate control of the network?
      • Expose shares - Flow groups, principals, privileges.
      • Sub-shares - kind of like capabilities; can be delegated.
      • Oversubscription is allowed, but policy is enforced dynamically.
    • How to resolve conflicts
      • Policy trees: hierarchical flow tables (HFTs).
      • Uses three combination operators to resolve conflicts (sketched after this list).
      • Requires associativity and a default of "don't care".
      • +D: in-node
      • +S: sibling
      • +P: parent-child
      • HFT doesn't care so long as requirements are met.
      • The compiler linearises the tree into OpenFlow rules:
        1. Generate the entire network flow table
        2. Break the table up using network information to distribute it.
    • Adapted to 4 applications: Hadoop, ZooKeeper, [2 others]
    • Hadoop case:
      • 3 sort jobs,
      • 2 priorities.
    • PANE allows users to talk back to the control plane.
    • So far, bandwidth, access control and priority have been explored.
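
A toy version of HFT-style conflict resolution (my simplification; the paper's +D/+S/+P operators are richer and user-specified):

```python
# Policies map a flow to a bandwidth cap in Mb/s, or None for "don't care".
def plus_d(a, b):                    # within one node: take the stricter cap
    if a is None:
        return b
    if b is None:
        return a
    return min(a, b)

def plus_s(a, b):                    # between siblings (assumed same rule)
    return plus_d(a, b)

def plus_p(parent, child):           # parent vs child: child may override
    return child if child is not None else parent

class Node:
    def __init__(self, policy, children=()):
        self.policy, self.children = policy, children

    def evaluate(self, flow):
        mine = self.policy(flow)                       # this node's verdict
        kids = None
        for child in self.children:
            kids = plus_s(kids, child.evaluate(flow))  # combine siblings
        return plus_p(mine, kids)                      # let children refine

# Root caps everything at 100; one sub-share raises Hadoop flows to 500.
root = Node(lambda f: 100, [Node(lambda f: 500 if f == "hadoop" else None)])
print(root.evaluate("hadoop"), root.evaluate("web"))   # -> 500 100
```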

Question

Q: Why do these applications need to be able to control the network? A: We don't trust the VMs?

Q: Isn't it bad to mix data/control planes? Security? A: We have a restricted API.

Q: Other resources? A: Latency? Latency is hard. Maybe we consider switch hops as a proxy for latency.

Q: Scaling? A: Relates to the number of rules.

Q: [Can't hear] SPEAK UP!!! How would we integrate something like Sinbad? A: Good question, extensions needed.


Developing a Predictive Model of Quality of Experience for Internet Video

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p339.pdf

 

  • Quality of experience is tied into the value chain of content providers. 
  • If users are satisfied then they will watch more.
  • The relationship between quality and revenue is not well understood.
  • QoE measures are both subjective and objective; objective metrics are validated by comparison to subjective ones.
  • Subjective measures are not representative of "in-the-wild" settings.
  • Objective measures do not capture buffering, switching, etc.
  • Subjective measures replaced with engagement:
    • The fraction of the video that the user watches.
  • Objective measures replaced with "quality" measures:
    • Join time
    • Buffering
    • Bit-rate of the session
  • Which one to use? We need a model!
  • Challenges:
    • Relationship between metrics and engagement.
      • Non-monotonic relationship between bitrate and engagement.
    • Quality metrics are not independent.
    • Confounding factors
      • Two types of video, live and VOD, with different viewing patterns.
        • Live viewers don't watch to completion, whereas VOD viewers mostly do.
      • Quality metrics - different buffer sizes for live and VOD.
      • Quality - users on wireless are more willing to wait ==> higher engagement.
  • Cast this as a machine learning problem (see the toy sketch after this list).
  • The choice of ML model is important: a naive model reaches only 40% accuracy.
  • But it gets better once we add information about the confounding factors.
  • The final model is a collection of decision trees with 70% accuracy.
  • Evaluated the model against a baseline: 100% better than the baseline, with 20% better engagement.
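
As a toy of what casting this as an ML problem might look like (features, data, and model choice here are invented for illustration; the paper's model and features differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
join_time = rng.exponential(2.0, n)        # seconds until playback starts
buffering = rng.uniform(0, 0.3, n)         # fraction of session spent buffering
bitrate = rng.choice([0.4, 1.0, 2.5], n)   # Mb/s
is_live = rng.integers(0, 2, n)            # confounding factor: live vs VOD

# Toy ground truth: high engagement iff little buffering and a quick join.
engaged = ((buffering < 0.1) & (join_time < 3)).astype(int)

X = np.column_stack([join_time, buffering, bitrate, is_live])
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, engaged)
print(model.score(X, engaged))             # training accuracy of the toy model
```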

 

Questions

Q: Your evaluation has no error bars. How significant is it? A: We did multiple runs.

Q: But there are no error bars. A: um.....

 

ElasticSwitch: Practical Work-Conserving Bandwidth Guarantees for Cloud Computing

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p351.pdf

  • Desire bandwidth guarantees.
    • Use the hose model.
  • Desire a work-conserving allocation: tenants can use spare capacity.
  • Desire the system to be practical:
    • Commodity devices.
    • Topology independent.
    • Scalable.
  • ElasticSwitch runs in the hypervisor.
  • Two components: guarantee partitioning and rate allocation (see the sketch below).
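
A rough sketch of how the two components could fit together, under my own assumptions (function names, numbers and the probing rule are mine, not the paper's):

```python
# Guarantee partitioning: carve a VM-to-VM guarantee out of the hose-model
# guarantee. Rate allocation: never go below the guarantee, but probe above
# it when there is no congestion (work conservation).
def guarantee_partition(hose_guarantee_mbps, demand_by_peer):
    """Split this VM's hose guarantee across peers in proportion to demand."""
    total = sum(demand_by_peer.values()) or 1
    return {peer: hose_guarantee_mbps * d / total
            for peer, d in demand_by_peer.items()}

def next_rate_limit(current, guarantee, congested):
    """Keep the guaranteed floor; probe upward for spare capacity otherwise."""
    if congested:                              # e.g. losses or ECN marks seen
        return max(guarantee, current * 0.8)   # back off, keep the floor
    return current + 10                        # probe for spare capacity
```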

Data Center Networks 2

zUpdate: Updating Data Center Networks with Zero Loss

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p411.pdf

 

  • Network updates are hard: planning is hard, performance hotspots appear, and the process is very laborious.
  • Applications want low-latency updates with no packet drops.
  • Congestion-free updates are hard, and require a multi-step plan.
  • Example: a Clos network with ECMP.
  • Problem: how to transition between two congestion-free configurations?
  • Can't update both switches at the same time.
  • So we need to introduce intermediate stages that are also congestion-free (see the sketch after this list).
  • zUpdate takes a traffic distribution and calculates a congestion-free plan.
  • Uses a two-phase commit.
  • Is the computation time for the plan too long? Do we have enough rules in the switch?
  • We only implement tuning for critical flows.
  • Flows on bottleneck links are the critical flows.
  • Testbed - 22 10G switches, OpenFlow 1.0.
  • "large" simulation with "production" traces.

 

Got Loss? Get zOVN!

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p423.pdf

  • Two types of networks: physical networks, and application-level virtual networks.
  • Why don't we use Ethernet pause frames?
  • Contribution: loss identification and characterisation; introduce a zero-loss virtual network with flow control.
  • 3 virtual machines on a single physical machine with a vswitch.
  • Two flows merging into one. Packets are lost in the vswitch and the end host. Not an artefact of just one VM or vswitch implementation.
  • Architecture:
    • Extended the socket interface with a lossless option.
    • Modified the device driver to add the ability to pause and resume the queue.
    • Inside the vswitch: if the transmission port is full, issue a pause to the receiver queue (see the sketch after this list).
  • Eval: partition aggregate
    • 4 rack servers with 16 physical cores + HyperThreading
    • Intel 10G adapters.
    • 1G control network.
    • Virtual network flow control improves mean flow completion time.
    • Simulation confirms findings from prototype.
    • For optimal performance use both physical and virtual flow control.
  • Reviewers questions:
    • If we have faster networks or CPUs does this solve the problem? No.
    • Lossless links - an order-of-magnitude reduction in completion time for partition/aggregate jobs.
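
A minimal sketch of the pause/resume idea (my own; zOVN's actual queues and thresholds differ):

```python
# Rather than dropping when a vswitch output queue fills, pause the upstream
# source and resume it once the queue drains, making the virtual hop lossless.
from collections import deque

class LosslessPort:
    def __init__(self, limit, source):
        self.q, self.limit, self.source = deque(), limit, source

    def enqueue(self, pkt):
        self.q.append(pkt)                 # never drops
        if len(self.q) >= self.limit:
            self.source.pause()            # backpressure instead of loss

    def dequeue(self):
        pkt = self.q.popleft() if self.q else None
        if len(self.q) < self.limit // 2:
            self.source.resume()           # hysteresis before resuming
        return pkt
```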

 

pFabric: Minimal Near-Optimal Datacenter Transport

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p435.pdf

 

  • DC networks look more like the interconnect inside a switch.
  • In response to a single query, 100s of services are started.
  • What really matters is getting messages through the network. Message latency matters.
  • Need to schedule large flows so that they get high throughput.
  • Need to schedule short flows so that they get short delays.
  • Existing work uses rate control: HULL, DCTCP, D2TCP, D3, PDQ.
  • The problem is that it's constrained and doesn't take into account the size of the flows.
  • Other work calculates the rate of each flow and assigns it, but switches need to cooperate.
  • Prior work vastly improved performance, but it's still complex.
  • pFabric in one slide:
    • Every flow has a single priority number; prio = remaining flow size.
    • Small buffers.
    • Send highest priority first, drop lowest priority.
    • Hosts just send aggressively.
  • DC fabric is basically just a big switch.
  • Need to decide which packets will get through.
  • Need to schedule flows over a giant switch
  • Always subject to ingress/egress constraints.
  • Ideal flow scheduling is NP-Hard
  • Simple greedy algo is within factor of 2 of ideal.
  • Can't implement it. Requires a central scheduler that has global knowledge.
  • Key insight -
    • Decouple flow scheduling from rate control.
    • Implement flow scheduling via local mechanisms
    • End hosts do flow control.
  • Priority scheduling: whenever you can send, send the highest-priority packet first.
  • If the buffer is full, always drop the lowest priority (see the sketch after this list).
  • Buffers in pFabric are very small - never more than 2x BDP per port.
    • 10-30x smaller than a regular switch.
  • The priority mechanism needs to find the min/max of ~600 numbers.
  • A binary tree takes about 10ns; the budget is about 51ns.
  • With a priority scheduling/dropping mechanism, queue build-up doesn't matter.
  • The only task for rate control is to make sure that elephant flows don't suffer congestion collapse.
  • Need minimal rate control to make sure that loss inside the spine doesn't waste bandwidth.
  • Implement a minimal version of TCP:
    • Start at line rate.
    • No retransmission timeout estimation.
    • Reduce window size upon packet loss.
  • Key invariant for ideal scheduling: at any point in time, the highest-priority packet should be available at the switch to send.
  • Eval: ns-2 simulation.
    • 144-port leaf-spine network.
    • Parameters from other empirical work.
  • pFabric is within 20% of the ideal ==> and the ideal is REALLY ideal.
  • At the average and at the 99th percentile, almost no jitter.
  • pFabric, simple yet near optimal.
  • Needs new switches and minor host changes.
  • How much can you get with existing switches?
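
Here's a toy model of a pFabric port as described (my reconstruction from the talk, not the authors' code); the linear scans stand in for the hardware finding the min/max of ~600 numbers:

```python
# Tiny buffer, priority = remaining flow size, drop the lowest priority
# (largest remaining size) when full, always transmit the highest priority.
class PFabricPort:
    def __init__(self, capacity=24):              # ~2x BDP worth of packets
        self.buf = []                             # (remaining_flow_size, pkt)
        self.capacity = capacity

    def enqueue(self, remaining, pkt):
        self.buf.append((remaining, pkt))
        if len(self.buf) > self.capacity:         # full: evict the packet of
            worst = max(self.buf, key=lambda e: e[0])   # the largest flow
            self.buf.remove(worst)

    def dequeue(self):
        if not self.buf:
            return None
        best = min(self.buf, key=lambda e: e[0])  # smallest remaining first
        self.buf.remove(best)
        return best[1]
```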

Question:

Q: This doesn't consider topology; how sensitive are the results to packet spraying? A: Near optimal.

Q: How do you define the priority, especially for streaming? A: In a lot of applications you know the size; for those where you don't, you'd need to separate them into a class of their own.

Q: Your priority mechanism means that you get very different performance at the beginning and the end of a flow. A: Yes, that's hard.

Q: Can you do this as a hybrid; do you need new hardware? A: Run something like this in each slice.

Q: With large frames, you won't be able to hold more than 2 priorities at a time. A: Our eval used only 1500-byte frames; you'd probably need more slack for jumbo frames, but not much.

Q: Do you care about fairness? A: No? We can make it more fair; fairness is something we can trade off.

Q: Do you deal with reordering? A: This is the simple version.

 

Integrating Microsecond Circuit Switching into the Data Center

http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p447.pdf

 

  • Datacenters rely on scale.
  • Underlying network must scale as well.
  • Providing bandwidth involves a lot of fibre optics.
  • Core network is a big source of cost, lots of transceivers.
  • Probably need about 5 core transceivers per host.
  • Effective switch radix is probably going up with speed and redundancy
  • A few ideas look at replacing the core with circuit switches.
  • Fundamental connection between speed of circuit switches and how quickly the traffic can change.
  • Need faster circuit switching.
  • Mordia - circuit switching in the core.
  • Contribution:
    • 100us vs 100ms
    • 25us vs 25ms
  • Traffic matrix: need to connect inputs to outputs. While the circuit switch is reconfiguring, nothing can be sent ("night time"); while it's running, data flows ("day time").
  • Pipeline: observe, compute, reconfigure.
  • Assuming we can make configuration and computation fast.
  • Goal: serve the demand with a collection of traffic matrices.
  • Use a set of permutation matrices that sum to the final traffic matrix (see the sketch after this list).
  • Fast OCS requires a fast control plane.
  • Traffic matrix scheduling - Decompose it into a future schedule.
  • Prototype:
  • Previous tech uses 3D MEMS technology: quite high port counts, but slow, ~25ms.
  • We use 2D MEMS; mirrors are either on or off. Fast (~2us), but low port count (2-8 ports).
  • Key challenge is how to build a scalable network.
  • Build a physical ring, with a different wavelength for each port. Each port connects to a ToR switch.
  • Need to modify the ToR: 1 queue per destination, a DMA schedule, and up/down events.
  • We didn't have a ToR, so we used Linux hosts with a kernel module. Also needed an out-of-band control plane.
  • Eval.
  • How fast can we run this prototype? There are lots of layers to configure; the mean reconfiguration time is about 11us.
  • Needed a guard time to get an idea of our circuit lengths - about 20ms.
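
A toy of traffic-matrix scheduling in the Birkhoff-von Neumann spirit (my illustration, not Mordia's scheduler):

```python
# Repeatedly peel a max-weight permutation ("day") off the demand matrix,
# running each configuration for as long as its smallest positive demand lasts.
import numpy as np
from scipy.optimize import linear_sum_assignment

def schedule(demand, max_days=10):
    remaining = demand.astype(float).copy()
    days = []
    for _ in range(max_days):
        rows, cols = linear_sum_assignment(-remaining)  # max-weight matching
        served = remaining[rows, cols]
        if served.sum() <= 0:
            break
        duration = served[served > 0].min()             # shortest useful day
        perm = np.zeros_like(remaining)
        perm[rows, cols] = 1
        days.append((duration, cols))                   # output port per input
        remaining = np.maximum(remaining - duration * perm, 0)
    return days

demand = np.array([[0, 3, 1],
                   [2, 0, 2],
                   [1, 1, 0]])
for duration, ports in schedule(demand):
    print(duration, ports)
```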

 

Closing Remarks

  • Student Research Competition (SRC):
    • 1st round: poster/demo at a SIG conference (SIGCOMM)
    • 2nd round: presentation (this morning at 7:30)
    • 6 winners advance to the grand final. Me!
  • Keshav
    • Nothing happens on its own; everything happens because people make it happen.
    • 100,000s of hours were put in to make this happen.
    • Remarkable show. Look forward to seeing everyone in Chicago!