syslog
10 Dec 2013

Live-blog from CoNEXT 2013 – day 1

Kicking off the conference with some explanations about how the TPC selected the papers. The program looks great and I will be covering some of the sessions here over the next three days.

There is an official live blog at layer9: http://www.layer9.org/ so I will be covering the sessions and papers that are closer to my field and those that I find particularly interesting.

Best full-paper award goes to "Enhancing video accessibility and availability with information-bound references"

Our "Staying Online While Mobile: The Hidden Costs" was a nominee for the best short paper award, but alas! The award goes to "Is There a Case for Mobile Phone Content Pre-staging".

All the papers can be found online at the conference website: http://conferences.sigcomm.org/co-next/2013/program.html

Session 1: Main Street, Wall Street

 

Crowd-assisted Search for Price Discrimination in E-Commerce: First results

Presented by Nikolaos Laoutaris, Telefonica Research

Previous work on price discrimination was published at HotNets. It received a lot of publicity, and questions from regulators that the original approach did not scale to answer, so they had to build a solution that would.

At least 20 retailers show price variations that are frequent and wide. Tracking the prices of 100 products, the variation is quite large and price discrimination is seen across all kinds of retailers, both big (hotels.com, amazon.com) and small ones like tuscanyleather.it. Most price discrimination is on products around $100, but there is some on products in the price range of thousands, for example when looking at different countries, even if you carefully discount all taxation differences.

Comparison of countries is also interesting: Finland in Europe is more expensive than others, Spain is similar to Germany and Brazil is significantly cheaper.

Live demo didn't work, but you can use a Chrome extension to see if you are experiencing price discrimination: http://pdexperiment.cba.upc.edu/

A lot of price discrimination is based on location alone; an open question is how important personal information is.

Q: What's the big surprise here? I'd be surprised if it didn't happen
A: No surprise, just a quantification. A while ago there were claims that e-commerce would lead to that, but nobody had done measurements.

Q: What's the next step to combat this? Provide the consumer the best deal.
A: It is complicated - many of the retailers will not ship the product if IP geolocation does not match the shipping location
Q: What about a gift? Retailers need to allow for that
A: There are various cases, works for some retailers, does not work for others

 

A first look at IPv4 transfer markets

Presented by Ioana Livadariu, Simula Research Laboratory

 

The IPv4 transfer market has been legitimized, but it is not clear whether such a market actually exists. The questions are whether it exists, whether it inhibits adoption of IPv6, and whether it increases fragmentation of routing tables.

Using lists of published transfers from ARIN, APNIC, RIPE, etc. Some include non-market transfers (M&A), but there is an increase overall. 75% of published transfers are from legacy allocations, so there is a healthy redistribution of the address space.

According to the Microsoft-Nortel deal (2011), an IPv4 address is worth $11.25 - total value is $130 million...
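
Back-of-envelope on those two figures (my own arithmetic, just to put them in perspective):

    $130,000,000 / $11.25 per address ≈ 11.6 million addresses
    11.6 million / 16.8 million (one /8) ≈ 0.7 of a /8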

Also inferring IP transfers from BGP routing tables. Currently exploring other approaches including DNS changes.

Q: (session chair) Do you see the market increasing?
A: Yes, pointing back to a figure in the beginning of the talk (but one could see a few different trends there...)

 

Session 2: Not Hardware Networking

 

Not quite my cup of tea, so I'd point you to the layer9 blog...

Optimizing the "One Big Switch" abstraction in Software-Defined Networks
Nanxi Kang (Princeton University), Zhenming Liu (Princeton University), Jennifer Rexford (Princeton University), David Walker (Princeton University)

Q: Why are you trying to minimize the number of rules in a switch? Does it matter if the number of rules in each switch is close to its capacity?
A: We are considering cases where the network is dense and rule space is constrained.

Q: You are using linear programming for rule allocation, how does this LP scale?

A: We use the observation that our rule-space allocation depends primarily on the total amount of space allocated to a path, rather than the portion of that space allocated to each switch.

An SDN based Flow Counting Framework for Anomaly Detection
Ying Zhang (Ericsson Research)

Q: There has been some prior work on flow counting using SDN, how relevant is that? and how does SDN help?
A: The prior work is relevant; with SDN it's much easier to implement these functions, e.g., selecting what flows to monitor. However, there is a lack of study of how we can do active measurements.

Virtualizing the Access Network via Open APIs
Vijay Sivaraman (University of New South Wales), Tim Moors (University of New South Wales), Hassan Habibi Gharakheili (University of New South Wales), Dennis Ong (University of New South Wales), John Matthews (CSIRO ICT Centre), Craig Russell (CSIRO ICT Centre)

The biggest problem seems to be designing the right APIs. Essentially an opinion paper.

Q: One of the main problems that we have is incentives for ISPs and content providers (e.g. if they are the same entity). E.g. Comcast. How do you enable/enforce them to be friendly to others?

A: That solves the problem then. But users will be retrieving content from others, so some kind of fairness will be needed.

Distributed Resource Control using Shadowed Subgraphs
Greg Lauer (Raytheon BBN Technologies), Ryan Irwin (Raytheon BBN Technologies), Chris Kappler (Raytheon BBN Technologies), Itaru Nishioka (NEC America)

Session 3: The Roads Taken, in a Data Center

 

Per-packet Load-balanced, Low-Latency Routing for Clos-based Data Center Networks
Jiaxin Cao (Microsoft Research Asia), Rui Xia (Microsoft Research Asia), Pengkun Yang (Microsoft Research Asia), Chuanxiong Guo (Microsoft Research Asia), Guohan Lu (Microsoft Research Asia), Lihua Yuan (Microsoft), Yixin Zheng (Microsoft Research Asia), Haitao Wu (Microsoft Research Asia), Yongqiang Xiong (Microsoft Research Asia), Dave Maltz (Microsoft)

Presented by Chuanxiong Guo

Network latency has a long tail, but busy servers do not contribute to the long tail - where does it come from then? Seems like congested switch ports that can use several MB for packet buffering (a 1 MB buffer introduces 1 ms latency on a 10G link - maximum, I assume).
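
Quick sanity check on that number (my arithmetic, not from the talk): draining a full 1 MB buffer over a 10 Gb/s link takes

    8 × 10^6 bits / 10^10 bits/s = 0.8 ms

so roughly 1 ms of added latency per congested hop as an upper bound.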

DRB (Digit-Reversal Bouncing) achieves 100% bandwidth utilisation by per-packet routing... Not sure I understand why somebody would aim for 100% bandwidth utilisation - what about traffic demand fluctuations? The goal is to achieve almost zero queueing delay.

They model queueing latency - first-hop queue length vs. traffic load with switch port "number 25"? Supposedly DRB and RRB (Round-Robin Bouncing) achieve bounded queue length as the load approaches 100%.
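
If I understood the digit-reversal idea correctly, it is essentially a permutation of the round-robin order so that consecutive packets of a flow are bounced off intermediate switches that are far apart. A toy sketch (my own illustration of the general trick, not the paper's exact algorithm):

    # Toy sketch of digit-reversal ordering: instead of spraying packets over
    # the intermediate "bouncing" switches in plain round-robin order, visit
    # them in digit-reversed order so consecutive packets of a flow land on
    # switches that are far apart in the Clos topology.

    def digit_reverse(i, radix, digits):
        """Reverse the base-`radix` digits of i, padded to `digits` digits."""
        rev = 0
        for _ in range(digits):
            rev = rev * radix + i % radix
            i //= radix
        return rev

    # Example: 16 intermediate switches addressed as two base-4 digits.
    order = [digit_reverse(i, 4, 2) for i in range(16)]
    print(order)   # [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]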

Got lost in the detail...

Q: My biggest concern with this type of work is that you make the strong assumption of completely symmetric links and no temporal congestion, because when there is some problem like this, the whole work falls to pieces. E.g. when one link falls from 1 Gb/s to 100 Mb/s, the whole bandwidth in the datacenter falls 10x.

A: Preferring complete failure to a slow link, because you can deal with failure as just a link that does not exist.

Q: So if you have temporary congestion on one link, can it be handled?

A: Not covered by DRB, but they handle this with congestion control mechanisms such as ECN.

Q: When you have multiple paths, e.g. through a spine switch, how does DRB work in that case?

A: not covered by DRB...

 

Scaling IP Multicast on Datacenter Topologies
Xiaozhou Li (Princeton University), Michael J. Freedman (Princeton University)

Presented by Xiaozhou Li.

Naive IP multicast problems: reliability, stability and scalability. Their work aims to scale IP multicast to datacenters.

Challenges with scaling - switches maintain info about all groups and periodically send queries... communication overheads. In the data plane, multicast addresses cannot be aggregated by IP prefix and the size of the multicast forwarding table is limited. Prior work scaled up the # of groups per switch - insufficient for datacenter nets?

Multi-rooted tree topologies simplify multicast tree construction.

Three techniques:

  1. Partition and distribute the multicast address space
  2. Enable local multicast address aggregation
  3. Handle network failures with fast local rerouting

Partition the space according to bitmasks and distribute the address space to multiple core and aggregation routers. When a new group is created, it should be assigned to the least loaded partition.
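
A rough sketch of the partitioning idea as I understood it (the constants and structure here are my own illustration, not the paper's scheme):

    # The multicast address space is split by a bitmask into partitions, each
    # owned by one core/aggregation switch; a newly created group is placed
    # in the currently least-loaded partition.

    PARTITION_BITS = 3                       # assume 8 partitions
    NUM_PARTITIONS = 1 << PARTITION_BITS
    GROUP_BITS = 28 - PARTITION_BITS         # 224.0.0.0/4 leaves 28 variable bits

    loads = [0] * NUM_PARTITIONS             # groups currently in each partition
    next_id = [0] * NUM_PARTITIONS           # next free group id per partition

    def allocate_group():
        p = loads.index(min(loads))          # least-loaded partition wins
        loads[p] += 1
        gid = next_id[p]
        next_id[p] += 1
        # multicast address = 1110 | partition id | per-partition group id
        return (0xE << 28) | (p << GROUP_BITS) | gid

    addr = allocate_group()
    print(".".join(str((addr >> s) & 0xFF) for s in (24, 16, 8, 0)))  # e.g. 224.0.0.0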

For aggregation, use local address translation and aggregation at a bottleneck switch, so that switches 'lower' in the tree rewrite addresses back into the original values. Group aggregation is NP-hard in general: given a fixed # of aggregated groups, minimize network overhead. Local aggregation at bottlenecks solves that as a sub-problem...

Manage multicast using SDN!

Evaluation based on simulations, not completely clear about the choice of parameters. But given the parameters it works well and improves the maximum number of groups in the datacenter network - not a lot for random VM placement and more than 3x for nearby placement.

Q: You said that Bloom-filter-based approaches and other related work do not scale, but you don't have any simulations to compare against them.

A: (basically wasn't really sure how it compares against Bloom-filter-based approaches)

 

Explicit Multipath Congestion Control for Data Center Networks
Yu Cao (Tsinghua University), Mingwei Xu (Tsinghua University), Xiaoming Fu (University of Goettingen), Enhuan Dong (Tsinghua University)

MPTCP + RED + ECN for datacenter congestion control.

Q: I wonder if you had a chance to compare your results with pFabric?

A: No. I know pFabric is a SIGCOMM paper from last year. We didn't compare against it.

Q: What is the motivation? Should we fix DCTCP? find more variants? Find a new way of doing congestion control, outside of TCP?

A: DCTCP is single-path congestion control. I really think that MPTCP can use the multiple paths in a datacenter.

Q: So forget DCTCP; in that case I suggest comparing against pFabric, the state of the art.

 

Q: Could you go back to your graph showing utilization of the network (goodput)? You use a small number of subflows - why this particular number of subflows? Part of the explanation of the numbers you are getting is the small number of subflows.

A: (could not answer)

And another question not answered...

Aspen Trees: Balancing Data Center Fault Tolerance, Scalability and Cost
Meg Walraed-Sullivan (Microsoft Research), Amin Vahdat (Google, UCSD), Keith Marzullo (NSF, UCSD)

Datacenters are often loosely based on the fat tree; whether or not it is a good topology, it has the important benefit of high path multiplicity. Link failures still cause big problems for reliability and fast re-convergence.

Options for improved fault tolerance:

  1. Buy bigger switches - two links to each subtree at the cost of more expensive switches
  2. Build a bigger network - deeper network, longer paths, more switches
  3. Remove part of a network - (take a look at the slides for the exact mechanism) scalability problems possibly

These examples double the number of links at the top level, but extra links could be put at any/all levels (also doubled, tripled, etc.). Resulting trees: aspen trees! A multi-rooted tree with extra links at one or more levels (e.g. VL2).

If we double (triple) the number of links at a level, we halve the number of hosts we can support. There is an algorithm to generate trees based on tradeoff constraints, and ANP, a notification protocol to leverage the redundant links.
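
One way to read that host-count tradeoff (my own back-of-envelope toy model, not the paper's construction):

    # Adding c-fold link redundancy at a level costs roughly a factor-of-c
    # reduction in the hosts the same switches can support, and doing it at
    # several levels multiplies the cost.

    def host_fraction(redundancy_per_level):
        """Fraction of the baseline fat-tree's hosts that remain supportable.
        redundancy_per_level[i] == 1 means no extra links at level i."""
        frac = 1.0
        for c in redundancy_per_level:
            frac /= c
        return frac

    print(host_fraction([2, 1, 1, 1]))   # double links at the top level only -> 0.5
    print(host_fraction([2, 2, 2, 2]))   # double at every level of a 4-level tree -> 0.0625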

Where can we put redundant links? Top/bottom/doubling links? If it is only possible to change one level, change the top! Looks at different "Fault Tolerance Vectors" and how average convergence time and the number of hosts removed change with different fault tolerance vectors. Examples given for 4-level 4-port and 5-level 16-port trees.

Supposedly the Aspen routing protocol converges an order of magnitude faster than the reference OSPF implementation!

All about methodology: finding ways to add redundancy, formalising the associated costs, determining the range of possible topologies and finally understanding the tradeoffs that can be made.

Q: Loads of related work. What's new?

A: a slightly different point - adding a bit of latency. A lot similar to HFC topologies, some of them subsets of aspen trees.

 

Session 4: Moving Packets

 

Scalable, High Performance Ethernet Forwarding with CuckooSwitch
Dong Zhou (Carnegie Mellon University), Bin Fan (Carnegie Mellon University), Hyeontaek Lim (Carnegie Mellon University), David G. Andersen (Carnegie Mellon University), Michael Kaminsky (Intel Labs)

Pushing the limit of forwarding table scalability...

Based on cuckoo hashing [Pagh 01] and its improvement, optimistic concurrent cuckoo hashing [NSDI '13]. The new hash table can achieve much higher performance. Based on thread-safe read-write instructions (x86-optimized memory access). Improves throughput with batched hash table lookups and smart hash table prefetching.
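
For reference, a minimal sketch of plain two-table cuckoo hashing - the basic [Pagh 01] idea the talk builds on; CuckooSwitch adds concurrency, batching and prefetching on top of this (my illustration, not their code):

    class CuckooHash:
        def __init__(self, size=1024, max_kicks=500):
            self.size, self.max_kicks = size, max_kicks
            self.t1 = [None] * size
            self.t2 = [None] * size

        def _h1(self, key): return hash(("a", key)) % self.size
        def _h2(self, key): return hash(("b", key)) % self.size

        def lookup(self, key):
            # At most two locations are ever inspected.
            e1 = self.t1[self._h1(key)]
            if e1 is not None and e1[0] == key: return e1[1]
            e2 = self.t2[self._h2(key)]
            if e2 is not None and e2[0] == key: return e2[1]
            return None

        def insert(self, key, value):
            entry = (key, value)
            idx = self._h1(entry[0])
            for _ in range(self.max_kicks):
                if self.t1[idx] is None:
                    self.t1[idx] = entry
                    return True
                self.t1[idx], entry = entry, self.t1[idx]  # evict the occupant
                idx = self._h2(entry[0])
                if self.t2[idx] is None:
                    self.t2[idx] = entry
                    return True
                self.t2[idx], entry = entry, self.t2[idx]
                idx = self._h1(entry[0])
            return False  # would trigger a rehash/resize in a real table

    fib = CuckooHash()
    fib.insert("00:11:22:33:44:55", 3)       # MAC address -> output port
    print(fib.lookup("00:11:22:33:44:55"))   # 3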

Q: Future work?

A: One problem is memory access latency when using commodity hardware. One question is how to optimise it or design algorithms for it.

Q: Other papers try to use GPU, how would that compare?

A: GPU only helps for compute-intensive applications because PCI bandwidth is much smaller than memory bw.

Scheduling Packets over Multiple Interfaces while Respecting User Preferences
Kok-Kiong Yap (Google/Stanford University), Te-Yuan Huang (Stanford University), Yiannis Yiakoumis (Stanford University), Sandeep Chinchali (Stanford University), Sachin Katti (Stanford University), Nick McKeown (Stanford University)

A Deficit Round Robin (DRR) scheduler is often used to ensure fairness. Given users' different preferences regarding applications, weights can be assigned to them... but it does not work for multiple interfaces! Proposes miDRR (multiple-interface DRR): each interface does DRR separately and "steals" app traffic.
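
For context, textbook deficit round robin over a single interface - just the building block; miDRR's per-interface coordination and "stealing" of application traffic are not shown (my illustration, not the paper's code):

    from collections import deque

    def drr(queues, quanta, send):
        """queues: one deque of packet sizes (bytes) per app;
        quanta: per-app quantum in bytes (acts as the weight);
        send: callback that 'transmits' a packet of the given size."""
        deficits = [0] * len(queues)
        while any(queues):
            for i, q in enumerate(queues):
                if not q:
                    continue
                deficits[i] += quanta[i]
                while q and q[0] <= deficits[i]:   # send while credit remains
                    pkt = q.popleft()
                    deficits[i] -= pkt
                    send(i, pkt)
                if not q:
                    deficits[i] = 0                # empty queues keep no credit

    # Example: two apps, the second given twice the weight of the first.
    queues = [deque([1500] * 4), deque([1500] * 4)]
    drr(queues, quanta=[1500, 3000],
        send=lambda app, size: print("app", app, "sent", size, "bytes"))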

Q: When can you use two interfaces at the same time? Multipath propagation

A: Implementation details, proxy in between for example

Q: The policy would not allow multiple interfaces at the same time, e.g. multiple Gs

A: probably a policy question and I don't know how that will play out

Q: MPTCP?

A: Can't deal with preferences

Q: uplink vs downlink?

A: Details in the paper about how HTTP 1.1 can be used to do that

On Limitations of Network Acceleration
Animesh Trivedi (IBM Research, Zurich), Bernard Metzler (IBM Research, Zurich), Patrick Stuedi (IBM Research, Zurich), Thomas R. Gross (ETH, Zurich)

For network acceleration, move processing out of the CPU into the NIC. But network acceleration can reduce performance for apps! Investigating that has led to interesting findings:

  • Different CPUs implement different cache coherence protocols, e.g. access intention mismatches (interpreting a DMA read as exclusive access)
  • Architectural overheads become the same order of magnitude as the network - sharing of on-chip and off-chip resources.
  • Application access patterns influence architectural overheads - apps are exposed to platform idiosyncrasies

So apps need to know the architectural pitfalls and drive overhead management. Large caches and pre-fetchers are ineffective; instead, closer NIC-CPU integration is needed.

Network accelerator usage is not transparent and solutions are highly system dependent.

No questions...

Now or Later - Delaying Data Transfer in Time-Critical Aerial Communication
Mahdi Asadpour (ETH Zurich), Domenico Giustiniano (ETH Zurich), Karin Anna Hummel (ETH Zurich), Simon Heimlicher (ETH Zurich), Simon Egli (ETH Zurich)

DTNs for UAVs.

Three-way tradeoff: optimal path, minimised delay and ASAP delivery (in case of failure, e.g. a crash). A moving UAV has lower throughput than a stationary one, even a delayed one.

Implementation of a system using small airplanes and quadcopters. Analysis of throughput as a function of distance. Even at the same distance, a hovering strategy outperforms a moving one in terms of throughput. Throughput drops rapidly as speed increases. A study of transmission strategies based on velocity, distance and failure rate. Basically, it is better to move closer at higher speed and start the transmission sooner.

Q: Why do you not consider cellular approaches at all?

A: It is part of a bigger project, where we consider situations where cellular operation may not be available

 

That's it for today! Will be back tomorrow!
