
Liveblogging IMC 2013 – Day 2

As far as I'm concerned, starting technical sessions before 9AM should be made illegal, but hey, at least we have the best paper presented in the first session! Today we have sessions about money and advertising, weather (in the clouds), routing, phones and the web.

Money and Madison Avenue

A Fistful of Bitcoins: Characterizing Payments Among Men with No Names (review) (long)

S. Meiklejohn, M. Pomarole, G. Jordan, K. Levchenko (UC San Diego), D. McCoy (George Mason University), G. Voelker and S. Savage (UC San Diego)

De-anonymizing flows of bitcoins (rather than mappings between pseudonyms and individuals).

Each transaction must reference a previous transaction (which can only be referenced once) - the analysis will exploit that for mapping. Importantly, each received transaction output should be spent in full (although there is a 'change address' mechanism to collect the excess bitcoins). Users can use arbitrarily many public keys (pseudonyms) with no real restrictions - as a result, there are currently some 12 million public keys in the system. So, need to cluster the IDs.

Heuristic to cluster pseudonyms - if two (or more) addresses are inputs to the same transaction, they are controlled by the same user (assume that users do not share public keys). The result - 5.5 million distinct clusters. Given the change address transactions, can further cluster pseudonyms: any change address between pseudonyms implies the same user behind both. Identifying change address transactions is hard, but doable :)
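
To make the first heuristic concrete, here is a minimal union-find sketch (my own illustration, not the authors' code):

```python
# Minimal sketch of the multi-input heuristic (my illustration, not the
# authors' code): addresses that appear as inputs to the same transaction are
# assumed to belong to the same user, so union-find merges them into clusters.

def cluster_addresses(transactions):
    """transactions: iterable of input-address lists, one list per transaction."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for inputs in transactions:
        for addr in inputs:
            find(addr)                # register every address
        for addr in inputs[1:]:
            union(inputs[0], addr)    # all inputs of one tx -> same cluster

    clusters = {}
    for addr in parent:
        clusters.setdefault(find(addr), set()).add(addr)
    return list(clusters.values())

# Two transactions sharing input address "B" collapse into a single cluster.
print(cluster_addresses([["A", "B"], ["B", "C"], ["D"]]))
# -> [{'A', 'B', 'C'}, {'D'}]  (order may vary)
```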

Funfacts:

  • A lot of gambling clearly visible in the clustering - a "bicycle wheel" with many users at the edges and a central entity
  • Strongly connected component with most of the named users - mostly because of the bitcoin exchanges.
  • The "silk road" cluster also visible :)

To track bitcoins, can see when they "meaningfully" cross cluster boundaries - following "peeling chains" of bitcoins.

Tracking stolen bitcoins! Most end up in exchanges after a few transactions. Problem for thieves: exchanges know ID of individuals...

Identifying drug purchases on Silk Road is quite easy - made an article in the news. (Wonder how they got the ethics committee approval...)

 

Follow the Money: Understanding Economics of Online Aggregation and Advertising (review) (short) - best paper

P. Gill (The Citizen Lab/Stony Brook University), V. Erramilli (Telefonica Research), A. Chaintreau (Columbia University), B. Krishnamurthy (AT&T Research), K. Papagiannaki, and P. Rodriguez (Telefonica Research)

Focus on display advertising - $15B market in 2012

Little is known about the value of the information advertising aggregators collect. Many tricky issues, in particular privacy - studying the economics can help quantify what that information (and protecting it) is actually worth.

Interviews with advertising professionals led to a revenue model based on CPM (cost per mille): CPM(u, p, a) = run-of-network rate × traffic quality multiplier × implicit intent(u)

(where u is the user, p the publisher and a the aggregator)

Implicit intent is based on the information that ad aggregators collect about the user. What is intent? It is derived from the user's browsing history as collected during their visits to websites. Each aggregator only sees a subset of intents, as it is present on only a subset of all websites.
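
A toy version of the model, with made-up numbers just to show the mechanics (not the paper's fitted values):

```python
# Toy version of the talk's CPM model (made-up numbers, not the paper's fitted
# values): revenue per impression is CPM/1000, and CPM is the run-of-network
# base rate scaled by the publisher's traffic quality and the implicit intent
# the aggregator has inferred for this user.

def cpm(run_of_network, traffic_quality, implicit_intent):
    return run_of_network * traffic_quality * implicit_intent

def revenue_per_impression(run_of_network, traffic_quality, implicit_intent):
    return cpm(run_of_network, traffic_quality, implicit_intent) / 1000.0

# A user the aggregator can profile (intent multiplier ~4) vs. the same user
# with tracking blocked (intent falls back to a baseline of 1).
tracked = revenue_per_impression(1.0, 2.0, implicit_intent=4.0)
blocked = revenue_per_impression(1.0, 2.0, implicit_intent=1.0)
print(f"tracked ${tracked:.4f} vs blocked ${blocked:.4f} per impression "
      f"(factor of {tracked / blocked:.1f})")
```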

Data: network traces of 3M users and 1.5B sessions, combined with publicly available information from Alexa.

Results:

  • Unsurprisingly - most visited categories have higher revenues
  • 10% of aggregators are responsible for 90% of revenue
  • Active users contribute more - the most active 20% account for 80% of revenue

What happens when users enable one-sided privacy protection (e.g. Do Not Track, blocking cookies)? Aggregators cannot infer intent! Mean intent values in the normal case are around 4.2 (HTTP) and 3.8 (mHTTP); with protection enabled, revenue can drop by a factor of 3.8. If only the top 5% of users block, revenues drop by 30%!

Q: loads of inference... how did you infer various numbers, e.g. intents?

A: combination of datasets and advertising knowledge

Q: dynamics between aggregators? Companies changing, getting bought, etc.

A: collapsed all aggregators into one big entity for this analysis

 

Understanding the Effectiveness of Video Ads: A Measurement Study (review) (long)

S.S. Krishnan (Akamai) and R.K. Sitaraman (UMass Amherst/Akamai)

The mega question: are videos sustainable, let alone profitable? Providers try video ads, pay-per-view and subscriptions, but serving video is costly. How can we measure ad effectiveness? Simple metric: do people watch ad videos to completion?

Goal: understanding factors that could impact ad completion. Considering ad position, length, video in which it is inserted, time of day, watching habits, viewer's location/device...

Data: Akamai's media analytics platform, using a plugin that runs inside the media player and is globally deployed for millions of users. 65 million users, 33 video providers and 3000 publishers, 367 million video and 257 million ad views!!

Viewers are more likely to complete an ad inserted in the middle of a video when they are presumably more engaged with the content than when the ad is inserted in the beginning or the end of a video.

Trying to answer how much the video itself matters for ad completion by looking at long-form and short-form videos. Viewers are more likely to complete an ad inserted into a longer video, such as a movie, likely because they see more of the video content itself.

Folklore asserts that viewers are more likely to complete ads during relaxation time (supposedly they have more time). The data does not show a lot of difference though.

How much does loyalty to a website (how often viewers come back) matter for ad completion? Turns out, viewers who repeatedly come back to a website are more likely to watch ads to completion, presumably because they deem the content on the website more valuable.

How do people react to pre-roll ads? People abandon much more often when the video takes a long time to load than when a pre-roll ad of similar length plays: 45% abandoned a slow-loading video, but only 13% abandoned during the ad. The explanation is that the ad's waiting time is of known duration, and viewers accept ads as an implicit form of payment for the content.

A word about techniques: can't really run controlled experiments, and the data gives plenty of chances to jump to poor conclusions (e.g. when comparing pre-roll, mid-roll and post-roll ad completion rates). Instead, using quasi-experiments (Krishnan and Sitaraman, IMC 2012). Idea: isolate the impact of the treatment and control for confounding factors by matching.
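
Roughly what a matched quasi-experiment does, as a simplified sketch (definitely not the authors' pipeline): pair treated and untreated views that agree on the observable confounders, then average the within-stratum outcome differences.

```python
# Simplified sketch of a matched quasi-experiment (not the authors' code):
# bucket views by the confounders we can observe, and compare the outcome of
# "treated" vs. "control" views only within the same bucket.

from collections import defaultdict

def quasi_experiment(views, treatment_key, outcome_key, confounders):
    """views: list of dicts, e.g. {"mid_roll": True, "completed": 1, ...}."""
    buckets = defaultdict(lambda: {True: [], False: []})
    for v in views:
        key = tuple(v[c] for c in confounders)
        buckets[key][bool(v[treatment_key])].append(v[outcome_key])

    diffs = []
    for groups in buckets.values():
        treated, control = groups[True], groups[False]
        if treated and control:  # only strata where both sides are present
            diffs.append(sum(treated) / len(treated) - sum(control) / len(control))
    return sum(diffs) / len(diffs) if diffs else None

views = [
    {"mid_roll": True,  "completed": 1, "device": "desktop", "geo": "US"},
    {"mid_roll": False, "completed": 0, "device": "desktop", "geo": "US"},
    {"mid_roll": True,  "completed": 1, "device": "mobile",  "geo": "US"},
    {"mid_roll": False, "completed": 1, "device": "mobile",  "geo": "US"},
]
print(quasi_experiment(views, "mid_roll", "completed", ["device", "geo"]))  # 0.5
```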

Q: you can't properly infer post-roll watching - the user might have got up and left.

A: no way of knowing that in general - assume it is the case if the ad is played to completion.

Q: ranking of ads themselves?

A: noticed a lot of variation between ads, but haven't done in-depth analysis yet

 

Follow the Green: Growth and Dynamics in Twitter Follower Markets (review) (long)

G. Stringhini, G. Wang (UC Santa Barbara), M. Egele (CMU), C. Kruegel, G. Vigna, H. Zheng and B.Y. Zhao (UC Santa Barbara)

 

Weather in the Clouds

Next Stop, the Cloud: Understanding Modern Web Service Deployment in EC2 and Azure (review) (long)

K. He, A. Fisher, L. Wang, A. Gember, A. Akella and T. Ristenpart (UW Madison)

Questions:

  1. Who's using public IaaS clouds? (traffic patterns, network design/traffic engineering)
  2. How are these services using the cloud? (impact of failures, ways to improve availability)
  3. How can quality of experience be improved?

Dataset: a university packet capture - deep, but possibly atypical. Use an IDS to extract information about connections. Second dataset: DNS records for Alexa subdomains, mapped to the IP addresses of cloud services (using the providers' public IP address ranges).

94.2% of the domains that use the cloud (40k out of the 1M Alexa domains) use Amazon. From the university's perspective, 80% of traffic goes to EC2 and the rest to Azure. The biggest traffic contributor is Dropbox (68%!), but e.g. Netflix accounts for less than 2% - they deliver video content from CDNs rather than from the cloud.

Load balancers are identified by CNAME records (this also catches PaaS such as Heroku - just a different CNAME pattern).
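
A rough sketch of the subdomain-to-cloud mapping (my illustration, assuming a recent dnspython; the prefixes and CNAME patterns below are only examples - the real EC2 ranges are published at https://ip-ranges.amazonaws.com/ip-ranges.json):

```python
# Rough sketch of the mapping idea (my illustration, not the paper's tooling):
# resolve a domain, then decide whether it looks cloud-hosted by (a) matching
# the CNAME against load-balancer / PaaS patterns and (b) checking whether its
# A records fall inside the provider's published IP ranges.

import ipaddress
import dns.resolver

EC2_PREFIXES = [ipaddress.ip_network(p) for p in ("54.224.0.0/12", "52.0.0.0/11")]
LB_PATTERNS = (".elb.amazonaws.com.", ".herokuapp.com.", ".cloudapp.net.")

def classify(domain):
    """Does this domain look cloud-hosted, judging by CNAMEs and A records?"""
    hit = {"cname_lb": False, "cloud_ip": False}
    try:
        for rr in dns.resolver.resolve(domain, "CNAME"):
            if str(rr.target).endswith(LB_PATTERNS):
                hit["cname_lb"] = True
    except dns.resolver.NoAnswer:
        pass
    for rr in dns.resolver.resolve(domain, "A"):
        addr = ipaddress.ip_address(rr.address)
        if any(addr in net for net in EC2_PREFIXES):
            hit["cloud_ip"] = True
    return hit

print(classify("www.example.com"))
```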

Loads of usage statistics by type of frontend, location (regions/availability zones), number of regions used... Check the paper. Summary of the seemingly important stats: most services on EC2 use only one region, but at least multiple availability zones - region failures would have high impact, but availability-zone failures could be dealt with ok-ish.

Q: INFOCOM last year had a very similar paper; the main difference is that its data came from home users (50k), and their numbers are vastly different.

A: ack

Q: reasons for not using multiple regions, e.g. financial?

A: yes, primarily - need to pay for traffic across regions, etc. Also more consistency issues across regions

 

Choreo: Network-Aware Task Placement for Cloud Applications (review) (long)

K. LaCurts, S. Deng, A. Goyal and H. Balakrishnan (MIT CSAIL)

Problem: each machine has cpu and memory capabilities and each task has those requirements - make an assignment.

Choreo measures network, profiles applications and places tasks. The goal is to minimize completion time of network-intensive cloud applications.

The motivation is large variation in pairwise VM throughputs.

In the paper they assume that applications already know their network demands. A more complicated analysis is possible (work under submission), but the claim is that the demands are highly predictable.

Use packet trains (previous work from 1986, 1993) to measure throughput - takes milliseconds. Compared against netperf, the measurements are quite close - in both cases error is within 10%. In practice, it takes ~3 minutes for a 10 machine topology. Network stability? According to their study, only 5% of paths see more than 6% error for all timescales (EC2). Comparing results between 2012 and 2013, looks like EC2 infrastructure has changed and hypothesize that aggregate traffic from a tenant is rate-limited to make bw less affected by cross traffic. Cloud nets are definitely changing...
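
The receiver-side arithmetic is simple; a toy illustration (not Choreo's measurement code):

```python
# Toy illustration of the packet-train estimate (not Choreo's measurement code):
# the sender emits a short burst of back-to-back packets, the receiver
# timestamps the arrivals and divides the bytes "clocked out" by the bottleneck
# by the dispersion of the train.

def packet_train_throughput(arrival_times_s, packet_size_bytes):
    """arrival_times_s: receiver-side timestamps (seconds) of one train."""
    if len(arrival_times_s) < 2:
        raise ValueError("need at least two packets to measure dispersion")
    dispersion = arrival_times_s[-1] - arrival_times_s[0]
    bits = (len(arrival_times_s) - 1) * packet_size_bytes * 8  # all but the first packet
    return bits / dispersion  # bits per second

# 10 packets of 1500 bytes spread over ~1.08 ms -> roughly 100 Mbit/s.
times = [i * 0.00012 for i in range(10)]
print(f"{packet_train_throughput(times, 1500) / 1e6:.0f} Mbit/s")
```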

There is some correlation between path length and bandwidth - if connections were affected by cross traffic, then the longer the path, the lower the observed bandwidth should be.

Placing tasks using greedy heuristics - example based on network, but paper considers cpu and memory as well. Optimal solution can be expressed as an integer linear program. Greedy is not always optimal, but median completion time is 13% greater than optimal.
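
My reading of the greedy idea, as a simplified sketch (not Choreo's actual algorithm; CPU/memory constraints and same-machine placement are ignored here):

```python
# Simplified sketch of network-aware greedy placement: take task pairs in
# decreasing order of bytes exchanged and put each task on the free machine
# with the best measured link to its already-placed partner.

def greedy_place(task_demands, pair_throughput, machines, slots_per_machine):
    """task_demands: {(t1, t2): bytes}; pair_throughput: {(m1, m2): bits/s},
    keyed on lexicographically sorted machine pairs."""
    placement, load = {}, {m: 0 for m in machines}

    def tput(m1, m2):
        return pair_throughput.get(tuple(sorted((m1, m2))), 0)

    def free_machines():
        return [m for m in machines if load[m] < slots_per_machine]

    for (t1, t2), _ in sorted(task_demands.items(), key=lambda kv: -kv[1]):
        for task, partner in ((t1, t2), (t2, t1)):
            if task in placement:
                continue
            if partner in placement:
                # best remaining link towards the machine the partner is on
                m = max(free_machines(), key=lambda m: tput(m, placement[partner]))
            else:
                # neither end placed yet: start from the best-connected machine
                m = max(free_machines(),
                        key=lambda m: max(tput(m, o) for o in machines if o != m))
            placement[task] = m
            load[m] += 1
    return placement

demands = {("map1", "reduce1"): 5_000_000, ("map2", "reduce1"): 1_000_000}
tput_map = {("m1", "m2"): 900e6, ("m1", "m3"): 300e6, ("m2", "m3"): 100e6}
print(greedy_place(demands, tput_map, ["m1", "m2", "m3"], slots_per_machine=2))
# -> e.g. {'map1': 'm1', 'reduce1': 'm2', 'map2': 'm1'}
```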

Eval - data from HP Cloud Services, used to simulate live traffic on EC2. Compared Choreo against random placement, round-robin and a minimum-number-of-machines placement. Two scenarios: all applications known upfront, and applications arriving online.

For upfront known apps, up to 70% of applications have improvements (median improvement 13-18%). For online arrival, improvement for 85% of apps with median improvement of up to 53%.

Limitations: not meant for small apps, those that send little data or those that run for a short time.

Q: Assumed a simple tree for bandwidth measurements - how would other topologies or multipath affect them?

A: haven't seen any evidence of multipath, but would need to adapt the metrics. Certainly seems doable

Q: What if cross traffic was a bigger problem?

A: didn't see a lot of cross traffic, but we did simulations to check that - details in the paper.

Q: Is it difficult to pinpoint rate limiting? That would change many problems

A: the measurements suggest that strongly, but it's hard to say what Amazon is really doing.

 

Benchmarking Personal Cloud Storage (review) (short)

I. Drago (University of Twente), E. Bocchi, M. Mellia (Politecnico di Torino), H. Slatman and A. Pras (University of Twente)

 

 

Measuring and Mitigating Web Performance Bottlenecks in Broadband Access Networks (review) (long)

S. Sundaresan, N. Feamster (Georgia Institute of Technology), R. Teixeira (CNRS/UPMC Sorbonne Universities) and N. Magharei (Cisco Systems)

NOT about the cloud; about web performance bottlenecks :)

Loads of performance optimization across all tiers: client side (caching, protocols), proxies and service providers. Last mile latency can be significant (previous work).

Two contributions: last-mile latency measurements, and quantification of the benefits of DNS prefetching and TCP connection caching (up to 35%).

Mirage identifies latency bottlenecks by querying a URL and fetching all its objects. It measures time to first byte and cycles through the list of objects linked from the page - somehow uses that to estimate the last-mile contribution?

Page load time is often bound by latency - page load time stops improving with increasing bandwidth at around 16 Mbit/s.

Popularity-based DNS prefetching alone can improve page load time by up to 10%. (What about low-TTL domains?)

Not really feasible to keep prefetching DNS names and keep TCP connections open to all popular domains... So only prefetch popular sites, with a timeout. Analysed using simulation on traces from 12 homes. With a list size of 20 and a timeout of 2 minutes, the DNS cache hit rate improves to 19%.
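
A toy trace-driven simulation of that policy (my sketch, not the authors' simulator):

```python
# Toy trace-driven simulation (my sketch, not the authors' simulator): keep the
# N most popular domains warm, expire entries after `timeout` seconds, and
# count how many lookups in the trace would hit the cache.

from collections import Counter

def hit_rate(trace, list_size=20, timeout=120):
    """trace: time-ordered list of (timestamp_seconds, domain) lookups."""
    popular = {d for d, _ in Counter(d for _, d in trace).most_common(list_size)}
    last_seen, hits = {}, 0
    for ts, domain in trace:
        if domain in popular and domain in last_seen and ts - last_seen[domain] <= timeout:
            hits += 1
        last_seen[domain] = ts  # (re)warm the entry on every lookup
    return hits / len(trace)

trace = [(0, "cdn.example.com"), (30, "cdn.example.com"),
         (200, "cdn.example.com"), (210, "ads.example.net")]
print(f"{hit_rate(trace, list_size=1):.0%}")  # 25% on this toy trace
```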

Q: how does this compare with WProf from NSDI last year? There the bottleneck is computation.

A: complementary... will probably hit those limits anyway at some point, as computation speeds improve.

Q: Keeping connections on for 2 minutes is a long time for the content providers

A: content providers will improve their infrastructure because they "really care" - it has a huge impact on performance.

Routing

Studying Interdomain Routing over Long Timescales (review) (short)

G. Comarela, G. Gursun and M. Crovella (Boston University)

AS-level Topology Collection Through Looking Glass Servers (review) (short)

A. Khan (Seoul National University, South Korea), T. Kwon, H.-C. Kim (Sangmyung University, South Korea) and Y. Choi (Seoul National University, South Korea)

AS Relationships, Customers Cones, and Validations (review) (long)

M. Luckie, B. Huffaker, A. Dhamdhere (CAIDA/UC San Diego), V. Giotsas (University College London) and K. Claffy(CAIDA/UC San Diego)

Fun with Phones

RILAnalyzer: a Comprehensive 3G Monitor on Your Phone (review) (short)

N. Vallina-Rodriguez, A. Aucinas (University of Cambridge), M. Almeida, Y. Grunenberger, D. Papagiannaki (Telefonica Research) and J. Crowcroft (University of Cambridge) 

Not taking notes for own paper, but can use the opportunity to drop a link to our project's website: rilanalyzer.smart-e.org  :)

 

Signals from the Crowd: Uncovering Social Relationships through Smartphone Probes (review) (long)

M.V. Barbera, A. Epasto, A. Mei, V.C. Perta and J. Stefa (Sapienza University, Rome, Italy)

WiFi probe requests reveal a lot of information about users: they broadcast the SSIDs known to the device. Fun facts from data collected during the conference and from a large dataset collected over multiple events...

Inferring social links from users' relationships with networks... Skipping the formal definition: if two people share one or more networks, they might be connected - but not always.
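
The core of the inference fits in a few lines; a simplified sketch (my own, the paper's definition is more careful):

```python
# Simplified sketch of the inference (my own, not the paper's definition):
# build a device -> known-SSIDs map from probe requests and link two devices
# when they share at least `min_shared` networks.

from collections import defaultdict
from itertools import combinations

def candidate_links(probes, min_shared=1):
    """probes: iterable of (device_mac, ssid) pairs seen in probe requests."""
    known = defaultdict(set)
    for mac, ssid in probes:
        known[mac].add(ssid)
    links = {}
    for a, b in combinations(sorted(known), 2):
        shared = known[a] & known[b]
        if len(shared) >= min_shared:
            links[(a, b)] = shared
    return links

probes = [("aa:bb:cc:00:00:01", "HomeWifi-42"),
          ("aa:bb:cc:00:00:02", "HomeWifi-42"),
          ("aa:bb:cc:00:00:02", "CoffeeShop"),
          ("aa:bb:cc:00:00:03", "CoffeeShop")]
print(candidate_links(probes))
# Devices 01 and 02 share a (probably private) home network, 02 and 03 only a
# public one - hence the "might be connected, but not always" caveat above.
```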

Loads of related work in the past not even mentioned.

 

Rise of the Planet of the Apps: A Systematic Study of the Mobile App Ecosystem (review) (long)

T. Petsas, A. Papadogiannakis (FORTH-ICS), M. Polychronakis (Columbia University), E.P. Markatos (FORTH-ICS)and T. Karagiannis (Microsoft Research)

A study of what the distribution of app downloads looks like, compared to other domains - WWW, P2P... Also infers how pricing affects popularity.

 

Well.

 

Web

Analysis of the HTTPS Certificate Ecosystem (review) (long)

Z. Durumeric, J. Kasten, M. Bailey, J.A. Halderman (University of Michigan)

Loads of roots, even more intermediaries - have to establish trust.

How do we measure the ecosystem? Need to look at the edges. Performed 110 scans of the entire IPv4 address space over an 18-month period, completed 1.8 billion TLS handshakes and collected 42 million unique certificates.

Active scans - need to limit scan impact. Didn't need to scan very fast - spread each scan over a 24-hour period, randomized the order, indicated the purpose of the scans over HTTP, DNS and WHOIS, and honored all requests to be excluded from future scans (less than 100 networks).

CA stats:

  • 1832 CA certificates belonging to 683 orgs
  • 80% of orgs were not commercial CAs
  • 57 countries - 30% in the USA, 21% in Germany, 4% in France, 3% in Japan...
  • 40% academic institutions
  • 20% commercial
  • 12% government, 12% corporations
  • 45% of organizations were provided certificates by the German National Research and Education Network
  • A large number of root CAs have provided CA certs to unrelated orgs and governments - unrestricted CA certs to 62 corporations from one root CA
  • 75% are really signed by Comodo, Symantec and GoDaddy
  • 90% are descendants of 4 roots and 90% are signed by 40 intermediate certs
  • 26% are signed by one intermediate certificate!

Large attack surface (a sketch of checking a certificate for these issues follows the list):

  • Only 6 CA certs had name constraints - the rest could sign for any domain
  • Only 40% had a path length constraint - the others can create new CA certs
  • Certs are issued for 40+ years
  • 49% of certs have a 1024-bit key
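
A hedged sketch of auditing a single CA certificate for these issues, assuming a recent version of the Python cryptography package (my illustration, not the paper's measurement code):

```python
# Sketch of auditing one CA certificate for the issues listed above (my
# illustration): name constraints, path-length constraints, validity period
# and key size.

from datetime import timedelta
from cryptography import x509

def audit_ca_cert(pem_bytes):
    cert = x509.load_pem_x509_certificate(pem_bytes)
    findings = []

    try:
        cert.extensions.get_extension_for_class(x509.NameConstraints)
    except x509.ExtensionNotFound:
        findings.append("no name constraints: can sign for any domain")

    try:
        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
        if bc.ca and bc.path_length is None:
            findings.append("no path length constraint: can mint further CA certs")
    except x509.ExtensionNotFound:
        findings.append("no basic constraints extension at all")

    if cert.not_valid_after - cert.not_valid_before > timedelta(days=365 * 40):
        findings.append("validity period longer than 40 years")

    key = cert.public_key()
    if getattr(key, "key_size", 4096) < 2048:
        findings.append(f"weak {key.key_size}-bit key")

    return findings

# Usage: audit_ca_cert(open("intermediate.pem", "rb").read())
```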

Lack of oversight - an unmaintainable ecosystem. Policies are ignored; web browsers need to demand and coordinate change... A lot of recent work on TLS improvements, but we also need to make HTTPS more widespread.

Q: what are the incentives to lead the big technical change? How can we re-engineer the market?

A: the move towards more security... the end-user is never going to push for it, need a push from large companies and us as a community

 

Exploring EDNS-Client-Subnet Adopters in Your Free Time (review) (short)

F. Streibelt, J. Boettger, N. Chatzis (TU Berlin), G. Smaragdakis (T-Labs/TU Berlin), and A. Feldmann (TU Berlin)

Non-ISP resolvers are gaining momentum - users end up far from their resolvers, which often messes up CDN assignments.

13% of (top something) Alexa domains seem to support EDNS0 already.

The client IP in the ECS option cannot be verified - use that for probing (actually interesting)! Find the location of CDN caches within ISPs, observe the growth of CDNs, infer client-to-server mappings...
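
A hedged sketch of the probing trick, assuming a recent dnspython (my illustration; the nameserver and prefixes are placeholders, not the paper's targets):

```python
# Sketch of ECS probing (my illustration): since the client prefix in the ECS
# option is never verified, we can ask an ECS-enabled server "what would you
# answer for a client in prefix X?" and map CDN assignments for networks we
# don't own.

import dns.edns
import dns.message
import dns.query
import dns.rdatatype

def answers_for_prefix(qname, prefix, nameserver="8.8.8.8"):
    """Ask `nameserver` for qname as if the client sat inside `prefix`."""
    addr, srclen = prefix.split("/")
    ecs = dns.edns.ECSOption(address=addr, srclen=int(srclen))
    query = dns.message.make_query(qname, dns.rdatatype.A,
                                    use_edns=0, options=[ecs])
    response = dns.query.udp(query, nameserver, timeout=5)
    return [item.address for rrset in response.answer
            if rrset.rdtype == dns.rdatatype.A for item in rrset]

# Same name, two example client prefixes -> potentially different edge caches.
for prefix in ("192.0.2.0/24", "198.51.100.0/24"):
    print(prefix, answers_for_prefix("www.google.com", prefix))
```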

Validation against RIPE RIS.

Used it to detect GGC (Google Global Cache) edge servers - deployed in various ISPs, very often used to serve users, usually from within the client's AS.

 

Mapping the Expansion of Google's Serving Infrastructure (review) (long)

M. Calder (USC), X. Fan, Z. Hu (USC/ISI), R. Govindan (USC), J. Heidemann (USC/ISI) and E. Katz-Bassett (USC)

Huge Google infrastructure expansion: Google.com servers in October '12 - 200 sites in 60 countries and 100 ASes (mostly Google's own ASes). One year on: 1400 sites in 130+ countries and 800 ASes (mostly ISP ASes).

The approach relies on "the fact that Google works hard to direct clients to nearby frontends"

Same EDNS-Client-Subnet technique as the previous paper. A longitudinal study of Google's serving infrastructure.
