syslog
25 Oct 2011

proof of deletion

Posted by Jon Crowcroft

In between reading SOSP liveblogging notes, I'm still trying to think about how one might implement a "proof of deletion" service for cloud storage. Here's the latest idea.

A user stores data in the cloud. The data is encrypted, so the cloud provider cannot simply read it, but it is amenable to privacy-preserving queries on some keys.

The user wants to delete a record, so they contact a third party (the grim reaper?) and give them the keys of the records. The third party tells the cloud service to delete the data, and then, using an anonymous service (via Tor etc.), queries the records: they should get a 404 response.

Of course, the cloud provider can squirrel the data away, but not in any useful way, as the TTP can run the query at any time.

Why not just let the user run the query? Well, they might want to go away and rely on the TTP, who might also be more persistent and might have bigger Tor guns....
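As a toy illustration of the protocol above, here is a minimal sketch in Python. Everything here is hypothetical - the `CloudStore` and `GrimReaperTTP` names and methods are invented for illustration - and a real deployment would need real encryption plus an anonymous transport (Tor) for the audit query:

```python
import random

class CloudStore:
    """Stand-in for the provider's encrypted key-value store."""
    def __init__(self):
        self._records = {}

    def put(self, key, ciphertext):
        self._records[key] = ciphertext

    def delete(self, key):
        self._records.pop(key, None)

    def query(self, key):
        # 200 with ciphertext if present, 404 otherwise
        return (200, self._records[key]) if key in self._records else (404, None)

class GrimReaperTTP:
    """Third party that deletes on the user's behalf and audits later."""
    def __init__(self, store):
        self.store = store
        self.pending_audits = []

    def request_deletion(self, keys):
        for k in keys:
            self.store.delete(k)
        self.pending_audits.extend(keys)

    def audit(self):
        # In the real protocol this query would arrive anonymously, at an
        # unpredictable time, so the provider cannot special-case it.
        k = random.choice(self.pending_audits)
        status, _ = self.store.query(k)
        return status == 404  # evidence of deletion

store = CloudStore()
store.put("record-1", b"<ciphertext>")
reaper = GrimReaperTTP(store)
reaper.request_deletion(["record-1"])
assert reaper.audit()  # provider returns 404, as required
```

The point of the sketch is the separation of roles: the provider only ever sees deletes and (apparently ordinary) queries, and cannot tell an audit probe from a normal lookup.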

24 Oct 2011

SOSP 2011 – day 1

Posted by Malte Schwarzkopf

Chris Smowton, Stephen Smith, Derek Murray, Steve Hand and I are in beautiful Cascais near Lisbon to attend the 23rd Symposium on Operating Systems Principles. Hopefully, we are going to find some time to write up some of the presentations for syslog over the next couple of days. In the meantime, there is a semi-live blog of semi-structured notes here.

Filed under: Uncategorized
21 Oct 2011

NetMob 2011

Posted by Salvatore Scellato

Last week I was in (the other) Cambridge, attending the "Second conference on the Analysis of Mobile Phone Datasets and Networks", or NetMob, held at the MIT Media Lab together with SocialCom 2011. NetMob has an interesting format: there is only one track of short contributed talks, with the possibility to present recent results or results submitted elsewhere. Speakers have about 10-12 minutes to present their work, and over the 2 days there is plenty of time to discuss ideas and network with other people. I gave two talks: one on our research on the effect of geographic distance on online social networks, and another on our recent work on universal patterns in urban human mobility.

The unifying theme of the workshop is the analysis of mobile phone datasets: as people use mobile devices more, and to do more things, these datasets help us to understand complex processes such as the spread of information, human mobility, the usage of urban geography and so on. Indeed, the range of talks presented at the workshop was impressive and fascinating, spanning two main themes: the first day focused more on studying user mobility, while the second day featured work on social behaviour.

Among the most innovative works of the first day was a talk by people at MIT & Berkeley on using mobile phone CDRs to make sense of urban roads, proposing to use the Gini coefficient to measure the diversity of the individual traffic carried by each street. Individual user mobility was the main theme of several talks: I particularly liked one on the seasonal patterns of user movements, presented by Northeastern University researchers, and one by a large team led by Vincent Blondel on exploring the spatio-temporal properties of human mobility and the regular home-work routine of many users. Laszlo Barabasi gave an invited talk on mobility and predictability, presenting much of his latest work and trying to connect the statistical properties of human mobility to the performance limits of the many applications that rely on user regularity. Finally, AT&T Labs presented their results on why it is impossible to anonymize location data.
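The Gini coefficient mentioned above is easy to compute; here is a small sketch (my own illustration, not the authors' code) showing how it separates a street whose traffic is spread evenly across users from one dominated by a few:

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative sample (0 = perfectly equal,
    approaching 1 = concentrated on a few individuals)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    total = x.sum()
    # Standard closed form on sorted data:
    # G = (2 * sum_i i * x_i) / (n * sum(x)) - (n + 1) / n
    return (2 * np.sum(np.arange(1, n + 1) * x)) / (n * total) - (n + 1) / n

# Street A: trips spread evenly over four users; street B: one user dominates.
print(gini([10, 10, 10, 10]))  # 0.0
print(gini([0, 0, 0, 40]))     # 0.75
```

A high-Gini street carries the repetitive traffic of a few commuters; a low-Gini street serves a diverse crowd - which is the distinction the talk proposed to exploit.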

The second day featured work on the social properties of mobile phone communication between users. Researchers at CMU presented their results on quantifying, via randomization techniques, how social influence might compel users to adopt some products. Another interesting talk, by a joint team from UC3M and Telefonica, showed how time allocation in social networks has strong constraints that are likely to affect, and be affected by, the social structure itself: well-connected hubs have a lower importance in information transmission than less connected users, with important consequences for many dynamic social processes. Sandy Pentland gave another invited talk, offering a wide overview of how mobile devices are changing the technological landscape with their ubiquitous sensing capabilities. Another interesting talk discussed the economic value of mobile location data, presenting scenarios in which user actions can be monetized and the profit shared among different service providers.

Overall NetMob provided an insightful venue for discussions and potential collaborations, always revolving around the idea that as mobile devices become more and more ubiquitous they will offer new fascinating research opportunities.

Many more details about all the talks can be found in the book of abstracts.

 

20 Oct 2011

SocialCom 2011

Posted by Daniele Quercia

Here are a few papers presented at SocialCom (our two papers on personality and language are summarized here).

Funf: Open Android Sensing Framework. One tutorial at SocialCom was dedicated to Funf. This is an open-source set of functionalities, running on phones and servers, that enables the collection (sensing), uploading, and configuration of a wide range of data types (location, movement, usage, social proximity). The framework has been built by a professional developer within Sandy Pentland's group (thanks to a Google grant) and has just been made publicly available on the Android Market (well done!) (download link). The conference featured a considerable number of papers that made use of the framework. A case in point is [1]. This paper is about predicting "who installs which (mobile) app" based on one's social network (here the term network refers to a composite graph made of different types of phone-sensed networks). It turns out that one has more apps in common with familiar strangers than with friends (I'm not 100% sure though; you need to check the paper). A cute bit of the framework is its fun dashboard - this allows researchers to run studies in which personal data is shown to the participants and consequential changes of behaviour can be automatically traced. The Ubicomp paper [2] highlights the vision behind the framework.
[1] Composite Social Network for Predicting Mobile Apps Installation
[2] The Social fMRI: Measuring, Understanding and Designing Social Mechanism in the Real World. Ubicomp 11.

Another "special" session was dedicated to cyber-bullying - an extremely interesting topic in need of research (pdf overview). Folks at the media lab built an initial model to spot cyber-bullying from conversation in social media. Interestingly, they trained the model using the data from this site. The paper will soon be published and will be titled "Commonsense Reasoning for Detection, Prevention, and Mitigation of Cyberbullying"

Predicting Reciprocity in Social Networks. This paper studied the factors associated with the probability that a node w reciprocates a link from a node v in a social network. The most important factor is the difference in status between the two nodes: status(v)/status(w), where status(v) = in_degree(v)/out_degree(v).
The larger that ratio, the more likely w is to reciprocate the link. That is because a large numerator and a small denominator indicate that v has many in-links and few out-links while w has many out-links and few in-links. In other words, w is more likely to reciprocate when v has higher "status" than w.
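As a toy illustration (my own, not the authors' code), the status ratio falls straight out of the in- and out-degrees:

```python
def status(in_degree, out_degree):
    """Status of a node: ratio of in-links to out-links."""
    return in_degree / out_degree

def reciprocation_signal(v, w):
    """status(v) / status(w): larger values mean w is more likely to
    reciprocate a link from v. v and w are (in_degree, out_degree) pairs."""
    return status(*v) / status(*w)

# v: many in-links, few out-links (high status);
# w: many out-links, few in-links (low status).
print(reciprocation_signal((100, 5), (5, 100)))  # 400.0
print(reciprocation_signal((5, 100), (100, 5)))  # 0.0025
```

The asymmetry of the two printed values captures the paper's finding: the low-status node is the one that reciprocates.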

Link Prediction in Social Networks using Computationally Efficient Topological Features. Using the Katz measure, these researchers effectively predicted social ties in a variety of networks. The work isn't especially novel, yet it's interesting that the Katz measure performed best.
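For the curious, the Katz measure scores a node pair by summing over all paths between them, exponentially damped by length; in closed form S = (I - beta*A)^-1 - I, valid when beta is below the reciprocal of A's largest eigenvalue. A small sketch (my own illustration, not the paper's code):

```python
import numpy as np

def katz_scores(A, beta=0.05):
    """Katz index: S = sum_{l>=1} beta^l * A^l = (I - beta*A)^{-1} - I.
    beta must be below 1/lambda_max(A) for the series to converge."""
    n = A.shape[0]
    I = np.eye(n)
    return np.linalg.inv(I - beta * A) - I

# Tiny example: a path graph 0-1-2. Katz gives the non-adjacent pair (0, 2)
# a positive score via the length-2 path through node 1, so it is flagged
# as a likely future link.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
S = katz_scores(A)
print(S[0, 2] > 0)  # True
```

It is "computationally efficient" relative to path enumeration because the infinite sum collapses to a single matrix inverse (or, on large sparse graphs, a truncated series of sparse products).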

The new director of the Media Lab, Joi Ito, gave an interesting keynote on "Open Standards and Open Networks". He recounted his involvement in a post-disaster radiation monitoring effort in Japan. During his talk, I also learned that a large number of governments are releasing their data (not pictures or videos, but data) under a Creative Commons licence.

Fortune Monitor or Fortune Teller: Understanding the Connection between Interaction Patterns and Financial Status. This paper studied the relationship between interactions monitored using mobile phones and financial status. Apparently people with high income don't talk for longer, but their meeting patterns (mobility) tend to be more diverse than those of people on low income. The authors also studied people's personality traits and found that people high in
1) Agreeableness tend to have more friends and interact with diverse users (as per face-to-face interactions monitored with Bluetooth)
2) Happiness [I hope they measured satisfaction with life] tend to have more diverse contacts (it would be cool to double-check the measure of diversity used here)
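The paper's exact diversity measure isn't in my notes; a common choice in this literature is the normalised Shannon entropy of interaction counts across contacts, which a quick sketch (my own assumption, not the authors' code) can illustrate:

```python
import math

def contact_diversity(contact_counts):
    """Normalised Shannon entropy of interactions across contacts:
    0 = all interaction with a single contact, 1 = spread evenly."""
    total = sum(contact_counts)
    probs = [c / total for c in contact_counts if c > 0]
    if len(probs) <= 1:
        return 0.0
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(probs))  # divide by max entropy to normalise

print(contact_diversity([50, 0, 0]))    # 0.0  (one dominant contact)
print(contact_diversity([10, 10, 10]))  # 1.0  (evenly spread)
```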

The NetMob workshop was running in parallel and featured a lot of interesting talks that used mobile phone data to answer important societal questions. The full program is available as a pdf. Salvo attended it in full, so he might be able to tell you more about it ;)

20 Oct 2011

Our Twitter Profiles, Our Selves

Posted by Daniele Quercia

I presented a couple of papers at this year's SocialCom. While I was presenting, the twittersphere was offering encouraging and puzzling feedback:

  • I love the way @danielequercia introduces a book to read in each of his talk :D
  • I really like @danielequercia style in making slides and presenting! minimal, cool and fun :D

The irony is that, during the coffee break right before my talk, I received some constructive feedback on how to structure my presentations and avoid having, as I often do, superficial and high-level slides for a *scientific* talk. Well, that's not the first time I've got this feedback, and I accept it. However, I feel that many talks at conferences suffer from powerpoint karaoke syndrome: to look "right" (like a proper scientist/professional dude), one recasts a paper into slide format. Bad mistake, as The Great Simon L Peyton Jones would tell us. Since I apparently like to suggest books, let me say that, despite the title, "Presenting to Win" is the best book on how to prepare and deliver presentations (it's for a business audience, but you can easily adapt it to your needs). Ideally, one should be able to give a talk without any slides - this way, I bet that karaoke presenters would be more likely to reach enlightenment and enter nirvana (provided that they spend three days preparing a 15-minute presentation). If a smooth transition between powerpoint karaoke and nirvana is needed, then karaoke presenters might well try the "Takahashi Method" - Lawrence Lessig has used it successfully (link to one of his talks) and Steve Jobs did something similar in his keynotes.

Anyhooow :)  this post isn't about presentation styles but about the two papers I presented :) Here is a quick abstract that summarizes them. Enjoy ;)

In the first paper [pdf paper slides], we tested whether Twitter users can be reduced to look-alike nodes (as most spreading models would assume) or whether, instead, they show individual differences that impact their popularity and influence. One aspect that may differentiate users is their character and personality. The problem is that personality is difficult to observe and quantify on Twitter. It has been shown, however, that personality is linked to something that is unobtrusively observable in tweets: the use of language. We thus carried out a study of tweets and show that popular and influential users linguistically structure their tweets in specific ways. This suggests that the popularity and influence of a Twitter account cannot simply be traced back to the graph properties of the network within which it is embedded, but also depends on the personality and emotions of the human being behind it.

In the second paper [pdf paper slides], for a limited sample of 335 users, we were able to gather personality data, analyze it, and find that both popular users and influentials are extroverts and emotionally stable (low in the trait of Neuroticism). Interestingly, we also find that popular users are "imaginative" (high in Openness), while influentials tend to be "organised" (high in Conscientiousness). We then show a way of accurately predicting a user's personality simply based on three counts publicly available on profiles: following, followers, and listed counts. Knowing these three quantities for an active user, one can predict the user's five personality traits with a root-mean-squared error below 0.88 on a [1,5] scale. Based on these promising results, we argue that being able to predict user personality goes well beyond our initial goal of informing the design of new personalized applications: it also, for example, expands current studies on privacy in social media.
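To make the prediction setup concrete, here is a hedged sketch of the kind of per-trait regression involved: three public counts in, five trait scores out, evaluated by RMSE. The data below is synthetic stand-in data (not our dataset), and the log transform of the counts is my own assumption, reflecting that follower counts are heavy-tailed; the paper itself should be consulted for the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: features are the three public profile counts
# (following, followers, listed); targets are the Big Five on a [1,5] scale.
n = 335
counts = rng.lognormal(mean=5, sigma=1, size=(n, 3))
X = np.log1p(counts)                           # tame the heavy tail
y = np.clip(1 + 4 * rng.random((n, 5)), 1, 5)  # five trait scores per user

# One least-squares linear model per trait, with a bias column appended.
Xb = np.hstack([X, np.ones((n, 1))])
W, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pred = Xb @ W
rmse = np.sqrt(np.mean((pred - y) ** 2, axis=0))
print(rmse.shape)  # one RMSE per trait -> (5,)
```

On the real data, the reported result is that each of the five RMSEs stays below 0.88 on the [1,5] scale; on this random stand-in data there is of course nothing to learn, so the sketch only shows the pipeline shape.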

Filed under: Conference, Social
18 Oct 2011

Open Hardware Workshop 2011 (Grenoble, FR) (mk2)

Posted by Andrew Moore

Open Hardware Workshop, October 2011 Grenoble

(mk2 with some corrections from Javier)

I had reason to attend the Open Hardware Workshop 2011 in Grenoble (waving the flag for NetFPGA and Raspberry Pi, and because open hardware appeals to my inner maker and is very cool). (This post comes across a little dour - it shouldn't - I think this is hugely exciting.)

Motivated by the ideals of Open Hardware, now complete with a definition and everything (OSHW) and recently exemplified by the CERN Open Hardware Licence, this workshop was held coincident with the big physics instrumentation conference ICALEPCS, so lots of high-energy physics people were present.

While not the first licence for open hardware, the CERN OHL has seeded some interest and contributes to the area being taken more seriously. The event was organized by Javier Serrano; this, coupled with an open-hardware success story in Arduino, made the workshop happen.

An exciting gathering, with a bunch of high-energy physics and related people, a bunch of companies (some trying OH and some not), and a tonne of others like me (keen enthusiasts, regular enthusiasts, or people who see 'this is important').

Javier Serrano gave an introduction to (his version of) the 'open hardware community'; he started with a nod to the OSHW definition. There was also a nod to the specific goals of part of the open hardware community, with a heavy taste of GPL-derived, OSS-scented ideas. While it did seem a little naive at times, there were clear best intentions; in particular the community (drawing on experience from OSS) recognizes that there has to be flexibility in the specifics, as well as a community that fosters support companies - otherwise no one is going to have a commercial route.

Another aspect was the Open Hardware Repository - while not intended to be Sauron's eye, it is a fantastic resource for combining and sharing projects. It's a little unclear what licence things are/must/should/could be under, but it's a great idea.

Tomasz Wlostowski gave a quick summary of the Open Hardware Summit held a few weeks earlier in NYC, and the best (hope) take-away was "The best is yet to come". An interesting justification was given, relating to the slow-down in Moore's law as it applies to simple manufacturing speed-ups: the assertion was that this moves the leading edge away from simply making things faster, giving making things different/cheaper/flexible/interesting the chance to flourish - this being the open door for the Open Hardware movement. In contrast to the 'engineers' of the Workshop, the Summit had been all about community-raising, a much more consumer/social event. Notable cool things were the Instructables (what happens when you mix 13-year-olds, construction kits and many rubber bands) and the iFixit (free manuals) website.

Myriam Ayass (legal advisor of CERN's Knowledge Transfer group) talked about the CERN OHL, in particular that it is going to version 1.2 (with a version 1.3 in the planning stages) and that this addresses many issues with earlier versions. There is a mailing list for discussing the licence; if you have a heartfelt opinion, first make sure you have the latest copy and second join the discussion. It is clear CERN have their heart in the right place over this and don't mind making the legal investment most of us cannot.

My notes are that this is currently a PCB-focused licence, that the definition of hardware is a source of confusion, and that a nagging question is 'so what is RTL/VHDL/Verilog/etc.?' - is that hardware covered by the OHL, or something else?

This talk generated a huge bunch of questions, ranging from "what does the CERN OHL want to be: GNU-like (prioritising openness) or BSD/Apache-like (prioritising dissemination)?" onwards. There was certainly less understanding among those present about what happens when licences are combined or unclear, and far too much "well don't use that tool then" in response to tool-chain lock-in. However, none of these people were idiots, and now is the time to have an impact; mailing list details are off the CERN OHL site link.

This was followed by a talk by the Arduino co-founder David Cuartielles. They make rather nice, very very cheap do-dads (the project started life as a "how do I get my kid interested in CompSci/CompEng"), and work in collaboration with Telefonica (including Pablo Rodriguez), among others. Very neat ideas, but they face a real trauma: because they are signed up to "open", they are put at risk economically by the fact that once the PCB artwork of a board is released, the company has less than four weeks before clone boards appear. The Arduino boards are considered (perhaps fairly so) as a potentially useful blob in the "Internet of things" (yeah I know - no working definition): sensor boards, various neat flashing-light things, robot boards, all that sort of stuff, neat and nice. I hope Raspberry Pi can recycle the community of connectables this project has fathered.

Creotech (the second of four companies that had talks) has founders including an ex-CERNer, and is working on instrumentation that plugs into a bus called FMC-something (FMC is part of a wider standard called VITA 57) intended to allow compatible instrumentation packages. What this means for you and me is unclear, but the motivation for a lot of these people was either to stop having their lives/work ruled by huge transactions of money to National Instruments (and other vested interests), or to stop the needless repetition of rebuilding things that already exist - one particular high-density Msamples-per-second do-dad was an item that many organizations had apparently started, and perhaps even completed at least 90% of, as in-house designs. Creotech embodied open-hardware thinking, seemed fairly successful and had some respect.

In contrast, National Instruments, an ICALEPCS (but not OHW) sponsor, did not. A talk that largely consisted of "Yay National Instruments" did contain one seed of useful insight: the thing hardly any organization can do (and the one National Instruments can) is provide a 25-year guarantee of lifetime replacement/operation/etc. It was clear the speaker was having a pot-shot at the open hardware startups, but he also made courting noises, and a question "what does your talk have to do with open hardware?" summed up the chasm nicely (in short: not much, or "we are still thinking"). The problems for NI are interesting, though. An example: apparently (don't quote me on how good the toolchain is) the NI tool chain can target special NI devices that are both programmable and include a range of flexible bits, from hardware to firmware (FPGA) to whatever. The toolchain is assured, and tightly (conservatively) bounds what the programming can actually do - this is all rather critical if you (as NI) need to give assurances that not only will your kit work for 25 years, but the 'motor control unit that closes the small door preventing escape of the nasty gas' will actually do its job. The idea that someone can knock together some new code to run on the NI device brings out the NI lawyers in cold sweats.

The issue of liability is interesting too, as it is a strong theme in the Cern OHL.

Seven Solutions, a little more Creotech-like, are dabbling and sell an open board or two. They run an interesting and active hybrid model (proprietary + OH).

And Instrumentation Technologies, another instrumentation group that is flirting with open hardware, talked; time passed; the talk finished.

Facebook (John Kenevey) talked about the Open Compute Project. Don't get too excited: it's about building a design for machine rooms that is more universal and more wide-ranging than simply "fits in 19-inch racks", the current definition.
Notable soundbites: Open Compute is a white-box channel to the market that challenges the supplier base and allows new entrants. A conclusion was that the dance between silicon vendors (CPU makers) and box-benders means that the vendors are screwed and the customers are worse than screwed. People pitched OCP as a mechanism to get out from under vendor lock-in.
When you see the in-house machines of Google, and Facebook, and others, this makes a lot of sense.

Modularization is key, and Facebook seem to be enjoying not caring about doing anything more than 'motivating' the actions and encouraging the open-source hardware community. It is clear they are sick of being held over a barrel by the people who assemble machines (metal benders), and they hope for some nice innovations... Facebook consider this part of their "grid to gates" (grid in this case meaning the power grid) initiative. The open problems seem to be: what does a standard smell like? Do we have fans in the rack or in the units? What is the form and nature of power in the rack or the exchangeable units? Etc., etc. Sadly, the impact for the man in the street (or the machine-room fitter-outer) will be 12 months away (my wild guess). The slides of this talk were not made available (nor the recording), as there was some discussion (funny stories) about specific 'metal benders'.

Following lunch we had several speakers talking about tool chains for (PCB) design. Two tools got discussed, gEDA and KiCad; KiCad looked very nice indeed, and certainly better than some of the approaches in common use. Problems discussed included importing artwork, plus general mumbles of agreement about libraries of packages and pinouts. From this writer's perspective, a public definition of pinouts and packages seems obvious and in the interests of the manufacturers - although probably not in the interests of various 'big package' vendors (Cadence, etc.). Time will tell.

Projects discussed in the remainder of the day included

hdlmake, a concept to get away from the GUIs commonplace in build tool-chains; it adds manifests to permit dependency trees, and seems for the most part like a good idea. (It also makes me appreciate the effort Jad Naous and others put into NetFPGAv2 to keep the make system as clean as it was.)

Icarus synthesis engine - considered critical OSS for Open Hardware. Obvious problems include proprietary core handling.

Open FPGA toolchain: Sébastien Bourdeauducq, who did the Milkymist open video (effects) hardware. Neat stuff, trying to hack his way around obfuscated FPGA details (with a lot of grumbles about how mean Altera and Xilinx are), but the guy seemed oblivious to the idea that vendors don't release details about their FPGAs because some knock-off company would start making 10-cent copies of the FPGAs themselves. OK, I'm being unfair; Sébastien's position is "let's get started nevertheless and see what happens". I think this would appeal strongly to academics who want to redesign/modify/mess with the RTL -> FPGA process.

Other things presented included: SOLEIL synchrotron instrumentation and the RHINO project.

RHINO is interesting: an open-source radio platform from the radar remote sensing group at the University of Cape Town, born from CASPER (a project at Berkeley) and South Africa's interest in the SKA project (SKA is at this moment a competition to build the next serious astronomy platform, either in Australia or South Africa). Neat stuff. The project incorporates BoRPH and a number of other technologies to make it easier to use and consume.

From the discussion slides for the Open Hardware Community: (some questions without answers)

  • Can clients change mindset from build-in-house (a not-invented-here variant) and pay for support?
  • How can we deal with Tech Transfer departments that argue against OSH (even if the hardware is not core business)?
  • How can we involve universities?
  • (How) can we pool resources?
  • (How) can we pool manpower for projects?
  • (how) can we pool money to pay companies (for the dull stuff)?
  • Who are the communities?

Lots of talk, not many answers - this is a very young community, with lots of idealism and lots of potential.

Almost all of the presentations, and videos of the presentations and Q&A, are available from the workshop site, all under Creative Commons (of course).

General comment: this community is very interesting, but right at the moment there is a considerable amount of lazy language that conflates "commercial" with "proprietary", when what is really meant is the proprietary/open-source distinction (commercial can be either); it may partly be an "English as a second language" issue. I know this wounds commercial organizations (cast as bad guys), when in fact the intention is something else.

It was great - I will go again, if Javier lets me.