{"id":228,"date":"2011-04-07T09:07:43","date_gmt":"2011-04-07T09:07:43","guid":{"rendered":"http:\/\/www.syslog.cl.cam.ac.uk\/?p=228"},"modified":"2011-04-10T21:35:39","modified_gmt":"2011-04-10T21:35:39","slug":"the-philosophy-of-trust-and-cloud-computing","status":"publish","type":"post","link":"https:\/\/www.syslog.cl.cam.ac.uk\/2011\/04\/07\/the-philosophy-of-trust-and-cloud-computing\/","title":{"rendered":"The Philosophy of Trust and Cloud Computing"},"content":{"rendered":"

The Philosophy of Trust and Cloud Computing
\nApril 5\/6, Corpus Christi, Cambridge
\nSponsored by Microsoft Research<\/p>\n

Richard Harper (MSR) and Alex Oliver (Cambridge) outlined
\nthe goals of the meeting, and everyone introduced themselves - the
\nmajority of attendees were either in Social Science\/Anthropology or
\nPhilosophy, with a few industrial participants and a couple of technical people
\nfrom Computing (networks & security).<\/p>\n

The talks were mostly in the social science style
\n(people literally \"read\" papers, rather than using PowerPoint), so one had
\nto concentrate a bit more than usual, rather than looking at
\nbullet points and catching up on email\/Facebook.<\/p>\n

<\/p>\n

Onora O'Neill (Cambridge)
\nTrust and Mediated Communication<\/p>\n

Onora really set the intellectual tone of the meeting, talking about
\nspeech acts and contrasting them with what she terms
\n\"quasi-communication\", and how intermediation is the starting point
\nfor loss of trust (you don't see the original source of the utterance,
\nso notions of trustworthiness weaken). The reduction in trust means
\nthat norms (ownership, privacy, integrity, and others) therefore
\nweaken.<\/p>\n

An important point was that data protection law is actually operating
\nin a very poor way: online organisations routinely violate
\nnorms, typically by getting people to tick an EULA and sign away
\ntheir rights, usually unwittingly - selling their
\ninheritance for a mess of pottage - whereas people who really want to
\nuse valuable data (medical epidemiologists) cannot get past the first
\nhurdle (the ethics committee). That is, when data re-use might serve the
\ngreater good, the law prevents it, whereas when it is simply used for
\nmarketing\/advertising, people get away with blue murder.
\nThere was a discussion about a more subtle notion of intended use
\n(processing of data) - in fact, humans integrate data from multiple
\nsources all the time in everyday life, so legislating against it (the
\nMiss Marple syndrome) would be impossible or at least unreasonable.
\nHowever, in the context of computer-mediated communication, perhaps
\nthe law could be more useful (and subtle). Aside - this links to a
\nlater discussion (prompted by a question to another speaker) where Bob
\nBriscoe of BT and George Danezis of MSR helpfully explained the new
\nideas\/technology (and limits) of privacy-preserving queries and data
\nprocessing.<\/p>\n

Jeroen Van den Hoven (TU Delft)
\nResponsibility, Trust and Clouds<\/p>\n

Jeroen talked very interestingly about value sensitive design, and
\ngave some nice examples from his work in the ETICA project. One lovely
\none was the \"racist overpass\" entering Brooklyn, allegedly
\nbuilt too low for buses, stopping them moving between poor black and
\naffluent white areas of the city.<\/p>\n

Jeroen outlined the meta-obligations for the designer, and projected
\nthese onto the cloud - he used the horns of the dilemma as a means to
\nshow the tension in design between security, speed and reliability
\nneeds on the one hand, and opacity and offshoring on the other.
\nResolving this tension (or dilemma) would remove the unfortunate
\nembedded values (the loss of privacy, for example).<\/p>\n

John Naughton pointed out that Amazon's removal of Wikileaks from
\ntheir service was a pretty good example of opaque decision making.
\nThis seeded several later discussions of privacy and appropriate
\nsecurity throughout various points in the day (e.g. 700,000 people
\naccessing all the cables from all the embassies in the world is a very
\ngood example of centralisation of poorly secured data being a really bad
\nidea).<\/p>\n

I pointed out Simon Crosby's recent blog entry asserting that you
\nshould trust the cloud with your data just as you trust the bank with
\nyour money (this absurd metaphor raised a laugh).<\/p>\n

Jeroen pointed out that some systems are not amenable to inspection
\n(fail the transparency test - e.g. growers of software
\nneural nets are often not able to
\nexplain their reasons for pattern matching outcomes) hence these
\nsystems are counter-indicated in situations where transparency is
\nrequired (e.g. medical practice) - this may mean that some
\ntechnologies are simply not ethically acceptable.<\/p>\n

I raised the example of the Ford Pinto's $11 safety feature, which was
\nleft out for cost reasons despite an estimated 180 deaths per year.
\nHad this decision been public, the car would never have sold,
\nrendering the estimate that it was an affordable safety omission
\nincorrect economically, let alone morally.<\/p>\n

Richard Holton (MIT)
\nTrust as attitude and as relationship<\/p>\n

This was a very interesting discussion of a \"realistic\" or \"physically
\nbased\" model of trust, and started with a rehearsal of work on
\nincreasing trust through iterated games (a la the iterated
\nprisoner's dilemma), but then moved on to recent work in behavioural
\neconomics, with results motivating people to move away from the notions of
\nrational choice theory, but also away from altruism and reward.
\nEmpirical results show that people will expend resources to punish
\nmisbehaviour (even when the misbehaviour - i.e. byzantine, in the sense
\nof non-rational and non-altruistic; see the BAR-T work at UT Austin - is
\ndirected at a third party).<\/p>\n
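The iterated game Richard rehearsed can be made concrete with a minimal sketch (my own illustration using the standard prisoner's dilemma payoffs, not anything presented at the meeting): tit-for-tat sustains cooperation with itself, while an unconditional defector gains once and is then punished every subsequent round.

```python
# Minimal sketch: iterated prisoner's dilemma with the standard
# payoffs T=5, R=3, P=1, S=0. "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then copy the opponent's last move
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []   # each strategy sees the *opponent's* history
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then mutual punishment: (9, 14)
```

The point of the behavioural-economics results mentioned above is precisely that real subjects punish defection even at a cost to themselves, which simple payoff-maximising strategies like these do not capture.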

Richard then moved on to talk about neuroscience work that has shown
\nthe role of Oxytocin (I raised other work - e.g. Robin Dunbar's work
\non endorphins and trust, which is heading pretty much the same way).
\nThis suggests there are ingroup behaviours where
\naltruism functions.<\/p>\n

There was a discussion about how strong this research result is and
\nits limitations.<\/p>\n

A couple of very interesting examples of \"good\" and \"bad\" behaviour
\nwere then brought up - Wikipedia (user-contributed effort with little
\nobvious reward, not even fame), and the turnaround of Harvard
\nlibrarians when they had given content to the Google Books project in
\nthe expectation that all the out-of-copyright content would be made
\nfreely available to the world, only to find Google wanted to charge
\nfor that too. This made me raise Marcel Mauss's work on gifts
\n(interestingly, one of the later speakers is from the Mauss Institute
\nin Paris). See this essay<\/a>
\nfor example, or the more recent book by Hyde called The Gift.<\/p>\n

David D. Clark (MIT)
\nFinding Our Way in the Cloud: Engineering the Shared Experience<\/p>\n

Dave took us on a tour of cloud technology, starting with Woody Guthrie
\nand the Grand Coulee Dam, and the rather large (~1M CPU) data centers
\nthat Google, Microsoft et al build there on account of cheap,
\nplentiful and reliable electricity...(and even \"green\", despite
\ndissipating many Megawatts through heat)...<\/p>\n

and ending up with a discussion of the EULA (End User License
\nAgreement) power (and obfuscation) that Cloud Service Providers wield
\nover their \"customers\". This was illustrative of what Dave called
\n\"points of control\". Originally, the Internet was not invested with any
\ncentral points of control, by very deliberate design choice. This has
\nnow shifted, and the power in the application\/service space has
\nconcentrated in very few points. He also discussed Privacy and
\nContext, citing Nissenbaum's recent work.
\nHe speculated that in the next couple of years, we might well see a
\nregulatory\/policy push (or at least a discussion) moving from
\nNetwork towards \"Cloud Neutrality\".<\/p>\n

Dave couldn't make it in person due to medical problems brought on by
\nflying too much in confined circumstances (note, many of the early
\ninternet pioneers are very modest, almost like pilgrim fathers, in
\ntheir expenditure, even when they have many ways to make a lot of
\nmoney and could therefore fly business -
\nI wonder if this ethic informs their technical decisions - I
\nfind people like Bob Braden, Van Jacobson, Vern Paxson and others who
\nactually write the code, are all quite similar to Dave in this
\nregard). Anyhow, he looked well \"in silicon\" over the video link.<\/p>\n

Luciano Floridi (Oxford, Herts)
\nCloud Computing and Its Ethical Challenges<\/p>\n

Luciano talked about ownership models and the technology in the
\ncloud (the vertical structure of cloud hardware, virtual machines,
\nplatforms and applications).<\/p>\n

He talked about control and responsibility.<\/p>\n

John Naughton asked about Green considerations given the scale (as per
\nDave Clark's evidence!).<\/p>\n

I pointed out that the system was doubling every year, and that the
\nmaximum energy savings foreseen by industry working on greener data
\ncenters were about one order of magnitude, which would be overtaken within 4
\nyears by simple scale-out. This means that alternatives (fully
\ndecentralised clouds in the home - set-top boxes, home hubs) might be the
\nonly viable solution - given the end user needs a pretty powerful
\ndevice just to render images\/video and support low-latency interaction
\n(on cached copies), we might as well solve the complexity of this now.
\nTony Hoare pointed out that today, low-speed links (especially wifi)
\nwould be a barrier to serving data from home or mobile devices - I
\nclaim that this is why we need fiber to the home and 4G, which will
\nremove that barrier.<\/p>\n
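The back-of-the-envelope arithmetic behind my claim is easy to check (using the rough figures from the discussion - yearly doubling, and a one-off tenfold efficiency gain):

```python
import math

# If demand doubles every year, a one-off 10x efficiency gain is
# consumed after log2(10) years of growth. (Rough figures from the
# discussion, not measured data.)
growth_per_year = 2.0     # system scale doubles yearly
efficiency_gain = 10.0    # best case: one order of magnitude

years = math.log(efficiency_gain) / math.log(growth_per_year)
print(f"{years:.1f} years")  # ~3.3 years, consistent with the ~4-year claim
```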

There was also a more detailed discussion of the metaphor of banks as
\nopposed to clouds. (Aside - I think the cloud is more like a church,
\nwith big buildings attended by priests (\"gurus\").)<\/p>\n

Alvin Goldman (Rutgers)
\nWhat are Collective Epistemic Agents? And How Much Can They Be Trusted?<\/p>\n

Alvin gave a fascinating insight into the area of collective epistemic
\nagents. This is a philosophical line of thinking about what we can say
\nabout group \"mental\" states merely by examining the utterances or
\nspeech acts. He discussed the current thinking about the limitations
\nof the belief one can have about a group's \"group-think\", and gave some
\nlovely illustrative examples of when a group decides to collectively
\nlie (present a state that is not actually what the members all think,
\nbut is agreed as a convenient fiction) - e.g. the 1945 American radiologists'
\n\"belief\" in the risk of radiation, when asked by the US government
\n(presumably to \"excuse\" Hiroshima\/Nagasaki).<\/p>\n

This underpinned a discussion of successful strategies for groups to
\nmanage themselves (voting\/democracy) and their limitations.<\/p>\n

Simon Blackburn (Cambridge)
\nReliability and Trust<\/p>\n

The previous talk was followed nicely when Simon showed
\na single picture illustrating the voting paradox (see
the Wikipedia article<\/a>
\nif you don't believe it - and think about how you will vote on AV on
\nMay 5th in England).<\/p>\n
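For the sceptical reader, the voting paradox is easy to verify mechanically. Here is a minimal illustration (my own, not Simon's slide) with three voters whose preferences are cyclic, so pairwise majority voting yields no consistent group ranking:

```python
# Condorcet's voting paradox: three voters, three candidates,
# cyclic individual preferences (a toy example, my own construction).
ballots = [("A", "B", "C"),   # voter 1: A > B > C
           ("B", "C", "A"),   # voter 2: B > C > A
           ("C", "A", "B")]   # voter 3: C > A > B

def majority_prefers(x, y):
    # x beats y if a majority of ballots rank x before y
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three print True: the majority relation is cyclic,
# so there is no stable collective ordering.
```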

Simon then related the story about a massive trust failure due to Google
\nMail adverts from phishers. The tale (
in the Guardian<\/a>)
\nwas very useful in illustrating a range of things from technology (as
\nGeorge Danezis discussed, Google simply don't have time to check all
\nthe people bidding to sell an advert - there is no human in the loop -
\nalthough I suspect some simple heuristics might limit damage), through
\nto moral and legal (who is to blame), and educational (why was the
\nvictim so naive).<\/p>\n

This then led to another very interesting discussion about the Cloud
\n(\"too big to fail\" - that bank metaphor creeping in again).
\nI'd claim that while the economy probably does depend on the Internet
\nworking (e.g. banking\/shopping), it is less obvious that it depends on
\nthe cloud just yet.<\/p>\n

At this point Thomas Simpson asked the question about crypto helping,
\nand Bob Briscoe and George Danezis gave a nice impromptu explanation -
\nsee
here<\/a> for more info.<\/p>\n

Thomas Simpson (Cambridge)
\nWhen is it wise to follow the crowd?<\/p>\n

Thomas talked about his work on the wisdom and madness of crowds, and
\ndiscussed a number of rules of thumb for when to go with the flow, and
\nwhen not - he distinguished, pragmatically, between a mob and a crowd.
\nSocial scientists in the meeting begged to disagree about this
\ndistinction, although I suspect (based on the ideas in epistemic
\ngroup-think and in Holton's presentation) that one could actually build
\na system that made this distinction detectable often, in practice.<\/p>\n

There was a discussion of how links get promoted (sponsored, or just
\nby links through PageRank's operation) - the obscurity (lack of
\ntransparency) of this process to most users is perhaps a source of
\nproblems for trust in the cloud in its oldest area (search results).
\nWe talked about social search and other schemes - we unpacked the idea
\nof search and realised that it has a lot of interesting details in its
\noperation (PageRank applies to terms as well as to results, and
\nclick-through applies to revenue - which doesn't directly link to
\npriority of result, but could lead to profits funding a sponsored
\nlink to move up - so there are many feedback loops within feedback
\nloops).<\/p>\n

---<\/p>\n

I had to leave at this point sadly, and missed the last two talks,
\nwhich was very annoying since both of the speakers had made many very
\ninteresting interventions\/interjections and comments throughout the 2
\ndays, and I had looked forward to their talks - one from the Social
\nScience perspective, the other from a law viewpoint.<\/p>\n

Rod Watson (Institut MM, Paris)
\nA Sociological Conception of Interpersonal Trust<\/p>\n

Ian Kerr (Ottawa)
\nIn Machines We Trust? Cloud computing, ambient intelligence and robotics?<\/p>\n","protected":false},"excerpt":{"rendered":"

The Philosophy of Trust and Cloud Computing April 5\/6, Corpus Christi, Cambridge Sponsored by Microsoft Research Richard Harper (MSR) and Alex Oliver (Cambridge) outlined the goals of the meeting, and everyone introduced themselves – the majority of attendees were either in Social Science\/Anthropology or Philosophy, with a few industrials and a couple of technical people […]<\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1,30],"tags":[],"_links":{"self":[{"href":"https:\/\/www.syslog.cl.cam.ac.uk\/wp-json\/wp\/v2\/posts\/228"}],"collection":[{"href":"https:\/\/www.syslog.cl.cam.ac.uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.syslog.cl.cam.ac.uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.syslog.cl.cam.ac.uk\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.syslog.cl.cam.ac.uk\/wp-json\/wp\/v2\/comments?post=228"}],"version-history":[{"count":7,"href":"https:\/\/www.syslog.cl.cam.ac.uk\/wp-json\/wp\/v2\/posts\/228\/revisions"}],"predecessor-version":[{"id":231,"href":"https:\/\/www.syslog.cl.cam.ac.uk\/wp-json\/wp\/v2\/posts\/228\/revisions\/231"}],"wp:attachment":[{"href":"https:\/\/www.syslog.cl.cam.ac.uk\/wp-json\/wp\/v2\/media?parent=228"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.syslog.cl.cam.ac.uk\/wp-json\/wp\/v2\/categories?post=228"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.syslog.cl.cam.ac.uk\/wp-json\/wp\/v2\/tags?post=228"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}