The Philosophy of Trust and Cloud Computing
April 5/6, Corpus Christi, Cambridge
Sponsored by Microsoft Research

Richard Harper (MSR) and Alex Oliver (Cambridge) outlined
the goals of the meeting, and everyone introduced themselves - the
majority of attendees were either in Social Science/Anthropology or
Philosophy, with a few people from industry and a couple of technical
people from Computing (networks & security).

The talks were mostly in the social science style
(people literally "read" papers, rather than presenting PowerPoint
slides), so one had to concentrate a bit more than usual, rather than
looking at bullet points and catching up on email/Facebook.

Onora O'Neill (Cambridge)
Trust and Mediated Communication

Onora really set the intellectual tone of the meeting, talking about
speech acts and contrasting them with what she terms
"quasi-communication", and how intermediation is the starting point
for loss of trust (you don't see the original source of the utterance,
so notions of trustworthiness weaken). This reduction in trust means
that norms (ownership, privacy, integrity, and others) weaken in turn.

An important point was that data protection law is actually operating
very poorly. Online organisations routinely violate norms, typically
by getting people to tick an EULA and sign away their rights, usually
unwittingly, selling their inheritance for a mess of pottage. Meanwhile,
people who really want to make valuable use of data (medical
epidemiologists, say) cannot get past the first post (the ethics
committee) - i.e. when data re-use might be useful for the greater
good, the law prevents it, whereas when it is simply used for
marketing/advertising, people get away with blue murder.
There was a discussion about a more subtle notion of intended use
(processing of data) - in fact, humans integrate data from multiple
sources all the time in everyday life, so legislating against it (the
Miss Marple syndrome) would be impossible, or at least unreasonable.
However, in the context of computer-mediated communication, perhaps
the law could be more useful (and subtle). Aside - this links to a
later discussion (prompted by a question to another speaker) where Bob
Briscoe of BT and George Danezis of MSR helpfully explained the new
ideas/technology (and limits) for privacy-preserving queries and data
processing, as sketched below.
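
To make that concrete, here is a minimal sketch of one technique in this
family: a differentially private count query, where calibrated Laplace
noise lets an analyst learn an aggregate answer without learning much
about any individual record. This is my own illustration of the general
idea, not the specific schemes Bob and George described; the records and
epsilon value are made up.

    import random

    def private_count(records, predicate, epsilon=0.1):
        """Answer "how many records match?" with epsilon-differential privacy."""
        true_count = sum(1 for r in records if predicate(r))
        # A count query has sensitivity 1 (adding or removing one person
        # changes the answer by at most 1), so Laplace noise of scale
        # 1/epsilon suffices; the difference of two Exp(epsilon) draws
        # is exactly Laplace(0, 1/epsilon).
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

    # Hypothetical medical records, echoing the epidemiology example above.
    patients = [{"smoker": True}, {"smoker": False}, {"smoker": True}]
    print(private_count(patients, lambda r: r["smoker"]))  # ~2, plus noise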

Jeroen Van den Hoven (TU Delft)
Responsibility, Trust and Clouds

Jeroen talked very interestingly about value-sensitive design, and
gave some nice examples from his work in the ETICA project. One lovely
one was the "racist overpass" entering Brooklyn, allegedly built so
low as to stop buses moving between poor black and affluent white
areas of the city.

Jeroen outlined the meta-obligations of the designer, and projected
these onto the cloud - he used the horns of a dilemma to show the
tension in design between security, speed and reliability needs on
the one hand, and opacity and offshoring on the other. Resolving this
tension (or dilemma) would remove the unfortunate embedded values
(those that lose privacy, for example).

John Naughton pointed out that Amazon's removal of Wikileaks from
their service was a pretty good example of opaque decision making.
This seeded several later discussions of privacy and appropriate
security at various points during the day (e.g. 700,000 people having
access to all the cables from all the embassies in the world is a very
good example of why centralisation of poorly secured data is a really
bad idea).

I pointed out Simon Crosby's recent blog entry asserting that you
should trust the cloud with your data just as you trust the bank with
your money (this absurd metaphor raised a laugh).

Jeroen pointed out that some systems are not amenable to inspection
(they fail the transparency test - e.g. growers of software neural
nets are often unable to explain the reasons for their
pattern-matching outcomes), hence these systems are contraindicated
in situations where transparency is required (e.g. medical practice) -
this may mean that some technologies are simply not ethically
acceptable.

I raised the example of the Ford Pinto's $11 safety feature, which was
left out for cost reasons despite an estimated 180 deaths per year.
Had this decision been public, the car would never have sold,
rendering the estimate that it was an affordable safety omission
incorrect economically, let alone morally.

Richard Holton (MIT)
Trust as attitude and as relationship

This was a very interesting discussion of a "realistic" or "physically
based" model of trust. It started with a rehearsal of work on
increasing trust through iterated games (as in the iterated prisoner's
dilemma), but then moved on to recent work in behavioural economics,
with results motivating people to move away from the notions of
rational choice theory, but also away from altruism and reward.
Empirical results show that people will expend resources to punish
misbehaviour, even when that misbehaviour is directed at a third
party - behaviour that is byzantine in the sense of being neither
rational nor altruistic (see the BAR-T work at UT Austin).
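
As a toy illustration of the iterated-games result (my own sketch, not
anything presented in the talk): in a repeated prisoner's dilemma with
the standard payoff matrix, a reciprocating strategy sustains the
cooperation that one-shot rational choice predicts should collapse.

    # payoffs[(my_move, their_move)] -> my payoff; C = cooperate, D = defect
    PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def tit_for_tat(history):
        """Cooperate first, then copy the opponent's previous move."""
        return history[-1] if history else "C"

    def always_defect(history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        hist_a, hist_b = [], []          # moves each side has seen from the other
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a)  # each strategy sees only the opponent's past
            move_b = strategy_b(hist_b)
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
            hist_a.append(move_b)
            hist_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual trust pays best
    print(play(tit_for_tat, always_defect))  # (9, 14): defection wins once, then stalls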

Richard then moved on to talk about neuroscience work that has shown
the role of oxytocin (I raised other work - e.g. Robin Dunbar's work
on endorphins and trust, which is heading in much the same direction).
This work shows that there are in-group behaviours where altruism
functions.

There was a discussion about how strong this research result is and
its limitations.

A couple of very interesting examples of "good" and "bad" behaviour
were then brought up: Wikipedia (user-contributed effort with little
obvious reward, not even fame), and the turnaround experienced by
Harvard librarians, who had given content to the Google Books project
in the expectation that all the out-of-copyright content would be made
freely available to the world, only to find Google wanted to charge
for that too. This made me raise Marcel Mauss's work on gifts
(interestingly, one of the later speakers is from the Mauss Institute
in Paris). See this essay for example, or the more recent book by
Hyde called The Gift.

David D. Clark (MIT)
Finding Our Way in the Cloud: Engineering the Shared Experience

Dave took us on a tour of cloud technology, starting with Woody Guthrie
and the Grand Coulee Dam, and the rather large (~1M CPU) data centers
that Google, Microsoft et al build there on account of cheap,
plentiful and reliable electricity... (and even "green" electricity,
despite the data centers dissipating many megawatts as heat)...

and ending up with a discussion of the EULA (End User License
Agreement) power (and obfuscation) that Cloud Service Providers wield
over their "customers". This was illustrative of what Dave called
"points of control". Originally, the Internet was not invested with any
central points of control, by very deliberate design choice. This has
now shifted, and the power in the application/service space has
concentrated in very few points. He also discussed privacy and
context, citing Nissenbaum's recent work.
He speculated that in the next couple of years we might well see a
regulatory/policy push (or at least a discussion) moving from
"Network Neutrality" towards "Cloud Neutrality".

Dave couldn't make it in person due to medical problems brought on by
flying too much in confined circumstances. (Note: many of the early
Internet pioneers are very modest, almost like Pilgrim Fathers, in
their expenditure, even when they may have many ways to make a lot of
money and could therefore fly business class -
I wonder if this ethic informs their technical decisions - I
find people like Bob Braden, Van Jacobson, Vern Paxson and others who
actually write the code are all quite similar to Dave in this
regard.) Anyhow, he looked well "in silicon" over the video link.

Luciano Floridi (Oxford, Herts)
Cloud Computing and Its Ethical Challenges

Luciano talked about ownership models and the technology in the
cloud (the vertical structure of cloud hardware, virtual machines,
platforms and applications).

He talked about control and responsibility.

John Naughton asked about Green considerations given the scale (as per
Dave Clark's evidence!).

I pointed out that the system is doubling every year, and that the
maximum energy savings foreseen by the industry working on greener
data centers are about one order of magnitude, which simple scale-out
would overtake within four years (the arithmetic is sketched below).
This means that alternatives (fully decentralised clouds in the home -
set-top boxes, home hubs) might be the only viable solution; given that
the end user needs a pretty powerful device anyway, just to render
images/video and support low-latency interaction (on cached copies),
we might as well solve the complexity of this now.
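
The back-of-the-envelope arithmetic, as a sketch (my own figures
echoing the argument above, not measurements):

    import math

    # If aggregate cloud capacity doubles every year, a 10x (one order
    # of magnitude) efficiency gain is consumed once capacity has grown
    # by the same 10x factor.
    efficiency_gain = 10
    years = math.log2(efficiency_gain)
    print(f"{years:.1f} years")  # ~3.3 years, i.e. within four years
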
Tony Hoare pointed out that today low-speed links (especially wifi)
would be a barrier to serving data from home or mobile devices - I
claim that this is why we need fibre to the home and 4G, which will
remove that latency barrier.

There was also a more detailed discussion of the metaphor of banks as
opposed to clouds. (Aside - I think the cloud is more like a church,
with big buildings attended by priests ("gurus").)

Alvin Goldman (Rutgers)
What are Collective Epistemic Agents? And How Much Can They Be Trusted?

Alvin gave a fascinating insight into the area of collective epistemic
agents. This is a philosophical line of thinking about what we can say
about group "mental" states merely by examining the group's utterances
or speech acts. He discussed the current thinking about the limits of
the beliefs one can hold about a group's "group-think", and gave some
lovely illustrative examples of when a group decides to collectively
lie (present a state that is not actually what the members all think,
but is agreed as a convenient fiction) - e.g. American radiologists'
1945 "belief" about the risk of radiation, when asked by the US
government (presumably to "excuse" Hiroshima/Nagasaki).

This underpinned a discussion of successful strategies for groups to
manage themselves (voting/democracy) and their limitations.

Simon Blackburn (Cambridge)
Reliability and Trust

The previous talk was followed nicely when Simon showed
a single picture illustrating the voter's paradox (see the Wikipedia
article if you don't believe it - and think about how you will vote
in the AV referendum on May 5th in England).
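
The paradox is easy to reproduce. A minimal sketch (my own, with
made-up ballots): three voters with individually consistent rankings
whose pairwise majority preferences nonetheless form a cycle, so "the
group" has no coherent favourite.

    ballots = [
        ["A", "B", "C"],   # voter 1: A > B > C
        ["B", "C", "A"],   # voter 2: B > C > A
        ["C", "A", "B"],   # voter 3: C > A > B
    ]

    def majority_prefers(x, y):
        """True if a strict majority of ballots rank x above y."""
        wins = sum(b.index(x) < b.index(y) for b in ballots)
        return wins > len(ballots) / 2

    for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
        print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
    # All three lines print True: A beats B, B beats C, and C beats A,
    # so the collective "preference" is a cycle with no consistent winner.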

Simon then related a story about a massive trust failure due to Google
Mail adverts placed by phishers. The tale (in the Guardian)
was very useful in illustrating a range of things, from technology (as
George Danezis discussed, Google simply don't have time to check all
the people bidding to sell an advert - there is no human in the loop -
although I suspect some simple heuristics might limit the damage),
through to the moral and legal (who is to blame?), and the educational
(why was the victim so naive?).

This then led to another very interesting discussion about the Cloud
("too big to fail" - that bank metaphor creeping in again).
I'd claim that while the economy probably does depend on the Internet
working (e.g. banking/shopping), it is less obvious that it depends on
the cloud just yet.

At this point Thomas Simpson asked a question about whether crypto
could help, and Bob Briscoe and George Danezis gave a nice impromptu
explanation - see here for more info.

Thomas Simpson (Cambridge)
When is it wise to follow the crowd?

Thomas talked about his work on the wisdom and madness of crowds, and
discussed a number of rules of thumb for when to go with the flow, and
when not to - he distinguished, pragmatically, between a mob and a
crowd. The social scientists in the meeting begged to disagree about
this distinction, although I suspect (based on the ideas about
epistemic group-think and in Holton's presentation) that one could
actually build a system that made this distinction detectable, often,
in practice.

There was a discussion of how links get promoted (whether sponsored, or
just via links through PageRank's operation) - the obscurity (lack of
transparency) of this process to most users is perhaps a source of
problems for trust in the cloud in its oldest area (search results).
We talked about social search and other schemes - we unpacked the idea
of search and realised that it has a lot of interesting details in its
operation (PageRank applies to terms as well as to results, and
click-through applies to revenue - which doesn't directly determine the
priority of a result, but profits could fund sponsoring a link to move
it up - so there are many feedback loops within feedback loops). A toy
version of PageRank is sketched below.
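
For the curious, a minimal power-iteration sketch of the core PageRank
idea on a made-up three-page web (my own illustration; the production
algorithm uses many more signals than this):

    links = {                 # page -> pages it links to
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
    }
    pages = list(links)
    d = 0.85                  # standard damping factor
    rank = {p: 1 / len(pages) for p in pages}

    for _ in range(50):       # power iteration until (approximately) converged
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)   # a page splits its rank among its links
            for q in outs:
                new[q] += d * share
        rank = new

    print(rank)               # C ranks highest: it collects links from both A and B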

---

I had to leave at this point, sadly, and missed the last two talks,
which was very annoying since both of the speakers had made many very
interesting interventions and comments throughout the two days, and I
had been looking forward to their talks - one from the social science
perspective, the other from a law viewpoint.

Rod Watson (Institut MM, Paris)
A Sociological Conception of Interpersonal Trust

Ian Kerr (Ottawa)
In Machines We Trust? Cloud computing, ambient intelligence and robotics?

Comments
  1. Thanks for the helpful notes Jon.

    Re: racist overpasses – see the refutation of this well-known story in http://www.wzb.eu/alt/met/pdf/do_politics.pdf

  2. It occurred to me that one of the "too big to fail" arguments about Facebook is that they provide a rendezvous point for people (i.e. like search does too) - however, this doesn't mean they need to do ANYTHING else at all - Skype provides the rendezvous service (to bootstrap NAT traversal initially, but then to "call" people and provide "presence"), but that's all - so one could build a distributed twitter/fb/gmail/flickr starting from Skype (note Skype was founded by p2p folks :)

  3. An interesting essay on P2P community/philosophy/economy, which is almost the exact opposite of the cloud is
    http://snuproject.wordpress.com/2011/02/26/the-political-economy-of-peer-production-michel-bauwens/

