We're all getting older, curiously enough at exactly the same rate. My ancient mum tries to get on with technology, but it's a constant struggle. Like about a million other people in their 80s and 90s in the UK, she has both sight loss and hearing loss (the sight loss is from macular degeneration, a common problem with age, though it can now be slowed with treatments that reduce the rate at which the blood supply to the retina dies off; the hearing loss is just wear and tear).
So I have tried to set up the computers, phones and TV in her house to be "accessible". Believe me, the "state of the art" in this stuff is appalling. It is truly shocking how bad the support is, in both hardware and software. Just think about default WiFi access control to start with. Most APs shipped by ISPs come with security on by default, so you have to type the SSID and key into the computer - have you tried that even with good eyesight? And you may have to do it within 30 seconds of hitting one of three buttons on the side of the AP. Doh.
Now try setting up accessibility options on a computer - e.g. screen magnification and spoken menus. Firstly, on a Windows box, at maximum magnification the menus go off the bottom of the screen (this is on a machine with a Very Big Screen indeed), so you can't turn the feature off again (or reset it). Macs are not a lot better. Secondly, there's no tailoring of the speech support, so every screen event triggers an annoying voice (imagine that old paperclip, but with a stoopid accent - it's worse, believe me).
Now on to mobile phones. Try to find a touch-screen phone which lets you set up accessibility - that ought to be quite easy, really. No, it isn't. Now look at WiFi use in the home (i.e. Skype), so calls are free and you don't have to explain how to use other phones. Then look at what happens when you move outside the home - Skype will still work over 3G, but it could be ruinously expensive. Why can't one have a single voice API which chooses a network stack (voice over GSM when out of WiFi range, for example)?
The only thing that was simple was the TV, the cheapest big-LCD-screen LG from Richer Sounds. The UI is basically like any old TV's, and the on-screen buttons are not too numerous or invisible.
The last straw was hearing aids. My mother has three sets (one analogue in-the-ear, one set of fairly good digital behind-the-ear ones from the NHS, and one set of very fancy private digital ones). These gizmos are all fairly amazing - they work very well when they work. The digital ones do fancy filtering of background noise and attempt to compensate for differences between the ears (which matters for directional hearing and for understanding speech in noisy settings like pubs, restaurants and shops) - and they are very cute, until you get to:
i) trying to replace a battery
ii) trying to find a manual for them online
This whole world is astounding - all the websites one can find are ripoff merchants trying to make a buck, offering nothing but vacuous generic advice. This is unlike any other tech area, where (in my experience) you can get a professional engineer's repair book for free for last week's phone or last decade's washing machine, and useful tips from anything from a photo how-to guide through to a YouTube video (I've just fixed a fancy tent that way, and last year fixed the jammed DVD drive on an old MacBook, for free). The world of accessibility is anything but accessible. There are three buttons and a slide-out battery drawer on the (NHS-provided) digital hearing aid. Can you find a website that says what the buttons do, or how long the batteries should last? Nope. Not at all. Not even on the RNID site (who do at least provide quite good generic advice). And remember that a large fraction of the people using these gadgets also have sight problems, so having to peer at fiddly (stateful) buttons is really not a sane UI.
If I had any principles I'd start a company to fix all this... it's quite shocking!
There have been a bunch of projects related to functional programming going on in the SRG recently, and many of them are relevant beyond "just" the programming language crowd.
There were several interesting talks on various aspects of congestion control at IETF 80, spread around various working groups and research groups; the majority of work that I would classify as actual research being done in the IETF and IRTF at the moment seems to concern congestion control in some way or other. I've already written about Multipath TCP and Bufferbloat; here's a potpourri of other TCP problems and proposed solutions. Most of these came out of the meeting of the Internet Congestion Control Research Group (ICCRG) - strictly part of the IRTF rather than the IETF - but the presentation on SPDY came from the IETF Transport Area open meeting.
Session 7: Better Clouds
Kaleidoscope: Cloud Micro-Elasticity via VM State Coloring
The problem is that load on internet services fluctuates wildly throughout the day, but the bursts are very short (median around 20 minutes), while cloud providers are becoming "less elastic" (bigger VMs, up for longer) and cannot support such short bursts because VMs are too heavyweight. The solution builds on VM cloning (SnowFlock), but SnowFlock's lazy propagation of state leads to lots of blocking after the clone (e.g. for TPC-H). The idea is to use page coloring to work out the probable role of each page (code vs. data, kernel vs. user, etc.), and then tune prefetching by color (such as read-ahead for cached files). Kaleidoscope also reduces the footprint of cloned VMs by allocating memory on demand and performing de-duplication. Most server apps tolerate cloning (the only change is a new IP address for the clones): SPECweb, MySQL and httperf all work fine. The experiments involved running Apache and TPC-H. Blocking decreases from 2 minutes to 30 seconds. TPC-H takes 80 seconds on a cold Xen VM, 20 seconds on a warm one, 130 seconds on a SnowFlock clone, and 30 seconds on a Kaleidoscope clone. In a simulation of an AT&T hosting service, Kaleidoscope cut overheads by 98% using a 50% smaller data center. - dgm36
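The "tune prefetching by color" idea can be sketched in a few lines. This is a toy illustration only: the color names, the per-color read-ahead counts and the page-metadata fields are all invented here for illustration; the real system infers page roles by VM introspection.

```python
# Toy sketch of VM state coloring: tag each page with a probable role,
# then pick a prefetch (read-ahead) policy per color.
from enum import Enum

class Color(Enum):
    KERNEL_CODE = "kernel-code"
    USER_CODE = "user-code"
    PAGE_CACHE = "page-cache"   # cached file data: sequential read-ahead pays off
    HEAP_DATA = "heap-data"

def color_page(page):
    """Classify a page from (hypothetical) introspected metadata."""
    if page["kernel"]:
        return Color.KERNEL_CODE if page["executable"] else Color.PAGE_CACHE
    return Color.USER_CODE if page["executable"] else Color.HEAP_DATA

# Per-color policy: how many neighbouring pages to fetch eagerly when a
# freshly-cloned VM faults on a page of this color (numbers illustrative).
PREFETCH = {
    Color.KERNEL_CODE: 8,    # kernel text is hot immediately after cloning
    Color.USER_CODE:   4,
    Color.PAGE_CACHE: 16,    # read-ahead for cached files
    Color.HEAP_DATA:   0,    # demand-fetch only; avoids copying cold state
}

def pages_to_fetch(faulting_page_no, page):
    """Which page numbers to pull from the parent VM on this fault."""
    n = PREFETCH[color_page(page)]
    return list(range(faulting_page_no, faulting_page_no + 1 + n))
```

The point is simply that a cheap classification lets the clone pull likely-hot state (kernel code, cached files) eagerly while leaving cold heap pages to fault on demand.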
Session 4: Joules and Watts
Energy Management in Mobile Devices with the Cinder Operating System
A new mobile device OS whose aim is to allow users to control their energy use, and to allow applications to become more energy-efficient. The first abstraction is throttling, which limits the power draw that a particular application may have. However, energy use is bursty, so a reserve buffer allows an application to use more energy if it has been running below maximum for a while. A process with an empty reserve will not be scheduled. To prevent hoarding of energy, the reserve drains with multiplicative decrease (e.g. 10%/sec). Reserves may be nested, for example to isolate the energy usage of a plugin like Adobe Flash. Energy may also be ring-fenced in "virtual batteries" for uses such as emergency calls. The OS abstraction is a process launcher called "enwrap", which launches an application with an allocation of power consumption. Background applications draw power from a smaller virtual battery to prevent unexpected power draw from applications you can't see; this is managed via a custom window manager. Development issues arose from the implementation on the HTC Dream, which uses a binary-blob shared object to interact with the secure ARM9 core, and exposes the battery level only as an integer from 0 to 100; this led to concerns that future mobile phones will be harder to develop research OSs for, as there is a move towards more use of secure cores and signed code. As a result of these frustrations, they moved to implement their abstractions in Linux, giving Cinder-Linux. One challenge was IPC: it was necessary to attribute energy use in daemons to the process making the IPC request. (This was easier in Cinder due to the use of gates, based on the same mechanism in HiStar.) One application developed was an energy-aware photo gallery, which modulated its download rate depending on energy properties.
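The reserve mechanism described above (bank energy while running below maximum, burst later, anti-hoarding via multiplicative decrease) is easy to simulate. A minimal sketch follows, assuming a 1-second tick and the 10%/sec decay rate mentioned in the talk; the class and method names are mine, not Cinder's kernel API.

```python
# Toy model of a Cinder-style energy reserve (units: millijoules).
class Reserve:
    def __init__(self, capacity_mj, decay_per_sec=0.10, parent=None):
        self.capacity = capacity_mj  # maximum bankable energy
        self.level = 0.0             # currently banked energy
        self.decay = decay_per_sec   # multiplicative-decrease rate
        self.parent = parent         # reserves may nest (e.g. a Flash plugin)

    def tap(self, rate_mj):
        """The battery (or an enclosing reserve) feeds this reserve each tick."""
        if self.parent is not None:
            rate_mj = self.parent.draw(rate_mj)
        self.level = min(self.capacity, self.level + rate_mj)

    def draw(self, amount_mj):
        """A process spends energy; it can burst above its tap rate only
        if it banked energy by running below maximum earlier."""
        granted = min(amount_mj, self.level)
        self.level -= granted
        return granted

    def tick(self):
        """Anti-hoarding: unused energy drains multiplicatively each second."""
        self.level *= (1.0 - self.decay)

    def runnable(self):
        # A process with an empty reserve is not scheduled.
        return self.level > 0.0
```

For example, a reserve tapped at 50 mJ holds 50 mJ, decays to 45 mJ after one idle second, and a subsequent 60 mJ burst is only granted the banked 45 mJ, after which the process is descheduled.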
The next step is working out how to use these primitives, in terms of UI design (presenting a breakdown of energy use to users), energy modeling (they currently use a simple energy model based on offline profiling, but could use something more sophisticated, such as the approach described in the following talk), userspace code instrumentation, and running Android (Dalvik) on Cinder.
Session 1: Data, Data, Data
Keypad: An Auditing File System for Theft-Prone Devices
The challenge is that mobile devices are prone to theft and loss, and encryption alone is not sufficient, because people have a habit of attaching the password to the device on a post-it note, and it is vulnerable to social and hardware attacks. The aim is to know what data (if any) is compromised in the event of a loss, and to prevent future compromises. The solution is to force remote auditing on every file access (in combination with encryption) by storing the keys on an auditing server; this is done in the file system, and the file system metadata are also stored on the trusted server. There are significant challenges in making this performant: caching, prefetching and preallocation are used to optimize key requests, but file creation is harder to optimize due to file system semantics. Blocking filename registrations have correct semantics but poor performance; vice versa for non-blocking registrations. To reconcile this, a thief is forced to use blocking semantics while the legitimate user gets non-blocking semantics (as much as possible), based on using filenames as public keys. The second challenge is allowing disconnected access: the idea is to use the multiple devices carried by the user to cross-audit file accesses, which still requires devices to hoard keys before going disconnected. - dgm36
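The core mechanism - file contents encrypted on the device, per-file keys held by the audit server so that every access leaves an audit record and can be cut off after a theft - can be sketched as follows. All class and method names here are mine, not Keypad's interface, and the XOR "cipher" is a stand-in for real encryption.

```python
# Minimal sketch of forced remote auditing of file accesses.
import os

class AuditServer:
    def __init__(self):
        self.keys = {}      # filename -> encryption key
        self.log = []       # audit trail of (device_id, filename)
        self.revoked = set()  # devices reported stolen

    def register(self, filename):
        self.keys[filename] = os.urandom(16)

    def fetch_key(self, device_id, filename):
        self.log.append((device_id, filename))  # every access is recorded
        if device_id in self.revoked:
            raise PermissionError("device reported stolen")
        return self.keys[filename]

class KeypadFS:
    """Device-side view: holds only ciphertext; keys come from the server."""
    def __init__(self, server, device_id):
        self.server = server
        self.device_id = device_id
        self.blobs = {}  # filename -> "encrypted" bytes (toy XOR cipher!)

    def write(self, filename, data: bytes):
        self.server.register(filename)
        key = self.server.fetch_key(self.device_id, filename)
        self.blobs[filename] = self._xor(data, key)

    def read(self, filename):
        # Reading forces a logged key request; a revoked device gets nothing.
        key = self.server.fetch_key(self.device_id, filename)
        return self._xor(self.blobs[filename], key)

    @staticmethod
    def _xor(data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

After the owner reports the device stolen (adds it to `revoked`), any further read fails, and the audit log bounds exactly which files were exposed beforehand.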
Today was workshop day at EuroSys 2011, and I spent the day at the inaugural SFMA workshop. The aim of the workshop was to bring together practitioners from the fields of operating systems, programming languages and computer architecture research, and to provoke discussion about new trends in parallel computing. The most notable thing about the workshop was the number of practitioners that it attracted: standing room only at 9am, and a respectable audience of 35 people maintained through to 5pm. I was on the program committee for the workshop, and Ross did a great job of organising the whole thing.
The Philosophy of Trust and Cloud Computing
April 5/6, Corpus Christi, Cambridge
Sponsored by Microsoft Research
Richard Harper (MSR) and Alex Oliver (Cambridge) outlined the goals of the meeting, and everyone introduced themselves - the majority of attendees were either in Social Science/Anthropology or Philosophy, with a few industrials and a couple of technical people from Computing (networks & security).
The talks were mostly in the social science style (people literally "read" papers, rather than presenting slides), so one had to concentrate a bit more than usual, rather than looking at bullet points and catching up on email/Facebook.
How do you write a program that runs on hundreds or thousands of computers? Over the last decade, this has become a real concern for many companies that must be able to handle ever-growing data sets in order to stay in business. When those data sets grow to terabytes or petabytes in size, a single disk (or even a RAID array) can't deliver the data fast enough, so a solution is needed to exploit the throughput of hundreds or thousands of disks in parallel. In this post, I'll introduce various solutions to this problem, and explain how our CIEL execution engine supports a larger class of algorithms than existing systems.