Open Hardware Workshop 2011 (Grenoble, FR) (mk2)
Open Hardware Workshop, October 2011 Grenoble
(mk2 with some corrections from Javier)
I had reason to attend the Open Hardware Workshop 2011 in Grenoble: waving the flag for NetFPGA and Raspberry Pi, and because open hardware appeals to my inner maker and is very cool. (This post comes across a little dour - it shouldn't - I think this is hugely exciting.)
Motivated by the ideals of Open Hardware - now with a definition and everything (OSHW), and exemplified recently by the CERN Open Hardware License - the workshop was held alongside the big physics instrumentation conference ICALEPCS, so lots of high energy physics people were present.
While not the first license for open hardware, the CERN OHL has seeded interest and helped the area be taken more seriously. The event was organized by Javier Serrano; that, coupled with an open-hardware success story in Arduino, is how this workshop came about.
An exciting gathering: a bunch of high energy physics and related people, a bunch of companies (some trying open hardware, some not) and a tonne of others like me (keen enthusiasts, regular enthusiasts, or people who see that 'this is important').
Javier Serrano gave an introduction to (his version of) the 'open hardware community', starting with a nod to the OSHW definition. There was also a nod to specific goals held by part of the open hardware community, with a heavy taste of GPL-derived, OSS-scented ideas. While it seemed a little naive at times, the best intentions were clear; in particular the community (drawing on experience from OSS) recognizes that there has to be flexibility in the specifics, as well as a community that fosters support companies - otherwise no one will have a commercial route.
Another aspect was the Open Hardware Repository - while not intended to be Sauron's eye, it is a fantastic resource for combining and sharing projects. It is a little unclear what license things are/must/should/could be under, but a great idea.
Tomasz Wlostowski gave a quick summary of the Open Hardware Summit held a few weeks earlier in NYC. The best (hopeful) take-away was "The best is yet to come". An interesting justification was given relating to the slow-down in Moore's law as it relates to simple manufacturing speed-ups: the assertion was that this moves the leading edge away from simply making things faster, giving making things different/cheaper/flexible/interesting a chance to flourish - an open door for the Open Hardware movement. In contrast to the 'engineers' of the Workshop, the Summit had been all about community-raising and was a much more consumer/social event. Notable cool things were Instructables (what happens when you mix 13-year-olds, construction kits and many rubber bands) and the iFixit (free manuals) website.
Myriam Ayass (legal advisor of CERN's Knowledge Transfer group) talked about the CERN OHL, in particular that it is going to version 1.2 (with a version 1.3 in the planning stages) and that this addresses many issues with earlier versions. There is a mailing list for discussing the license; if you have a heartfelt opinion, first make sure you have the latest copy and second join the discussion. It is clear CERN have their heart in the right place over this and don't mind making the legal investment most of us cannot.
My notes are that currently this is a PCB-focused license, that the definition of hardware is a source of confusion, and that a nagging question is 'so what about RTL/VHDL/Verilog/etc.?' - is that hardware covered by the OHL, or something else?
This talk generated a huge bunch of questions, starting with "what does the CERN OHL want to be: GNU-like (coveting openness as the priority) or BSD/Apache-like (coveting dissemination as the priority)?" There was certainly less understanding among those present of what happens when licenses are combined or when licenses are unclear, and far too much "well, don't use that tool then" in response to tool-chain lock-in. However, none of these people were idiots and now is the time to have an impact; mailing list details are linked from the CERN OHL site.
This was followed by a talk by the Arduino co-founder David Cuartielles. They make rather nice, very very cheap do-dads (the project started life as a "how do I get my kid interested in CompSci/CompEng"), and work in collaboration with Telefonica (including Pablo Rodriguez), among others. Very neat ideas, but they face a real trauma: because they are signed up to "open", they are put at risk economically by the fact that once the PCB artwork of a board is released, the company has less than four weeks before clone boards appear. The Arduino boards are considered (perhaps fairly so) a potentially useful blob in the "Internet of Things" (yeah I know - no working definition): sensor boards, various neat flashing-light things, robot boards, all that sort of stuff, neat and nice. I hope Raspberry Pi can recycle the community of connectables this project has fathered.
Creotech (the second of four commercial companies that had talks) counts an ex-CERNer among its founders and is working on instrumentation that plugs into a bus called FMC-something (FMC is part of a wider standard called VITA 57), intended to allow compatible instrumentation packages. What this means for you and me is unclear, but the motivating issue for a lot of these people was either to stop having their lives/work ruled by huge transactions of money to National Instruments (and other vested interests), or to stop the needless repetition of rebuilding things that already exist - a particular high-density Msamples-per-second do-dad was one example that many organizations had apparently started, and perhaps even completed at least 90% of, as their own in-house designs. Creotech embodied open hardware thinking, seemed fairly successful and had earned some respect.
In contrast National Instruments, an ICALEPCS (but not OHW) sponsor, did not. A talk that largely consisted of "Yay National Instruments" did contain one seed of useful insight: the thing most organizations cannot do (and the one National Instruments can) is provide a 25-year guarantee of replacement/operation/etc. It was clear the speaker was having a pot-shot at the open hardware startups, but he also made courting noises, and a question - "what does your talk have to do with open hardware?" - summed up the chasm nicely (in short: not much, or "we are still thinking"). The problems for NI are interesting; an example is this: apparently (don't quote me on how good the toolchain is) the NI tool chain can target special NI devices that are both programmable and include a range of flexible bits, from hardware to firmware (FPGA) to whatever. The toolchain is assured, and tightly (conservatively) bounds what the programming can actually do - all rather critical if you (as NI) need to give assurances not only that your kit will work for 25 years, but that the 'motor control unit that closes the small door preventing escape of the nasty gas' will actually do its job. The idea that someone can knock together some new code to run on the NI device brings out the NI lawyers in cold sweats.
The issue of liability is interesting too, as it is a strong theme in the CERN OHL.
Seven Solutions, a little more Creotech-like, are dabbling and sell an open board or two. They run an interesting and active hybrid model (proprietary + OH).
Instrumentation Technologies, another instrumentation group that is flirting with open hardware, talked; time passed; the talk finished.
Facebook (John Kenevey) talked about the Open Compute Project. Don't get too excited: it's about building a design for machine rooms that is more universal and wide-ranging than the current definition of simply fitting in 19" racks.
Notable soundbites: Open Compute is a white-box channel to the market that challenges the supplier base and allows new entrants. One conclusion was that the dance between silicon vendors (CPU makers) and box-benders means the vendors are screwed and the customers are worse-than-screwed. People pitched OCP as a mechanism to get out from under vendor lock-in.
When you see the in-house machines of Google, and Facebook, and others, this makes a lot of sense.
Modularization is key, and Facebook seem to be enjoying not caring about doing anything more than 'motivating' the actions and encouraging the open-source hardware community. It is clear they are sick of being held over a barrel by the people who assemble machines (metal benders), and hope for some nice innovations... Facebook consider this part of their "GRID to Gates" initiative (GRID in this case meaning the power grid). The problems seem to be: what does a standard smell like? Do we have fans in the rack or in the units? What is the form and nature of power in the rack or in the exchangeable units? Etc., etc. Sadly the impact for the man in the street (or the machine-room fitter-outer in the room) is 12 months away (my wild guess). The slides of this talk were not made available (nor the recording), as there was some discussion (funny stories) about specific metal benders.
Following lunch we had several speakers talking about tool chains for (PCB) design. Two tools got discussed: gEDA and KiCad. KiCad looked very nice indeed, and certainly better than some of the common approaches. Problems discussed included importing artwork, and there were general mumbles of agreement about libraries of packages and pinouts. From this writer's perspective, a public definition of pinouts and packages seems obvious and in the interests of the manufacturers - although probably not in the interests of various 'big package' authors (Cadence, etc.). Time will tell.
Projects discussed in the remainder of the day included:
hdlmake, an effort to get away from the GUIs commonplace in FPGA build tool-chains; it adds manifests to permit dependency trees and seems, for the most part, like a good idea. (It also makes me appreciate the effort Jad Naous and others put into NetFPGAv2 to keep the make system as clean as it was.)
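To make the manifest/dependency-tree idea concrete, here is a minimal Python sketch of how such a walk could work. It is my own illustration - the manifest layout, the module names and the 'files'/'depends' keys are all invented - and not hdlmake's actual file format or API.

    # My own sketch of "manifests that permit dependency trees", not hdlmake itself:
    # each module directory carries a manifest listing its HDL files and the other
    # modules it depends on; a small walker flattens the tree into one ordered file list.
    import os

    # Invented example manifests; a real tool would parse these from files on disk.
    MANIFESTS = {
        "top":  {"files": ["top.vhd"],                    "depends": ["uart", "fifo"]},
        "uart": {"files": ["uart_tx.vhd", "uart_rx.vhd"], "depends": ["fifo"]},
        "fifo": {"files": ["fifo.vhd"],                   "depends": []},
    }

    def collect_files(module, seen=None):
        """Depth-first walk of the dependency tree; dependencies come first so
        they get compiled before the modules that use them."""
        seen = set() if seen is None else seen
        if module in seen:
            return []
        seen.add(module)
        manifest = MANIFESTS[module]
        files = []
        for dep in manifest["depends"]:
            files += collect_files(dep, seen)
        files += [os.path.join(module, f) for f in manifest["files"]]
        return files

    print(collect_files("top"))
    # ['fifo/fifo.vhd', 'uart/uart_tx.vhd', 'uart/uart_rx.vhd', 'top/top.vhd']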
The Icarus synthesis engine - considered critical OSS for Open Hardware; obvious problems include proprietary core handling.
An open FPGA toolchain (Sébastien Bourdeauducq, who did the Milkymist open video-effects hardware). Neat stuff, trying to hack his way around obfuscated FPGA details (with a lot of grumbles about how mean Altera and Xilinx are), but the speaker seemed oblivious to the idea that vendors don't release details of their FPGAs because some knock-off company would start making 10-cent copies of the FPGAs themselves. OK, I'm being unfair; Sébastien's position is "let's get started nevertheless and see what happens". I think this would appeal strongly to academics who want to redesign/modify/mess with the RTL -> FPGA process.
Other things presented included: SOLEIL synchrotron instrumentation and the RHINO project.
RHINO is interesting: an open source radio thing that came from the radar remote sensing group at the University of Cape Town, born from CASPER (a project at Berkeley) and from South African interest in the SKA project (SKA is at this moment a competition to build the next serious astronomy platform, in either Australia or South Africa). Neat stuff. The project incorporates BoRPH and a number of other technologies to make it easier to use and consume.
From the discussion slides for the Open Hardware community (some questions without answers):
- Can clients change their mindset from build-in-house (a not-invented-here variant) and pay for support?
- How can we deal with Tech Transfer departments that argue against OSH (even if the hardware is not core business)?
- How can we involve universities?
- (How) can we pool resources?
- (How) can we pool manpower for projects?
- (How) can we pool money to pay companies (for the dull stuff)?
- Who are the communities?
Lots of talk, not many answers - this is a very young community, with lots of idealism and lots of potential.
Most if not all of the presentations, and videos of the presentations and Q&A, are available from the workshop site, all under Creative Commons (of course).
General comment: this community is very interesting, but right at the moment there is a lot of lazy language that conflates 'commercial' with 'proprietary', as if 'open-source' and 'commercial' were opposites; some of it may be an "English as a second language" issue. I know this wounds commercial organizations (cast as the bad guys) when in fact the intention is something else.
It was great - I will go again, if Javier lets me.
Photonics UK and Cyber Defense UK
Over the last couple of days I was at these two events:
1. EPSRC Network of Networking two-day workshop on Photonics - see
http://www.commnet.ac.uk/node/34
Very interesting to see how coherent the UK's academic and industrial photonics community is - they have a pretty clear roadmap for the next 5 years and some nice challenges. Not a lot for CS (still) until they can do something cool in a) integrating optical links onto processors and b) building more viable (in scale/integration/power terms) gates... but in terms of price/performance they pretty much match Moore's law (terminating a 10GigE link for 10 bucks is an amazing achievement!).
2. Rustat conference on UK Cybersecurity
http://www.cybersecurityforum2011.com/
This will almost certainly be blogged by Ross or someone else in the security group, as they were there en masse. I chaired a session on UK skills; a couple of good outcomes were support from the research councils for more PhDs (whether this leads to money remains to be seen), and the idea that CS graduates who end up on the board as CIOs should make sure they have good business skills, so they aren't looked down on by other board members as just a sort of uber "IT guy"...
Lots of very interesting corridor conversations. The UK government budget in this space is 600M quid, so many SMEs are scampering after it :) In general we seem to be in OK shape (a government policy doc on cybersecurity is out soon; a recent Chatham House report (can't find the link right now) is apparently less rosy, but still very useful). Expect to see more details here soon:
http://www.lightbluetouchpaper.org/
We're having a NATO workshop on this in 10 days at Wolfson in Cambridge... Rex Hughes there is coordinating it with the Cambridge Science and Policy group.
Finally, I suggested a homeopathic remedy for cyberattacks might be to dilute the Stuxnet virus, say, 10^11 times in some random bits (e.g. Windows Vista kernel code) and add it to your site.
Oh yeah, and can someone tell me just what the ICTKTN does?? :)
Mobicom. Day 3
3rd and final day... mainly about PHY/MAC-layer and theory work.
The day started with a keynote by Farnam Jahanian (University of Michigan, NSF). Jahanian talked about some opportunities behind cloud computing research. In his opinion, cloud computing can enable new solutions in fields such as health care and environmental issues. As an example, it can help to enforce a greener and more sustainable world and to predict natural disasters (e.g. the recent Japanese tsunami) with the support of wider sensor networks. His talk concluded with a discussion of some challenges for computer science research in the US (which seem to be endemic in other countries too). He highlighted that despite the market demanding more computer science graduates, few students are joining related programs at any level, including high school.
Session 7. MAC/PHY Advances.
No Time to Countdown: Migrating Backoff to the Frequency Domain, Souvik Sen and Romit Roy Choudhury (Duke University, USA); and Srihari Nelakuditi (University of South Carolina, USA)
Conventional WiFi networks perform channel contention in the time domain, which wastes channel time in backoff. Back2F is a new way of doing channel contention in the frequency domain, treating OFDM subcarriers as random integers (i.e. instead of picking a random backoff duration, each transmitter picks a randomly chosen subcarrier). The technique requires an additional listening antenna, so that WiFi APs can learn the backoff values chosen by nearby APs and decide whether their own value is the smallest among those generated by close-proximity APs. Each AP uses this knowledge to schedule its transmissions after every round of contention. By adding a second round of contention, the APs that collided in the first round can compete again, along with a few more APs. The performance evaluation was done in a real environment; the results show that the collision probability decreases considerably with Back2F with two contention rounds. Real-time traffic such as Skype sees a throughput gain, but Back2F is more sensitive to channel fluctuation.
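To make the frequency-domain backoff idea concrete, here is a toy simulation. It is my own sketch - the subcarrier count, the number of APs and the 'lowest occupied subcarrier wins' rule are assumptions for illustration - not the authors' code or evaluation.

    # Toy simulation in the spirit of Back2F: each AP signals on one random
    # subcarrier; the listening antenna lets everyone see which subcarriers are
    # occupied, and only the AP(s) on the lowest one go on to transmit.
    import random

    def contention_round(contenders, num_subcarriers=16):
        """One round: every contender picks a random subcarrier; return the
        contenders that picked the lowest one (a collision if more than one)."""
        picks = {ap: random.randrange(num_subcarriers) for ap in contenders}
        lowest = min(picks.values())
        return [ap for ap, sc in picks.items() if sc == lowest]

    def collision_probability(num_aps=10, rounds=2, trials=10000):
        collisions = 0
        for _ in range(trials):
            winners = list(range(num_aps))
            for _ in range(rounds):      # a second round thins out first-round ties
                winners = contention_round(winners)
            if len(winners) > 1:         # two or more APs transmit together
                collisions += 1
        return collisions / trials

    print("1 round :", collision_probability(rounds=1))
    print("2 rounds:", collision_probability(rounds=2))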
Harnessing Frequency Diversity in Multicarrier Wireless Networks, Apurv Bhartia, Yi-Chao Chen, Swati Rallapalli, and Lili Qiu (University of Texas at Austin, USA)
Wireless multicarrier communication systems spread data over multiple subcarriers, but the SNR varies from subcarrier to subcarrier. In this presentation, the authors propose a joint integration of three solutions to reduce the side effects (the first of which is sketched after the results below):
- Map symbols to subcarriers according to their importance.
- Effectively recover partially corrupted FEC groups and facilitate FEC decoding.
- MAC-layer FEC to offer different degrees of protection to the symbols according to their error rates at the PHY layer.
Their simulation and testbed results corroborate that a joint combination of all these techniques can increase throughput by 1.6x to 6.6x.
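As a toy illustration of the first point above (mapping symbols to subcarriers by importance), something along these lines could be done; the importance values and SNR figures below are invented, and this is not the authors' algorithm.

    # My own sketch: put the most important symbols on the subcarriers with the
    # best estimated SNR. Importance scores and SNRs are made-up numbers.

    def map_symbols_to_subcarriers(symbols, subcarrier_snr_db):
        """symbols: list of (symbol, importance) pairs, higher = more critical.
        subcarrier_snr_db: per-subcarrier SNR estimates in dB.
        Returns (subcarrier_index, symbol) assignments."""
        assert len(symbols) == len(subcarrier_snr_db)
        by_importance = sorted(symbols, key=lambda s: s[1], reverse=True)
        by_snr = sorted(range(len(subcarrier_snr_db)),
                        key=lambda i: subcarrier_snr_db[i], reverse=True)
        return [(sc, sym) for sc, (sym, _) in zip(by_snr, by_importance)]

    # Example: the header symbol (importance 3) lands on the 18 dB subcarrier.
    assignment = map_symbols_to_subcarriers(
        [("hdr", 3), ("payload-a", 1), ("payload-b", 1), ("checksum", 2)],
        [9.0, 18.0, 4.5, 12.0])
    print(assignment)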
Beamforming on Mobile Devices: A first Study, Hang Yu, Lin Zhong, Ashutosh Sabharwal, David Kao (Rice University, USA)
Wireless links present two invariants: spectrum is scarce while hardware is cheap. The fundamental waste in cellular systems comes from the antenna design, and Lin Zhong proposed passive directional antennas to minimize it: directional antennas are used to generate a very narrow beam with larger spatial coverage. They showed that the solution is practical despite the small form factor of a smartphone's antenna, is resistant to node rotation (only 2-3 dB lost compared to a static node), and does not hurt handset battery life, especially on the uplink where the antenna's beam is narrower. The technique also allows calculating the optimal number of antennas for efficiency. The system was evaluated both indoors and outdoors in stationary/mobile scenarios. The results show that a lot of client-side power can be saved, with power consumption dropping as the number of antennas increases.
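For back-of-envelope intuition about why more antenna elements can save client power, consider the idealised array gain below; the 10*log10(N) figure is the textbook ideal and the numbers are mine, not measurements from the paper.

    # Idealised intuition only (not the paper's model): an N-element array can
    # concentrate energy for roughly 10*log10(N) dB of gain, so the handset can
    # back off its transmit power by the same amount for the same received signal.
    import math

    def ideal_array_gain_db(num_elements: int) -> float:
        return 10 * math.log10(num_elements)

    for n in (1, 2, 4, 8):
        gain = ideal_array_gain_db(n)
        saving = 100 * (1 - 10 ** (-gain / 10))
        print(f"{n} element(s): ~{gain:.1f} dB gain -> ~{saving:.0f}% less uplink TX power")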
Session 8. Physical Layer
FlexCast: Graceful Wireless Video Streaming, S T Aditya and Sachin Katti (Stanford University, USA)
This is a scheme to adapt video streaming to wireless channel conditions. Mobile video traffic is growing exponentially and the user experience is often poor because of channel conditions. MPEG-4 estimates quality over long timescales, but channel conditions change rapidly, which hurts video quality; current video codecs are not equipped to handle such variations since they exhibit an all-or-nothing behavior. FlexCast makes quality proportional to the instantaneous wireless quality, so a receiver can reconstruct a video encoded at a constant bit rate by taking into account information about the instantaneous channel quality.
A Cross-Layer Design for Scalable Mobile Video, Szymon Jakubczak and Dina Katabi (Massachusetts Institute of Technology, USA)
One of the best papers in Mobicom'11. Mobile video is limited by the bandwidth available in cellular networks and by a lack of robustness to changing channel conditions; as a result, video quality must be adapted to the channel conditions of different receivers. They propose a cross-layer design for video that addresses both limitations. In their view the problem is that compression and error protection convert real-valued pixels to bits and, as a consequence, destroy the numerical properties of the original pixels. In analog TV this was not a problem, since there is a linear relationship between the transmitted values and the pixels: a small perturbation in the channel became a small perturbation in the pixel value (though this was not efficient, as the data was not compressed).
SoftCast is as efficient as digital TV while also compressing data linearly (current compression schemes are not linear, which is why the numerical properties are lost). SoftCast transforms the video into the frequency domain with a 3D DCT. In the frequency domain most temporal and spatial frequencies are zero, so the compression sends only the non-zero frequencies; since the transform is linear, the output keeps the same properties. They ended the presentation with a demo showing the real gains of SoftCast compared to MPEG-4 when the SNR of the channel drops.
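A minimal sketch of the "take a 3D DCT, keep only the significant coefficients" step might look like the following; it is just numpy/scipy on a synthetic clip (no packing, scaling or analog modulation), not the SoftCast implementation.

    # My own illustration of the linear-transform step described above.
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)
    # A tiny synthetic "video": 8 frames of 16x16 pixels with smooth-ish content.
    video = np.cumsum(rng.normal(size=(8, 16, 16)), axis=1)

    coeffs = dctn(video, norm="ortho")          # 3D DCT over (time, height, width)
    threshold = 0.01 * np.abs(coeffs).max()
    mask = np.abs(coeffs) > threshold           # most coefficients are near zero
    print(f"kept {mask.sum()} of {coeffs.size} coefficients")

    # Because the DCT is linear, dropping tiny coefficients perturbs pixels only slightly.
    reconstructed = idctn(np.where(mask, coeffs, 0.0), norm="ortho")
    print("max reconstruction error:", np.abs(reconstructed - video).max())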
Practical, Real-time Full Duplex Wireless, Mayank Jain, Jung II Choi, Tae Min Kim, Dinesh Bharadia, Kanna Srinivasan, Philip Levis and Sachin Katti (Stanford University, USA); Prasun Sinha (Ohio State University, USA); and Siddharth Seth (Stanford University, USA)
This paper presents a full-duplex radio design using signal inversion (based on a balanced/unbalanced (balun) transformer) and adaptive cancellation. The state of the art in RF full-duplex is based on techniques such as antenna cancellation, which have several limitations (e.g. manual tuning, channel dependence). The new design supports wideband and high-power systems without imposing limitations on bandwidth or power. The authors also presented a full-duplex medium access control (MAC) design, and they evaluated the system on a testbed of 5 prototype full-duplex nodes. The results look promising, so... now it's time to re-design the protocol stack!
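A toy numpy sketch of the two-stage idea (a fixed inverted copy from the balun, then adaptive cancellation of the residual) is below; the gains and signals are made up, and this is only the arithmetic, not the authors' radio design.

    # My own illustration of self-interference cancellation in two stages.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4096
    tx_self = rng.normal(size=n)                 # what our own radio transmits
    rx_remote = 0.01 * rng.normal(size=n)        # the weak signal we want to hear

    # Stage 1: the balun provides an inverted copy of our transmission; adding it
    # cancels most (but not all) of the self-interference.
    leak_gain, inverted_gain = 1.00, -0.98       # imperfect inversion (assumed)
    received = leak_gain * tx_self + rx_remote
    after_inversion = received + inverted_gain * tx_self

    # Stage 2: adaptively estimate the residual leakage (least squares against the
    # known transmit signal) and subtract it.
    residual_gain = np.dot(after_inversion, tx_self) / np.dot(tx_self, tx_self)
    after_adaptive = after_inversion - residual_gain * tx_self

    def power_db(x):
        return 10 * np.log10(np.mean(x ** 2))

    print("self-interference suppressed by "
          f"{power_db(received) - power_db(after_adaptive):.1f} dB")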
Session 9. Theory
Understanding Stateful vs Stateless Communication Strategies for Ad hoc Networks, Victoria Manfredi and Mark Crovella (Boston University, USA); and Jim Kurose (University of Massachusetts Amherst, USA)
There are many communication strategies, and which works best depends on the network's properties. This paper explores an adaptive forwarding strategy that decides which state-based communication strategy should be used, based on network unpredictability and network connectivity. Three network properties (connectivity, unpredictability and resource contention) determine when state is useful. Data state is information about data packets and is valuable when the network is not well connected, whilst control state is preferred when the network is well connected. Their analytic results (based on simulations over Haggle traces and DieselNet) show that routing is the right strategy for control state, DTN forwarding for data state (e.g. the Haggle Cambridge traces), and packet forwarding for networks in the data and control state simultaneously (e.g. the Haggle Infocom traces).
Optimal Gateway Selection in Multi-domain Wireless Networks: A Potential Game Perspective, Yang Song, H. Y. Wong, and Kang-Won Lee (IBM Research, USA)
This paper tries to leverage a coalition of networks spanning multiple domains with heterogeneous groups. They consider a coalition network where multiple groups are interconnected via wireless links, and gateway nodes are designated by each domain to achieve network-wide interoperability. The challenge is minimising the intra-domain cost plus the sum of backbone costs. They take a game-theoretic (potential game) approach to the problem and analyse the inefficiency of the equilibrium. They suggest the solution can also be used in other applications such as power control, channel allocation, spectrum sharing or even content distribution.
Fundamental Relationship between Node Density and Delay in Wireless Ad Hoc Networks with Unreliable Links, Shizhen Zhao, Luoyi Fu, and Xinbing Wang (Shanghai JiaoTong University, China); and Qian Zhang (Hong Kong University of Science and Technology, China)
Maths, percolation theory ... quite complex to put into words