Tuesday, November 30, 2010

Mutability Considered (Somewhat) Harmful

We recently had a coffee-room discussion on the futility of trying to introduce new relevant programming languages. Computer scientists seem to invent new programming languages on a daily basis - it's an important contribution to conceptual research in computer science and having at least one language to one's name seems to be important for bragging rights in certain circles.

However, it is extremely rare for a programming language to become practically relevant. We can make the somewhat flippant educated guess that since the dawn of commercial computer use in the 1960s, there have been about five major commercially successful languages: FORTRAN, COBOL, C, C++ and Java - roughly one to two per decade.

Certainly, there have been many other vendor- and/or domain-specific languages over the years, or others with significant popularity - just not enough to make it into the all-time A-list. There are multiple surveys which try to measure which programming languages are currently the most popular; the graph below is an example of one of them.


But if a new significant general-purpose language were to emerge, the one thing it would have to get right is concurrency - maybe by largely avoiding it. In the 15 years since Java was introduced, computers have increasingly evolved towards distributed systems. Today's top supercomputers are basically massive clusters of customized PC servers. Even desktop computers typically have multi-core CPUs with complex distributed caches and memory hierarchies, which work increasingly hard at pretending that they are still a von Neumann machine with a single, flat and consistent memory space. Maybe it is time to give up this convenient illusion and find a new model for the more complex and certainly more dynamic reality. For example, this talk by Rich Hickey might provide some food for thought on what a practical programming model outside the von Neumann box might look like.

Concurrent programming using shared state and mutual exclusion has been around for a long time, but it is tricky and error-prone, and until recently it has been the domain of a small number of programmers working on operating systems, databases and other high-performance computing systems. Also, many people may find that even on moderately multi-core machines, lock contention is starting to cause serious performance issues which are hard to diagnose and even harder to fix.

Maybe the most successful way of dealing with concurrency is to avoid it. For example, immutable value objects and pure functions are one way to reduce the impact of concurrency. But since most real-life computer applications involve dealing with things that evolve, the key might be to come up with a programming model where mutability is pushed into the framework, much as garbage collection moved resource management from being a concern of the application to being (largely) the problem of the run-time environment.
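To make this concrete, here is a minimal Python sketch of the immutable-value style (the Account class and deposit function are invented purely for illustration):

```python
# State is never modified in place; an "update" produces a new value instead.
from dataclasses import dataclass, replace

@dataclass(frozen=True)          # frozen=True makes instances immutable
class Account:
    owner: str
    balance: int

def deposit(account: Account, amount: int) -> Account:
    # A pure function: no shared state is touched, a new value is returned.
    return replace(account, balance=account.balance + amount)

before = Account("alice", 100)
after = deposit(before, 50)

# The original value is untouched, so it can be shared freely across
# threads without any locking.
assert before.balance == 100 and after.balance == 150
```

Because `before` can never change, any thread holding a reference to it needs no synchronization at all - the coordination problem is reduced to the single point where a new value is published.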

There has been a fair amount of recent research in lock-free concurrency control, which avoids shared state in favor of atomic operations and data replication and versioning. There is obviously a cost associated with increased data replication, but maybe in a world of high-N cores and complex distributed memory hierarchies, the trade-off could still be a winning proposition.
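As a rough illustration of the versioned, compare-and-swap style of update, here is a Python sketch (the lock merely stands in for the hardware CAS instruction a real lock-free implementation would use, since CPython exposes no such primitive; all names are invented):

```python
import threading

class VersionedRef:
    """A reference whose value can only be replaced, never mutated."""
    def __init__(self, value):
        self._value = value
        self._version = 0
        self._lock = threading.Lock()   # stand-in for an atomic CAS

    def read(self):
        # Readers never block in the real lock-free version.
        return self._version, self._value

    def compare_and_set(self, expected_version, new_value):
        with self._lock:
            if self._version != expected_version:
                return False            # somebody else won the race
            self._value = new_value
            self._version += 1
            return True

def update(ref, fn):
    # Optimistic retry loop: compute on a snapshot, try to publish,
    # and retry from a fresh snapshot if a concurrent writer got there first.
    while True:
        version, value = ref.read()
        if ref.compare_and_set(version, fn(value)):
            return

counter = VersionedRef(0)
update(counter, lambda v: v + 1)
assert counter.read() == (1, 1)
```

The cost is the occasional wasted computation when a retry is needed, plus a copy per update - which is exactly the replication trade-off mentioned above.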

And maybe functional programming languages, which have existed outside the mainstream for decades might finally have their day in the sun.

Friday, October 22, 2010

Privacy: the Transatlantic Divide

Despite what the likes of Mark Zuckerberg may say, there are still some people who strongly care about privacy. This seems to be more so the further east you go from Silicon Valley. And even in the Valley, an increasing number of people are becoming aware of this, even though they may not understand or appreciate the alternate viewpoints.

Whenever I am asked for an opinion on the European "obsession" with privacy or for an explanation on why the Germans seem so incredibly hung-up on privacy, my standard answer goes about as follows:

Yes, there are indeed differences in the approach and attitude to privacy, in particular between the US and non-Anglo-Saxon continental Europe, but in substance the differences may be smaller than the commonalities. And yet we humans seem to be particularly good at picking out small (cultural) differences and getting disproportionately stressed-out over them. In robotics, when a humanoid model is close but just doesn't feel quite right, this effect is called the uncanny valley. It is also very hard to really understand any alternate viewpoint which derives from an experience that is not one's own.

For somebody with a US perspective, maybe the following analogy with freedom of speech is worth considering. Practically all major democracies have a strong constitutional commitment to freedom of speech (often called freedom of expression outside the US). And yet when it comes to trading off freedom of speech vs. other fundamental rights, the US typically strikes the balance strongly in favor of freedom of speech, while Europeans typically favor the kinds of human rights which protect the individual from harm. Americans overall feel very strongly about freedom of speech and are willing to make sacrifices in other areas to support it.

Since the current debate about privacy rights on the Internet is also very much about the conflict between freedom of expression and the protection of the individual, this difference in tradition and priorities might also explain a good part of the difference in approach and attitude on both sides of the Atlantic. While in Europe privacy and data protection are considered a human right per se, the US rather sees them as a consumer-protection issue, mostly concerned with material damages.

Another explanation for the difference in attitude towards data protection in particular is that many Europeans have a recent memory of where the abuse of information can lead. The use and abuse of information played an important role for both fascist and socialist totalitarian regimes through much of the 20th century in Europe. As Vaclav Havel describes in "The Power of the Powerless", the essential source of power for a post-totalitarian system rests in its ability to control information in order to create a collective distortion of reality ("living a lie").

And finally, anybody who wants to better understand the fears of excessive data collection and abuse of personal information should watch the excellent German film and 2007 winner of the Academy Award for best foreign language film - "The Lives of Others".

Tuesday, August 10, 2010

I @#$%&* JavaScript!

I am by no means an expert web developer or particularly familiar with JavaScript, which might influence my distaste for it. My experience is mostly from making small changes in moderately complex existing applications.

JavaScript was originally intended as a small domain-specific glue language to add some client-side behavior to otherwise largely server-side web applications: do some client-side input validation, dynamically modify the page based on user input, etc. But with the growing popularity of AJAX-style web applications, much of the client-side JavaScript code has grown into monstrosities - mostly because of a lack of inherent support for modularity and encapsulation.

One of the most important properties a language environment should support in order to scale to large projects is a way to divide and conquer. There should be a way for one programmer to build upon the work of others without having to understand the implementation details of these building blocks, which might be called libraries, modules, packages, interfaces, objects, widgets, components, etc. depending on the language. This should also include the ability to debug at the context and level of abstraction in which the code is being written.

For example, if an error occurs as the result of an API call, the error should be reported in terms of the API and the parameters passed through it, and not just by a location in somebody else's low-level library code where supposedly something has gone wrong - most likely because I made a mistake in an API call.
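A small Python sketch of what such API-level error reporting could look like (all names here are invented for illustration):

```python
class WidgetError(Exception):
    """Raised by the widget API, describing the call that failed."""

def _lowlevel_render(config):
    # Stand-in for third-party plumbing the caller should not need to read.
    return config["template"]

def render_widget(widget_id, config):
    try:
        return _lowlevel_render(config)
    except KeyError as e:
        # Translate the internal failure into the caller's vocabulary,
        # keeping the original exception as __cause__ for library developers.
        raise WidgetError(
            f"render_widget({widget_id!r}): config is missing key {e}"
        ) from e

try:
    render_widget("sidebar", {})
except WidgetError as err:
    message = str(err)

# The caller sees the failing API call and its parameters, not a stack
# frame deep inside _lowlevel_render.
assert "render_widget" in message and "sidebar" in message
```

The point is the boundary: errors crossing it are rephrased in the caller's terms, while the low-level cause remains attached for those who do want to dig.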

My experience with complex JavaScript libraries and frameworks (in particular the Closure library) is that the benefits one would expect to gain from using high-level libraries and components are greatly reduced by having to debug and understand so much of the provided low-level code - just because the language system does not provide the necessary isolation. If the inclination is to write everything from scratch, so that at least I understand the code which I have to debug in my application, then the language environment has failed from a scaling point of view.

In this case, the blame lies not only with JavaScript as a language but as much with the ecosystem it lives in. De facto, JavaScript runs in a set of target environments, which are typically the most popular browsers (IE, Firefox, Safari, Chrome, etc.). The debugging support of these browsers is still severely lacking - even though things have gotten a lot better with things like the Firebug extension to Firefox or the JavaScript console built into WebKit. Part of the problem also comes from the fact that the typical JavaScript application is never self-contained, but interacts with and depends on the DOM representation of a page in the browser, and depends heavily on the quirks with which this particular browser renders the page and interprets the CSS attributes. The trickiest part of using a third-party JavaScript UI widget is often getting the right kind of CSS definitions loaded in the right order in the page where the JavaScript code is being executed.

Since rich AJAX-style web applications are compelling and powerful to use, there will have to be a way out of this mess. Either JavaScript and its browser-based ecosystem will grow the features needed to support large-scale software development, or a higher-level language abstraction will be layered over it to make programmers more productive, or a whole new web programming model will be created.

For an example of the second approach, GWT is an interesting step in the right direction. GWT is a compiler-based approach, where an AJAX web application is written in Java and then compiled into JavaScript as the target for execution in the browser - relegating JavaScript to a kind of assembly or virtual machine language. The app can be partially debugged natively in Java. However, because of browser quirks, much debugging is still required in the browser, where the abstraction breaks down again, as it did for high-level languages before the existence of source-level debuggers: write code in a high-level language and debug the generated machine instructions.

Another advantage of this high-level compiled language approach is that the execution engine in the browsers is now only used by code generated by compilers and could be more easily optimized or even replaced by something new altogether simply by close collaboration between whoever builds the compilers and the JavaScript engines in the various leading browser platforms.

However, there is another unfortunate issue with the GWT approach: while it scales up pretty well to very complex AJAX web applications, it cannot be scaled down as easily to the simple tasks JavaScript was originally designed for. This would leave a world where one would still use hand-written JavaScript for simple things and a compiled AJAX framework like GWT for complex applications - with an obvious discontinuity when a once-simple application grows into a complex one. Unfortunately, this is a pretty common case in real life.

Among the languages I commonly use, Python has the best scalability properties from a software development point of view. It seems to be easily approachable by novice programmers and is now commonly used as a gentle introduction to programming for non-technical users. It is sufficiently high-level and low on framework overhead to do simple things simply and easily, and yet has just enough rigor and structure to scale to some amazingly large projects. It is an embeddable language, and while its standard distribution is quite a bit bigger than JavaScript's, it would not be impossible to embed it into a browser together with a library to manipulate the DOM. In fact, some efforts to do that seem to exist. While Python was nowhere near as mature and proven in 1995 as it is today, one is left to wonder what the state of web programming would be if Netscape had chosen to embed Python into its browser instead of JavaScript...

Sunday, July 25, 2010

The Psychology of Marginal Cost

One of the side effects of moving internationally is that one is typically required to completely re-evaluate the set of services one is generally accustomed to - e.g. water, power, telecommunications, transport, etc. - partly because traditional utilities are very local, and partly because the circumstances of life are more or less subtly organized differently in different places. One such difference can be the available pricing models for a particular service - most commonly some form of flat-rate or metered pricing.

In areas where both flat-rate and metered pricing plans exist, analysis often shows that even though many consumers prefer flat rates, the typical user would be better off with metered pricing, as only very few heavy users manage to fully use or "abuse" the plan.

Consumers often quote predictable cost and "no bad surprise" at the end of the month as a key benefit of flat-rate pricing. But another interesting observation is that in most cases, flat-rate pricing stimulates increased usage. The real motivation for consumers to choose flat-rate pricing, especially for things related to fun and entertainment, might also be to get the unpleasant financial considerations out of the picture once and for all, instead of letting them remain a kill-joy, nagging question each time the user feels like making use of the particular service.

For us, the most significant area of changed behavior seems to have been transportation. In Switzerland there is a flat-rate pricing option for all public transport, popularly referred to as the GA (in German). For about 200 CHF per month per person, it allows one to hop on any train, bus or boat anytime, anywhere in Switzerland. On the other hand, we don't currently own a car. This decision was made easier by the existence of Mobility CarSharing - a dense and well-established car-sharing service. The signature red Mobility cars can be found at almost any train station, and there are about 6 cars available in our neighborhood, just a few hundred meters from our front door. Pricing is a mix of hourly rent and a per-km charge, which clearly encourages a networked usage of trains for long distances and a Mobility car for "last mile" service. Pricing is around 3 CHF/h (0.6 CHF during night-time) plus a 0.5-1 CHF per-km charge depending on the car model. This is clearly not cheap, and for a long weekend trip it breaks about even with a normal rental car.

However, the membership-based system and the dense network of dispersed self-service locations offer a hugely better pickup and drop-off experience than any rental-car company possibly could. Having a small child, we knew that the optimized combination of train and car would likely not work, due to having to lug around a heavy ECE R44 group II compliant car seat in addition to all the other stuff small children generally come with. Yet assuming the cost of owning our own car to be at least about 1000 CHF per month, we could literally take a car for each weekend and still come out ahead.
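A quick back-of-the-envelope check of that claim, using the rough numbers above (the per-trip hours and kilometers are invented for illustration; actual Mobility tariffs vary by car model):

```python
# All amounts in CHF.
owning_per_month = 1000            # assumed cost of owning a car

def weekend_trip_cost(hours, km, rate_h=3.0, rate_km=0.75):
    # Mobility-style pricing: hourly rent plus a per-km charge.
    return hours * rate_h + km * rate_km

# Four weekend trips a month, each e.g. 10 hours and 150 km:
sharing_per_month = 4 * weekend_trip_cost(10, 150)

assert sharing_per_month == 570.0              # well under 1000
assert sharing_per_month < owning_per_month    # car sharing comes out ahead
```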

After almost a year, it turns out that we have used a car less than half a dozen times (mostly for going to IKEA or other furniture-moving activities) and did all other travel, including many spontaneous excursions, by public transportation. One key factor is that the decision to take the car will most likely result in a bill on the order of hundred(s) of CHF, while the marginal cost of using any public transportation for any time or distance is for us now zero CHF. Even with the streamlined procedures, reserving, picking up and dropping off a Mobility car requires some level of planning, preparation and discipline, while the combination of GA and Google mobile transit directions provides a near-frictionless level of spontaneous mobility (at least between town centers). Yet, despite the very dense Swiss transportation network, a typical trip still takes us much longer by public transport than it would by car - but maybe because I don't particularly like driving, I am more willing to put up with time lost waiting for a connection.

Despite a fair bit of traveling in the last year, the individual fares for our trips would still not have added up to the cost of the GA. Yet we would still consider it a success, putting a lot of emphasis on convenience (not having to figure out what the ideal fare is, or how each ticket vending machine works...) as well as not having any excuse to avoid going out and discovering our new surroundings - encouraged by the psychology of a zero marginal cost for each trip.

Friday, July 23, 2010

Android 2.2 - Froyo

I finally got the new Android 2.2 release for my Nexus One (it's a long story...). Most of the significant features of this release are behind the scenes, like increasing Java execution performance through just-in-time compilation or increasing JavaScript performance in the browser by using the V8 JavaScript engine from Chrome. I'm not sure I really notice much of a difference in everyday use, since I mostly use apps which are UI- and framework-bound for their performance (no CPU-heavy games...), and most web pages are light on JavaScript, with browser performance limited by network and rendering speed. But still, these are very welcome optimizations to help improve the platform overall.

There are a few small enhancements - the most significant for me is the ability to switch auto-correct/complete languages for the on-screen keyboard on the fly, since I write emails and SMS in multiple languages on any given day. There is now also finally a switch to disable the use of cellular data independent of any other function, which had been one of my complaints for a long time.

New on the platform side are a few services, like the ability to back up application data in the cloud, to use the SD card for installing many more apps than the capacity of the internal flash memory can handle, or the C2DM notification service discussed here earlier. All of these require applications written with the new SDK and using these new features to show their full potential.

But I am most excited about the official support for tethering in Android 2.2 - i.e. the ability to use the phone's cellular connection as an uplink for other devices either through USB or wifi. While USB would clearly be better from a battery life perspective, the number of devices which support networking over USB via the phone is a lot more limited than the number of devices which support wifi: today pretty much any internet capable device seems to support wifi. When turning on the mobile wifi hot-spot feature, the phone is basically acting like a wifi base-station, creating a wifi subnet which other devices can join and access the Internet by sharing the phone's cellular data connection.

A quick test with speedtest.net from my tethered laptop vs. the speedtest Android app running on the phone shows no noticeable throughput degradation by tethering (about 2.7Mbps upstream and 1.5Mbps downstream on Orange CH 3G service from our house in Zürich around midnight).

Friday, July 2, 2010

Push Notifications for Android

After struggling with a few apps which use lots of battery and network resources while trying to sync half the Internet onto the device, one wonders if Apple didn't accidentally have a point with their claim that most apps don't really need background processing as long as there is a way to push background notifications to the device.

This leads to a split application design, where part of the application resides on a server on the Internet, doing whatever the background service on the device would be doing, but with a lot fewer worries about power and bandwidth. If there is something new and interesting, a small notification is pushed to the device to alert the user that there is something worth looking at. As long as the device has network connectivity when the user acts on this notification, the details of what the notification is about are loaded on demand. If mobile networks are ubiquitous and fast enough, the resulting experience is almost as good as an app which continuously loads and caches content in the background - and a lot friendlier on battery and network usage.

The upcoming Android 2.2 release includes support for the new Android Cloud to Device Messaging Framework, which promises to do that and then some in a generic fashion.

If an app on an Android device wants to receive particular notifications, the application registers to receive the notification events from the Android OS, requests an authorization key from the C2DM framework, and sends this key to its backend server on the Internet (e.g. by HTTP request, SMS or whatever). When the server wants to push an event to such a registered device, it uses the authorization key to contact the C2DM service, which will then push the notification to the device.
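The round trip can be sketched as follows (a toy Python model, with direct function calls standing in for the authenticated HTTP requests the real C2DM service uses; all names and payloads are invented):

```python
# Toy model of the three parties: device app, push service, backend server.
registrations = {}        # push service: key -> device inbox
device_inbox = []         # notifications delivered to the device

def device_register(app_name):
    # 1. The app asks the push service for a registration key...
    key = f"key-for-{app_name}"
    registrations[key] = device_inbox
    # 2. ...and hands that key to its own backend server.
    return key

def server_push(key, payload):
    # 3. The backend uses the stored key to ask the push service to
    #    deliver a small notification to the device.
    registrations[key].append(payload)

key = device_register("newsreader")
server_push(key, {"headline": "something new", "url": "http://example.com/42"})

# 4. The device receives only a tiny payload; the full content is fetched
#    on demand when the user taps the notification.
assert device_inbox[0]["headline"] == "something new"
```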

One of the sample apps included with the framework - Chrome to Phone - shows how this could work. From a Chrome browser extension, the user can choose to send the current page to their Android device, where the page then can be loaded from an entry in the notification bar:



But since Android does support background processing, this framework isn't limited to simply passing on notifications to the user. An app could still choose to act on some incoming notifications in the background, e.g. by fetching and caching the content which the notification event refers to. The framework seems flexible enough to enable all kinds of new applications which create a more seamless integration of mobile and web-based applications.

Friday, May 21, 2010

Thoughts on "Making Sense of Privacy and Publicity"

This year's SXSW keynote by danah boyd is probably one of the most insightful contributions to the debate on privacy and social networking. For those who have not yet seen it, the rough transcript can be found here. It puts a finger on so many important points, that it should be required reading for anybody who wants to work on consumer web services.

To summarize a key point: in real life, things are usually not as simple as they seem. And that's bad news for the technocrats who typically build and run the virtual environments where social interactions take place online. Engineers and scientists like to simplify and standardize problems, apply Occam's razor, optimize systems along the dimensions of an assumed known quantitative model, etc. The operators of today's large web properties study and analyze their users' behavior and believe they understand their users better than the users understand themselves, but behind the user behavior observable from web logs are layers of significance and meaning which are completely hidden from this behavioral analysis.

In every society, people daily strike many a delicate balance between engagement and guardedness. We learn social rituals, what is appropriate in a given situation, and what kind of properties to expect from a certain context or environment. The boundaries of what is private and what is public are blurred. Some of the most private, intimate and confidential discussions happen in very public places - on park benches on a warm summer night or in Washington, D.C. parking garages. In the offline world, experience allows us to judge how an environment will support our interactions - and in many jurisdictions, altering those properties through hidden means like microphones, telephoto lenses, etc. is explicitly illegal as a violation of privacy rights.

In the online world, things are a bit more tricky. We don't usually know as well how these virtual environments really behave, and their properties are really easy to change for the people who build and run them. To make things worse, the Internet (almost) never forgets. While an intimate discussion held years ago on a park bench has long faded from the observable universe, the same discussion held over IM may remain visible somewhere in a chat log forever. Users often rightfully assume that their daily lives are mundane enough that nobody will bother to explicitly track them through the digital noise. Safety in numbers and hiding in plain sight often work surprisingly well. Just because something is not explicitly blocked from access does not mean that the creator necessarily wants everybody to see it.

Assuming that users somehow find a delicate balance for operating within the virtual spaces formed by social networking and other richly interactive web 2.0 services, one of the worst things which the operators of these services can do to hurt their users is to change the rules on them. Since we do not know what kind of balance each user has found to make things work for themselves, it is hard to predict how any change might affect them. And since there is a lot of data in the system, changing the rules can even be retroactive: a spotlight suddenly shining into a corner of the virtual world where, according to the expectations of the user, it was not supposed to.

Unfortunately, the current best practice of software development is based on embracing change. The software-as-a-service model allows developers to release early and release often, since nobody knows what will really resonate with users. In the end, we get services which are more sophisticated and more integrated with our daily lives than ever before, but at the cost of the eternal beta.

In order not to violate users' sense of privacy, any change which shifts the fabric of our virtual online worlds in a way that might affect the visibility or exposure of anything must be considered very carefully for unintended consequences.

As a fundamental principle, there should be no "ex post facto" or retroactive change to the visibility and exposure of anything without the most explicit and informed consent of the user. But even when operating diligently by such a high standard, accidents are bound to happen from misjudging the impact some changes may have in the user's world.

Ultimately, as the Internet becomes more social, we need to better understand the social dynamics, conventions and rituals of its usage. But unfortunately, that's not something that the introverted computer geeks who have so far built the foundations of the social web are particularly good at.

Thursday, May 20, 2010

From UGC to UCC

I have noticed that a good part of the articles I read online have been suggested by members of my various social networks. Maybe a part of the true utility of social networks is to be a platform for "User Curated Content".

While the web in its first phase tried to mirror the offline world by moving every brick-and-mortar institution and service online, the so-called web 2.0 promised a new world of participatory media, where everybody can create content. While digital media have drastically lowered production costs, the web has driven distribution costs to near zero. Looking around on blogging sites, Flickr, YouTube or other cornerstones of the "User Generated Content" revolution, there are some seriously talented people out there! Some people have managed to make a mark; some even managed to make a living or become minor Internet celebrities in some field. Some other stuff is whimsical, funny or personal. There are unexpected viral hits, or observers who happen to be at the right place at the right time and turn into citizen journalists. But the vast majority is just plain boring and inconsequential rubbish.

More so than ever, the problem has become discovery - i.e. finding the stuff that's relevant, interesting, worthwhile, stimulating, satisfying, etc. at any given moment. When production and distribution of content are expensive, as in traditional media, there are plenty of people whose primary role it is to make choices about what is being produced and distributed on behalf of their audience. They are called curators, editors, DJs, program directors, executive producers, etc., and they are often the most well-known, prestigious (and feared/hated) people in their organization.

Curators make choices about which works of art are on display and which ones are in storage; the ones at leading institutions even define what is considered art, based on what they acquire for their collections. Editors-in-chief decide which stories are printed and define our perception of what is news. In a situation of scarce resources, the difference between a curator and a censor is often only the nature of their intentions (educate and enlighten vs. oppress). On the Internet, the role of a curator is different. There is (near) infinite wall space and everything which exists can be exposed - but because of that, it often cannot be seen or found by anybody in the sheer mass of stuff out there.

There are a few successful strategies for finding something in this giant heap of digital noise. Contextual search ranking revolutionized web search in the late 1990s by creating algorithmic determinations of what is presumably more interesting or relevant to a particular question. Clustering algorithms can help find things similar to something we like, and recommendation engines can suggest things which people like me have liked.
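As a toy illustration of the "people like me liked this" approach, here is a minimal nearest-neighbor recommender in Python (the ratings data is of course invented): represent each user as a vector of ratings, find the most similar other user, and suggest what they liked that I have not yet seen.

```python
import math

ratings = {
    "me":    {"article_a": 5, "article_b": 4},
    "peer1": {"article_a": 5, "article_b": 4, "article_c": 5},
    "peer2": {"article_a": 1, "article_d": 5},
}

def cosine(u, v):
    # Cosine similarity over the items both users have rated.
    shared = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in shared)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(user):
    # Pick the most similar other user and return their unseen items.
    best = max((p for p in ratings if p != user),
               key=lambda p: cosine(ratings[user], ratings[p]))
    return [item for item in ratings[best] if item not in ratings[user]]

assert recommend("me") == ["article_c"]   # peer1 agrees with me, peer2 doesn't
```

Note how the result mechanically reinforces my existing taste - which is exactly the limitation discussed below.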

However, in a world dominated by the chatter of millions of undistinguished sources, search can often fail to find the nuggets in the trash heap. And recommendation engines only reinforce my current point of view. How can I learn, grow, be surprised and intrigued if I am only ever fed things recommended by people like me? Why should my taste and judgment be any good... I would rather get recommendations from people who are smarter, more knowledgeable, more stylish and more plugged in to a particular field, but that's hard to determine by algorithmic means.

So we are back to curators. Or editors, guides, teachers, gurus, opinion-makers, trend-leaders, talent-scouts or whatever we want to call them. Relevance is no longer defined globally, but based on people whose judgment we trust and respect. Social networks can be a source of such relevance, following the age-old patterns of word of mouth among friends and family. Even more powerful would be asymmetric social networks, where we can find somebody whose judgment we respect and "follow" them, without them necessarily having to know us.

In fact, many bloggers are more editors or curators than creators of original content. This is even more so with micro-blogging systems like Twitter, which many consider the quintessential asymmetric social network. And Wikipedia is probably the most high-profile project where domain experts can live out their inner librarian, and do so with great determination.

Given the importance of curators, I think there is still too much emphasis on content creation. The real challenge today is to mine the piles of digital trash for the nuggets of gold which most certainly exist in numbers never seen before. We already have more content than we know what to do with, but there are not enough platforms and frameworks where the people who would like to organize it could shine and be recognized. Part of the problem is that when it comes to derived works, copyright gets really murky, and it's hard to say who should get credit (or even paid) for what. But it is time for online curators to get more respect and for librarians to step into the limelight. This could easily be one of the next big things for online digital media platforms.

Tuesday, May 18, 2010

A day in the life of the Internet

Today's top suggestions on a google.com search for "How do I" are:
  1. how do i delete my facebook account
  2. how do i find my ip address
  3. how do i get a passport
  4. how do i know if im pregnant
  5. how do i love thee
  6. how do i look
Out of which only #5 has a relatively straightforward answer:
... Let me count the ways.
I love thee to the depth and breadth and height
My soul can reach, when feeling out of sight
For the ends of Being and ideal Grace.
I love thee to the level of everyday's
Most quiet need, by sun and candle-light.
I love thee freely, as men strive for Right;
I love thee purely, as they turn from Praise.
I love thee with a passion put to use
In my old griefs, and with my childhood's faith.
I love thee with a love I seemed to lose
With my lost saints, --- I love thee with the breath,
Smiles, tears, of all my life! --- and, if God choose,
I shall but love thee better after death.

Sonnet 43, Elizabeth Barrett Browning

Friday, May 7, 2010

IT != IT - the Case for a Differentiated Immigration Policy

The Swiss government recently reduced the quota of work permits for applicants from so-called third states - typically countries outside the EU and not covered by the free-movement treaties between the EU and Switzerland. After a highly publicized protest led by high-tech companies like Google, Microsoft and IBM, the Swiss government rather quickly reversed its decision.

In the midst of a recession with higher than usual unemployment and increased levels of immigration from the EU following the free-movement agreements, the general mood in the population is not very supportive of any increase in immigration quotas. This is seen as yet another attempt by greedy corporations to undercut the Swiss standard of living by importing cheap labor from overseas - typically from south-east Asia - in what is generally called the IT or information technology sector. How can there be a shortage of IT labor if almost everybody knows someone who is unemployed and supposedly somehow "in IT"?

As with most controversies, there may be some truth to the charge that companies use immigration to depress labor costs, but the crucial core of the problem is the ability of companies who operate global R&D facilities in Switzerland (like those named above) to attract the best possible talent in a particular field - regardless of skin color or country of origin.

Since information technology has permeated just about every aspect of not just business but increasingly also personal life, the so-called IT sector has become so large and diverse that saying somebody is "in IT" is about as meaningful and descriptive as saying that somebody works in an office.

The vast majority of IT jobs are about supporting and customizing systems and applications based on the specific needs of their users. The most visible IT workers are PC technicians or system administrators, whom almost everybody knows first hand from their daily work. Or the armies of application developers who work on big in-house IT projects for large corporations like banks or insurance companies - either as employees or as contractors from large IT services companies like Accenture, IBM or Infosys on behalf of local clients. Most of what goes on here is indeed not rocket science from a technical point of view. The key stakeholders are not technology companies, nor are they interested in technology per se. They rather consider it a necessary evil, a cost center which they would like to minimize. No matter how high-tech an image certain IT service providers cultivate in public, in reality they try to be as technically unspectacular and conventional as they can in order to minimize the risk of implementing something which has basically been done many times over before, in slightly different forms. As no two organizations are exactly the same, the IT systems used to support them are also slightly different, which causes all this effort and duplication. For many of these jobs, the ability to communicate with the users of the system and to understand their application domain is a lot more important than raw technical skill and knowledge. It is rightfully debatable to what degree immigration vs. increased education and training should resolve the general shortage in this still fast-growing sector.

But there is also a very small segment of the IT industry which is the true high-tech sector. This is where the technological innovation happens. These are the companies and people who build the core pieces like operating systems, database engines, computer chips, programming frameworks or communication equipment. This is typically also where the value-add is most concentrated, and companies like HP, Apple, Cisco, Intel, Microsoft, Oracle or Google have famously propelled their founders and investors into the top league of the world's wealthiest people. This is where the coolest and sexiest jobs are for the technically inclined, ambitious, talented and well educated in the IT workforce, and the companies seen as the avant-garde of technological innovation can typically take their pick of whom they want to hire globally.

These kinds of jobs are also typically concentrated in a few select places around the world, most prominently in Silicon Valley, because this is where the necessary key talent can be found. This is not because people born in Santa Clara County are somehow smarter than the rest of the world, but because of migration. Silicon Valley has a share of foreign-born population way above the US average - about 40% versus 10% nationally - and a rate of over 60% among the Valley's engineers and scientists. And this hides an equally significant domestic migration within the 300-million-strong US population: many with advanced degrees in engineering and science have moved to Silicon Valley from all over the country in order to play in the top league of their field.

Even though they pale in scale and importance next to Silicon Valley, there are a number of secondary clusters of high-tech excellence around the world. Switzerland is reasonably well positioned, with a strong tradition of industrial innovation going back to the industrial revolution in the 19th century, two technical universities which often appear among the highest-ranked non-anglosaxon institutions in many league tables, and a number of high-profile R&D labs - some run by domestic champions (pharma and the machine industry) and some by US high-tech companies (e.g. IBM, Google, Microsoft, Cisco) who have decided that Switzerland is a good place to hire some of the top talent who for some reason don't want to move to Silicon Valley... It would be hubris for a country of barely 7 million to assume that a significant share of the world's leading experts in any particular field could be produced domestically, no matter how strong the culture of excellence or the confidence in the local education system.

Any organization which is at the same time highly specialized and world class must necessarily be able to recruit globally from the best talent in its particular field, whether it is a top-ranked symphony orchestra, a premier-league football club, an elite university or a world-class industrial R&D lab.

The reason global companies have chosen Switzerland as a place for global R&D is only partly the strong technical tradition, good universities and a pool of key talent already there; it is primarily the highly rated quality of life, including reliable public services, a picturesque landscape, low personal income taxes, safety, stability and a generally pragmatic government. Basically, a place to which it is relatively easy to convince people to relocate - people who otherwise have plenty of choices, options and other offers.

In the grand scheme of things, these few world-class labs will only employ a few hundred to a few thousand highly educated specialists, and as such have very little impact on the overall employment or immigration situation. But they generally contribute a disproportionate amount to economic development through prestige and intensified interaction and networking with other local firms and universities. For the one key ingredient - the ability to attract top talent - they need to be able to recruit internationally with minimal restrictions and interference. At this point it is up to the Swiss government to either leverage its currently strong position in the competitive global knowledge economy, or to snatch defeat from the jaws of victory, e.g. through pandering to the populist right on immigration and to the left in the form of misguided attempts at labor-market protectionism.

Organizations which can credibly make the case that they are in that global league, recruiting for the top talent in their field, should be exempt from any restrictions and quotas. In addition, there should generally be a priority visa category based on education, skills and experience compared to the best in the field - similar to the "alien of extraordinary ability" visa in the US. To go a step further: anybody who graduates with distinction from a university near the top of whatever top-N global league table should have the automatic, pre-approved right to a work and residence permit, should they choose to come. (To calm the shrieking voices of panic on the right: very few would actually come, since they typically have plenty of opportunities and lots of other good offers.)

Saturday, May 1, 2010

A Game Changer for Public Transportation Users

My favorite and most used app on Android isn't even an app, but rather a service. It is the Google Transit public transportation directions feature in Google maps, which can also be accessed more easily through the Maps application on Android - or on any other mobile platform which supports Google Maps for mobile.

In combination with the extremely dense and frequent network of public transportation in Switzerland, transit directions on the phone offer a level of spontaneous mobility which is generally associated with driving. When it came to using public transportation networks, people tended to know by heart the routes they travel frequently (e.g. the daily commute), while anything else required thorough planning by poring over books of printed timetables - an activity enjoyed only by the most hard-core train buffs.

The Swiss public transportation system has always been particularly well integrated, across all providers and including everything from urban transit, buses, trains and boats to touristic mountain cable cars. The system-wide printed timetable, the size of a phone book, has long been online and can be queried on the SBB website. There is now also an Android app, Fahrplan CH, which acts as a query front-end to this site, but no official app yet, as there is for the iPhone.

On the mobile Maps application, finding the next transit connections from the current location is as easy as entering the destination, even a fuzzy one, with some help from Google Maps search and suggestions to find the exact destination address. The app shows the next few connections in an overview tab with total travel time, and as a detailed list with connection times and a description of each stop and carrier to take. A particular trip can also be viewed on the map, which is particularly useful for the first and last legs, which usually involve walking to and from the station or stop.
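Under the hood, answering a "next connections to my destination" query over an integrated timetable can be sketched as a simple earliest-arrival scan over individual vehicle connections. The sketch below is purely illustrative - the stops, times and `Connection` structure are made-up toy data, and this is not Google Transit's actual algorithm or data model:

```python
from dataclasses import dataclass

@dataclass
class Connection:
    dep_stop: str
    arr_stop: str
    dep_time: int  # minutes after midnight
    arr_time: int

def earliest_arrival(connections, source, target, start_time):
    """Scan connections in departure-time order, keeping the earliest
    known arrival time at each stop (a Connection Scan style sweep)."""
    best = {source: start_time}
    for c in sorted(connections, key=lambda c: c.dep_time):
        # We can board c only if we reach its departure stop in time.
        if best.get(c.dep_stop, float("inf")) <= c.dep_time:
            if c.arr_time < best.get(c.arr_stop, float("inf")):
                best[c.arr_stop] = c.arr_time
    return best.get(target)  # None if unreachable

# Toy timetable: a tram to the main station, then two onward trains.
timetable = [
    Connection("Home stop", "Main station", 605, 615),
    Connection("Main station", "Destination", 620, 650),
    Connection("Main station", "Destination", 635, 705),
]
# Leaving home at 10:00 (600), the earliest arrival is 10:50 (650).
print(earliest_arrival(timetable, "Home stop", "Destination", 600))
```

The per-metro-area isolation mentioned below follows naturally from this model: the scan can only find trips within whatever set of connections the transit authority's data feed covers.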

An unfortunate limitation of Google Transit is its model of treating each metro area in isolation, since the data usually comes from isolated transit authorities. Fortunately, Switzerland is represented as a single transit domain, but there is no way to display international directions - e.g. from a local address somewhere in Zürich to a local address somewhere in Paris, which would involve taking a bus or tram to the main station, the TGV to Paris and a metro or bus connection towards the final destination. The SBB website does show basic international connections, but not at the level of every possible subway stop in every possible city, since that data-set really doesn't exist in a single integrated form.

Moving to a new city a few months ago was a good test of how well on-the-go transit routing and trip planning works. We didn't know our way around and often left the house knowing only where we wanted to go, but not how best to get there or how to get back. This only works in areas with very dense coverage of public transportation, where we can be sure that there is a connection every 15 minutes or so, that there will always be a way back when we want one, and that service does not suddenly stop in the middle of the afternoon.

It does not take a lot of imagination to see that mobility itself would be a killer application for mobile devices, and dynamic public transportation routing in areas with good service coverage is clearly delivering on this promise.

Tuesday, January 12, 2010

One Password to rule them all

I am notoriously bad at memorizing. If not, I might have gone to medical school and chosen a more lucrative career than engineering... But as the number of online services I use increases, so does the number of account username and password combinations. I try to standardize as much as I can on the same username(s), but some sites make this really hard by requiring strange and unusual conventions (the name must be at least 8 characters long and include at least one special character and a number??? Whose name looks like that?) or by dictating that the username be whatever 10-digit number their database uses as the unique key for the account record. Same drama for the passwords, except that using a standard password everywhere has the added disadvantage that once the password is compromised, the attacker would have access to all my various online service accounts - if s/he could guess those bloody convoluted usernames... ;-)

Things got particularly bad for accounts which I use very rarely and which have complicated account-recovery procedures - typically hours spent in AVS and call-center limbo. I admit that I even committed the exemplary no-no of IT security 101: writing my username/password combinations on post-it notes! [Which is not as bad as it sounds, since our apartment is not a highly public place, and if it were broken into, I would likely have bigger problems and hassles than changing a few account passwords.]

Before our recent move, I finally adopted a more reasonable strategy for password management and storage. I consolidated my post-it notes into Password Gorilla, a multi-platform application which uses the same secure storage file format as the Password Safe application. The original Password Safe was designed by cryptographer and computer security expert Bruce Schneier - which hopefully ensures that there are no obvious design flaws in the encrypted storage format.

Since Password Gorilla has a built-in feature to auto-generate a different random password for each entry, it is really easy to choose unique and very strong passwords for each service. However, reliable access to the database anywhere and at any time then becomes crucial.
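At its core, such a generator just draws each character independently from a cryptographically secure random source. Here is a minimal sketch in Python - not Password Gorilla's actual implementation, and the default length and alphabet are my own assumptions:

```python
import secrets
import string

# Hypothetical default alphabet: letters, digits and a few symbols.
DEFAULT_ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def random_password(length=16, alphabet=DEFAULT_ALPHABET):
    """Generate a password by drawing each character uniformly from
    `alphabet` using the OS's cryptographically secure RNG."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())    # a fresh 16-character password, different every run
print(random_password(24))  # longer means more entropy per password
```

The important design point is using a CSPRNG (`secrets` here) rather than an ordinary pseudo-random generator, whose output an attacker could potentially predict.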

Using a format which is a quasi open-source standard, supported by many different applications on different platforms, should increase the chance that I will always be able to find an application somewhere to read and decode the password database, even if something really bad happened to my computer.

A convenient way to keep a backup copy of the database is to store it in the memory of my mobile phone; even better, for access to my passwords anytime on the road, is to use an application which can open and decode the password file directly on the phone.

For a while, I had been using Android Password Safe - or rather a not-yet-released experimental version which allows importing and exporting an existing password database, which is absolutely essential for sharing the database between the phone and my main computer. However, this has been the state for over a year now, and it seems as if the author has abandoned the project.

I am glad to see that very recently a new version of a Password Safe compatible application has been released: PasswdSafe. It is a viewer only, which is perfectly fine for my use-case, where the master database is always on the computer at home and the phone is a read-only backup copy. Because it is read-only, the records are displayed nicely in a compact way, even with a quick way to show & hide the password itself.

Some additional features I would like to see are a timeout-based auto-lock, which locks the database again after some time if it is left open, and a way to import/export databases into the phone's internal memory instead of reading them from the removable SD card. Granted, all Android phones can be "rooted", after which the user has unlimited access to the phone's internal memory as well through the USB serial port, but putting some additional effort in front of a potential attacker who wants to get their hands on the password database can't hurt... Besides, half the battle is knowing when the database has been stolen - swapping an SD card and/or quickly copying a file could be done in a few seconds when the phone is left unattended.

Once a potential attacker has gained access to the password database, the weakest link for cracking it open is guessing the master password. Since I need to be able to remember it, it quite likely has significantly less entropy than could be contained in a randomly chosen 128- or 256-bit Twofish key used to encrypt the database.
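The gap is easy to quantify: a password of n characters drawn uniformly at random from an alphabet of k symbols carries n·log2(k) bits of entropy, and a memorable password is usually far below even that uniform-random bound. A back-of-the-envelope calculation (assuming an alphabet of the 94 printable ASCII characters, a number I chose for illustration):

```python
import math

def entropy_bits(alphabet_size, length):
    """Entropy in bits of a password of `length` characters drawn
    uniformly at random from `alphabet_size` distinct symbols."""
    return length * math.log2(alphabet_size)

# A 10-character fully random password over 94 printable ASCII symbols:
print(round(entropy_bits(94, 10)))       # about 66 bits - well short of 128

# Length needed for a uniform random password to match a 128-bit key:
print(math.ceil(128 / math.log2(94)))    # 20 characters
```

So even a fully random 10-character master password offers only about half the bits of the Twofish key it protects, which is exactly why the master password, not the cipher, is the weakest link.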

But then, my accounts might not be worth the effort of even a systematic password cracking attack, in case somebody technically sophisticated enough manages to steal my phone - but if I were an anesthesiologist, this might be a different picture...


Sunday, January 10, 2010

G1 to Nexus One: a review

By any measure, the new Nexus One is a very nice phone: large brilliant display, high-performance CPU core, high-res digital camera, solid low-profile body. With this kind of hardware spec, the Nexus One establishes itself as the current flagship among Android phones. To my liking, it does not have a physical keyboard, which makes for a much slimmer and more solid-feeling body than, for example, the G1. In that sense the Nexus One is what I had hoped for in my original review of the early G1.

In the roughly 1.5 years since the G1 came out, Android has come a long way. Three major releases of the platform have added important missing features like an on-screen soft keyboard and helped harden the platform based on experience in the field. At the same time, developers have contributed a wide range of expected and unexpected applications, and have learned how to write them so that they don't drain the battery within minutes.

There are now half a dozen or so Android phones on the market and many more in the pipeline. In particular HTC, previously a pure Windows Mobile shop, seems confident enough in the future of Android to release it on their most advanced hardware - a move which must certainly upset their strategic OEM partner Microsoft.

With the success of Android also comes the increasing risk of fragmentation into a plethora of mutually incompatible vendor- and carrier-specific versions. Due to the permissive nature of the Apache open-source license, this cannot really be avoided. By putting out the Nexus One as a leading example of "Android done right, according to Google", Google now has one more way to coerce the members of the Open Handset Alliance to follow its lead and not produce restricted, proprietary and limited versions of Android. In the end it may matter less how many unlocked Nexus Ones Google actually sells; the fact that they are available at all might have some effect in keeping carriers and vendors honest.

Compared to the G1, the Nexus One feels very snappy, thanks to the faster CPU and increased memory. While the G1 had physical buttons for standard Android operations like "home", "back" or "menu", the Nexus One has dedicated touch buttons at the bottom of the screen. The green and red call-control buttons are now missing, which means that all phone operations must be done from the touch-screen. While on the G1 pressing the call button was always a shortcut to launch the dialer, on the Nexus One this has to be done explicitly, either from the app panel or a home-screen shortcut icon. Instead, the Nexus One has a dedicated "search" button, which shows the crucial importance of search in "Android according to Google" (more so than making a call, apparently...). Without the physical command buttons, there is now the need for a dedicated on/off and sleep/wake button, which is awkwardly placed at the top edge of the phone. Since I always unlock/lock the phone before/after any usage, the location of this button is unergonomic for how I typically hold the phone and is a bit of a hassle. The Nexus One still has the trackball, which I hardly use, which takes up significant real estate on the phone, and which introduces potential fracture points in the casing (the faceplate of my G1 had cracked along the trackball opening in less than a year). I would happily trade it for a sleep/wake button on the faceplate, or simply shrink the phone by as much.

The physical design of the Nexus One is low key and unspectacular: a flat, sleek shape with rounded edges. But it feels nice and solid in the palm of my hand - how a palmtop computer should feel. Fortunately gone is the ugly and awkward "Android chin" of earlier HTC devices. So far the only flaw in the case is the somewhat sharp edge of the protruding camera lens.

The charger/USB port has changed from mini-USB on the G1 to the new standard micro-USB on the Nexus One, which unfortunately means that existing G1 chargers and USB cables cannot be reused.

On the software side, Android 2.1 offers Gmail support for multiple accounts and a contacts application which can sync to multiple sources (multiple Google accounts, Facebook, Exchange). The home-screen application, app tray, dialer and contacts applications have received a significant redesign and facelift, but otherwise the changes are rather minor compared to Android 1.6, released not too long ago.

Overall, I am very happy with the Nexus One as an everyday phone. It is a very capable high-end consumer smartphone and probably the first Android-based phone that is clearly in the same league as the iPhone.

Wednesday, January 6, 2010

Nexus One: the good, the bad and the ugly

So it doesn't cure cancer, solve world hunger or even global climate change - but it's still a pretty nice phone!

At first glance, the large, crisp high-res display gets all the oohs and aahs, including how snappily the UI responds thanks to the 1GHz Snapdragon chipset. With its speed and responsiveness, the phone is a pleasure to use! The form factor is thin and sleek, with ergonomically rounded corners, and it lies well in the palm of one's hand. The Teflon-coated plastic case gives it a nice high-end texture. The biggest improvement in software features for my use case is the ability to sync multiple accounts for the contacts and the Gmail app - now I can get notifications for email arriving on any of my Gmail accounts.

My only serious gripe so far is with the placement of the on/off button, which I need to press each time before and after using the phone. Its location at the top edge is very un-ergonomic for single-handed use - i.e. fishing the phone out of the pocket with one hand, turning on the display, balancing it on the palm while using the thumb to swipe the unlock pattern and do the basic navigation. I am also not too thrilled about the protruding camera lens and the trackball, which I hardly ever use, distracting from an otherwise very slick and smooth case.

At this point, the hardware specs are probably the most impressive of any phone on the market, and the Nexus One should be a serious cure for iPhone envy among consumer-smartphone users who for one reason or another don't want to get an iPhone.