Monday, October 22, 2012

Look Ma, no Wires! Raspberry Pi Bluetooth tethering

Bluetooth is a relatively low-powered, short-range wireless technology which can, among other things, provide a networking connection between devices, and is widely supported in portable devices like laptops and mobile phones. Low power and ubiquitous support should in principle make Bluetooth an ideal solution for tethering.

Nothing beats the simplicity, robustness and performance of a simple Ethernet cable for a tethered network connection - as long as both devices support it. For the upcoming Raspberry Pi model A or certain ultra-portable devices without an Ethernet port, Bluetooth might be a serious option.

Wi-fi would probably be the most obvious choice of wireless networking technology. However, using the ad-hoc wi-fi hotspot option for tethering on my Android phone turns it into a pocket heater and drains the battery in no time - much less so when using Bluetooth tethering instead. Besides, wi-fi networking is very well documented, while there is very little documentation for Bluetooth networking on Linux in general and for the Raspberry Pi in particular.

For the experiment, I am using a cheap no-name USB Bluetooth dongle which identifies itself as "ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)" with lsusb. Luckily it seems to just work with the standard Raspbian kernel, which isn't always a given for exotic hardware under Linux.

The next step is to install all the Bluetooth support software:

sudo apt-get install bluetooth bluez-utils bluez-compat

Once the Bluetooth subsystem is started, the adaptor should start blinking a blue light, if all goes well.

The Bluetooth security model requires that two devices be "paired" by the user before they can connect to each other automatically. Unfortunately, this procedure still seems to be a bit of black magic on Linux, especially when only using command-line tools. The following two approaches have worked for pairing an Android phone as a device to the Raspberry Pi, and for pairing the Raspberry Pi as a device to an Apple PowerBook.

To pair with an Android phone (4.1.2):

  1. make phone visible for Bluetooth scanning (settings -> Bluetooth, click on device name to toggle visibility)
  2. scan for new devices: hcitool scan (note Bluetooth address of phone - e.g. C8:BC:C8:E1:E5:C5)
  3. pair with new device by address: bluez-simple-agent hci0 C8:BC:C8:E1:E5:C5

To pair with the PowerBook:
  1. make the Raspberry Pi visible to Bluetooth scans: sudo hciconfig hci0 piscan
  2. accept pairing requests (arbitrary PIN): sudo bluetooth-agent 1234
  3. on the PowerBook, open Bluetooth setup, scan for new devices and pair the Raspberry Pi
  4. hide the Raspberry Pi from further scans: sudo hciconfig hci0 noscan

To show all the devices which are currently paired, i.e. available for connection if within reach:

sudo bluez-test-device list

And finally, use pand to create a PAN (Personal Area Network) connection:

sudo pand -c C8:BC:C8:E1:E5:C5 --role PANU --persist 30

which should result in the creation of a new network interface bnep0 (see it with ifconfig -a). Since both Android and OS X Internet connection sharing support address configuration via DHCP, we can add the following line to /etc/network/interfaces:

iface bnep0 inet dhcp

After adding the pand connect command to /etc/rc.local, the connection is established on startup if the other device is in reach. With the --persist option, the PAN daemon will attempt to reconnect whenever the connection is interrupted.
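
Putting the startup pieces together, the boot-time side of the setup is a single line (the Bluetooth address is the example one from the pairing step above):

```shell
# /etc/rc.local - establish the PAN connection at boot; --persist makes
# the daemon keep reconnecting whenever the link drops
pand -c C8:BC:C8:E1:E5:C5 --role PANU --persist 30
```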

A small but significant disadvantage over the current wired setup is that mDNS zero-conf does not seem to work properly. Even though the Raspberry Pi is sending mDNS announcements from avahi on the PAN interface, the bonjour service database does not get populated on the PowerBook - but maybe this is an issue with my particular configuration.

Overall, the Bluetooth networking setup seems stable enough to establish the connection without user intervention on startup, which is essential for operating in a tethered, headless mode. And the power consumption of the Bluetooth dongle is still low enough to power the Raspberry Pi with both Bluetooth & Ethernet connections from the laptop USB port...

Sunday, September 9, 2012

Raspberry Pi tethering

After some initial experiments with using a Raspberry Pi as a home server, I wanted to try out some configurations useful to the intended purpose of the Raspberry Pi: promoting computer technology literacy among students.

At least some of the initial feedback from trying to deploy Raspberry Pis in a classroom setting seems to indicate that getting the necessary hardware to operate the Pis is non-obvious. Setting up a room full of Pi-based workstations requires HDMI- or composite-connected TVs or monitors, USB mice and keyboards, micro-USB power supplies and Ethernet connectivity routed to each workstation. The cost and logistical complexity of doing that seems to defeat the initial low-cost advantage of the Raspberry Pi.

Furthermore, many schools might already have fully equipped computer facilities with incompatible equipment (e.g. SVGA monitors, standard on any not-so-new PC), which could be used, but just not "messed with" for low-level programming and system administration experiments.

Companies often replace their computers as frequently as every 3 years and, as this example shows, schools could easily get their hands on a heterogeneous mix of somewhat old but still working desktop PCs or laptops, more than good enough for light usage.

These days, many students might also have access at home to a PC, netbook or tablet, complete with Internet connection, which can be used for many things, except experimenting with computer technology.

For any of those reasons, I think that the scenario of interfacing and accessing a Raspberry Pi board through an existing standard PC or tablet is going to be a very common use-case, if not, in the end, the most common one.

There are two common network configuration scenarios for remote access:
  1. connect the Raspberry Pi via Ethernet to an existing wired network, e.g. directly to a network port on a home-gateway router/firewall/wifi-basestation and connect to it via ssh from any device on the same network. This is what I have been doing so far.
  2. connect the Raspberry Pi via Ethernet directly to an unused port on the host computer
Using an Apple PowerBook running Mac OS X 10.6 as the host, I was trying to create a setup which allows for easy plug & play network connectivity. The criteria for success would be the usability for doing some Python programming using IDLE, the native Python IDE bundled with the standard Python distribution. The starting point is an SD card with a stock Raspbian "wheezy" image downloaded from the official site. Most of the approaches described here should also work on Windows, but I can't verify that since I don't have access to any Windows machines.

For the initial customization of the image, I connected the Pi board initially to the home router. Fortunately, the standard image already boots with SSH enabled, but does not offer any support to locate the device on the network. Since my wife runs production network operations at our house, I don't have root access to the router, so I was using Fing on Android to scan the network for all available hosts and find the IP address of the new device with a MAC address issued by the Raspberry Pi foundation.

After a first ssh connection into the new device, we run
sudo raspi-config
and choose the options to extend the root partition to the full size of the flash card and maximize the memory available to the CPU, as we don't intend to use the video subsystem.

The most important step is to install Avahi, the mDNS zero-conf daemon, which will advertise the presence and address of the device on the local network. To install and activate, type:
sudo apt-get install avahi-daemon
sudo update-rc.d avahi-daemon defaults
It might be a good idea at this point to give the device a new, unique name, as this will be used to connect to it in the future - replace the default hostname in /etc/hostname and /etc/hosts, e.g. "rpi" instead of "raspberrypi".
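
Assuming the stock image's default name, the rename can be done in one step (double-check both files before rebooting):

```shell
# replace the default hostname "raspberrypi" with the new name "rpi"
sudo sed -i 's/raspberrypi/rpi/g' /etc/hostname /etc/hosts
```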

After reboot, we should now be able to ssh into the new device by typing
ssh pi@rpi.local
in a window of the OS X Terminal application (hidden in Applications/Utilities). For added nicety and to have the device appear automatically in the connection selection of some ssh clients (e.g. on iPad), we can also add the ssh service to be explicitly announced by avahi-daemon, by adding a new file /etc/avahi/services/ssh.service:
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h SSH</name>
  <service>
    <type>_ssh._tcp</type>
    <port>22</port>
  </service>
</service-group>

With an application like BonjourBrowser on OS X, we can see what is being advertised on the network, including the newly configured service.

Before connecting the new device directly to the Ethernet port of the Powerbook, we need to choose one of two network auto-config methods:
  1. enable IPv6, which by standard includes self-assigned link-local addresses for all interfaces. To enable IPv6 at boot,  add ipv6 at the bottom of /etc/modules.
  2. install an IPv4LL auto-conf daemon (sudo apt-get install avahi-autoipd) which, if the DHCP client fails to get an address, will auto-assign one from the address space designated for IPv4LL self-assigned link-local addresses.
I personally prefer the IPv6 solution, as it is a lot less hacky.
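
For reference, the two auto-config options from above boil down to:

```shell
# Option 1 (preferred): IPv6 link-local addresses - load the module at boot
echo ipv6 | sudo tee -a /etc/modules

# Option 2: IPv4LL - self-assign a 169.254.0.0/16 address when DHCP fails
sudo apt-get install avahi-autoipd
```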

After those changes, we can now connect the Raspberry Pi directly to the PowerBook with any Cat5 Ethernet cable (no cross-over cable required, as the ports are auto-sensing). In the pictures above, the Pi board is powered directly from the laptop using a USB to micro-USB cable. Power consumption appears to be low enough without the additional draw of USB-connected peripherals or the HDMI port. (Since this is not recommended, don't try this at home - or at least only at your own risk...)

We should now be able to ssh into the directly attached board by typing:
ssh -6 pi@rpi.local
into the Terminal window (-6 option for ssh over IPv6).

For some added magic and convenience, we could also enable Internet connection sharing on the mac host (e.g. follow these instructions), assuming it is connected to the Internet using the other, wifi network interface. In this configuration, the mac now also acts as a router and DHCP server, issuing an IPv4 address configuration to the Raspberry Pi and routing/masquerading its traffic as its own (since this uses NAT, some services not compatible with NAT might not work).

Such a setup is highly portable and can be taken on the road for demos, workshops or between school & home, as it does not depend on any infrastructure other than a wifi network and a power outlet for the laptop's power adapter.

For graphical applications, like games or even IDLE, the Python IDE, we are relying on the X Window System (X11), a widely supported standard for graphical user interfaces. While not natively X11 based, Mac OS X can also support X11-based UI applications - on some versions the X11 server application is pre-installed (in Applications/Utilities), or the latest XQuartz server application can be installed from here. XQuartz offers a (near) seamless side-by-side use of native and X11 applications on the same desktop and provides optimal usability and performance, as most of the resource-intensive UI processing happens on the host PC and not the Raspberry Pi board.

To run native X11 apps, simply enable the option for X11 tunneling in ssh:
ssh -X -6 pi@rpi.local

As an alternate solution, we could also use remote desktop access using the VNC protocol, supported by Mac OS X for its own screen-sharing. See here for some very detailed instructions on how to set up file-sharing and tightvnc for exporting a full desktop to the remote desktop client on the mac. However since in this case the X11 server doing all the UI processing and rendering is running on the Raspberry Pi CPU, this results in a very sluggish experience compared to running a native X11 server on the host.

The image below shows, among other things, an ssh session into the Raspberry Pi running IDLE displayed via the local X11 server, the remote screen-sharing app displaying a full desktop of the VNC-based X11 server running on the Raspberry Pi itself, and the home directory on the Pi appearing as a shared folder in the Finder window.

I have not tried any of this on other possible host platforms, like any version of Microsoft Windows. Yet all of this should be possible as well, as Windows supports IPv4, IPv4LL and Internet connection sharing natively, as well as SSH (e.g. PuTTY), X11 (e.g. Xming) and VNC through additional applications.

Saturday, August 25, 2012

Automated Dependency Injection

In the tradition of modular and object oriented programming, we have long learned to design software by hierarchical decomposition - divide and conquer engineering, where each module/object has a clear function/responsibility. Complex functionality is achieved by delegating some sub-functionality to other modules/objects.

Suppose module "A" achieves its functionality with the help of "B" and "C", which in turn depend on "D" and "E". When these functions become stateful abstract data types or objects, "wiring" up this dependency tree to give each level access to the right instances of data can become non-trivial in large projects. The dependencies can be hidden and encapsulated hierarchically, such that if an application needs an "A", creating "A" in turn triggers the creation of the appropriate "B", "C", "D" and "E", hiding all the complexity of the decomposition from the user of "A".
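
As a minimal Python sketch of this encapsulated style (the class names follow the A-E example; the fetch/run methods are invented for illustration):

```python
# "D" could e.g. stand in for a database client in a real system.
class D(object):
    def fetch(self):
        return "row"

# "B" silently constructs its own dependency...
class B(object):
    def __init__(self):
        self.d = D()

# ...and so does "A": creating an A transitively creates B and D,
# hiding the whole decomposition from the user of A.
class A(object):
    def __init__(self):
        self.b = B()

    def run(self):
        return self.b.d.fetch()

print(A().run())
```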

However, this static setup can pose some challenges for unit-testing. The leaf nodes can usually be unit-tested in isolation quite easily, as can higher-level modules which don't depend on anything that creates explicit external interactions or dependencies. But if, for example, "D" is a database client and "E" a nuclear reactor controller, then "C" and "A" certainly can't be tested in such a naive manner. The solution for this dilemma is typically to introduce special testing code in either "C" or in "D" and "E" to fake part of the functionality without external dependencies. In complex systems, and without any further support, testing often degenerates into unit-testing only the basic low-level modules, in combination with automated system or sub-system test scenarios using complex simulators to resolve dependencies on an external environment.

In languages which easily support interface inheritance and runtime polymorphism (e.g. Java, Python and to a lesser degree C++), we can easily do better for unit-testing at every level and without mixing production and testing code. However, for that we have to get away from dependency encapsulation to dependency injection.

For example, instead of having "A" create instances of "B" and "C" as needed, they could be passed in as arguments to the constructor of "A". This then allows unit-testing "A" in isolation by injecting mock versions of "B" and "C" for the test. There are a few frameworks which help to greatly automate and simplify the creation of such mock objects (e.g. EasyMock or Mockito).
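
A Python rendition of the same idea, using the standard mock library (unittest.mock in Python 3.3+, the standalone "mock" package before that) as a rough stand-in for what EasyMock or Mockito do in Java - class names again from the A-E example, methods invented for illustration:

```python
from unittest import mock

class C(object):
    # dependencies are injected instead of constructed internally
    def __init__(self, d, e):
        self.d = d
        self.e = e

    def status(self):
        return "%s/%s" % (self.d.query(), self.e.temperature())

# Unit test: "D" (database client) and "E" (reactor controller) are
# replaced by mocks, so no database or reactor is needed.
fake_d = mock.Mock()
fake_d.query.return_value = "42 rows"
fake_e = mock.Mock()
fake_e.temperature.return_value = 300

c = C(fake_d, fake_e)
print(c.status())  # -> 42 rows/300
```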

While dependency injection and mocking greatly simplify testing, they make the actual production code more complex. Instead of getting an abstract and encapsulated "A" somewhere in the code, we now need to deal with setting up the entire dependency tree of "A" each time and everywhere we need an instance of "A", making all the dependencies of "A" explicit and visible. This seems a step in the wrong direction...

As an alternative to manually "wiring up" object dependency trees, there are frameworks for automating this process. The only one I am really familiar with is Guice for Java. With Guice, object runtime dependencies are defined through a combination of annotations and declarative Java code, which can be hierarchically decomposed, typically at package level, and include definitions of lifecycles (scopes) and of how interface dependencies should be satisfied by concrete implementations. At application runtime, the Guice injector is then responsible for constructing and providing the right kind of object graphs according to those specifications.
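
Guice itself is Java-only, but the core mechanism - an injector that recursively satisfies constructor dependencies from a set of declared bindings - can be sketched in a few lines of Python (a toy illustration of the concept, not of Guice's actual API; all names are made up):

```python
class Injector(object):
    """Builds object graphs from a {class: [dependency classes]} spec."""
    def __init__(self, bindings):
        self.bindings = bindings

    def get(self, cls):
        # recursively construct each declared dependency, then cls itself
        deps = [self.get(dep) for dep in self.bindings.get(cls, [])]
        return cls(*deps)

class Database(object):
    pass

class Cache(object):
    def __init__(self, db):
        self.db = db

class App(object):
    def __init__(self, db, cache):
        self.db = db
        self.cache = cache

injector = Injector({Cache: [Database], App: [Database, Cache]})
app = injector.get(App)  # the whole dependency tree is wired up here
```

A real framework adds scopes (e.g. singletons), interface-to-implementation bindings and cycle detection on top of this basic recursion.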

Using Guice makes dependency injection nearly as easy to use as statically creating objects hierarchically the old-fashioned way. However, using Guice introduces a high level of black-box magic, a non-trivial learning curve, and has the nasty habit of moving what used to be compile-time-checked dependencies to runtime.

Most users of automated dependency injection have at least an uneasy ambivalence towards it, and some despise it with a passion programmers otherwise reserve for editors or programming languages... After heavily using Guice for a few years, I have come to accept and even recommend it as a reasonable standard tool for complex Java projects and a price to pay for the ability to more easily test and mock objects at any level of the hierarchy.

Sunday, August 12, 2012

Kugelbot - or what to do with a Raspberry Pi

With the Raspberry Pi board now up and running on the network, I needed something "reasonable" for it to do. Maybe an homage to the famous Trojan Room coffee pot camera - 20 years later, at a fraction of the cost? Hosting a download mirror for Raspberry Pi boot images on a Raspberry Pi? A probe for network performance monitoring? A twitter robot which recites The Iliad 140 characters at a time?

Finally, I settled on a robot which reposts a summary of, and link to, all my public Google+ postings to my otherwise unused Twitter account.

In addition to Python 2.7 already included in the boot image, the following ingredients were used:
  • tweepy - Twitter API client library
  • google-api-python-client - client for the Google+ and URL shortener APIs
  • python-gflags - command-line flag handling
  • python-daemon - support for running the script as a daemon

In order to read public posts via the Google+ API, no authentication is required, but a developer key is needed for quota tracking, which can be requested/registered here for any valid Google account. In order to access the Twitter API, a new app first needs to be registered here, after which a set of static OAuth credentials can be generated for the owner of the app - good enough here, as this robot only needs to access my own account. It also uses the Google URL shortener API to shorten the long-ish Google+ post URLs into something more appropriate for the spartan Twitter interface (same client library and developer API key).

The following script is largely stitched together from the samples provided with the tweepy and google api client packages. It uses a Sqlite3 database to store the association between Google+ posts and tweets, to act as a queue of pending tweets, and as a way to detect new posts on Google+ through polling. The state of the system can be inspected anytime using the sqlite3 command-line interface (install with sudo apt-get install sqlite3). It can run as a daemon and, roughly every 40 minutes, checks for new Google+ posts and sends at most one tweet from the queue. Creating a 140-character tweet from the content of each post is done in a less than elegant way, typically by truncating into an ellipsis on a series of what might be considered phrase-terminating characters (punctuation or even white space). Generating more "engaging" and relevant snippets from a post might be an interesting exercise in natural language processing, but a bit beyond the scope of a weekend project.

Known to Twitter as "Kugelbot", this script running on the Raspberry Pi has been tweeting its way slowly through a backlog of 180 messages - in the process acquiring more followers in a day than I had before, and getting the Twitter->Facebook auto-posting agent black-listed for exceeding 50 posts in a day.

And once it gets to this post, it will reach a meta-moment: a robot posting its own source-code...

# -*- coding: utf-8 -*-

import apiclient.discovery
import daemon
import gflags
import HTMLParser
import logging
import logging.handlers
import os
import random
import sqlite3
import sys
import time
import tweepy

FLAGS = gflags.FLAGS

# The gflags module makes defining command-line options easy for
# applications. Run this program with the '--help' argument to see
# all the flags that it understands.
gflags.DEFINE_enum('logging_level', 'INFO',
    ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'],
    'Set the level of logging detail.')

gflags.DEFINE_string('api_key', 'xxx',
                    'Google API key')
gflags.DEFINE_string('user_id', 'xxx',
                     'Google+ user/profile ID')

gflags.DEFINE_string('db', 'posts.db',
                     'database of posts to tweet mappings')

gflags.DEFINE_string('pidfile', '',
                    'pidfile if process should run as daemon')

gflags.DEFINE_integer('sleep_time', 1200,
                      'min time between tweets')

class PostsDb(object):
  """SQLite database containing the G+ to tweet mapping state."""
  def __init__(self, dbname):
    self._conn = sqlite3.connect(dbname)
    c = self._conn.cursor()
    c.execute('create table if not exists posts (post_id text, post_date text, tweet_id text, tweet_date text, content text)')
    self._conn.commit()

  def insert(self, post_id, date, text):
    """Insert a new post to be sent to twitter.

    Return True if the post is new, False otherwise.
    """
    c = self._conn.cursor()
    if c.execute('SELECT post_id from posts where post_id=?', (post_id, )).fetchone():
      return False
    c.execute('INSERT INTO posts VALUES (?,?,?,?,?)', (post_id, date, '', '', text))
    self._conn.commit()
    return True

  def next(self):
    """Return the tuple of (post_id, text) for the oldest post which has not yet been tweeted."""
    c = self._conn.cursor()
    post = c.execute('''SELECT post_id, content FROM posts WHERE tweet_id = '' ORDER BY post_date LIMIT 1''').fetchone()
    return post

  def tweet(self, post, tweet_id, date):
    """Record a tweet in the database."""
    c = self._conn.cursor()
    c.execute('UPDATE posts SET tweet_id=?, tweet_date=? WHERE post_id=?', (tweet_id, date, post))
    self._conn.commit()

class MLStripper(HTMLParser.HTMLParser):
  """Trivial HTML parser, which returns only the text without any markup."""
  def __init__(self):
    self.reset()
    self.fed = []
  def handle_data(self, d):
    self.fed.append(d)
  def get_data(self):
    return ''.join(self.fed)

def strip_html(s):
  """Remove any HTML markup and coding/escaping."""
  if s:
    stripper = MLStripper()
    stripper.feed(s)
    s = stripper.get_data()
  if not s:
    return 'untitled'
  return s

def make_tweet(url, text):
  """Format a tweet with text, URL and static #gplus hash-tag. Shorten text to an ellipsis, if necessary."""
  tail = ' ' + url + ' #gplus'
  text_size = 140 - len(tail)
  text = strip_html(text)
  if len(text) > text_size:
    text = text[:text_size - 2]
    # shorten string to end in one of N characters and keep the shortest
    shortest = text
    for c in ('! ', '. ', '; ', ' - ', ' '):
      candidate = text.rsplit(c, 1)[0]
      if len(candidate) < len(shortest):
        shortest = candidate
    text = shortest + '..'
  return text + tail

def load_posts(db):
  """Traverse G+ stream for new public posts not yet in the database and shorten into tweets."""
  gplus_service ="plus", "v1", developerKey=FLAGS.api_key)
  url_service ='urlshortener', 'v1', developerKey=FLAGS.api_key)

  # Public posts of a given G+ user (ID is number in profile URL)
  request = gplus_service.activities().list(
        userId=FLAGS.user_id, collection='public')

  while (request != None):
    activities_doc = request.execute()
    for item in activities_doc.get('items', []):
      shorturl = url_service.url().insert(body={'longUrl': item['url']}).execute()['id']
      content = item['object']['content']
      if item['title'].startswith('Reshared'):
        content = 'Reshared: ' + content
      tweet = make_tweet(shorturl, content)

      # insert new post and exit if it already exists
      if not db.insert(item['id'], item['published'], tweet):
        return'inserted %s: "%s"', item['published'], tweet)
    request = gplus_service.activities().list_next(request, activities_doc)

def tweet(db):
  """Send a single untweeted entry from the database to the twitter account."""
  # The consumer keys and access tokens can be found on your
  # application's Details page (under "OAuth settings" and
  # "Your access token").
  consumer_key = 'xxx'
  consumer_secret = 'xxx'
  access_token = 'xxx'
  access_token_secret = 'xxx'

  # If there is no untweeted post, skip and do nothing
  post =
  if not post:
    return

  # API authentication with static OAuth access token
  auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
  auth.set_access_token(access_token, access_token_secret)
  api = tweepy.API(auth)

  tweet = api.update_status(post[1])'tweeted "%s"', tweet.text)
  db.tweet(post[0],, tweet.created_at)

def main(argv):
  # Let the gflags module process the command-line arguments
  try:
    argv = FLAGS(argv)
  except gflags.FlagsError, e:
    print '%s\nUsage: %s ARGS\n%s' % (e, argv[0], FLAGS)
    sys.exit(1)

  # Set the logging according to the command-line flag and send logs to syslog
  logging.getLogger().setLevel(getattr(logging, FLAGS.logging_level))
  syslog = logging.handlers.SysLogHandler(address='/dev/log')
  syslog.setFormatter(logging.Formatter('kugelbot: %(levelname)s %(message)s'))
  logging.getLogger().addHandler(syslog)

  db = PostsDb(FLAGS.db)

  if FLAGS.pidfile:
    daemon.daemonize(FLAGS.pidfile)'daemonized with pidfile %s', FLAGS.pidfile)

  # Main loop - repeat forever
  while True:
    try:
      time.sleep(random.randint(FLAGS.sleep_time, FLAGS.sleep_time * 3))
      load_posts(db)
      tweet(db)  # One tweet only, please...
    except (KeyboardInterrupt, SystemExit):
      raise
    except Exception:
      logging.exception('error in main loop')

if __name__ == '__main__':
  main(sys.argv)

Saturday, August 11, 2012

Raspberry Pi - unbagging and first impressions

Looking for low-cost linux hardware, I had come across the Raspberry Pi project a few months ago and been hopefully intrigued by its goals to promote "computer literacy" (whatever that means).

Now that you can actually get them more easily, I ordered myself one from Farnell and surprisingly it arrived in a few days. I am glad to see that the popularity of the Raspberry Pi device is creating an active community, where detailed help and instructions are easily available - not an obvious thing for other niche and esoteric hardware.

I was going to set it up as a network server and access it from my PowerBook via ssh, X11 and/or VNC. Getting a bootable SD card was very easy, also in part thanks to such detailed instructions, but requires access to another computer with an SD-card reader and Internet access. Using a spare micro-USB cellphone charger and an Ethernet cable to connect directly to the home router was all that was needed to complete the setup. After that, it got a bit trickier: since I couldn't easily figure out the IP address to check whether ssh access was enabled by default, the path of least resistance was to connect the TV via HDMI to see how/if the Raspberry Pi had booted (it did...) and use a keyboard to complete the config menu and drop into the shell to see its network config. HDMI cable and keyboard were temporarily borrowed from the mac mini, which sits next to the router and TV in the living room.

After that, the experience was quite smooth - logging in via ssh, installing some new packages and, as a test project, setting up an Apple air-print (instructions here) and Google cloudprint (and here) proxy to our existing network printer went without a glitch. And to top it off, exporting the ssh service via Bonjour/mDNS (instructions here) means the device can be reached via ssh raspberry.local no matter what strange IP the router decides to assign to it next.

Through the command line at least, the Raspberry Pi is a surprisingly capable general-purpose computer. It feels at least as fast as some of the PC hardware I ran Linux on in the early nineties... What's more, the combination of low-cost hardware and a filesystem on a removable flash card makes it very hackable, without any fear of destroying or "bricking" anything. I have never before used "sudo" with so little hesitation...

The cheap, almost "disposable" hardware in combination with the commodity removable storage, really helps to make the Raspberry Pi a safely "hackable" device, in the tradition of the ROM + floppy-drive based home computers of the eighties.

It seems that the primary use-case which the creators of the Raspberry Pi had in mind was that of a possibly even non-networked (model A) desktop using a USB keyboard, mouse and a TV as monitor, kind of like the home computers of the eighties. Given the ubiquity of computers today, I don't know how common the headless, networked usage will be - either connecting it to the home router as I did, or directly to the now mostly unused Ethernet port of a PC or netbook.

Maybe it would be worthwhile to improve the bootstrapping of this headless, networked configuration in the standard boot image, by automatically announcing ssh via mDNS both over IPv4 (if there is a DHCP server on the network) and over IPv6, using the link-local zero-conf addresses. That way, one could connect a new Raspberry Pi board directly to the Ethernet port of a mac at least (and hopefully Windows PCs as well) and ssh into it without the need for any further network configuration or knowledge.

Thursday, June 28, 2012

Google+ - Social Network?

It's been about a year since Google+ launched with great fanfare and even greater anticipation in the tech media. Being late to the party in the currently much-hyped "social networking" space, Google+ has the benefit of being a more polished and more thought-out platform than some of the older, more established players. Being a very flexible and generic platform, launched without a particular pre-imposed usage or application, it was interesting to see how its usage would pan out.

Before Google+, I was using at some point or another
  • Linkedin for keeping track of the changing fortunes of my former colleagues
  • Facebook for goofing around and keeping an eye on what friends and acquaintances were up to
  • Blogger for potentially public postings which hardly anybody reads
  • Flickr for public photo-sharing
  • Twitter to try out what all the fuss was about 
  • email for most of the purposeful communication with the inner circles of my social graph.
On Google+, I was trying out both a very private use-case - sharing personal stuff with people I know in real life - as well as the public one of maintaining a public profile, i.e. a combination of the Facebook and Blogger use-cases. My posting doesn't make much use of the sophistication of circles - it's either private or public.

Like most people, I found that for the private use-case there were not enough of the people in my real-life social graph active on Google+, as most of them were already using Facebook for that and didn't see any benefit in investing in yet another place for the same purpose. I personally prefer the user-experience and visual appearance of Google+ over Facebook - especially for photos, but that doesn't really help if everybody else is staying on Facebook. I am still hoping that this might change a bit more in the future...

The real reason why I keep using Google+ is to consume a stream of news, commentary and faits divers from a variety of sources. The use-case is much like a next-generation, prettier RSS feed reader.

There are established news outlets from different countries, some doing a better job than others of posting a steady stream of interesting and/or intriguing material; there are some new-media people, bloggers and online creators promoting their own stuff as they do on pretty much any channel available; and there are some people who post links to interesting stuff they find.

The third group might be the most intriguing and novel use-case for Google+ and similar platforms: the ability to easily organize, curate, editorialize and contextualize content from other sources.

Apart from a few personal ones, my circles do not reflect my real-life social network, but rather a list of media subscriptions and interests. Some of the people or organizations I am following have millions of followers and would not recognize me if we bumped into each other in the street...

As a platform, Google+ is flexible enough that it could be used to directly create content on the site, i.e. being a very crude and restrictive blogging or content management system. Or it could be used twitter style as a news-wire service for head-lines and to advertise off-site content. The most common usage pattern in my input stream seems to be posts which augment off-site content (or on-site re-sharing) with a bit of commentary. The site encourages that, as a post is most naturally either a link, a video or some pictures with a bit of accompanying text. This is restrictive, though not as restrictive as the famous 140 character limit of Twitter, but it ensures that posts look reasonably good on any channel Google+ supports: the web-site as well as specialized app based experiences for mobile phones and tablets, which already represent a significant part of the usage.

What Google+ and maybe other similar sites seem to be good at is removing friction. Users need an account, a profile, an identity to enter the site, but once this hurdle is passed, it is very easy to create or share content and to "engage" with what others have shared by commenting on it. Commenting is as old as web2.0, but outside a single platform (e.g. blogger or YouTube) or without identity syndication, the barrier to entry is just too high for most people. On the publisher or "sharer" side, the barrier is similarly lowered. Setting up a blog or CMS for online publication is daunting and requires a certain level of commitment - having a smooth and good-looking dedicated iOS and Android app challenges even large publishers.

Large and established publishers like traditional media and celebrity bloggers seem to use Google+ mostly as yet another medium to promote their online brand, maybe adjusting a bit for the culture, tone and style which seem to have emerged among the current users of each platform. But the low barrier to entry for posting content and the emphasis on "sharing" rather than "creating" creates an opportunity to act as guide, editor, commentator and curator for online content created by others - and many Google+ users seem to do that quite successfully.

In this post from 2 years ago, I had pointed out that curating online content might become one of the next frontiers for helping users to cope with the information overload of the Internet. Maybe Google+ is at its core the first platform which is optimized for editors, commentators, curators and not primarily for content creators?

Is it a new social network? From my current usage, and applying a strict old-fashioned definition of a social network as a platform for interacting in some way with people I know in real life, then no. But in a year, Google+ has managed to become a platform where some users have found a distinctive new voice and been able to build a loyal audience for it. Maybe we just need a new label for that...

Sunday, March 18, 2012

The Evolution of the FOR loop

The most widely and commercially used languages today are largely based on the imperative and structured programming paradigms, and many have direct and strong roots in the C language of the early seventies.

Even though there seems to have been little fundamental change in mainstream programming languages over the last 40 years, there have been subtle shifts in usage patterns, even for things as simple and fundamentally low-level as doing some things repeatedly or in a loop.

As a perfectly and randomly useless example to illustrate the evolution of looping, we are creating a list of squared numbers from the positive members of an input list.

In C, or any similar imperative language of its time, the most basic way to do this would be something like:

out_size = 0;
for (i = 0; i < in_size; i++) {
  if (a[i] > 0) {
    b[out_size++] = a[i] * a[i];
  }
}

In languages like C++ and Java, which include a standard library of higher-level collection data types, this basic pattern of an indexing counting loop is typically replaced by some form of iteration support on the input collection.

for (std::vector<int>::iterator it = a.begin(); it != a.end(); ++it) {
  if (*it > 0) {
    b.push_back(*it * *it);
  }
}

Which is still basically the same counting loop, disguised as type-safe pointer arithmetic. Java 5 offers a slightly more elegant syntax for iterating over types supporting the Iterable interface with the new for-each variant of the for loop:

for (Integer num : a) {
  if (num > 0) {
    b.add(num * num);
  }
}

In Python, this idiom would read like:

b = []
for num in a:
  if num > 0:
    b.append(num * num)

However, for usage patterns like this, which essentially transform an input iterable into an output, some languages also support a more functionally inspired idiom. In Python, originally using the map and filter functions:

b = map(lambda x: x * x, filter(lambda x: x > 0, a))

And since the introduction of list comprehensions and generator expressions as:

b = [num * num for num in a if num > 0]

or, with deferred evaluation, as a generator expression:

b = (num * num for num in a if num > 0)

The advantage of the functional model is that in its purest form it allows for deferred evaluation, generally has no side effects, describes the nature of the operation more specifically than general flow-control structures would, and thus could more easily be parallelized by a smart runtime.
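To make the deferred evaluation point concrete, here is a small sketch (the `square` helper and its side-effect log are illustrative, not from the original examples): a generator expression performs no work until it is actually consumed.

```python
# A generator expression defers all computation until iteration time.
a = [3, -1, 4, -2]

evaluated = []

def square(x):
    evaluated.append(x)  # record a side effect so evaluation order is visible
    return x * x

b = (square(num) for num in a if num > 0)

# Creating the generator has not evaluated anything yet:
assert evaluated == []

# Forcing the generator actually runs the filter and the mapping:
result = list(b)
assert result == [9, 16]
assert evaluated == [3, 4]
```

The equivalent list comprehension `[square(num) for num in a if num > 0]` would do all the work eagerly at the point of definition.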

A more functional style is also possible in Java using 3rd party libraries such as Iterables from the Guava library. However, without support for some form of compact lambda function definition in current Java, this results in significantly more complex and probably less readable code than the currently most standard variant using the for-each iteration. Until then, even the authors of the Guava library recommend not using a functional idiom for most cases...

Saturday, March 3, 2012

Learning Computers

On Feb 29, the Raspberry Pi foundation launched sales of a $25 credit-card sized computer, which sold out in minutes. The goal of the Raspberry Pi is to stimulate computer literacy education in schools, inspired by the impact which the BBC Micro computer had on schools in the UK - comparable to a similar wave of popular home-computers elsewhere, e.g. the Commodore 64.

I understand the elements of nostalgia among people who grew up around the 1980s, when computers were new and exciting. Around that time, I taught myself programming in C and M68000 assembler on a Commodore Amiga, and probably learned more about the basics of programming and computer architecture than during the years afterwards in engineering school. The home-computers of that era were hackable enough to encourage tinkering and simple enough to allow really getting to the bottom of how they worked. The gap between what could be produced by determined hobbyists and professional software publishers was not that great.

Today, computers have become so ubiquitous and commonplace that they are no longer considered exciting. Students today use computers in the form of PCs, tablets, game consoles or smart-phones for researching or writing as naturally as we used to use books, pen and paper or pocket calculators. But many of today's computer platforms are also much more complex, sophisticated and closed, and don't invite tinkering with the underlying technology. Children today are growing up much more computer literate (as users) than any generation before, but may know or care much less about how computers actually work than their parents' generation.

Maybe there is a natural cycle for when a technology is attractive and inviting to hobbyists. While in the 60s many teenage kids would be tinkering with old cars in the backyard, the grease monkey sub-culture has largely gone out of mainstream fashion, as cars today just work and are sealed black-boxes which don't allow for much tinkering without access to proprietary diagnostic systems. For the same reason, tinkering with computers might have been a thing unique to the end of the last century and might not come back, despite efforts like the Raspberry Pi project.

For those of us interested in getting today's youth generation interested in information technology, the question remains what the most suitable way would be. Maybe I am myself blinded by nostalgia, but I am optimistic about the impact of something like the Raspberry Pi, as it seems both sufficiently accessible and hackable. The bare board without a case may help to encourage an interest in going beyond the surface of how computers work - even though the highly integrated SoC architecture allows less than ever to understand computer architecture from a visual/physical reality.

From my limited experience, it seems important to avoid creating dumbed-down "toy technologies" without real-world usage when trying to get children and young people interested in technology. Instead, we should encourage the use of real-world technologies and platforms which are easily accessible, have a low barrier to entry and a gentle learning curve, and are open and flexible enough to encourage experimentation with applications which are cool and intriguing for today's generation.

This chart from the Economist gives an indication that there is a similar frenzy of innovation in mobile computing platforms today as there was in personal computing in the 1980s.

Maybe today's generation would more likely tinker with a mobile device than a desktop computer, and programming education should start with the development of apps for mobile devices. Both the Android and iOS platforms allow for relatively easy application development, even though the devices users can easily get their hands on tend to be closed and locked down for a variety of reasons. Android is free and open-source, but only to those who own a cellphone factory - not to the end-user who buys a smart-phone, maybe even subsidized through a carrier. Also, the tool-chain for mobile development is quite heavy and typically requires access to a PC - self-hosted development on a mobile platform is so far not typically an option.

Another obvious choice would be web applications. The browser is increasingly becoming the real platform for which applications are written, abstracting hardware and operating system differences. Cloud platforms like Google AppEngine and similar alternatives have essentially a very low barrier to entry in both cost and complexity for building and running a web application on the Internet. The modern browser standards with HTML5, CSS & JavaScript present a powerful and feature rich environment for application development with a high level of abstraction. However, the level of abstraction is so high and so far removed from the physical reality of the underlying hardware that web programmers may never need or want to understand how computers actually work.

Another area with lots of exciting potential might be robotics. Robotics encourages or even forces a confrontation with the physical reality of the systems we are developing. Given the improvements in miniaturization and power efficiency of computing hardware, there is much exciting activity and progress in the field across commercial development, cutting-edge academic research and even hobbyist activities. Given the combination of low cost, low power and small size for a decently powerful compute platform, robotics might be the real sweet-spot for the new Raspberry Pi board in computer literacy education.

From my experience as a judge at a FIRST robotics competition, kids of a certain nerdy/technical inclination can certainly get excited about building robots. I would love to see integration and interoperability of the Raspberry Pi board e.g. with the Lego Mindstorms robotics toolkit.

Friday, March 2, 2012

Tiny, low-cost Linux Device

Recently I was looking around for the cheapest, smallest device which can easily run Linux and which can be bought in small quantities, down to one. The particular application in mind was to build a Linux based print-server which exposes legacy USB or network connected printers via Apple AirPrint for iPads and iPhones, or Google CloudPrint for ChromeBooks and some Android devices, and which maybe could provide some other services to such thin-client, "cloud-top" devices (local file server and/or backup, wifi/network gateway etc.).

A few years ago, I had a first compact, fan-less home-server in the form-factor of a mac-mini, i.e. about the foot-print of the optical drive it contains. Like the mac-mini, this was basically a compact PC, made largely out of laptop parts.

It seems that the clear winner in terms of cost today is the new Raspberry Pi board at $25/$35, whose launch this week caused an Apple-style opening-hour stampede on the online store selling its first production batch. It is a minimalist bare-board computer, about the size of a credit-card and based on a Broadcom BCM2835 embedded multi-media engine with a 700 MHz ARM core. The board has a USB port, HDMI & composite video output, some low-level IO (GPIO, I2C, SPI etc.) as well as a 10/100 Ethernet port on the $35 version. Not included in the price are an external 5V micro-USB standard cell-phone power-supply, an SD card and maybe a case. With these extras the final price would likely be in the $50-$100 range. The heavy emphasis on video would make the Raspberry Pi ideal for applications which require a TV connection or some other form of display.

An alternative for less visible, network connected servers could be to use a plug computer. They typically seem to have a beefier CPU core, more networking options and often come in a case with integrated power-supply and wall-plug, so that they can be plugged directly into a power-outlet. Prices for a single unit seem to range from about $100 to $250.

However, since brand-new AirPrint capable printers start at around $100 as well, this project hardly seems worth the while - at least not just for re-using low-end home printers, e.g. when moving to an iPad as the primary home-computer.

Saturday, February 18, 2012

Securing Gmail

For heavy users of Google services, the gmail account has over the years evolved into a "Google account" and holds the key to an increasing amount of our online activities and presence. Judging from a random sampling of the gmail support forum or some reports in the press (E.g. this recent article from the Atlantic Magazine) - gmail account hijacking is an increasingly widespread and serious problem.

Stolen account IDs from major web-mail providers (gmail, hotmail, yahoo mail etc.) seem to be collected and used at industrial scale for spam generation and phishing for 419 style advance fee fraud schemes like the infamous "mugged in London" scam described in the article above.

The mechanics of some common threats used to steal account passwords are described in this blog post in some detail, but in short it boils down to weak passwords, password re-use and password sniffing malware. Given how prevalent malware infestations are on major OS platforms, even users who are careful with their password policies cannot reasonably ensure that their account is not being hijacked.

Fortunately, Google has recently introduced a 2-factor authentication option for its gmail/Google accounts. Two-factor authentication is typically based on a secret the user knows (the traditional password) plus something the user owns - in this case their cell-phone or a paper with pre-generated one-time validation codes. Even if the password is compromised, the second factor would have to be stolen at the same time for an attack to succeed.

Multi-factor authentication has long been used for online-banking as well as for security conscious applications in corporate or government IT services, but the additional security comes at the cost of reduced usability and an increased risk of lock-out (e.g. by forgetting, losing or breaking the security token/device).

In the case of gmail 2-step verification, the usability nuisance is relatively minor: the security token is the user's cellphone - a device which most of us carry around religiously anyway by now. A verification can also be extended for 30 days on trusted computers as an option on the login screen - e.g. on my computer at home, but not on a public computer in a library or internet cafe. For users of Android or iPhone smart-phones, there is even an Authenticator app, which turns the phone into a security token without the need for network connectivity or the ability to receive SMS messages with the login verification code. The authenticator app is easily seeded/configured for the account by scanning a bar-code with the phone camera during setup of the 2-step verification (this may need an additional barcode scanner app).
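The codes generated by the Authenticator app follow the open TOTP standard (RFC 6238): an HMAC-SHA1 over a counter derived from the current Unix time, truncated to a short decimal code. A minimal sketch in Python - the function name and parameters are illustrative, not Google's actual implementation, but it is checked below against an RFC 6238 test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T=59s, 8 digits
rfc_key = base64.b32encode(b"12345678901234567890").decode()
assert totp(rfc_key, for_time=59, digits=8) == "94287082"
```

Because both sides only share the seed once (via the setup barcode) and then derive codes from the time, no network connectivity is needed on the phone.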

My biggest concern is the increased risk of lock-out from this more complex setup. Especially given the practical absence of support from Google for the free consumer gmail accounts (for supported accounts with an SLA, see here), a lock-out which the user can't recover from likely means a permanent loss of the account - but so might a malicious case of hijacking.

For existing accounts, the activation of 2-step verification is fairly hidden on the account settings page. However, the setup is easy and self-guided once the setup page is reached.
Before enabling 2-step verification, make sure that the recovery/verification email and phone settings are correct and reachable, just in case. During activation of the 2-step verification, generate a set of backup codes and store them in a safe location accessible without the cellphone or gmail account (e.g. print them out or use a password safe) - before trying to re-login with the new account settings.

Enabling 2-step verification on an existing account will also lead to failure of any existing programmatic login setups, e.g. for Android/iPhone mail/calendar sync, POP/IMAP access, chrome sync, cloud print etc. Some of these now use OAuth delegation, which can be re-authorized after enabling 2-step verification; others may require generating an application specific password to get working again. Since application specific passwords are long random strings - and thus hard to guess or crack - and are not typed in other than during setup, they are not as much at risk of hijacking, unless the computer where they are stored is compromised (e.g. a stolen laptop) or they are transmitted in the clear over an insecure network (e.g. airport wifi - make sure to enable SSL in mail clients!). Their usage is also limited to a few programmatic login interfaces and is not valid for general web login.

Sunday, January 22, 2012

GWT - An Experience Report

As noted before, I am not a big fan of JavaScript as a language for complex web application projects. Recently I got the chance to gather some first-hand, comparative experience with GWT (Google Web Toolkit) as part of an application re-write/upgrade.

The original system was a web-app built web 1.5 style in Java, on top of the OpenSymphony WebWork framework, combined with an XML based template engine and Guice for dependency injection on the server side. On the client side, there was a growing amount of JavaScript code for each page, using the Closure JavaScript compiler and library. The app is reasonably non-trivial, resulting in about 40k lines of client-side Java code after the rewrite.

For a project of this nature and complexity, I am very positively surprised and impressed with GWT. For the base architecture of the new client, we basically followed some of the best practices advice for large-scale GWT applications from GoogleIO talks in 2009, 2011 or in this document: use MVP to isolate UI code from business logic for testability, use Gin/Guice DI and the event-bus for dependency management, use UiBinder to push as much of the HTML & CSS stuff into templates as possible and use the new Activities & Places framework for history management, navigation and the basic layout structure of the application. For the rest, we tried to stay as close as possible to the most naive plain vanilla implementation (e.g. standard GWT-RPC services and views constructed bottom-up from widgets). So far this first-cut implementation has held up well enough without need for re-writes and optimization, which is quite impressive for a first use of a reasonably complex new technology.

What we wanted to get out of a migration to GWT was the ability to use the same language, tools and software engineering techniques for both client and server, as well as the ability to share as much as possible of the actual code between client and server. The second part turns out to be the much harder one...

For somebody who generally likes working in the Java dev ecosystem, working with GWT is quite pleasant. Much of the tools and techniques carry over effortlessly, and at least when using the emulated development mode, the high-level abstractions rarely break down. HTML & CSS are still largely browser-magic, but can at least be largely contained to the leaf-nodes of the UI object tree - typically in the form of widgets. Because there is still a lot of missing functionality in the native GWT libraries and a lot of potential custom JavaScript to integrate with, the JSNI JavaScript native interface within GWT is likely to be used more often than comparable low-level breakout mechanisms would be in more complete and dominant development environments. Probably the biggest complaint when developing large-ish applications in GWT is the speed (or rather lack thereof) of the GWT compiler and the somewhat sluggish execution of the emulated development mode. In all fairness, when using development mode, recompilation is often not required to make changes effective, just a reload of the application host-page URL.

The ability to use the same language, tools and software engineering techniques on both client and server is already a huge benefit in a large project, but sharing actual code would be even better. To enable that, GWT attempts to support a large part of the standard Java runtime and library, within the limits of a compiled, non-JVM framework (e.g. no reflection) and the limitations of the browser environment (e.g. no multithreading). Beyond not using any of these features in one's own code, it also starts to get tricky when using libraries which are not available in source form or which themselves use features not supported in the GWT environment. There are ways of providing custom GWT emulations of JRE classes and custom serializations - the way GWT internally implements some standard library functionality - but that approach is a bit too low-level for everyday use in application development projects.

The most common use-case for sharing typically centers around using the same set of classes for the client-side model, the RPC interface and the server-side data-model, including interfaces to databases and other backend services. Besides the raw data definitions, some behavior needed on both client & server should likely also be sharable.

In order for a class to be usable in GWT-RPC, it must implement either the standard Java Serializable interface (with some caveats) or the GWT IsSerializable marker interface. One of the major annoyances for people who like immutable data classes is that there is no serialization of final fields.

Without making use of partial emulation (leaving the contentious functionality unimplemented in the emulated version), some classes which are entangled with an unsupported server side framework (e.g. inherit from or provide serialization/deserialization for a persistence framework) may need to be heavily refactored to split the framework dependencies out.

Somewhat unrelated to the technology used, the move towards a single page "thick client" web-app suddenly makes keeping track of per-client session state trivial, without the database and caches necessary in a server-side LAMP web-app, since the browser is a single-threaded, single user environment and the lifecycle of the session state is naturally tied to the lifetime of the client application in the user's browser.

The biggest weakness of GWT is that it does not easily scale up from 0. It is a complex and heavy technology which requires a lot of upfront planning and architecture. As with many UI frameworks, a reasonably complete minimal "hello world" app would probably be a few hundred lines of setup and boiler-plate. If the job is to attach a few bits of client side customization to an otherwise classic HTML web-page, then GWT is clearly not the right choice.

Based on this experience, I find GWT quite an ideal choice for massive and complex "thick client" apps, especially when backed by a Java server. Assume a reasonably "unsexy" enterprise application development environment with a focus on complex functionality and business logic, without the need for extreme optimization, extreme customization or exploiting the latest browser tricks - basically anywhere people would not consider using C or assembler otherwise...

Wednesday, January 18, 2012

Why Time is hard

At least since the "Y2K problem" entered the public consciousness around the turn of the last century, nobody doubts that correctly representing time in computer systems is somehow hard. While hopefully nobody today is trying to save a few bytes by representing years in 2 digits, the state of time computations in many programming environments is still over-simplistic, to say the least.

Having (almost) been caught by surprise by last year's changes to civil time in Russia, this article is an attempt to understand what it takes to handle time somewhat correctly in business related computer applications.

There are 2 common uses of time in computer systems:
  1. a monotonically increasing measure representing a global reference clock of some sort, which can be used to determine an absolute ordering of all events in the system as well as their relative duration.
  2. representation of civil time as it is used by communities of people living in a particular place to go about their daily lives, and converting between multiple such references.
While 1. is an interesting technological problem, 2. is typically the focus when computer programs are used to solve some practical everyday problem - which is probably the case for the majority of people writing software today.
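In Python, for example, the two use-cases map onto different APIs - a sketch (time.monotonic requires Python 3.3+):

```python
import time
from datetime import datetime, timezone

# Use-case 1: a monotonic clock for ordering events and measuring durations.
# It is unaffected by NTP adjustments or the user changing the wall clock.
t0 = time.monotonic()
time.sleep(0.01)
elapsed = time.monotonic() - t0
assert elapsed > 0  # a monotonic clock can never run backwards

# Use-case 2: civil time, which only has meaning relative to a location's
# rules; always carry an explicit reference frame (here UTC).
now = datetime.now(timezone.utc)
assert now.tzinfo is not None
```

Mixing the two - e.g. measuring a duration by subtracting two wall-clock readings - is a classic source of bugs around DST switches and clock adjustments.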

Modern time measurement is based on the SI second, which since 1967 is defined as the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom - at sea level, at rest and at a temperature of 0 K. Before that, the common definitions of time were based on astronomical observations.

For the sake of some sanity in a globalized world, politicians at some point in the 19th century agreed on a single coordinated time reference system, based on an approximation of mean solar time at the Greenwich Observatory near London, setting the location of Greenwich as the global prime meridian and Greenwich Mean Time (GMT) as the global reference time. Today's global time reference is called Coordinated Universal Time (UTC) and is piece-wise identical to International Atomic Time (TAI), derived from the average of some 200 atomic clocks world-wide. The difference between TAI and UTC comes from the occasional insertion of a leap-second in order to keep UTC in line with UT1, an idealized model of astronomical time at the prime meridian.

Starting from the prime meridian at Greenwich, the world is then partitioned into 24 reference time-zones, each at a 1h increment from the previous one, defining a standard local time up to about 30min different from the local solar time. These timezones are typically named "UTC +/- x" or sometimes "GMT +/- x". Most standard timezones or combinations of timezones have more or less obvious names and abbreviations. E.g. EST stands for Eastern Standard Time and refers to UTC - 5, the timezone used roughly in winter along the US eastern seaboard, but the exact area of applicability is frankly a bit confusing. Also, timezone abbreviations are not unique and don't follow a logical pattern. E.g. BST stands for both British Summer Time (UTC + 1) and Bangladesh Standard Time (UTC + 6).
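Such standard-offset zones are easy to model as a fixed offset from UTC; a small sketch using Python's datetime.timezone (available since Python 3.2):

```python
from datetime import datetime, timedelta, timezone

# EST is simply UTC - 5: a fixed offset with no DST rules attached.
est = timezone(timedelta(hours=-5), name="EST")

noon_est = datetime(2012, 1, 18, 12, 0, tzinfo=est)
# 12:00 EST corresponds to 17:00 UTC
assert noon_est.astimezone(timezone.utc).hour == 17
```

The hard part, as the following paragraphs show, is everything that does not fit this simple fixed-offset model.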

And if that were not yet complicated enough, politicians proceeded to make a complete hash of things by choosing, for a variety of political entities (countries, states, provinces, towns, who knows what...), which timezone each should belong to - sometimes coinciding with the reference timezone the entity is located in, sometimes not. To make things even worse, they proceeded to invent a thing which is called "summer time" in some places and "daylight savings time" in others, which typically shifts the local time back and forth by 1h at seemingly random times in the spring and fall. This ritual is supposed to have some benefits, but most likely just drives people and livestock mad...

Another problem with daylight savings time is that it breaks the intuitive assumption of time being reasonably continuous and monotonically increasing. In all those parts of the world which observe some form of DST, this assumption is broken twice a year when the clocks are moved forward resp. backwards at particular points in time. This means that some specific representations of local time are invalid, as they do not exist, i.e. do not correspond to any valid point in time expressed in UTC or any other global reference time. E.g. 2012-03-25T02:30 does not exist as a valid local time in Zürich, as it is skipped during the wintertime to summertime switch. Similarly, 2011-10-30T02:30 in local time for the same region is ambiguous, as it corresponds to 2 different points in time due to the clocks being set back by 1 hour.
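With a timezone library that ships the full rules (here Python's zoneinfo module, Python 3.9+, as a sketch - the blog's examples predate it), both failure modes can be observed directly:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

zurich = ZoneInfo("Europe/Zurich")

# Ambiguous: 2011-10-30 02:30 occurred twice when clocks were set back.
# The `fold` attribute (PEP 495) distinguishes the two occurrences.
first = datetime(2011, 10, 30, 2, 30, tzinfo=zurich)   # fold=0: still CEST
second = first.replace(fold=1)                          # fold=1: already CET
assert first.utcoffset() == timedelta(hours=2)
assert second.utcoffset() == timedelta(hours=1)

# Nonexistent: 2012-03-25 02:30 was skipped when clocks moved forward.
# A round-trip through UTC lands on a different wall-clock time (03:30).
gap = datetime(2012, 3, 25, 2, 30, tzinfo=zurich)
roundtrip = gap.astimezone(timezone.utc).astimezone(zurich)
assert (roundtrip.hour, roundtrip.minute) == (3, 30)
```

Libraries differ in how they surface these cases - some raise exceptions for nonexistent times, while zoneinfo silently maps them - so the application still has to decide what such inputs should mean.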

And since politicians need to keep busy, they sometimes change any of those rules arbitrarily whenever they feel like it, causing frantic software updates and all kinds of malfunctions in software dealing with representations of local time.

While obtaining a decent enough approximation of UTC is not a big challenge anymore for most computer systems (e.g. through GPS or NTP), figuring out what time a clock should show on the wall of any arbitrary place in the world is still a hard problem, thanks to our politicians.

In the absence of an authoritative standard for all timezone definitions, each software package which does local time comparisons and computations needs to somehow be changed and updated when any timezone related rule changes anywhere in the world - for some arbitrary definitions of "any"...

It seems that most popular time handling libraries which are sophisticated enough to handle these issues anywhere near correctly rely on a group of volunteers who maintain the open-source tz database and associated tools, now also called the IANA Time Zone Database. It uses a particular definition of timezone (from the tz project page):

"Each location in the database represents a national region where all clocks keeping local time have agreed since 1970. Locations are identified by continent or ocean and then by the name of the location, which is typically the largest city within the region. For example, America/New_York represents most of the US eastern time zone; America/Phoenix represents most of Arizona, which uses mountain time without daylight saving time (DST); America/Detroit represents most of Michigan, which uses eastern time but with different DST rules in 1975; and other entries represent smaller regions like Starke County, Indiana, which switched from central to eastern time in 1991 and switched back in 2006."
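These location-based identifiers resolve to the full rule history; in Python's zoneinfo module (3.9+), which is backed by the same IANA database, the New York vs. Phoenix contrast from the quote above is directly visible (a sketch assuming the system tz data is installed):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

july = datetime(2012, 7, 1, 12, 0)

ny = july.replace(tzinfo=ZoneInfo("America/New_York"))
phoenix = july.replace(tzinfo=ZoneInfo("America/Phoenix"))

# In July, New York observes eastern daylight time (UTC-4)...
assert ny.utcoffset() == timedelta(hours=-4)
# ...while Arizona stays on mountain standard time (UTC-7) all year.
assert phoenix.utcoffset() == timedelta(hours=-7)
```

Keeping this database current on every deployed system is exactly the maintenance burden described below.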

Judging from the traffic on the mailing list, there seems to be some change somewhere in the world every few months, which we can either choose to ignore or which may require an upgrade of the timezone definition database.