Monday, October 13, 2014

Why Open-Source Software works

Many people, especially outside the technology industry, are surprised that something like open-source software can exist, especially at the scale and sophistication it quite obviously does. Many vendors of proprietary - i.e. closed-source - software have long tried to discredit open-source as either suspicious and subversive "Commie stuff" or as hobbyist toys and playthings, not fit for serious applications.

Looking at the development of human knowledge and culture throughout history, it should not be surprising that open-source software exists and thrives in the long run. Closed-source software, on the other hand, like any idea or innovation that is kept secret, essentially dies with its inventor: it cannot influence new ideas and ongoing innovation, and becomes an evolutionary dead-end. In the fast-moving world of technology, this can happen within decades rather than centuries, as historically no tech company has managed to stay dominant for more than a few decades at most - not even the once mighty IBM, DEC or AT&T.

Unix and C, probably the most influential operating system and programming language respectively, were developed around 1970 at the AT&T Bell Laboratories. Since regulations more or less prevented the monopoly phone company from entering the computer business, it more or less openly shared its innovations in that area with the academic research community, and thus they became part of the record of human knowledge and an influential source of innovation.

Some people argue that VMS, for example, a competing operating system from Digital Equipment Corporation, was technically superior; but since the once mighty DEC went defunct in 1998, VMS is today pretty much irrelevant and ancient history.

Today, AT&T Bell Labs itself is not doing much better: the antitrust break-up into several parts and a brutally competitive market have turned it into a faint shadow of its former self.

Abandoned former Bell Labs research site in Holmdel, NJ - where I started my career (source: Metropolis/Rob Dobi)
We can only imagine how much poorer the state of the art of technology would be, if AT&T had been allowed to fully commercialize many of its inventions.

In the early days of computing, most software was open-source by necessity - for distribution, for portability, and because it was important for users to help each other in a still immature and emerging field. Software was not yet a business and was mostly written by users and/or researchers. The research community considered access to source code simply an essential and natural extension of academic freedom and the scientific method to the new field of software - i.e. the right to examine and investigate.

The emergence of software as a business clashed with this academic culture of openness, leading to a series of high-profile legal and cultural conflicts and to the creation of a more formal and organized movement around Free and Open-Source Software - now with capital letters, organizations, foundations and its own legal frameworks and licenses (e.g. the BSD license or the GPL).

The impression that open-source is purely a playground for academia or a fringe of idealists and hobbyists is, however, largely wrong - at least for the large and significant open-source projects of today. For example, the most recent report on Linux kernel development from the Linux Foundation (2013) shows that at least 80% of contributors are paid for their work on the Linux kernel by their respective employers.

Especially in the areas of critical technical infrastructure - operating systems, programming languages, databases, web servers and other Internet infrastructure - open-source allows for a strange arrangement of co-opetition. While many companies critically rely on such infrastructure, very few could develop it all in house, and very few would want to depend on a potential competitor for such critical parts of their infrastructure. So they instead allow some of their employees to work on what looks from the outside like a giant non-profit volunteer effort.

We are now again in a situation where an increasing amount of software is written by technically sophisticated organizations which are not in the business of selling software, but rather products or services that strongly depend on software. I don't know whether open-source software has in part undermined the prospects of software as a business, or whether this shift in business focus has helped open-source software become as relevant and significant as it is today.

Saturday, July 5, 2014


When looking for a platform to learn Linux and Python programming, the Raspberry Pi would be at the top of most people's minds today.

The success of the Raspberry Pi brand has created an active and novice-friendly community, where help and advice are easy to find.

However, just using a Raspberry Pi as a low-end desktop computer is surprisingly complex, and setting up a room for a programming workshop requires a few crates of additional equipment (keyboards, mice, monitors etc.).

It seems that the strength of the Raspberry Pi platform lies in low-cost, low-power embedded and physical computing applications, like robotics, control and measurement systems.

Despite the enthusiasm for the Raspberry Pi, it may not be the best choice for organizing a programming class or workshop, especially if the school already has plenty of PC hardware available.

The Lernstick is a Debian live-image based distribution which can be booted off a DVD or USB stick. It is being developed by the School for Teacher Education at the University of Applied Sciences and Arts Northwestern Switzerland specifically to support computing applications in schools. The standard distribution includes a lot of applications useful for this educational context, both learning apps and standard productivity tools.

The idea behind Lernstick is to give students a personalized and customizable Linux system they can carry with them (even if it's only the disk) and use on any standard PC hardware available to them. With Switzerland having one of the highest per-capita penetrations of PCs (over 85% as of 10 years ago), chances are that most students have access to a PC either at school, at home or both. However, these PCs would most likely be running Windows, be locked down, and not allow or encourage tinkering and experimentation with the computing platform itself - which is so important to get students beyond being simply users.

To make it suitable for classroom use, there is a lot of emphasis on keeping a low barrier to entry in terms of complexity of installation, administration and deployment, which typically falls on the teacher. Part of the distribution is a self-hosted management utility (see screenshot below) which allows cloning the installation onto other USB drives, including any customizations made, like network or printer setup.

To bootstrap the installation process, one first needs to download a DVD ISO image and run the installation onto a USB drive from it. On my target computer, a refurbished HP EliteBook 6030, I couldn't boot the DVD but had no problems with the USB drives. Since built-in DVD drives are not as common any more, it would be nice to have a power-user option to download a raw disk image to write directly to a flash drive, similar to the process for Raspbian, for example.

Contrary to Raspbian images, the Lernstick installation uses a static system partition that is union-mounted with a data partition for customizations and user data. I am not really sure why this setup was chosen - maybe to allow a combination of DVD boot with a smaller USB drive as data partition, or an easier upgrade and recovery mode. Given how much larger and faster modern USB drives are compared to DVDs, I am not sure this choice still makes much sense.

While the Lernstick is not primarily designed to teach programming, it comes with a few programming tools pre-installed - for example Scratch, and, as part of the base operating system installation, the Python interpreter and C compiler. Maybe most conspicuous in its absence is IDLE, the simple and novice-friendly Python IDE which is often recommended as a good starting point for learning Python. Fortunately, any missing packages can easily be installed using the standard Debian package management system (e.g. sudo apt-get install idle).

I tried out Lernstick on both a high-end, high-speed USB 3.0 stick as well as a generic no-name USB stick. Both work quite well, but the faster one boots about 30% faster and the Debian desktop feels noticeably snappier - a reminder of how disk-heavy modern desktop environments really are.

Even for schools which are already committed to using Linux and open-source applications, the simplicity of Lernstick might still be a good choice, given the challenges of managing fleets of shared PCs and of network- or cloud-based setups for user accounts and data access, with all the data-protection issues that Europeans especially are so concerned about.

I don't know how well Lernstick would perform in a day-to-day classroom setting, e.g. how often the sticks would break or get lost. On the other hand, the personalization and ownership might lead to a flexibility and motivation that goes beyond what is possible with just having access to a room of shared computers. Similar to the Raspberry Pi, having all the system state on an easily replaceable and restorable commodity flash drive creates that "don't worry, you can't really break this" atmosphere which might be important to encourage students to start tinkering and experimenting.

Sunday, January 26, 2014

Raspberry Pi Temperature Data Recorder - Part III: Visualization

<- Part II: Data Collection

Once we have accumulated some data in the RRD database, we can start generating plots with RRDTool's built-in graph function. The graph above shows the values of the DS18B20 temperature sensors over 3 unseasonably warm days in January. Because of the short cables, the sensors are not optimally placed: the inside temp sensor is relatively close to the radiator, and the outside one is on the ledge just outside the window, still behind a partially closed shutter in a narrow alley between 2 old, poorly insulated buildings. According to the weather report, the current outside temperature is about 3-4 C lower than what the sensor shows. From the chart, we can see that the heater feed temperature fluctuates a bit, maybe a sign of hysteresis in the burner controller. The heater temperature is also lowered for about 6h during each night, but this seems not to have any noticeable effect on the room temperature. There are small gaps in the graph, caused by read errors or other lapses in the data collection. The chart is generated by running the following command:

rrdtool graph /var/www/temp_graph.png \
-w 1024 -h 400 -a PNG --slope-mode \
--start -3d --end now \
--vertical-label "temperature (°C)" \
DEF:in=data/templog.rrd:internal:AVERAGE \
DEF:out=data/templog.rrd:external:AVERAGE \
DEF:heat=data/templog.rrd:heat:AVERAGE \
LINE2:in#00ff00:"inside" \
LINE2:out#0000ff:"outside" \
LINE2:heat#ff0000:"heater feed"

A graph command requires roughly 3 types of parameters: general setup, data-source definitions and drawing commands. For more details, see the RRDTool graph documentation.

In order to view the graph from a computer, tablet or smartphone connected to the same wifi network, the easiest way is to export it through a web server. Among the major web servers, lighttpd is probably the one with the smallest resource footprint. We can install it like this:
sudo apt-get install lighttpd
and then make sure that the file temp_graph.png in the web server's root directory is writable by the pi user:
sudo touch /var/www/temp_graph.png
sudo chown pi:pi /var/www/temp_graph.png
After running the rrdtool graph command, we should be able to see the graph in a browser at http://<address-of-pi>/temp_graph.png .

We could simply regenerate the graph periodically with another cron job and be done with it, or we could go ahead and build a small web application which generates one or potentially several kinds of graphs on demand.
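For the cron-job variant, a single crontab entry would do; assuming the graph command above is wrapped in a script (the path and name here are hypothetical), regenerating every 10 minutes could look like this:

```
*/10 * * * * /opt/templog/bin/make_graph
```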

Among the many available frameworks which help simplify building web applications in Python, we somewhat arbitrarily choose web.py for its simple URL routing and request/response management.

The following simple web app handles 2 URLs - /graph.png and /view - each supporting a scale parameter. While graph calls rrdtool graph to generate a PNG image, view generates a web page around it, including a menu to toggle the scale parameter from a daily to a yearly granularity.


import os
import rrdtool
import tempfile
import web

# app URL routing setup
URLS = ('/graph.png', 'Graph',
        '/view', 'Page')

RRD_FILE = '/opt/templog/data/templog.rrd'
SCALES = ('day', 'week', 'month', 'quarter', 'half', 'year')
RESOLUTIONS = {'day': '-26hours', 'week': '-8d', 'month': '-35d',
               'quarter': '-90d', 'half': '-6months', 'year': '-1y'}

class Page:
    def GET(self):
        scale = web.input(scale='day').scale.lower()
        if scale not in SCALES:
            scale = SCALES[0]
        result = '<html><head><title>Temp Logger</title></head><h4>'
        for tag in SCALES:
            if tag == scale:
                result += '| %s |' % (tag,)
            else:
                result += '| <a href="./view?scale=%s">%s</a> |' % (tag, tag)
        result += '</h4>'
        result += '<img src="./graph.png?scale=%s">' % (scale,)
        result += '</html>'
        web.header('Content-Type', 'text/html')
        return result

class Graph:
    def GET(self):
        scale = web.input(scale='day').scale.lower()
        if scale not in SCALES:
            scale = SCALES[0]
        fd, path = tempfile.mkstemp('.png')
        os.close(fd)
        rrdtool.graph(path,
                      '-w', '900', '-h', '400', '-a', 'PNG',
                      '--start', RESOLUTIONS[scale],
                      '--end', 'now',
                      '--vertical-label', 'temperature (C)',
                      'DEF:in=%s:internal:AVERAGE' % (RRD_FILE,),
                      'DEF:out=%s:external:AVERAGE' % (RRD_FILE,),
                      'DEF:heat=%s:heat:AVERAGE' % (RRD_FILE,),
                      'LINE2:in#00ff00:inside ',
                      'GPRINT:in:AVERAGE:Avg\: %8.2lf',
                      'LINE2:out#0000ff:outside',
                      'LINE2:heat#ff0000:heat  ')
        data = open(path, 'rb').read()
        os.unlink(path)
        web.header('Content-Type', 'image/png')
        return data

if __name__ == "__main__":
    web.application(URLS, globals()).run()

Running this app directly as /opt/templog/python/ starts a web server listening on port 8080, with the two working URLs /view and /graph.png

The local web server is very useful at least for testing, and we could launch it as a service from /etc/init.d . But since we already have lighttpd running, we can just as well serve this app through it via its FastCGI support. For that we need to modify the config in /etc/lighttpd/lighttpd.conf by adding "mod_fastcgi" to server.modules and adding the following section to the file:
fastcgi.server = ( "/templogger" =>
  (( "socket" => "/tmp/fastcgi.socket",
     "bin-path" => "/opt/templog/python/",
     "check-local" => "disable",
     "max-procs" => 1
  ))
)

After restarting the server with
sudo /etc/init.d/lighttpd restart
our app is now mapped under /templogger/... into the URL-space of the server, so that it can be accessed at http://<address-of-pi>/templogger/view .

With this, we conclude the basic setup of a temperature monitoring system and any further ideas on what else could be done with this data is (for now) left as an exercise to the reader...

Saturday, January 25, 2014

Raspberry Pi Temperature Data Recorder - Part II: Data Collection

<- Part I: Hardware

In the previous part of this tutorial, we looked at how to connect a few DS18B20 digital temperature sensors to a Raspberry Pi and read their values.

The next part of the problem is to periodically collect these measurements and store them for graphing and further analysis.

It would always be possible to store measurement data in a general-purpose database, e.g. SQLite or MySQL, and then plot the data, for example with the Google Charts API.

For this project however, we are going to use RRDTool, which is a special-purpose database optimized for recording, aggregating and graphing time-series data. It is particularly popular for network and system monitoring applications and is, for example, the basis of smokeping, which we used in an earlier example. This time however, we need to configure and set up our own database from scratch.

Some of the reasons why RRDTool is particularly nice for this type of application:
  • Fixed-size, fixed-interval sliding window database always stores the N most recent data-points, which means that the data does not grow unbounded and does not need to be deleted.
  • Powerful built-in graph generation (see next part)
  • Multiple levels of aggregation allow a level of granularity which naturally matches the resolution of the display: very fine-grain for the last N days, more coarse-grain for the last N months or years.
  • Handles gaps in the data without skewing statistics
  • Compact and efficient storage format for time-series data
Each RRDTool database is configured with a base interval in seconds, defining the pulse of the measurement system at which the data is supposed to be sampled. For each interval, it can determine the average rate from two or more samples of a counter or as for our case, the average value of readings of a thermometer.
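To make the two modes concrete, here is a tiny Python sketch (not part of RRDTool, just an illustration) of how a per-interval value is derived for a counter versus a gauge:

```python
def gauge_value(samples):
    # GAUGE data source: the interval value is simply the
    # average of the readings that fell into one base interval
    return sum(samples) / float(len(samples))

def counter_rate(t0, v0, t1, v1):
    # COUNTER data source: the interval value is the rate of
    # change between two samples of a monotonic counter
    return (v1 - v0) / float(t1 - t0)

print(gauge_value([23.4, 23.6]))       # average temperature reading
print(counter_rate(0, 0, 300, 1500))   # units per second over one interval
```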

For each database we can define a number of data-sources, which can be sampled during each base interval. The data for each data-source is then recorded in a series of round-robin archives, each aggregating over a number of base intervals with potentially a different aggregation function (e.g. avg, max, min) and storing a different number of last N values in a round-robin or sliding window fashion.

For our temperature recorder application, we define 3 data sources, one for each of the sensors: inside, outside and heater-feed temperature. As the base clock of the system, we somewhat arbitrarily choose 5 min (300 seconds), which should be a compromise between dynamic sensitivity and not thrashing the SD storage card unnecessarily.

In order to be able to look at the data over different time-scales, we want to collect:
  • with 5 minute granularity (1 base interval) for  two days (12*24*2=576 samples)
  • with 15 minute granularity (3 base intervals) for  two weeks (4*24*7*2=1344 samples)
  • with 1 hour granularity (12 base intervals) for two months (24*31*2=1488 samples)
  • with 6 hour granularity (72 base intervals) for 16 months (4*31*16=1984 samples)
All the aggregations should be averaged, except for the longest one, where we want to keep average, max and min for each of the 6 hour intervals.
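The sample counts above follow directly from dividing each retention period by its granularity; a quick sanity check in Python, using the same numbers as the list above:

```python
BASE = 300  # base interval in seconds

# base intervals per aggregate -> retention period in seconds
RETENTION = {
    1:  2 * 24 * 3600,        # 5 min granularity, two days
    3:  2 * 7 * 24 * 3600,    # 15 min granularity, two weeks
    12: 2 * 31 * 24 * 3600,   # 1 hour granularity, two months
    72: 16 * 31 * 24 * 3600,  # 6 hour granularity, 16 months
}

for steps, seconds in sorted(RETENTION.items()):
    print("%2d base intervals -> %d samples" % (steps, seconds // (BASE * steps)))
```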

The following script creates an RRDTool database with these properties:

rrdtool create /opt/templog/data/templog.rrd --step 300   \
DS:internal:GAUGE:600:-55:125  \
DS:external:GAUGE:600:-55:125  \
DS:heat:GAUGE:600:-55:125  \
RRA:AVERAGE:0.5:1:576    \
RRA:AVERAGE:0.5:3:1344   \
RRA:AVERAGE:0.5:12:1488  \
RRA:AVERAGE:0.5:72:1984  \
RRA:MIN:0.5:72:1984      \
RRA:MAX:0.5:72:1984
Here, DS defines the 3 data-sources and RRA the different round-robin archives at various granularities, retention lengths and aggregation types. The data sources are of type GAUGE, which means the absolute values are used rather than the rate/delta increments that are the main mode of operation for RRDTool. The additional arguments define an update timeout (not really relevant for GAUGE types) and the expected min/max range of the values - in this case the supported range of the DS18B20 sensor according to the datasheet.

The round-robin archives within the database are configured with an aggregation function (avg, max, min), a fudge-factor defining how many missing base samples we can tolerate before the aggregate itself becomes unknown, the number of base intervals to be aggregated, and how many of the aggregated values should be kept.

Now we need to set up a job which periodically - at least every 5 min - reads the temperature sensors and inserts the measurements into the database created above. The easiest and most robust way to do that on Linux is through cron: run crontab -e and add the following line:
*/4 * * * * /opt/templog/python/

The */4 setting schedules the collection job to run every 4 minutes, which is a little faster than required, but helps reduce the risk that we miss a sample period. RRDTool will automatically create an average value for each base sampling interval for which we record at least one data-point (otherwise the value is unknown). One of the advantages of RRDTool is its proper handling of missing values: they are simply ignored, create gaps in the graphs, and don't affect the aggregated values.

Assuming the files for this application are going to live in /opt/templog and run as the user pi, we can create this directory with
sudo mkdir -p /opt/templog
sudo chown pi:pi /opt/templog

And create the following script in /opt/templog/python/

import logging
import logging.handlers
import rrdtool
import temp_sensor
import time
import sys

RRD_FILE = "/opt/templog/data/templog.rrd"

def do_update():
  timestamp = time.time()
  internal = temp_sensor.TempSensor("28-000005303678")
  external = temp_sensor.TempSensor("28-000005604c61")
  heater = temp_sensor.TempSensor("28-000005610c53")
  # in case of error, retry after 5 seconds
  for retry in (5, 1):
    try:
      rrdtool.update(RRD_FILE,
                     "%d:%s:%s:%s" % (timestamp,
                                      internal.get_temperature(),
                                      external.get_temperature(),
                                      heater.get_temperature()))
      return
    except (IOError, rrdtool.error):
      logging.exception("retry in %is because of: ", retry)
      time.sleep(retry)

# set up logging to syslog to avoid output being captured by cron
syslog = logging.handlers.SysLogHandler(address="/dev/log")
syslog.setFormatter(logging.Formatter("templogger: %(levelname)s %(message)s"))
logging.getLogger().addHandler(syslog)

do_update()

This script sets up logging to syslog, as we want to avoid any output to stdout/stderr under cron. We declare the 3 sensor access objects based on the IDs which correspond to each sensor (see Part I). Since reading a sensor can occasionally fail, we retry a second time after a short delay.

Now that we have data accumulating in the RRDTool time-series database, we will be looking at visualizing the data in the next part.

Tuesday, January 21, 2014

Raspberry Pi Temperature data recorder - Part I: Hardware

The Raspberry Pi seems ideal for all kinds of "physical computing" applications, as it is small, cheap, low-powered and yet more powerful and feature rich than a traditional micro-controller.

One way to showcase such applications in an educational context could be to control science experiments which require long term measurements and data collection.

One of the easiest measurement sensors to connect to a Raspberry Pi is the DS18B20 digital thermometer. It can be read out via a multi-device 1-wire bus that is directly supported by a driver in the Linux kernel. Several sensors can be connected in parallel to the same data wire and read out individually over the bus interface by their hard-coded IDs. All we need to connect one or more DS18B20 sensors to a Raspberry Pi is to connect the VCC pin to 3.3V, GND to GND and data to GPIO4 on the Raspberry Pi GPIO header, as well as a 4.7k Ohm pull-up resistor between the VCC and data lines of the sensor. The sensor is available, among others, as a basic board-mounted package as well as a water-proof assembly with a ca. 1m long insulated cable.

The whole setup can be assembled without any soldering, using some prototyping tools from Adafruit Industries, a great supplier of electronics parts for hacking, making and education, based in NYC but who also ships world-wide.

In addition to the Raspberry Pi with SD card, micro-USB cellphone charger and EW-7811Un USB wifi adapter (see configuration here), the shopping list for this project includes:
This great tutorial explains in much detail how to connect both types of DS18B20 sensors using the breadboard & breakout connector. The ribbon cable only fits one way into the breakout connector due to the notch on one side, but should be mounted to the Raspberry Pi with the cable pointing away as in the picture above, and with the differently colored wire towards the edge where the SD card slot and micro-USB power connector are.

For the experiment, we want to connect at least 3 temperature sensors to simultaneously monitor:
  • inside/room temperature (board mounted sensor)
  • outside temperature
  • heater temperature
Using the data collected for inside & outside temperature and a lot of simplifying assumptions, we could for example estimate the thermal flow out of the room and thus how much heating energy the room has used during the measurement period, or try to reverse-engineer the control function of the heater thermostat by looking at the relationship of outside, inside and heater feed temperatures.
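As a taste of such an estimate, a crude steady-state heat-loss model is Q = U * A * (T_inside - T_outside). The U-value and area below are pure assumptions for illustration, not measured properties of the room:

```python
U = 1.5   # assumed overall heat-transfer coefficient in W/(m^2*K)
A = 12.0  # assumed outward-facing wall/window area in m^2

def heat_loss_watts(t_inside, t_outside):
    # instantaneous thermal flow out of the room in watts
    return U * A * (t_inside - t_outside)

# with readings like 23.5 C inside and 7.8 C outside
print(heat_loss_watts(23.5, 7.8))  # roughly 280 W
```

Integrating such instantaneous values over the measurement period would give an (equally rough) estimate of the heating energy used.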

Our house is a roughly 120 year old apartment building with a radiator based central heating system. There are exposed riser pipes which distribute the hot water from the boiler in the basement to the radiators and the apartments above us. This is the place where we can connect the heater temperature sensor, as it will show the feed temperature, regardless of whether the radiator in the room is turned on or off. Tin-foil can make a great heat-conducting sleeve to connect the sensor to the tube. The outside sensor is simply stuck outside the window with the cable pinched by the window frame in less than professional manner...

Once we have carefully assembled all the parts, powered up the Raspberry Pi and made sure that none of the sensors are starting to overheat from wrong polarity, we can test if the driver is working properly:
pi@my-pi ~ $ sudo modprobe w1-gpio
pi@my-pi ~ $ sudo modprobe w1-therm
pi@my-pi ~ $ cd /sys/bus/w1/devices
pi@my-pi /sys/bus/w1/devices $ ls
28-000005303678  28-000005604c61  28-000005610c53  w1_bus_master1
pi@my-pi /sys/bus/w1/devices $ cat 28-*/w1_slave
79 01 4b 46 7f ff 07 10 0a : crc=0a YES
79 01 4b 46 7f ff 07 10 0a t=23562
83 00 4b 46 7f ff 0d 10 5b : crc=5b YES
83 00 4b 46 7f ff 0d 10 5b t=8187
5c 02 4b 46 7f ff 04 10 e6 : crc=e6 YES
5c 02 4b 46 7f ff 04 10 e6 t=37750

In order to make sure that the drivers for the GPIO 1-wire interface and the DS18B20 protocol are loaded at boot time going forward, we need to add them to /etc/modules:
sudo -s
echo w1-gpio >> /etc/modules
echo w1-therm >> /etc/modules

Then we create the following simple driver class, which is going to be used to read and record the sensor data in the second part of this tutorial:

import re

class TempSensor():
    """Read data from DS18B20 Temperature sensor via 1-wire interface."""
    _DRIVER = "/sys/bus/w1/devices/%s/w1_slave"
    # the t= value can be negative for temperatures below 0 C
    _TEMP_PATTERN = re.compile("t=(-?\d+)")

    def __init__(self, sensor_id):
        self._id = sensor_id

    def _read_data(self):
        data_file = open(self._DRIVER % (self._id, ), "r")
        try:
            return data_file.read()
        finally:
            data_file.close()

    def get_temperature(self):
        data = self._read_data()
        # data should contain 2 lines of text like this:
        # 86 01 4b 46 7f ff 0a 10 5e : crc=5e YES
        # 86 01 4b 46 7f ff 0a 10 5e t=24375
        # Temperature reading is value of t= in milli-degrees C
        m = self._TEMP_PATTERN.search(data)
        if not m:
            raise IOError("Invalid data for sensor " + self._id)
        return float(m.group(1)) / 1000.0

if __name__ == '__main__':
    import sys
    for sensor_id in sys.argv[1:]:
        sensor = TempSensor(sensor_id)
        print sensor.get_temperature()

In order to test the driver, we can also execute it directly:
pi@my-pi $ ./python/ 28-000005303678 28-000005604c61 28-000005610c53
Here we can also identify which ID corresponds to which sensor in our setup. Since it is winter right now, we can assume that 23.5C is the inside temperature, 7.8C the outside one, and 39.5C the temperature of the heating pipe. If testing first on a workbench, we can also touch one of the sensors and see which reading goes up towards 37C as a consequence.

Right now, we can simultaneously measure the current temperature in each of the 3 zones, but in the next part, we are going to record time-series based measurements into a database for graphing and further analysis.

Part II: Data Collection ->

Thursday, January 16, 2014

Wi-Pi : 802.11 Networking for Raspberry Pi (EW-7811Un)

One of the most conspicuously absent standard interfaces on the Raspberry Pi is built-in support for 802.11 WiFi wireless LAN networking.

A low-cost, low-power way to remedy this is, for example, the Edimax EW-7811Un USB WiFi adapter, which plugs easily into one of the USB ports on the Raspberry Pi and is supported out of the box by the current Raspbian distribution.

It seems to be very popular for use with Raspberry Pi and is available for about $10 in many places where Raspberry Pi are sold.

WiFi Client

Connecting to an existing WiFi network is trivial once we know the SSID and access password of the network we are trying to connect to. After plugging in the adapter, it should be automatically recognized by the Linux kernel - the output of lsusb should contain an entry like this:
Bus 001 Device 004: ID 7392:7811 Edimax Technology Co., Ltd EW-7811Un 802.11n Wireless Adapter [Realtek RTL8188CUS]

In order to connect to a typical home wifi network, we only need to add the following to /etc/network/interfaces :
allow-hotplug wlan0

iface wlan0 inet dhcp
wpa-ssid <SSID>
wpa-psk <wifi-key>

substituting <SSID> with the "name" (SSID) of the wifi network and <wifi-key> with the wifi access code or password configured in the router. After rebooting or running sudo ifup wlan0, the interface should be connected and configured with an address from the gateway via DHCP.

WiFi Access Point

Configuring an EW-7811Un adapter as a WiFi access point is not as easy, as the RTL8188CUS chipset is not supported by the standard version of hostapd. Some tutorials (here or here) explain how to install hostapd and replace it with a custom build which supports this chipset.

For example:
sudo apt-get install hostapd

sudo mv hostapd /usr/sbin/hostapd
sudo chmod +x /usr/sbin/hostapd

One of the simplest use-cases for an access point is to connect a tablet, phone or laptop to a Raspberry Pi as a console or controller for some application which doesn't require Internet access - e.g. a robot or a monitoring device of some sort.

To allow most standard devices to connect to the Raspberry Pi access point, we have to configure a static IP address on the interface and set up a DHCP server to push IP interface configuration to the devices which are connecting.

For a simple static IP address, add the following to /etc/network/interfaces (the address below is just an example from an otherwise unused private range):
allow-hotplug wlan0

iface wlan0 inet static

Dnsmasq is a small-footprint DHCP server and DNS server/proxy for small networks connected to the Internet, masquerading behind a NAT firewall.
sudo apt-get install dnsmasq

It can also easily serve DNS names for the local network by picking up names from the static /etc/hosts file. In order to have dnsmasq serve dynamic IP addresses and DNS names in a pseudo-domain .home to devices connecting to the access point on wlan0, change /etc/dnsmasq.conf to the following:
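A minimal configuration along these lines might look like the following - the DHCP range is an assumption and needs to be adapted to the static address configured on wlan0:

```
# listen only on the access-point interface
interface=wlan0
# hand out addresses from this (example) range with 12h leases
# append the pseudo-domain to names from /etc/hosts
expand-hosts
```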

For example, to create a network called TempSensor, create the following hostapd configuration in /etc/hostapd/hostapd.conf:
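For reference, a minimal hostapd.conf for this setup might look roughly like this - the rtl871xdrv driver name is specific to the patched Realtek build mentioned above, and channel and passphrase are placeholders to be changed:

```
interface=wlan0
driver=rtl871xdrv
ssid=TempSensor
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=<some-passphrase>
```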

and set DAEMON_CONF="/etc/hostapd/hostapd.conf"
in /etc/default/hostapd and restart.