Tuesday, March 31, 2009

Startups: Technology Execution Play

At the opposite end of the spectrum from the concept- and Zeitgeist-heavy startups of the web age is the kind of startup with neither a particularly great new idea nor a secret new technology - it simply does something which is really hard to do and which only very few people know how to do.

During the late 1990s, the Internet had been growing by leaps and bounds, requiring a doubling in capacity every couple of months for many types of networks. Networking gear was constantly running out of steam and needed to be upgraded with the next generation of higher capacity equipment. So just building the next bigger, better gear sounded like a reasonable thing to do, except that it was easier said than done - especially at a breakneck pace of 18-24 month development cycles, barely ahead of Moore's law. Until recently, building telecom and networking equipment had been a relatively specialized niche craft, practiced mostly in the R&D labs of a small number of companies selling to a boring, utility-like industry. With only a small pool of people who knew how to build something like that, those crazy enough to try had a pretty good chance to succeed - if they could pull it off and execute. Many would succeed quite well (in terms of return on investment) without even having to build a real business to sell their product - they would be acquired by some established equipment manufacturer who desperately needed something like this, but whose internal R&D was years behind schedule - partly because their most experienced staff had run off to start companies, often getting bought back by the companies they had left.

This was the climate in which we started Xebeo Communications, to build the next generation packet switch for carrier networks. We had no experience or track record in business, other than having worked on the development of similar systems before. Nevertheless we raised some double-digit millions in venture capital funding on our technology expertise alone (OK, those were crazy times and people got a lot more money to go sell dog food on the Internet...).

The idea was simple, but the execution required a large and highly skilled team with a broad range of expertise: VLSI chip design, electro-optical componentry, hardware systems and circuit design, high-speed signal and thermal flow simulations, mechanical engineering, embedded high-availability software and development of specialized communications protocol software. And all this had to be put together into a working system in less time than any reasonable estimate, while pushing the technology close to the edge of what was possible at the time. For example, the contract manufacturer asked to keep one of the circuit boards for display in their lobby - it had been the most complicated one they had built thus far...

It was basically build it and they will come - we actually DID build it, but they never came. The bottom had fallen out from underneath the tech market around 2001, leaving tons of unused equipment around at fire-sale prices. Nobody needed to double any capacity anymore for quite some time. The company was acquired for cents on the dollar and, after some time of trying to find a niche for it, the project was eventually canceled - a white elephant from a bygone era whose time had never really come. [They had ultimately failed to see or take advantage of the real value of what they had acquired - not the product, already obsolete by then, but the team who could build it.]

Sunday, March 29, 2009

On the Value of Tools

Maybe to a fault, I tend to think that tools play a big role in the success of software development projects. The benefits can largely be summarized under the following categories:
  • leverage or force multiplication
  • positive reinforcement or behavioral modification
The first one is the primary reason for using tools ever since early hominids started to pick up rocks or sticks and use them as tools. They allow us to go beyond the immediate capacity of our hands or our brains. According to hacker folklore, Real Programmers need nothing but
cat > a.out
to write code, but the days of writing programs in raw binary form by flipping switches or by punching cards are over. High-level languages and interactive programming - i.e. using a computer workstation to write, compile and test programs in quick iterations - have brought such a leap in programmer productivity that without them, we could hardly manage the complexity of some of the software systems we are working on today.

The second one might be more subtle and harder to explain. Software development beyond a certain scale and complexity requires discipline and most likely collaboration. There are some rules we all know should be followed, but sometimes laziness or expedience gets the better of us. Good tools should prevent us from cheating, reduce the temptation to cut corners by making it easier to follow the rules than not to, or mercilessly expose us if we do break the rules. For example, only part of the reason for having an automated build system is to let everybody know when the build is broken, to avoid wasting time working off a broken baseline; the other part is to shame people who do break the build, so that it happens less frequently.

The value of tools which provide leverage and increase our individual productivity is easy to see. The value of tools which encourage us to play by the rules may be equally important, but depends on what we value as the right thing to do, both as individuals and as a team. Their effectiveness depends on how well in tune they are with the processes and software development culture of a particular team.

Tuesday, March 17, 2009

Essential Startup Software Development Infrastructure - 2000 Edition

When we started a company in the early days of 2000, I spent some time setting up what would become our minimal IT infrastructure and software development environment (that's how I ended up with UID 500...). Since we did not have any money (yet), it had to be free/open-source software, and since we did not have any time for evaluation or in-depth research, we tried to go with what seemed to be the most obvious, conservative or mainstream choice at the time for each piece of the solution.

Initially our entire server infrastructure was based on a single Linux box from Penguin Computing, since that was about all we could afford with an empty bank account. In the hope that there would soon be more machines to come, it was running NIS and NFS servers for centralized, network-wide logins, DHCP and DNS (bind) servers for IP network configuration, an HTTP server (apache) for the intranet homepage and SMTP (sendmail), POP and IMAP servers for basic email service. Many of these initial choices were undone again once we had a real professional Unix sysadmin.

On top of that we built the initial infrastructure to support the software development team. From day one, we wanted the team to work a certain way: put working code at the center of attention; always move the system in small increments from one working state to a new working state; only what is integrated into the central repository really exists; make changing things as easy and risk-free as possible - etc. The common development infrastructure should support this way of working and make it easy to follow these principles.

The key pieces of this initial infrastructure were:
  • Email including archived mailing lists
  • Version control system
  • Document sharing
  • Build and Test automation
  • Issue tracking
Email is probably the most essential tool to support team collaboration, not just for software development. Archived mailing lists provide an instant and effortless audit-trail of any discussion as it unfolds. And email is also a very convenient way to distribute automated notifications. For our first mailing lists, we simply used the built-in alias functionality of the mail delivery system itself (sendmail) and MHonArc as the web-based mail archive tool. All the setup was manual, but that was acceptable since we expected the team to change very slowly - it reached about 20 members at the peak.
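A setup like this takes only a few lines in sendmail's /etc/aliases file. The sketch below is illustrative, not our actual configuration - the list name, members and archiver path are made-up placeholders:

```
# /etc/aliases - hypothetical "dev" list: fan out to the members
# and pipe a copy into the MHonArc-based web archive
dev: alice, bob, carol, "|/usr/local/bin/archive-dev-mail"
```

After editing the file, running newaliases rebuilds the alias database so sendmail picks up the change.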

At the time, the only serious open-source contender for software version control was CVS. The version control system is the vault where the crown jewels are kept, and it is the most mission-critical piece of infrastructure. As soon as we had some money in the bank, we replaced CVS with Perforce, since we were familiar and comfortable with its model of operation (same advantages as CVS, but it keeps meta state on the server, commits atomic sets of changes, etc.). We added a web-based repository browser and notification email support, sending out a mail for each submitted change with a link to this particular change in the web-based repository browser. The source-code repository was meant to be the most openly public part of the infrastructure and nobody should be able to sneak in a change unseen.
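The per-change notification mail itself is simple to generate. The sketch below shows the general shape in Python; the repository-browser URL scheme and the list address are made-up assumptions, not our actual setup:

```python
from email.message import EmailMessage

# Hypothetical URL scheme of the web-based repository browser.
BROWSER_URL = "http://intranet/p4web/change/{change}"

def change_notification(change, author, description):
    """Build the notification mail for one submitted change,
    with a link to that change in the repository browser."""
    msg = EmailMessage()
    msg["Subject"] = f"[p4] change {change} by {author}"
    msg["From"] = "perforce@example.com"
    msg["To"] = "source-changes@example.com"
    msg.set_content(description + "\n\n" + BROWSER_URL.format(change=change))
    return msg

# Sending would then be something like:
#   smtplib.SMTP("localhost").send_message(change_notification(...))
```

Hooking this into the version control system (e.g. via a submit trigger or a polling daemon) is left out here, since the mechanics depend on the Perforce setup.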

Our document sharing system was very simple. Since we already had version control as the central piece of our workflow, we simply used the version control system to stage our entire intranet website. To add or update a document, check in the new version and, if necessary, hand-edit the html link on some page where it should appear. This sounds crude, but we were all programmers after all and editing some html did not particularly bother us. The website provided easy access to the current version of any document and the version control system backing it provided all the history necessary.

The build and test automation was essentially home grown (loosely inspired by DejaGnu). At its core was a Python script called runtest, which parsed a hierarchy of test definition files within the source tree and ran any test executable specified there. Test-cases had to generate output containing PASS or FAIL and each occurrence of such a keyword would count as a test-case. For the official automated build, runtest would log its results to a MySQL database, but the same script could also be used interactively by anybody in the team to make sure tests always worked or to troubleshoot breakages. The automated master build itself was simply a script which ran in a loop, doing a checkout from the source control system and, if there was any change, running a clean build (using a combination of gmake and jam) and executing runtest on the full test-suite. As a framework, this was extremely flexible. Tests could be written in any language as long as they could write PASS or FAIL to the console and exit cleanly at the end. For example, we ended up with a very powerful but rather unwieldy network simulation framework written in bash for running high-level integration tests, which could easily be run as part of the runtest suite.
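The core of such a harness fits in a page of Python. The sketch below follows the PASS/FAIL counting convention described above; the file name (test.def) and its one-command-per-line format are assumptions for illustration, not the original's actual format:

```python
import subprocess
from pathlib import Path

def run_suite(root):
    """Walk the source tree, run every command listed in a test.def
    file, and count each PASS/FAIL keyword printed to the console
    as one test-case result."""
    passed = failed = 0
    for deffile in sorted(Path(root).rglob("test.def")):
        for line in deffile.read_text().splitlines():
            cmd = line.strip()
            if not cmd or cmd.startswith("#"):
                continue  # skip blanks and comments
            out = subprocess.run(cmd, shell=True, cwd=deffile.parent,
                                 capture_output=True, text=True).stdout
            passed += out.count("PASS")
            failed += out.count("FAIL")
    return passed, failed
```

The same function can back both the interactive use and the automated master build; logging results to a database instead of just counting them is a straightforward extension.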

The issue tracking system was not part of the initial setup but followed soon thereafter with the conversion from CVS to Perforce. We were using Bugzilla (probably again the only viable free choice at the time) with a set of patches to integrate it closely with Perforce, by automatically enforcing that each checkin into the source control repository had to be linked to a ticket in the issue tracking system. This provided a very rudimentary workflow and scheduling system for keeping track of work items and for linking source changes to the reason why they were being made.
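The enforcement side of such an integration boils down to a small validation step run before a submit is accepted. The sketch below only checks that the changelist description references a ticket; the "bug NNN" pattern is an assumption for illustration - the actual patches did more, such as verifying the ticket exists and is open:

```python
import re

# Hypothetical convention: descriptions must mention "bug NNN".
TICKET_RE = re.compile(r"\bbug\s*#?\s*(\d+)", re.IGNORECASE)

def check_description(description):
    """Return the referenced ticket id, or None if the description
    mentions no ticket (in which case the submit should be rejected)."""
    m = TICKET_RE.search(description)
    return int(m.group(1)) if m else None
```

Wired into a version control pre-submit hook, a None result aborts the checkin and the change never reaches the repository without a traceable reason.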

Sunday, March 8, 2009

FIRST robotics competition

I was volunteering today at a robot competition for high-school age kids organized by FIRST, a non-profit to promote interest in science and engineering among high-school students. They organize a series of robotics tournaments, where teams of middle-school or high-school age students have to build a robot in 6 weeks to compete in a particular challenge. The teams work with adult mentors, who are typically real-life engineers or scientists.

I was impressed by the quality of the work the students brought to today's NY regional competition at the Javits convention center. Most of the robots were highly functional and held up well through multiple rounds of competition.

With the disappearance of the industrial middle class in the US, education has become the single biggest factor in economic success (other than simply being born rich). The service economy consists of gold-collar jobs at one end of the spectrum, which typically require advanced college degrees, and McJobs at the other end, with very few possibilities to work your way up between the two.

Kids at this age may not yet fully understand how crucial education has become for their future lives, but a lack of interest and engagement at this age is very hard to correct later. In the NY area, some of the teams participating in these robotics competitions come from schools with very low graduation rates, but some of the long-time mentors claim that the graduation rates among the members of the robotics teams are significantly (many tens of percent) higher than the school average. Maybe there is a selection bias - i.e. kids who would participate in such a nerdy activity would have a higher chance of graduating anyway. But maybe getting to interact seriously with people from a technical profession gives some kids an idea that there are ways out of poverty other than aspiring to become a gangster, drug-dealer, rap-star or professional athlete (even if this path is unglamorous and petit-bourgeois...).

But if there is even a small chance that exposing kids to the possibility of a career in technology broadens the options they consider, then this seems like a pretty good use of our time.

Friday, March 6, 2009

SMS Remote Control for Android Apps

I wanted to add a remote control feature to the NoiseAlert application for Android, where menu options could be triggered remotely by sending an SMS to the phone. SMS messages should be delivered to the application only if it is running, and the commands should be executed by the foreground activity.

Instead of registering a BroadcastReceiver globally in the AndroidManifest.xml file, the following object can dynamically register and unregister itself for receiving all SMS during the time it is active. All incoming SMS are passed to the object's onReceive method, encoded as an Intent in slightly obscure ways:
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Bundle;
import android.telephony.SmsMessage;

public class SmsRemote extends BroadcastReceiver {
    private boolean mActive = false;
    private Context mContext;

    @Override
    public void onReceive(Context context, Intent intent) {
        Bundle bundle = intent.getExtras();
        if (bundle == null) return;

        // The raw SMS PDUs are passed in the "pdus" extra of the Intent.
        Object[] pdus = (Object[]) bundle.get("pdus");
        for (int n = 0; n < pdus.length; n++) {
            SmsMessage message = SmsMessage.createFromPdu((byte[]) pdus[n]);

            String msg = message.getDisplayMessageBody();
            /* check if text of SMS matches remote control command
             * and trigger appropriate action.
             */
        }
    }

    public void register(Context context) {
        mContext = context;
        if (mActive) return;
        IntentFilter smsFilter = new IntentFilter("android.provider.Telephony.SMS_RECEIVED");
        context.registerReceiver(this, smsFilter);
        mActive = true;
    }

    public void deregister() {
        if (!mActive) return;
        mContext.unregisterReceiver(this);
        mActive = false;
    }
}

The context is provided by the foreground Activity, which can also provide a callback to execute the commands which are to be triggered by the SMS. Permission to intercept incoming SMS still needs to be requested in the AndroidManifest.xml file:
<uses-permission android:name="android.permission.RECEIVE_SMS" />

Wednesday, March 4, 2009

Source-Code Samples in Blogger

Blogger makes it a bit hard to include properly formatted source-code snippets in postings, as it does not have a mode for entering raw pre-formatted text which should not be molested by any of the further processing and rendering.

You can always use the raw HTML edit mode, but then all the html and xml-isms have to be escaped before pasting in the code sample. Fortunately there is a convenient online service at formatmysourcecode.blogspot.com which does just that. Here is an example of how the resulting output looks:

main() {
    printf("hello, world");
}
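The escaping itself is mechanical: the characters with special meaning in HTML have to be replaced by entities. A minimal sketch in Python - the <pre> wrapper is one plausible way to preserve line breaks and is an assumption here, not necessarily what the service actually emits:

```python
import html

def escape_for_blogger(code):
    """Replace &, <, > and quotes with HTML entities and wrap the
    result in a <pre> block so it renders verbatim in raw HTML mode."""
    return "<pre>" + html.escape(code) + "</pre>"

print(escape_for_blogger('if (a < b) printf("&");'))
```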

Monday, March 2, 2009

The other Benefit of Open-Source

Software development must be one of the fields where the gap between best practices and average practices is the widest. A poll in 2001 showed that only about two thirds of software development teams were using version control and only about one third used some kind of bug tracking system. C'mon people, how many high-rise window cleaning crews work without a safety harness?

Open-source projects with many collaborators distributed throughout the world generally need to adopt solid collaborative development practices and often build themselves the tools to support collaboration at such a large scale.

With the popularity of open-source software, an increasing number of people in the technical community have been exposed to the ways these projects operate and to the tools they use. Today, it is a lot harder to find fresh college grads who would not find it completely natural to use version control - after all, this is how you get the pre-release version of "insert-your-favorite-open-source-project-here". At the same time, they are naturally familiar with the idea of a release and with the fact that large, complex software systems don't just come together by themselves.

When I was in college, I don't think the term version control was ever mentioned. We learned to program in obscure and irrelevant languages - which is not necessarily a bad thing, since it helps to build a meta-level understanding of programming languages. I guess it was just assumed that those of us who would choose a career in software development would learn the trade on the job, once we got out into the industry. On the other hand, since not all industrial software projects are necessarily that well run, bad habits are propagated as much and as quickly as good ones. Since successful teams tend to stick together a lot longer than the ones which fail, maybe the bad habits spread even faster.

My first exposure to industrial software development was at a very reputable technology company - the kind of company where you would expect using the most effective software development practices to be a given. It turns out it wasn't, and each project had to figure it out for itself - not uncommon in large companies. Our project ended up being a classic death march: overly ambitious schedule, a team of fresh bodies assembled too quickly (hundreds of people towards the end), no particular method to the madness; builds started to take hours, and dinners and weekends at work became routine. Many of us were young and we made up in energy and enthusiasm what we lacked in experience - the few experienced software developers had either left in disgust, as nobody listened to them, or kept a low profile, knowing well enough they couldn't much influence the inevitable course of events. Yes, we had somehow heard that using version control (the company had even invented a few such systems) was apparently useful - and yes, documentation too - but we didn't have much of a clue on how to put it all together.

When it had all come to an end, some of us refused to accept that this should really be the best possible way to do software development. If our current environment could not teach us how to do it better, we had to look elsewhere for inspiration. We couldn't see how other companies were doing things better, but there certainly were a few open-source projects which seemed to be building software systems of comparable scale and complexity a lot more smoothly.

Open-source projects provide a unique insight into some very large and long-running software development efforts - some of them, like the Linux kernel development, have gone on for decades and have produced millions of lines of working code. Most commercial software development projects are a lot smaller than that, but by adopting tools and practices which work for mega-projects like these, we can be reasonably certain that they will not run out of steam during the lifetime of our project. Furthermore, open-source can become a shared frame of reference on practical software development issues for professionals across different organizations - hopefully helping to raise the standards of how software development is practiced throughout the industry.