Technetra

Archive for August, 2005

Modernizing Open Source Licenses

Wednesday, August 24th, 2005

At OSCON this summer, the Open Source Initiative (OSI) publicly described its efforts to bring order and reason to the crazy panoply of open source software licenses. A week later at LinuxWorld San Francisco, the Free Software Foundation (FSF) detailed its own vision to improve license sanity. The OSI is attempting to corral its plethora of 58 licenses, and the FSF is launching a widely inclusive process to update the 14-year-old GPL 2.0.

Proliferation, compatibility and modernization are the big issues in all open source license debates.

Proliferation

Many in the industry decry the range of open source licenses while some say we could use even more!

“In the open source world, each discrete license effectively partitions the commons and reduces the collaboration possible between projects.”

Today’s 58 OSI licenses reflect the real or imagined needs of their contributors. A license solves particular problems important for an author of a software work. There are thousands of proprietary licenses for the same reason. In the open source world, each discrete license effectively partitions the commons and reduces the collaboration possible between projects.

To stop this wasteful proliferation, one proposed approach is to templatize open source licensing, much as Creative Commons has attempted for document content licenses. Simple aspects of licensing, such as organization names, product identification, and jurisdictional references, can be templatized. But other complexities of software copyright licensing are not amenable to easy templating.

There are problems of granularity: how are software components with different licenses combined for copyright purposes, and what units of software (source file, object module, runtime link, or SOAP-based web services transaction) constitute the scope of the license?

Additionally, the terms for sharing source code and the degree of reciprocity to be required over distinct portions of the covered work can cause complex and perhaps unintended or even conflicting results.

License Models

In general, there are two broad models of open source copyright licenses, epitomized by the GNU General Public License (GPL) and the Berkeley Software Distribution (BSD) license, respectively.

GPL-style licenses require author attribution and offer the freedom to run, study, modify, and redistribute the copyrighted work. No warranty is provided. Furthermore, and most controversially, the reciprocity, or so-called copyleft, provision of the GPL requires that derivative works must also be covered by the GPL.

In the GPL, software is considered a GPL work for copyright purposes if any part uses or links into a GPL part. Therefore GPL code cannot be used downstream by non-GPL code. However, it is important to note that, under certain compatibility guidelines, GPL code can use non-GPL software or libraries. Broader compatibility with other licenses is one of the primary objectives of GPL 3.0. For example, the FSF wishes to address the inability to combine Apache libraries, licensed under the Apache License 2.0, with software licensed under the GPL. GPL software cannot legally use Apache libraries because of subtle incompatibilities in the patent protection clauses of each license. Downstream compatibility, where more restrictive code can benefit from less restrictive code, must be a goal. This is important so that, for example, X Window System libraries can flow into GPL projects even though the reverse is not possible.

The other general open source licensing model is the attribution, or BSD-style, license. These licenses typically require author attribution as well as a simple non-endorsement clause and warranty disclaimer. Derivatives of BSD works need not be licensed under the BSD.

Compatibility

Unlike proprietary licenses, open source licenses are intended to be inclusive, cleverly exploiting the ownership privileges behind copyright laws to reverse their effect. Instead of restricting the rights of the recipients of copyrighted works, they attempt to expand those rights for both use and redistribution. Nonetheless, as Professor Eben Moglen of the FSF is keenly aware, licenses partition recipients into communities of common interest. In this role, open source licenses can become divisive, especially when used for competitive marketing positioning.

For example, Sun’s Common Development and Distribution License (CDDL) is regarded by many as encouraging fragmentation. CDDL is characterized as a weak copyleft open source license but it is not compatible with the GPL for a variety of reasons, many of which are inherited from the earlier Mozilla Public License (MPL). To become compatible with the GPL, CDDL could adopt, as did MPL, a dual-licensing strategy allowing both its own license and the GPL as options.

Promotion and Protection

Licenses serve variously to promote and protect, but also to partition the commons. Simple attribution licenses promote the commons but do not protect it. Non-copyleft licenses allow reuse and redistribution of software without preserving availability of software in its derivative form. With healthy collaboration, the commons is usually expanded by attribution style licenses. But because collaboration is not enforced, closed products and technologies can emerge. A notable example is Mac OS X, based on a BSD core, which has produced a niche market but not an open pool of resources from which everyone benefits.

The stronger, copyleft licenses promote the commons while also protecting it. Unfortunately, these same licenses also partition the commons because software resources under incompatible licenses cannot be incorporated for redistribution, diminishing the network effect of the commons.

“No license, not even the GPL, is sufficient to preserve a healthy software community.”

Fragmentation of OSS licenses reflects and reinforces natural project boundaries. For example, the kernel developers of Linux (GPL-based) and OpenSolaris (CDDL-based) are completely distinct. Disjoint OSS licenses reinforce disjoint engagement models surrounding distinct pools of common resources whose elements are technological as well as social. Resource sharing is doubly inhibited, constrained both by project identity and by license. But no license, not even the GPL, is sufficient to preserve a healthy software community. Witness the strain in the Mambo community and the ill will among the contributors to the Sveasoft project. Various GPL code may be legally allowed to work together but still end up disengaged, disjoint, or just plain dysfunctional for practical reasons. Therefore, while fragmentation of licenses clearly inhibits resource sharing, uniformity of licenses cannot guarantee greater collaboration. Concrete project goals and good will are also important.

Modernization

OSS licensing must be brought up to date with the global evolution of copyright and other IP issues.

Modern software copyright licenses, whether proprietary or open source, depend upon the fundamentally restrictive copyright regimes enshrined in the WTO Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) and the newer WIPO Copyright Treaty. All software licenses are designed to establish and protect a market — or community in the case of open source — governed by the owner of the copyrighted work. In the case of proprietary licenses, copyright restrictions are intended to be exclusive and, thanks to WIPO, they now incorporate anti-circumvention laws to prevent digital piracy.

“Complex new issues like DRM, web services and patents must be addressed to preserve the benefits of OSS licensing.”

The new world envisioned by Intellectual Property proponents has become more sophisticated, forcing bodies like the OSI and FSF also to become more sophisticated. Interaction with wider issues like digital rights management (DRM), web services and patent protection must now be considered carefully in order to preserve and maximize the benefits that open source licensing can provide.

The danger of modernization is that skyrocketing complexity may lead to a combinatorial nightmare of more licenses and license changes. This would usher in a new era of license proliferation and incompatibility. It is, of course, unrealistic to expect that there can ever be one universal open source license. Nonetheless, agreement on broad categories of licenses across the spectrum of reciprocity requirements — attribution only (BSD), weak copyleft (the “middle way”: CDDL, LGPL), and strong copyleft (GPL) — is possible. However, the tricky new issues of DRM, web services, and patent peace provisions may leave no easy answers for fostering compatibility among licenses, especially as license language varies in its degree of constraints and compliance.

Everyone interested in improving the open source license situation would do well to follow the global public discussion on upgrading the GPL that is about to begin over the next year. Even if you are reluctant to adopt the GPL for your own software work, the exchange of ideas and deliberation of details will benefit all open source license proponents and the open source community as a whole. The results can be used as a tool to hone other license variations toward greater compatibility and promote a more harmonious open source commons.

Fun at Foo Camp

Monday, August 22nd, 2005

This past weekend I was invited to participate at the ‘Foo Camp’ at O’Reilly’s campus in Sebastopol, California. 225 hackers, writers, inventors and idea-makers were invited to this exclusive camp embodying remix and fusion in action. People working on web services, search, open source software development, hardware, security, mapping and much more had all gathered to form this ‘wiki-in-person’.

Creative Thinking

The format of the camp was very creative — starting with each person introducing themselves with three keywords, pitching tents on the campus grounds and apple orchards, and spontaneously identifying talks on cool ideas to warm one’s imagination. The list of talks was created on the fly, and all the talks vied with each other for neat titles and interesting topics. It was hard to decide what to skip.

Cool Talks

Topics ranged from Open Source Java, Open Source Telephony, RFID Physics, Identity 2.0, High Speed Flash Photography to Leadership Hacks, Creating Passionate Users, State of the Blogosphere and Women in OSS. On public demand, my talk on ‘Open Source in India’ turned into a great interactive session on how open source is starting to play an important role in the Indian technology services industry and developer pool.

“Eternal vigilance is the price of success.”

A session on ‘Money, Power and Open Source projects’ by Mitchell Baker of Mozilla, Ted Leung of Apache and Geir Magnusson of Harmony centered on the delicate balancing act needed to keep open source projects from being taken over by the corporations that contribute code and resources. The fear is that once a project is adopted by special interests, it will no longer remain open. It’s a daily struggle for projects like Apache to maintain a level playing field in their communities when so much code and money comes in from special interests. PostgreSQL faces another variation of the same problem: the project is unable to absorb the continuous and rapid enhancements requested by commercial companies and constantly faces the threat of forking. Eternal vigilance is the price of success. One suggestion was to increase communication across open source projects to join forces against a common threat.

The conference also examined making money with open source startups. The reality faced by many open source projects is that services on top of a commoditized software application do not result in quick profit. There is a steep OSS adoption curve in most markets. Customers are hesitant, and nobody is able to sell open source solutions as fast as they would like. FUD factors and piracy add friction. The conference urged startups to figure out solutions before they run out of venture capital.

Fun Activities

From early morning to late evening, not one person wanted to waste a single precious moment. There were even toys for everyone to try. The motorized Segways were all the rage, with almost everyone trying to learn how to ride them. Breakfast, Talks, Demos, Lunch, Talks, Panels, Huddles, Frisbee, Dinner, Gaming contests, Ping Pong, Conversations, Segways, More Conversations, Brewing Fests … 24 hours felt too short for a day.

“A Foo Camp is a ‘wiki-in-person’.”

Foo Campers used the ‘Foo Wiki’ to introduce their interests and ideas for talks. The wiki was the online-heartbeat of the camp as it progressed. Campers posted ideas for sessions, things to do, quotable quotes and more.

Why Foo?

Foo Camp remixed brainstorming amongst peers. Focused yet relaxed discussions over a couple of days produced ideas and solutions, strengthened social networks, and crystallized thoughts into next steps for all. It was a stimulating way to grow a community and scout for new ideas.

Backups are a snap with rsnapshot

Saturday, August 20th, 2005

We’ve all heard the reasons for backing up our data regularly — accidental deletion of files (rm -rf *), corrupted files from crashed applications, the dreaded hard disk failure; the list goes on. Nevertheless, on average, only 25 per cent of computer users perform routine backups of their data, as shown by a recent Harris Interactive survey. So why do the remaining 75 per cent put off this important task? Well, manual backups are often an ad hoc measure, unreliable and time-consuming. Automating an otherwise tedious backup process is key to producing routine and reliable backups. With that in mind, we’ll take a look at rsnapshot, a handy backup utility based on rsync, a well-known open source tool.

rsnapshot was written by Nathan Rosenquist as a replacement for a patchwork of complex shell scripts he had crafted to do rsync backups. Any change to the backup scheme meant manually editing the scripts and making sure no bugs were introduced. rsnapshot was a great improvement over this process: it was easy to configure, portable across different operating systems, supported remote backups, and, best of all, automated the entire backup process.

rsnapshot enables users to keep multiple backups of their data, from local or remote systems, readily accessible. Each backup is a complete snapshot of the data at a specific point in time. rsnapshot minimizes disk space usage by combining rsync with hard links (multiple directory entries that share a single copy of the data). Thus, the total amount of disk space used is the space for one full backup, plus the incremental changes captured by each snapshot.
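As a quick illustration of the hard links rsnapshot relies on, GNU cp can replicate a directory tree with cp -al, creating new directory entries that point at the existing data. The directory and file names below are made up for this sketch; they are not created by rsnapshot itself.

```shell
# Make a pretend snapshot containing one file.
mkdir -p demo/hourly.1
echo "snapshot data" > demo/hourly.1/file.txt

# 'cp -al' copies the tree by creating hard links instead of
# duplicating file contents; this is why rotated snapshots are cheap.
cp -al demo/hourly.1 demo/hourly.0

# Both names now refer to the same inode, so the data is stored once.
[ demo/hourly.0/file.txt -ef demo/hourly.1/file.txt ] && echo "same inode"
```

Only files that actually change between runs consume new disk space; unchanged files continue to share their single copy across every snapshot that references them.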

Since rsnapshot is written entirely in Perl, it’s a snap to install on most modern versions of Linux or BSD. In fact, rsnapshot comes packaged with Debian, Gentoo, FreeBSD, OpenBSD, and NetBSD. Users of other distributions can compile and install rsnapshot by downloading the latest version from www.rsnapshot.org.

Install rsnapshot

To get started, I will download and install rsnapshot (v1.2.1) on my Fedora Core 4 system (mango). If you’re using a distribution that already includes rsnapshot, just skip to the next section.

To install rsnapshot you will need both perl (v5.004+) and rsync available on your system. Although not required, it also helps to have OpenSSH, BSD logger, GNU cp, and GNU du available. If you have perl and rsync on your system, follow the simple instructions below to install rsnapshot.

$ wget -q http://www.rsnapshot.org/downloads/rsnapshot-1.2.1.tar.gz
$ tar xzf rsnapshot-1.2.1.tar.gz
$ cd rsnapshot-1.2.1
$ ./configure --prefix=/usr/local --sysconfdir=/etc

The --sysconfdir=/etc parameter above tells rsnapshot to look for its configuration file (rsnapshot.conf) in /etc. Installing rsnapshot requires root privileges.

$ make install

Make sure rsnapshot is available in your command search path.

$ whereis rsnapshot
rsnapshot: /usr/local/bin/rsnapshot

Configure rsnapshot

For the purposes of this article, we will use rsnapshot to back up data from one Linux system (kiwi) to another (mango). rsnapshot will run on mango, which will also host the backup archives. Both systems should have rsync and ssh installed.

All configuration parameters of rsnapshot are controlled via the rsnapshot.conf file. Before we set up rsnapshot, we’ll copy the default configuration file /etc/rsnapshot.conf.default and save it as /etc/rsnapshot.conf. This way we can revert to a clean configuration if we mangle our config file.

Now, let’s edit rsnapshot.conf on mango to set up our backup system. Most of the parameter defaults do not need modification, so we’ll focus on those that do.

Where will backups be stored?

The snapshot_root parameter in the SNAPSHOT ROOT DIRECTORY section specifies the directory where rsnapshot will place backup snapshots as they are created. Make sure you select a disk partition with adequate free space to hold your backups.

# Note: Use TABS (not spaces) to separate
# the configuration directive and the value.
# If specifying a directory, put a
# slash at the end.

snapshot_root    /usr2/snapshots/

If you plan on using a USB/FireWire hard disk for storing backups, the no_create_root parameter should be set to 1. This prevents rsnapshot from creating the snapshot root directory when it doesn’t already exist (for example, when the external drive isn’t mounted).

Which external programs will rsnapshot use?

Next, the EXTERNAL PROGRAM DEPENDENCIES section contains parameters to specify paths for optional external tools that rsnapshot depends on to provide certain features. Be sure to uncomment the lines starting with cmd_cp, cmd_ssh, and cmd_du by removing the hash (#) mark at the beginning of the line.

# use GNU cp
cmd_cp     /bin/cp

# use ssh for secure remote backups
cmd_ssh    /usr/bin/ssh

# use GNU du to check disk space usage
cmd_du     /usr/bin/du

How often will backups happen?

The configuration parameters in the BACKUP INTERVALS section determine how often rsnapshot will perform backups and how many snapshots will be kept. The keyword interval is followed by an alphanumeric label, followed by a number signifying how many snapshots of that interval to keep.

In our backup system, we want to take a snapshot of kiwi every 3 hours, so that’s 8 snapshots per day. Each time rsnapshot hourly is executed, it will create a new snapshot, rotate the old ones, and retain the 8 most recent (hourly.0 - hourly.7) snapshots. We also want to take a daily snapshot, and keep a week’s (7 days) worth of snapshots.

#interval    minutes    6
interval     hourly     8
interval     daily      7
#interval    weekly     4

The order of the interval definitions is very important. The first interval line must represent the smallest unit of time, with each subsequent line representing a larger interval. If you were to add a weekly interval, it would appear after the daily interval. Similarly, a minutes interval would appear before hourly.
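To make the retention behavior concrete, here is a simplified sketch, in plain shell against a scratch directory, of the rotation rsnapshot performs for an hourly interval that keeps 8 snapshots. (The real tool also hard-links the newest snapshot before syncing into it; this sketch shows only the rotation.)

```shell
# Scratch snapshot root with 8 existing hourly snapshots.
root=$(mktemp -d)
mkdir "$root"/hourly.0 "$root"/hourly.1 "$root"/hourly.2 "$root"/hourly.3 \
      "$root"/hourly.4 "$root"/hourly.5 "$root"/hourly.6 "$root"/hourly.7

# On the next run: the oldest snapshot expires...
rm -rf "$root/hourly.7"

# ...the remaining snapshots each shift up by one...
for i in 6 5 4 3 2 1 0; do
    mv "$root/hourly.$i" "$root/hourly.$((i+1))"
done

# ...and a fresh hourly.0 receives the new backup.
mkdir "$root/hourly.0"
ls "$root"
```

After every run there are again exactly 8 directories, hourly.0 through hourly.7, with hourly.0 always the most recent.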

What is included or excluded from the backup?

Most of the parameters in the GLOBAL OPTIONS section can be left at their default values. However, there are two parameters that you can use to include or exclude files from the backup. Both parameters get passed directly to rsync, so take a look at the --include and --exclude options in the rsync man page for a thorough explanation of how to construct match patterns. If you prefer listing all your include/exclude patterns in separate files, specify them using the include_file and exclude_file parameters.

Here are some simple examples to get you started.

# exclude anything starting with a dot character (.)
exclude    .*

# exclude anything ending with a tilde character (~)
exclude    *~

# include .ssh directory
include    /home/nsharma/.ssh/
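A quick way to see such patterns in action is a small rsync run against scratch directories. Everything below is hypothetical: the file names are invented, and the patterns are adapted from the examples above, which rsnapshot would normally pass to rsync for you.

```shell
# Scratch source and destination trees for the demonstration.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir "$src/.ssh"
touch "$src/.ssh/id_dsa" "$src/notes.txt" "$src/draft.txt~" "$src/.bashrc"

# Includes are listed before excludes, as rsnapshot does, so the .ssh
# directory survives the '.*' exclude that would otherwise drop it.
rsync -a --include='.ssh/' --include='.ssh/**' --exclude='.*' --exclude='*~' \
    "$src/" "$dst/"

ls -A "$dst"    # .ssh and notes.txt copied; .bashrc and draft.txt~ skipped
```

Because rsync evaluates filter rules in order, the first matching rule wins; this is why the include for .ssh must appear before the exclude for dotfiles.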

What should be backed up?

The BACKUP POINTS / SCRIPTS section tells rsnapshot what is to be backed up and where the backup snapshot is stored. This part is very important, so pay attention. We will use rsync over ssh to back up two directories and a file from the system named kiwi, and store the snapshots in a directory named kiwi_backups. The hostname kiwi must resolve to an IP address, either via DNS or the /etc/hosts file.

# two directories (/home/nsharma, /my_articles)
backup    root@kiwi:/home/nsharma/       kiwi_backups/
backup    root@kiwi:/my_articles/        kiwi_backups/

# one file
backup    root@kiwi:/etc/passwd          kiwi_backups/

The configuration above will only work if we can log in (without manually entering passwords) to kiwi as root via ssh. The easiest way to set up such access is by creating “passphraseless” keys with ssh-keygen, and here’s how to do it.

Setting up “passphraseless” keys

Login as root on mango

Use the ssh-keygen program to create a public/private key pair using the Digital Signature Algorithm (DSA)

$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):<HIT ENTER>
Enter same passphrase again:<HIT ENTER>
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
0d:f0:ea:bc:b8:0d:69:c6:6d:e0:59:c2:ee:31:4d:90 root@mango.private.dom

Transfer public key from mango to kiwi using scp

$ scp .ssh/id_dsa.pub root@kiwi.private.dom:mango.pub
root@kiwi.private.dom's password:<TYPE kiwi’s root PASSWORD><HIT ENTER>
id_dsa.pub                    100%  619     0.6KB/s   00:00

Login as root on kiwi

Install mango public key

$ cat mango.pub >> /root/.ssh/authorized_keys

Delete mango.pub file from kiwi

$ rm -f mango.pub

We should now be able to log in to kiwi as root from mango without being prompted for a password.

If you’re uncomfortable with the idea of “passphraseless” keys, then take a look at the ssh-agent man page and a utility called keychain available at www.gentoo.org/proj/en/keychain/index.xml.

Testing our configuration

Before we run rsnapshot for the first time, we should make sure the syntax of our configuration file is correct, and execute a dry run of each interval we have defined.

Checking for correct syntax

$ rsnapshot configtest

rsnapshot will either show you the errors, or a Syntax OK message if there are no errors.

Dry run for each interval

# test run for 'hourly' interval
$ rsnapshot -t hourly

# test run for 'daily' interval
$ rsnapshot -t daily

The output from each command will show you exactly what rsnapshot will do for the specified intervals.

Automating the backup process

Our next and final step is to automate the execution of rsnapshot on mango. We’ll add two entries to the cron scheduler to run rsnapshot every 3 hours on the hour for the hourly interval, and every night at 11:00 pm for the daily interval. Logged in as root on mango, we’ll invoke the crontab program with the edit (-e) option. crontab opens the default editor, as specified by the VISUAL or EDITOR shell environment variables.

$ crontab -e

Now, we add the following entries, then save and close the file.

0 */3 * * * /usr/local/bin/rsnapshot hourly
0 23 * * * /usr/local/bin/rsnapshot daily

That’s it! We now have a fully automated backup system that creates hourly and daily snapshots of our data. For detailed documentation about rsnapshot, check out the rsnapshot man page and the rsnapshot website at www.rsnapshot.org.

Conclusion

Knowing what data to preserve and how to recover it in an emergency is critical to having a solid backup plan. Using the right tools to implement that backup plan is just as important. Take control of your backups with rsnapshot!

Before we finish, here’s an actual run of rsnapshot against the hourly interval.

$ rsnapshot -v hourly
echo 19462 > /var/run/rsnapshot.pid
mkdir -m 0755 -p /usr2/snapshots/hourly.0/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    --include=/home/nsharma/.ssh/ --exclude=.* --exclude=*~ \
    --rsh=/usr/bin/ssh root@kiwi:/home/nsharma/ \
    /usr2/snapshots/hourly.0/kiwi_backups/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    --include=/home/nsharma/.ssh/ --exclude=.* --exclude=*~ \
    --rsh=/usr/bin/ssh root@kiwi:/my_articles/ \
    /usr2/snapshots/hourly.0/kiwi_backups/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    --include=/home/nsharma/.ssh/ --exclude=.* --exclude=*~ \
    --rsh=/usr/bin/ssh root@kiwi:/etc/passwd \
    /usr2/snapshots/hourly.0/kiwi_backups/
touch /usr2/snapshots/hourly.0/
rm -f /var/run/rsnapshot.pid

Abandoning “Skim & Abandon”

Wednesday, August 17th, 2005

The truth is, in developing economies, there is little support for any kind of software, whether it’s open source or proprietary. Even where labor is cheap, the tracks are blocked for the development of a strong software industry. Let’s see why.

“You can see the lack of automation in the banks whose branches still run on paper chits, in the judicial system that holds records on scraps of paper piled on the floors of courtrooms…”

The cost of most commercial software in developing countries is typically undiscounted. The lack of price parity makes commercial software unaffordable for most applications. High cost simply stunts the growth of an ICT infrastructure. You can see this everywhere — in the banks whose branches still run on paper chits, in the judicial system that holds records on scraps of paper piled on the floors of courtrooms, in shops where cash transactions are done without receipts. The examples are endless. While the software required to automate these processes exists in the form of open source solutions as well as proprietary products, the business processes themselves do not exist.

So how can automation be spawned?

Many leaders in developing countries believe that allowing competitive market dynamics to take their course is the best way to ensure the emergence of ICT and the spread of its benefits. Often that means proprietary vendors are encouraged to engage in various public-private partnerships, because they are perceived to be a source of technology expertise and to have the financial strength to ensure follow-through and accountability in major projects.

Unfortunately, in developing economies the leading problem with proprietary solutions is that they seek to produce wealth where there is none. They skim off the top of what is possible and leave the vast majority of practical requirements unresourced. The budget assigned to a typical ICT project is exhausted upfront on license fees and other initial fixed costs. Insufficient funds are left for implementation and deployment. After successfully capturing the initial resources, the project is left to fend for itself and usually fails. Meanwhile, this pattern is repeated over and over again.

A Better Solution

A better solution is to use zero-cost software and invest the savings into building the basic business processes and automation that fit into the local economic reality. This can only be done with open source software.

Only when the pattern of ‘skim-and-abandon’ is broken will automating business processes and government services succeed. Then an ICT support industry can begin to evolve. Support will emerge because it is needed by every operational program. The first projects will kick-start the local service and support industries, and then the reinforcing effects of success will take over.

“OSS is the only real choice for developing economies.”

Establishing practical support in developing countries for open source software requires nothing less than building support simultaneously for basic automation. New business and government processes, friendly to ICT, can only be nurtured by lowering the barriers to entry. Automation has to be made as easy and as cheap as possible. Otherwise the inertia of transforming largely manual processes will be insurmountable. Automation cannot get rolling if it faces initial obstacles posed by high costs or by proprietary solutions.

After witnessing so many ICT projects start and then stall, each launched with great promises yet ending nowhere, the leaders of developing countries must realize that inflated costs and proprietary solutions are a death embrace for the start-up of any automation in their countries.

This is the real reason that open source software is the only choice for developing economies.

At LinuxWorld 2005

Friday, August 12th, 2005

Once again LinuxWorld, held at San Francisco’s still glamorous Moscone West, represents Open Source’s hottest foundry for forging a durable blend of corporation and community. The resulting alloy may seem less noble than many may wish for, but the show did provide a peek into the progress of technology funded by the deep pockets of significant business interests.

A highlight of the event was the ‘.org pavilion’ located on the second floor lobby and separated from the main exposition area. Showing off many of the projects that form the foundation of the open source community, the .org pavilion formed a hub of social and technology networking. Projects included Fedora, Eclipse, Gentoo, Debian, Mozilla, X.org, KDE, EFF, and FSG.

Demonstrating the global diversity of Linux users and sharing the floor with the .org pavilion, a China Linux beachhead was organized by the Beijing Software Industry Productivity Center (BSIPC). This China pavilion was promoting Beijing as the new Linux Capital of Asia. A variety of Linux-based software outsourcing opportunities as well as products were being marketed by vendors such as Red Flag Software, Sun Wah Linux, Redflag Chinese 2000 Software, and Beijing Co-Create Open Source Software. Unfortunately, if Beijing intends to compete successfully in the markets of the West, it must improve its English language skills.

The centerpoint of LinuxWorld, ‘the Expo’, featured large booths from the usual crowd of commercial organizations including Red Hat, MySQL, IBM, AMD, Sun, SAP, and many others. Booths buzzed with demos and talks, displayed big-iron hardware, and showcased both open and proprietary applications as well as support services. Clustering, virtualization, and system and application management were hot. But hold on… right in the middle of this hubbub of activity, a wild-west-style rodeo ride drew in the curious and convinced the brave among them to ride the rowdy mechanical bull. Linux is truly cowboy country.

Among the best presentations of the show, Eben Moglen’s talk on the future of GPL 3 highlighted the work required to modernize the GPL and get version 3 adopted amidst tremendous expectations and pressures from industry and the community.

Significant announcements at the show included the creation of OSDL’s Patent Commons project, Novell’s opensuse.org project, and the delay of Red Hat’s Fedora Foundation.

The ‘Golden Penguin Bowl’ trivia game themed as ‘good vs. evil’ (played as ‘Google vs. Microsoft’) was moderated by Jeremy Allison of Samba fame. The game highlighted the fun side of the conference. Fortunately, Google’s ‘Geek Squad’ won the day.

From hot hardware to hot ideas, LinuxWorld often transcended the simple marketing hyperbole common at conferences such as this. Still, at other times, some of the largest sponsors oversold their wares and services and lost credibility. As with everything, there’s always room for improvement. Perhaps next time, vendors will strike an even better balance between sales and genuine engagement. After all, Open Source and Linux and LinuxWorld are all about collaboration.

The Future is Open at OSCON

Monday, August 8th, 2005

The O’Reilly Open Source Convention (OSCON) in Portland this year drew more than 2000 hackers, geeks and entrepreneurs. The vastness of the Portland Convention Center seemed to engulf the close-knit open source community and made the event seem both larger and sparser than in previous years. Nonetheless, the excitement at the conference continued to reflect the phenomenal growth of open source.

High-quality tech talks have always been a hallmark of OSCON. Excellent tutorials and sessions highlighted the most popular technologies — Ruby on Rails, Apache Harmony, AJAX, XUL and SWIK. There was something for everyone — topics ranged from newbie to advanced. Most presentations were informative and well done, but there were so many tracks — Linux, Apache, XML, Databases, PHP, Python, Perl, Java, Ruby, Security and Emerging Topics — that I felt I had too much to do in too little time.

“Good ideas flow up from the bottom rather than flowing down from the top” said Paul Graham in his keynote. The entrepreneur turned writer noted that people work harder on things they like, and that open source and blogging are both examples of people having fun in their work. Paul also lamented that businesses today are run more like communist states than free markets. David Heinemeier Hansson, in his keynote on Ruby on Rails, talked about what made his project a success: minimal configuration, no recompilation needed to reflect changes, and tight integration across components from the front end (GUI) to the back end (databases).
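Hansson's "minimal configuration" principle — better known as convention over configuration — can be sketched in a few lines. The snippet below is a hypothetical Python mini-ORM base class (illustrative only, not Rails code): because sensible defaults are derived from class names, the user writes no mapping configuration at all.

```python
# Toy illustration of convention over configuration: a model base class
# that derives its database table name from the class name, so a typical
# model subclass needs no explicit mapping at all.

class Model:
    @classmethod
    def table_name(cls):
        # Convention: lowercase the class name and pluralize it naively.
        return cls.__name__.lower() + "s"

class Article(Model):
    """No configuration needed: the table name 'articles' is inferred."""
    pass

print(Article.table_name())  # -> "articles"
```

Rails applies the same idea far more broadly (table names, primary keys, file layout, URL routing), which is much of what made a minimally configured application possible.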

A business track titled ‘Open Source Business Review’, aimed at CIOs and managers, was new to the convention this year. Topics included enterprise barriers to OSS adoption, applications, licensing, large-scale project collaboration, and risk management. A panel on European software patents with Hartmut Pilch of FFII, Marten Mickos of MySQL and Michael Tiemann of Red Hat emphasized that despite the FOSS community’s success in temporarily halting software patents in Europe, this win represented just one skirmish in what will be a long fight.

OSCON is great for catching up on the latest buzz in ‘hallway talks’ — discussing the new Mozilla Corporation, what’s happening on Harmony, hot startups in the Valley, Google’s ‘Summer of Code’, 10 years of PHP, new features in Apache 2.2 and Perl, and much more. Informal meetings and social networking are an essential part of everyone’s OSCON experience.

The birds-of-a-feather (BOF) meetings in the evenings were another integral component of OSCON. Usually based on popular topics, BOFs provide a forum for exciting and often productive community interaction.

On the last day of OSCON I participated in a panel on “Women in OSS”. It was intriguing to explore why fewer than 2 percent of open source participants are women, despite women’s greater participation in technology generally. Food for thought and action.

Ready to Invest in Open Source CMS?

Saturday, August 6th, 2005

Commoditization has invaded one of the wealthiest segments of information technology. The $2.5 billion enterprise content management market is being turned on its head because commoditization narrows profit margins. The good news, however, is that recent developments in the market illustrate how open source can provide a solid foundation for investment going forward.

The Current Scene

Content management has matured as a market over the last ten years. Today the field is characterized by plenty of activity at the bottom and, at the same time, a continuing influx of money at the top. This reflects enduring interest in content management solutions across a wide spectrum of customer requirements. It also demonstrates a capacity to generate wealth that makes the market an attractive source of investment opportunities.

As a VC where do I place my bets?

A venture capitalist (VC) looks for innovation that can be leveraged into rapidly growing business opportunities. Is Alfresco, the latest open source content management system, such an innovation? Accel Partners of London seems to think so. In June, Accel ploughed $2 million into Alfresco.

“A new investment mantra for the VC is taking shape.”

Alfresco’s business model is compelling. It combines the strength of high-quality open source code with the founders’ industry expertise and business savvy in the enterprise market. It strikes the market at a point different from other open source CMS projects — entering from the top down rather than from the bottom up. Its target is to compete with high-priced, proprietary products from the likes of Oracle and Documentum right off the bat. In short, Alfresco combines sophisticated open source software, recognized vertical expertise, and a top-down sales approach.

How is this model different from the others?

The proprietary vendors are expensive and have saturated the high end of the market with their current offerings. The open source vendors are inexpensive but lack critical mass as well as many of the high-end features needed to win enterprise customers. In the open source CMS market, many projects support commercial services but lack the sophistication of the proprietary vendors, so their ability to drive commercial value is limited. The Alfresco model seeks to remedy these shortcomings.

Alfresco’s model offers the best of both worlds — an inexpensive toolset with high-end features along with proven expertise to provide services for customization, integration and deployment that are commonly demanded by the enterprise.

In the CMS market, sales of expensive licenses and growth of associated services have been limited by new trends in the overall software industry. Customers are beginning to enjoy a growing freedom of choice, in part because of the commercialization of open source solutions. The same trend that has occurred in the market for server operating systems (Linux, *BSDs) and for relational databases (MySQL, PostgreSQL) is now happening in the content management market.

Taking advantage of the demand for radically cheaper solutions and the commoditization of software in a maturing market, the Alfresco open source model provides the same solution value at a lower price. By operating as a lower-margin business that lets customers experiment with and adopt the product more easily, Alfresco can expand into opportunities in the middle tier. This middle tier is inaccessible to the high-margin proprietary vendors and equally inaccessible to the less sophisticated open source projects.

New Game

The investment mantra for the VC is simple. The companies to invest in are those that will survive in a proven but maturing market by extracting value from the twin dynamics of software commoditization and expanding customer choice. The returns on investment for the successful players can be high because market penetration expands many times over. Meanwhile the old money in the market fails to grow and is even wasted as legacy businesses spend more and more to prop up their shrinking high-margin market segment.

For these legacy software markets, a business model that works today is to use low-cost, innovative open source software as the technology base and combine it with strong sales and deployment management expertise to provide enterprise level services. It is working for Red Hat, Novell, IBM, HP and others. It is working for MySQL. Let us see if it works for Alfresco. As we build our software investment portfolios, finding opportunities that reflect this new model may just be the key to high returns.

© 2000-2010 Technetra. All rights reserved.