This paper provides quantitative data showing that, in many cases, using open source software / free software is a reasonable or even superior approach to using its proprietary competition, according to various measures. This paper examines market share, reliability, performance, scalability, security, and total cost of ownership. It also has sections on non-quantitative issues, unnecessary fears, usage reports, and other sites providing related information, and it ends with some conclusions. You can view this paper at http://www.dwheeler.com/oss_fs_why.html (HTML format). Palm PDA users can view it in Plucker format (you will also need Plucker to read it). Old archived copies are also available.
Open Source Software / Free Software (OSS/FS) has risen to great prominence. Briefly, OSS/FS programs are programs whose licenses give users the freedom to run the program for any purpose, to study and modify the program, and to freely redistribute copies of either the original or modified program (without having to pay royalties to previous developers).
The goal of this paper is to show that you should consider using OSS/FS when you’re looking for software, based on quantitative measures. Some sites provide a few anecdotes on why you should use OSS/FS, but for many that’s not enough information to justify using OSS/FS. Instead, this paper emphasizes quantitative measures (such as experiments and market studies) showing why using OSS/FS products is, in a number of circumstances, a reasonable or even superior approach. I should note that while I find much to like about OSS/FS, I’m not a rabid advocate; I use both proprietary and OSS/FS products myself. Vendors of proprietary products often work hard to find numbers to support their claims; this paper provides a useful antidote of hard figures to aid in comparing proprietary products to OSS/FS.
Note that this paper’s goal is not to show that all OSS/FS is better than all proprietary software. Certainly, there are many who believe this is true from ethical, moral, or social grounds. However, no numbers could prove such broad statements. Instead, I’ll simply compare commonly-used OSS/FS software with commonly-used proprietary software, to show that at least in certain situations and by certain measures, some OSS/FS software is at least as good or better than its proprietary competition. Of course, some OSS/FS software is technically poor, just as some proprietary software is technically poor, and even very good software may not fit your specific needs. But although most people understand the need to compare proprietary products before using them, many people fail to even consider OSS/FS products. This paper is intended to explain why acquirers should consider OSS/FS alternatives.
I’ll emphasize the GNU/Linux operating system (which some abbreviate as “Linux”) and the Apache web server, since these are some of the most visible OSS/FS projects. I’ll also primarily compare OSS/FS software to Microsoft’s products (such as Windows and IIS), since Windows has a significant market share and Microsoft is one of proprietary software’s strongest proponents. I’ll mention Unix systems in passing as well, though the situation with Unix is more complex; many Unix systems include a number of OSS/FS components or software primarily derived from OSS/FS components. Thus, comparing proprietary Unix systems to OSS/FS systems (when examined as entire systems) is often not as clear-cut. I use the term “Unix-like” to mean systems intentionally similar to Unix; both Unix and GNU/Linux are “Unix-like” systems. The most recent Apple Macintosh operating system (Mac OS X) presents the same kind of complications; older versions of MacOS were entirely proprietary, but Apple’s operating system has been redesigned so that it’s now based on a Unix system with a substantial contribution from OSS/FS programs. Indeed, Apple is now openly encouraging collaboration with OSS/FS developers. I include data over a series of years, not just the past year; I believe that all relevant data should be considered when making a decision, instead of arbitrarily ignoring older data, and the older data shows that OSS/FS has a history of many positive traits.
You can get a more detailed explanation of the terms “open source software” and “Free Software”, as well as related information, from my list of Open Source Software / Free Software (OSS/FS) references at http://www.dwheeler.com/oss_fs_refs.html. Note that those who use the term “open source software” tend to emphasize technical advantages of such software (such as better reliability and security), while those who use the term “Free Software” tend to emphasize freedom from control by another and/or ethical issues. The opposite of OSS/FS is “closed” or “proprietary” software. Software for which the source code can be viewed, but cannot be modified and redistributed without further limitation (e.g., “source viewable” or “open box” software, including “shared source” and “community” licenses), is not considered here, since it doesn’t meet the previously-given definition of OSS/FS. Note that many OSS/FS programs are commercial programs, so don’t make the mistake of calling OSS/FS software “non-commercial.” Almost no OSS/FS programs are in the “public domain” (which has a specific legal meaning), so avoid that term as well. Other alternative terms for OSS/FS software include “libre software” (where libre means free as in freedom), free/libre and open source software (FLOSS), open source / Free Software (OS/FS), open-source software (indeed, “open-source” is often used as a general adjective), “freed software,” and even “public service software” (since often these software projects are designed to serve the public at large).
Below is data discussing market share, reliability, performance, scalability, security, and total cost of ownership. I close with a brief discussion of non-quantitative issues, unnecessary fears, usage reports, other sites providing related information, and conclusions.
Many people believe that a product is only a winner if it has significant market share. This is lemming-like, but there’s some rationale for this: products with big market shares get applications, trained users, and momentum that reduces future risk. Some writers argue against OSS/FS or GNU/Linux as “not being mainstream”, but if their use is widespread then such statements reflect the past, not the present. There’s excellent evidence that OSS/FS has significant market share in numerous markets:
More recently, Netcraft has been trying to separately count “active” web sites. The problem is that many web sites have been created that are simply “placeholder” sites (i.e., their domain names have been reserved but they are not being used); such sites are termed “inactive.” Netcraft’s count of only the active sites is a more relevant figure, since this shows the web server selected by those who choose to develop a web site. When counting active sites, Apache does even better; in September 2002, Apache had 66.04% of the web server market, Microsoft had 24.18%, iPlanet had 1.57%, and Zeus had 1.34%.
Netcraft’s September 2002 survey also reported on websites based on their “IP address” instead of the host name; this has the effect of removing “parked” (unused addresses), computers used to serve multiple sites, and sites with multiple names. When counting by IP address, Apache has shown a slow increase from 51% at the start of 2001 to 54%, while Microsoft was unchanged at 35%.
The same overall result has been determined independently by E-soft - their report on web server market share published October 1, 2002 surveyed 9,045,027 web servers in September 2002 and found that Apache was #1 (66.75%), with Microsoft IIS being #2 (21.83%). E-soft also reports specifically on secure servers (web servers supporting SSL/TLS, such as e-commerce sites), and even here Apache has a commanding 51.26% market share, as compared to Microsoft’s 34.85%, Netscape/iPlanet’s 5.68%, and Stronghold’s 2.71%. Indeed, since Stronghold is a repackaging of Apache, Apache’s real market share is at least 53.97%.
Obviously these figures fluctuate monthly; see Netcraft and E-soft for the latest survey figures.
Therefore, Netcraft developed a technique that indicates the number of actual computers being used as Web servers, together with the operating system and web server software used. The technique is based on arranging a number of IP addresses to send packets to Netcraft nearly simultaneously; low level TCP/IP characteristics can be used to work out if those packets originate from the same computer by checking for similarities in a number of TCP/IP protocol header fields. This is a statistical approach, so many visits to the site are used over a month to build up sufficient certainty. This technique has its weaknesses; Round robin DNS, reverse web proxies, some load balancing/failover products like Cisco LocalDirector and BIG-IP, and some connection level firewalls hide a number of web servers behind a hostname. Only a single “front” web server will be counted, and with some of these products the operating system detected is that of the “front” device rather than the web server behind. Still, Netcraft believes that the error margins world-wide are well within the order of plus or minus 10%, and this is the best available survey of such data.
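Netcraft has not published its exact algorithm, so the following is only a rough sketch (in Python, using hypothetical, made-up header values) of the general idea: if several IP addresses consistently show the same low-level TCP/IP characteristics, they probably belong to the same physical computer.

```python
# Rough sketch (not Netcraft's actual code): group IP addresses whose
# low-level TCP/IP characteristics match, on the theory that identical
# fingerprints suggest the same physical computer.
from collections import defaultdict

# Hypothetical observations: IP address -> (initial TTL, TCP window size, IP ID step)
observations = {
    "192.0.2.10": (64, 5840, 1),
    "192.0.2.11": (64, 5840, 1),   # same fingerprint as .10, so likely the same machine
    "192.0.2.50": (128, 64240, 2),
}

def group_by_fingerprint(obs):
    """Return lists of IP addresses that share an identical fingerprint."""
    groups = defaultdict(list)
    for ip, fingerprint in obs.items():
        groups[fingerprint].append(ip)
    return list(groups.values())

print(group_by_fingerprint(observations))
# e.g. [['192.0.2.10', '192.0.2.11'], ['192.0.2.50']]
```

In practice many more observations per address would be needed (hence Netcraft's month of repeated visits), but the grouping step looks essentially like this.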
Before presenting the data, it’s important to explain Netcraft’s system for dating the data. Netcraft dates their information based on the web server surveys (not the publication date), and they only report operating system summaries from an earlier month. Thus, the survey dated “June 2001” was published in July and covers operating system survey results of March 2001, while the survey dated “September 2001” was published in October and covers the operating system survey results of June 2001.
Here’s a summary of Netcraft’s study results:
OS group | Percentage (March) | Percentage (June) | Composition |
---|---|---|---|
Windows | 49.2% | 49.6% | Windows 2000, NT4, NT3, Windows 95, Windows 98 |
[GNU/]Linux | 28.5% | 29.6% | [GNU/]Linux |
Solaris | 7.6% | 7.1% | Solaris 2, Solaris 7, Solaris 8 |
BSD | 6.3% | 6.1% | BSDI BSD/OS, FreeBSD, NetBSD, OpenBSD |
Other Unix | 2.4% | 2.2% | AIX, Compaq Tru64, HP-UX, IRIX, SCO Unix, SunOS 4 and others |
Other non-Unix | 2.5% | 2.4% | MacOS, NetWare, proprietary IBM OSs |
Unknown | 3.6% | 3.0% | not identified by Netcraft operating system detector |
Much depends on what you want to measure. Several of the BSDs (FreeBSD, NetBSD, and OpenBSD) are OSS/FS as well, so at least a portion of the 6.1% for BSD should be added to GNU/Linux’s 29.6% to determine the percentage of OSS/FS operating systems being used as web servers. Thus, it’s likely that approximately one-third of web serving computers use OSS/FS operating systems. There are also regional differences; for example, GNU/Linux leads Windows in Germany, Hungary, the Czech Republic, and Poland.
Well-known web sites using OSS/FS include Google (GNU/Linux) and Yahoo (FreeBSD).
If you really want to know about the web server market breakdown of “Unix vs. Windows,” you can find that also in this study. All of the various Windows operating systems are rolled into a single number (even Windows 95/98 and Windows 2000/NT4/NT3 are merged together, although they are fundamentally very different systems). Merging all the Unix-like systems in a similar way produces a total of 44.8% for Unix-like systems (compared to Windows’ 49.2%) in March 2001.
Note that these figures would probably be quite different if they were based on web addresses instead of physical computers; in such a case, the clear majority of web sites are hosted by Unix-like systems. As stated by Netcraft, “Although Apache running on various Unix systems runs more sites than Windows, Apache is heavily deployed at hosting companies and ISPs who strive to run as many sites as possible on a single computer to save costs.”
Here’s how the various operating systems fared in the study:
Operating System | Market Share | Composition |
---|---|---|
GNU/Linux | 28.5% | GNU/Linux |
Windows | 24.4% | All Windows combined (including 95, 98, NT) |
Sun | 17.7% | Sun Solaris or SunOS |
BSD | 15.0% | BSD Family (FreeBSD, NetBSD, OpenBSD, BSDI, ...) |
IRIX | 5.3% | SGI IRIX |
A portion of the BSD family is also OSS/FS, so the OSS/FS operating system total is even higher; if over 2/3 of the BSDs are OSS/FS, then the total share of OSS/FS would be about 40%. Advocates of Unix-like systems will notice that the majority (around 66%) were running Unix-like systems, while only around 24% ran a Microsoft Windows variant.
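To make the arithmetic behind that “about 40%” figure explicit, here is the back-of-the-envelope calculation; the 2/3 fraction is the assumption stated above, not a measured number:

```python
# Back-of-the-envelope estimate of the OSS/FS share in this study.
gnu_linux = 28.5             # percent, from the table above
bsd_family = 15.0            # percent, from the table above
oss_fraction_of_bsd = 2 / 3  # assumption stated in the text

oss_fs_total = gnu_linux + bsd_family * oss_fraction_of_bsd
print(f"Estimated OSS/FS share: {oss_fs_total:.1f}%")  # 38.5%, i.e. roughly 40%
```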
IDC released a similar study on January 17, 2001 titled “Server Operating Environments: 2000 Year in Review”. On the server, Windows accounted for 41% of new server operating system sales in 2000, growing by 20% - but GNU/Linux accounted for 27% and grew even faster, by 24%. Other major Unixes had 13%.
IDC’s 2002 report found that Linux held its own in 2001 at 25%. All of this is particularly intriguing since GNU/Linux had 0.5% of the market in 1995, according to a Forbes quote of IDC. Data such as these (and the TCO data shown later) have inspired statements such as this one from IT-Director on November 12, 2001: “Linux on the desktop is still too early to call, but on the server it now looks to be unstoppable.”
These figures do not include all server systems installed that year; some Windows systems are not paid for (they’re illegally pirated), and OSS/FS operating systems such as GNU/Linux and the BSDs are often downloaded and installed on multiple systems (since it’s legal and free to do so).
Later data seems to confirm this, for example, the Japanese Linux white paper 2003 found that 49.3% of IT solution vendors support Linux in Japan.
The survey has two parts, user and vendor. In “Part I : User enterprise”, they surveyed 729 enterprises that use servers. In “Part II : Vendor enterprise”, they surveyed 276 vendor enterprises who supply server computers, including system integrators, software developers, IT service suppliers, and hardware resellers. The most interesting results are those that discuss the use of Linux servers in user enterprises, the support of Linux servers by vendors, and Linux server adoption in system integration projects.
First, the use of Linux servers in user enterprises:
System | 2002 | 2001 |
---|---|---|
Linux server | 64.3% | 35.5% |
Windows 2000 Server | 59.9% | 37.0% |
Windows NT Server | 64.3% | 74.2% |
Commercial Unix server | 37.7% | 31.2% |
And specifically, here’s the average use in 2002:
System | Ave. units | # samples |
---|---|---|
Linux server | 13.4 | N=429 (5.3 in 2001) |
Windows 2000 Server | 24.6 | N=380 |
Windows NT Server | 4.5 | N=413 |
Commercial Unix server | 6.9 | N=233 |
Second, note the support of GNU/Linux servers by vendors:
System | Year 2002 Support |
---|---|
Windows NT/2000 Server | 66.7% |
Linux server | 49.3% |
Commercial Unix server | 38.0% |
The vendors also gave their reasons for supporting GNU/Linux servers:

Reason for supporting Linux | Year 2002 |
---|---|
Increase of importance in the future | 44.1%
Requirement from their customers | 41.2% |
Major OS in their market | 38.2% |
Free of licence fee | 37.5% |
Most reasonable OS for their purpose | 36.0% |
Open source | 34.6% |
High reliability | 27.2% |
Third, note the rate of Linux server adoption in system integration projects:
Project Size (Million Yen) | Linux 2002 | Linux 2001 | Win2000 2002 | Unix 2002 |
---|---|---|---|---|
0-3 | 62.7% | 65.7% | 53.8% | 15.4% |
3-10 | 51.5% | 53.7% | 56.3% | 37.1% |
10-50 | 38.3% | 48.9% | 55.8% | 55.8% |
50-100 | 39.0% | 20.0% | 45.8% | 74.6% |
100+ | 24.4% | 9.1% | 51.1% | 80.0% |
This makes sense given that GNU/Linux is a more recent competitor to Windows and Unix. No (rational) organization is going to commit its largest projects to a new server system immediately; instead, they will try it on small projects, use it more often on small projects if that succeeds, and then gradually use the product on larger projects if it appears to be successful and scalable. The trend here shows GNU/Linux already dominant on small projects, and growing rapidly on the larger ones.
Expected GNU/Linux Use | Small Business | Midsize Business | Large Business | Total |
---|---|---|---|---|
50% increase | 21.0% | 16% | 19.0% | 19% |
10-25% increase | 30.5% | 42% | 56.5% | 44% |
No growth | 45.5% | 42% | 24.5% | 36% |
Reduction | 3.0% | 0% | 0% | 1% |
According to the June 2000 IDC survey of 1999 licenses for client machines, GNU/Linux had 80% as many client shipments in 1999 as Apple’s MacOS (5.0% for Mac OS, 4.1% for GNU/Linux). More recent figures in 2002 suggest that GNU/Linux has 1.7% or 3.8% of the client OS market (depending on which quote you believe). Obviously, while this shows that there are many users (because there are so many client systems), this is still small compared to Microsoft’s effective monopoly on the client operating system market.
But this should not be surprising, because before 2002 OSS/FS systems like GNU/Linux could not really meet the requirements for a client system. Few users can even consider buying a client system without basic client applications, since that system won’t meet their fundamental requirements. As a practical matter, client systems must be compatible with the market leader (e.g., the office suite must be able to read and write Microsoft Office formats); before 2002 most available products could not do this well. Finally, for systems like GNU/Linux to compete with their competitors, the basic client applications and environment have to be OSS/FS as well, and this is a point not often understood. There have been proprietary basic client applications for GNU/Linux for several years, but they don’t really help GNU/Linux; a GNU/Linux system combined with proprietary basic client applications still lacks the freedoms and low cost of purely OSS/FS systems, and the combination has to compete with established proprietary systems which have many more applications available to them. This doesn’t mean that GNU/Linux can’t support proprietary programs; certainly some people will buy proprietary basic client applications, and many people have already decided to buy many other kinds of proprietary applications and run them on a GNU/Linux system. However, few will find that a GNU/Linux system with proprietary basic client applications has an advantage over its competition. After all, the result is still proprietary, and since there are fewer desktop applications on GNU/Linux, many capabilities have been lost, little has been gained, and the switching costs will dwarf those minute gains.
However, the situation is changing dramatically, due to three factors: OSS/FS basic client software is now available, Microsoft is raising prices, and governments want open systems:
There are other plausible alternatives for client applications as well, such as Evolution (an excellent mail reader), Abiword (a lighter-weight but less capable word processor which also released its version 1.0 in 2002), Gnumeric (a spreadsheet), and KOffice (an office suite).
However, I will emphasize Mozilla and Open Office, for two reasons. First, they also run on Microsoft Windows, which makes it much easier to transition users from the competition (this enables users to migrate a step at a time, instead of making a single massive change). Second, they are full-featured, including compatibility with Microsoft’s products; many users want to use fully-featured products since they don’t want to switch programs just to get a particular feature. In short, it looks like there are now several OSS/FS products that have begun to rival their proprietary competitors in both usability and in the functionality that people need, including some very capable programs.
Gartner’s review of Star Office (Sun’s variant of Open Office) also noted that Microsoft’s recent licensing policies may accelerate moving away from Microsoft. As Gartner notes, “This [new license program] has engendered a lot of resentment among Microsoft’s customers, and Gartner has experienced a marked increase in the number of clients inquiring about alternatives to Microsoft’s Office suite... enterprises are realizing that the majority of their users are consumers or light producers of information, and that these users do not require all of the advanced features of each new version of Office... unless Microsoft makes significant concessions in its new office licensing policies, Sun’s StarOffice will gain at least 10 percent market share at the expense of Microsoft Office by year-end 2004 (0.6 probability).” They also note that “Because of these licensing policies, by year-end 2003, more than 50 percent of enterprises will have an official strategy that mixes versions of office automation products - i.e., between multiple Microsoft Office versions or vendor products (0.7 probability).”
Indeed, the advantages of OSS/FS to governments are clear, especially to non-U.S. governments. No government wants its computing infrastructure controlled by a single company (and outside the U.S., a foreign company at that). Jiang Guangzhi, director of a software development center in Shanghai, emphasized that the Chinese government did not want one company “to manipulate or dominate the Chinese market.” IBM signed a Linux deal with Germany; Germany’s Interior Minister, Otto Schily, said the move would help cut costs, improve security in the nation’s computer networks, and lower dependence on a single supplier. Ralph Nader’s Consumer Project on Technology gives reasons the U.S. government should encourage OSS/FS. Many countries favor or are considering favoring OSS/FS in some way, such as Peru, the UK, South Africa, and Taiwan. An older but broad survey was published in 2001 by CNet.
Indeed, so many governments have begun enacting preferences for OSS/FS that Microsoft has sponsored an organization called the Initiative for Software Choice. This organization makes many nice-sounding statements, but it appears that the real purpose of this organization is to forbid governments from considering software licenses when they procure software and to encourage standards that lock out OSS/FS. An opposing group, founded by Bruce Perens, is Sincere Choice.org, which advocates that there be a “fair, competitive market for computer software, both proprietary and Open Source.” Bruce Perens has published an article discussing why “Software Choice” is not what it first appears to be.
There are some interesting hints that GNU/Linux is already starting to gain on the client. Some organizations, such as TrustCommerce and the city of Largo, Florida, report that they’ve successfully transitioned to using Linux on the desktop.
There’s already some evidence that others anticipate this; Richard Thwaite, director of IT for Ford Europe, stated in 2001 that an open source desktop is their goal, and that they expect the industry to eventually go there (he controls 33,000 desktops, so this would not be a trivial move). It could be argued that this is just a ploy for negotiation with Microsoft - but such ploys only work if they’re credible.
There are other sources of information on OSS/FS or GNU/Linux for clients. Desktoplinux.com is a web site devoted to the use of GNU/Linux on the desktop; they state that “We believe Linux is ready now for widespread use as a desktop operating system, and we have created this website to help spread the word and accelerate the transition to a more open desktop, one that offers greater freedom and choice for both personal and business users.”
Indeed, it appears that many users are considering such a transition. ZDNet published survey results on August 22, 2002, which asked “Would your company switch its desktop PCs from Windows to Linux if Windows apps could run on Linux?” Of the more than 15,000 respondents, 58% said they’d make the switch immediately; another 25% said they’d consider dumping Windows in favor of Linux within a year. While all such surveys need to be taken with a grain of salt, still, these are not the kind of responses you would see from users happy with their current situation. They also noted that ZDNet Australia found that 55% of the surveyed IT managers were considering switching from Microsoft products.
There are a lot of anecdotal stories that OSS/FS is more reliable, but finally there is quantitative data confirming that mature OSS/FS programs are more reliable:
OSS/FS had higher reliability by this measure. It states in section 2.3.1 that:
It is also interesting to compare results of testing the commercial systems to the results from testing “freeware” GNU and Linux. The seven commercial systems in the 1995 study have an average failure rate of 23%, while Linux has a failure rate of 9% and the GNU utilities have a failure rate of only 6%. It is reasonable to ask why a globally scattered group of programmers, with no formal testing support or software engineering standards can produce code that is more reliable (at least, by our measure) than commercially produced code. Even if you consider only the utilities that were available from GNU or Linux, the failure rates for these two systems are better than the other systems.
There is evidence that Windows applications have similar reliability to the proprietary Unix software (i.e., less reliable than the OSS/FS software). A later paper, “An Empirical Study of the Robustness of Windows NT Applications Using Random Testing”, found that with Windows NT GUI applications, they could crash 21% of the applications they tested, hang an additional 24% of the applications, and could crash or hang all the tested applications when subjecting them to random Win32 messages. Thus, there’s no evidence that proprietary Windows software is more reliable than OSS/FS by this measure. Yes, Windows has progressed since that time - but so have the OSS/FS programs.
Although this experiment was done in 1995, nothing that’s happened since suggests that proprietary software has become much better than OSS/FS programs since then. Indeed, since 1995 there’s been an increased interest and participation in OSS/FS, resulting in far more “eyeballs” examining and improving the reliability of OSS/FS programs.
The fuzz paper’s authors found that proprietary software vendors generally didn’t fix the problems identified in an earlier version of their paper, and found that concerning. In contrast, Scott Maxwell led an effort to remove every flaw identified in the OSS/FS software in the 1995 fuzz paper, and eventually fixed every flaw. Thus, the OSS/FS community’s response shows why, at least in part, OSS/FS programs have such an edge in reliability; if problems are found, they’re often fixed. Even more intriguingly, the person who spearheaded ensuring that these problems were fixed wasn’t an original developer of the programs - a situation only possible with OSS/FS.
Now be careful: OSS/FS is not magic pixie dust; beta software of any kind is still buggy! However, the 1995 experiment compared mature OSS/FS with mature proprietary software, and the OSS/FS software was more reliable under this measure.
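The fuzz approach itself is easy to describe. The sketch below is not the University of Wisconsin fuzz tool, just a minimal illustration of the idea it embodies: feed random bytes to a command-line utility (the utility name here is a placeholder) and record whether it crashes or hangs.

```python
# Minimal fuzz-style test sketch (illustrative only, not the original fuzz tool):
# feed random bytes to a utility on stdin and see whether it crashes (is killed
# by a signal) or hangs (exceeds a timeout).
import os
import subprocess

def fuzz_once(command, nbytes=10000, timeout=10):
    data = os.urandom(nbytes)  # random, unstructured input
    try:
        result = subprocess.run(command, input=data, timeout=timeout,
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    except subprocess.TimeoutExpired:
        return "hang"
    return "crash" if result.returncode < 0 else "ok"  # negative return code = killed by signal

if __name__ == "__main__":
    # "./some-utility" is a placeholder for whatever program is being tested.
    results = [fuzz_once(["./some-utility"]) for _ in range(100)]
    failures = sum(r != "ok" for r in results)
    print(f"failure rate: {failures}%")  # failures out of 100 trials
```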
Downtime | Apache | Microsoft | Netscape | Other |
---|---|---|---|---|
September | 5.21 | 10.41 | 3.85 | 8.72 |
October | 2.66 | 8.39 | 2.80 | 12.05 |
November | 1.83 | 14.28 | 3.39 | 6.85 |
Average | 3.23 | 11.03 | 3.35 | 9.21 |
It’s hard not to notice that Apache (the OSS web server) had the best results over the three-month average (and with better results over time, too). Indeed, Apache’s worst month was better than Microsoft’s best month. I believe the difference between Netscape and Apache is statistically insignificant - but this still shows that the freely-available OSS/FS solution (Apache) has a reliability at least as good as the most reliable proprietary solution. The report does note that this might not be solely the fault of the software’s quality, since there were several Microsoft IIS sites that had short interruptions at the same time each day (suggesting regular restarts). However, this still raises the question: why did the IIS sites require so many more regular restarts than the Apache sites? Every outage, even if preplanned, results in a service loss (and for e-commerce sites, a potential loss of sales).
As with all surveys, this one has weaknesses, as discussed in Netcraft’s Uptime FAQ. Their techniques for identifying web servers and operating systems can be fooled. Only systems for which Netcraft was sent many requests were included in the survey (so it’s not “every site in the world”). Any site that is requested through the “what’s that site running” query form at Netcraft.com is added to the set of sites that are routinely sampled; Netcraft doesn’t routinely monitor all 22 million sites it knows of for performance reasons. Many operating systems don’t provide uptime information and thus can’t be included; this includes AIX, AS/400, Compaq Tru64, DG/UX, MacOS, NetWare, NT3/Windows 95, NT4/Windows 98, OS/2, OS/390, SCO UNIX, Sony NEWS-OS, SunOS 4, and VM. Thus, this uptime counter can only include systems running on BSD/OS, FreeBSD (but not the default configuration in versions 3 and later), recent versions of HP-UX, IRIX, GNU/Linux 2.1 kernel and later (except on Alpha processor based systems), MacOS X, recent versions of NetBSD/OpenBSD, Solaris 2.6 and later, and Windows 2000. Note that Windows NT systems cannot be included in this survey (because their uptimes couldn’t be counted). Windows 2000 systems’ data are included in the source data for this survey, but they have a different problem. Windows 2000 had little hope of being included in the August 2001 list, because the 50th system in the list had an uptime of 661 days, and Windows 2000 had only been launched about 17 months (about 510 days) earlier. Note that HP-UX, GNU/Linux (usually), Solaris and recent releases of FreeBSD cycle back to zero after 497 days, exactly as if the machine had been rebooted at that precise point. Thus it is not possible to see an HP-UX, GNU/Linux (usually), or Solaris system with an uptime measurement above 497 days, and in fact their uptimes can be misleading (they may be up for a long time, yet not show it). There is yet one other weakness: if a computer switches operating systems later, the long uptime is credited to the new operating system. Still, this survey does compare Windows 2000, GNU/Linux (up to 497 days usually), FreeBSD, and several other operating systems, and OSS/FS does quite well.
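The 497-day limit is commonly explained by the uptime counter itself: on these systems it is effectively a 32-bit value ticking 100 times per second, so it wraps around after about 497 days. A quick check of that arithmetic:

```python
# Why these uptime figures wrap at about 497 days: a 32-bit counter that
# ticks 100 times per second overflows after 2**32 ticks.
ticks = 2 ** 32                     # capacity of a 32-bit counter
ticks_per_second = 100              # tick rate on these systems (an assumption)
seconds = ticks / ticks_per_second
days = seconds / (60 * 60 * 24)
print(f"{days:.1f} days")           # about 497.1 days
```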
It could be argued that perhaps systems on the Internet that haven’t been rebooted for such a long time might be insignificant, half-forgotten, systems. For example, it’s possible that security patches aren’t being regularly applied, so such long uptimes are not necessarily good things. However, a counter-argument is that Unix and Linux systems don’t need to be rebooted as often for a security update, and this is a valuable attribute for a system to have. Even if you accepted that unproven claim, it’s certainly true that there are half-forgotten Windows systems, too, and they didn’t do so well. Also, only systems someone specifically asked for information about were included in the uptime survey, which would limit the number of insignificant or half-forgotten systems.
At the very least, Unix and Linux are able to quantitatively demonstrate longer uptimes than their Windows competitors can, so Unix and Linux have significantly better evidence of their reliability than Windows.
Of course, there are many anecdotes about Windows reliability vs. Unix. For example, the Navy’s “Smart Ship” program caused a complete failure of the entire USS Yorktown ship in September 1997. Anthony DiGiorgio (a whistle-blower) stated that Windows is “the source of the Yorktown’s computer problems.” Ron Redman, deputy technical director of the Fleet Introduction Division of the Aegis Program Executive Office, said “there have been numerous software failures associated with [Windows] NT aboard the Yorktown.” Redman also said “Because of politics, some things are being forced on us that without political pressure we might not do, like Windows NT... If it were up to me I probably would not have used Windows NT in this particular application. If we used Unix, we would have a system that has less of a tendency to go down.”
One problem with reliability measures is that it takes a long time to gather data on reliability in real-life circumstances. Thus, there’s more data comparing older Windows editions to older GNU/Linux editions. The key is that these tests compared contemporary versions of both OSS/FS and proprietary systems; both have moved forward since, but it’s a fair test. Nevertheless, the available evidence suggests that OSS/FS has a significant edge in reliability.
Comparing GNU/Linux and Microsoft Windows performance on equivalent hardware has a history of contentious claims and different results based on different assumptions. I think that OSS/FS has at least shown that it’s often competitive, and in many circumstances it beats the competition.
Performance benchmarks are very sensitive to the assumptions and environment, so the best benchmark is one you set up yourself to model your intended environment. Failing that, you should use unbiased measures, because it’s so easy to create biased measures.
First, here are a few recent studies suggesting that some OSS/FS systems beat their proprietary competition in at least some circumstances:
The FreeBSD developers complained about these tests, noting that FreeBSD by default emphasizes reliability (not speed) and that they expected that anyone with a significant performance need would do some tuning first. Thus, Sys Admin re-did the tests for FreeBSD after tuning FreeBSD. One change they made was switching to “asynchronous” mounting, which makes a system faster (though it increases the risk of data loss in a power failure) - this is the GNU/Linux default and easy to change in FreeBSD, so this was a very small and reasonable modification. However, they also made many other changes, for example, they found and compiled in 17 FreeBSD kernel patches and used various tuning commands. The other operating systems weren’t given the chance to “tune” like this, so comparing untuned operating systems to a tuned FreeBSD isn’t really fair.
In any case, here are their two performance tests:
System | Windows SPEC Result | Linux SPEC Result |
---|---|---|
Dell PowerEdge 4400/800, 2 800MHz Pentium III Xeon | 1060 (IIS 5.0, 1 network controller) | 2200 (TUX 1.0, 2 network controllers) |
Dell PowerEdge 6400/700, 4 700MHz Pentium III Xeon | 1598 (IIS 5.0, 7 9GB 10KRPM drives) | 4200 (TUX 1.0, 5 9GB 10KRPM drives) |
Dell PowerEdge 8450/700, 8 700MHz Pentium III Xeon | 7300/NC (IIS 5.0, 1 9Gb 10KRPM and 8 16Gb 15KRPM drives) then 8001 (IIS 5.0, 7 9Gb 10KRPM and 1 18Gb 15KRPM drive) | 7500 (TUX 2.0, 5 9Gb 10KRPM drives) |
The first row (the PowerEdge 4400/800) doesn’t really prove anything. The IIS system has lower performance, but it only had one network controller and the TUX system has two - so while the TUX system had better performance, that could simply be because it had two network connections it could use.
The second entry (the PowerEdge 6400/700) certainly suggests that GNU/Linux plus TUX really is much better - the IIS system had two more disk drives available to it (which should increase performance), but the TUX system had more than twice the IIS system’s performance.
The last entry for the PowerEdge 8450/700 is even more complex. First, the drives are different - the IIS systems had at least one drive that revolved more quickly than the TUX systems (which should give IIS higher performance overall, since the transfer speed is almost certainly higher). Also, there were more disk drives (which again should give IIS still higher performance). When I originally put this table together showing all data publicly available in April 2001 (covering the third quarter of 1999 through the first quarter of 2001), IIS 5.0 (on an 8-processor Dell PowerEdge 8450/700) had a SPECweb99 value of 7300. Since that time, Microsoft changed the availability of Microsoft SWC 3.0, and by SPECweb99 rules, this means that those test results are “not compliant” (NC). This is subtle; it’s not that the test itself was invalid, it’s that Microsoft changed what was available and used the SPEC Consortium’s own rules to invalidate a test (possibly because the test results were undesirable to Microsoft). A retest then occurred, with yet another disk drive configuration, at which point IIS produced a value of 8001. However, both of these figures are on clearly better hardware - and in one circumstance the better hardware didn’t do better.
Thus, in these configurations the GNU/Linux plus TUX system was given inferior hardware yet still sometimes won on performance. Since other factors may be involved, it’s hard to judge - there are pathological situations where “better hardware” can have worse performance, or there may be another factor not reported that had a more significant effect. Hopefully in the future there will be many head-to-head tests in a variety of identical configurations.
Note that TUX is intended to be used as a “web accelerator” for many circumstances, where it rapidly handles simple requests and then passes more complex queries to another server (usually Apache). I’ve quoted the TUX figures because they’re the recent performance figures I have available. As of this time I have no SPECweb99 figures or other recent performance measures for Apache on GNU/Linux, or for Apache and TUX together; I also don’t have TUX reliability figures. I expect that such measures will appear in the future.
In February 2002 he published Managing processes and threads, in which he compared the performance of Red Hat Linux 7.2, Windows 2000 Advanced Server (“Win2K”), and Windows XP Professional (“WinXP”), all on a Thinkpad 600X with 320MiB of memory. Linux managed to create over 10,000 threads/second, while Win2K didn’t quite manage 5,000 threads/second and WinXP only created 6,000 threads/second. In process creation, Linux managed 330 processes/second, while Win2K managed less than 200 processes/second and WinXP less than 160 processes/second.
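That comparison used the author’s own benchmark programs; the sketch below is merely a minimal illustration of how such a microbenchmark works (time how many threads can be created, started, and joined per second), not the code used in the article, and its absolute numbers mean little across languages and machines.

```python
# Minimal thread-creation microbenchmark sketch (illustrative only, not the
# benchmark from the article): measure how many threads per second can be
# created, started, and joined.
import threading
import time

def measure_thread_creation(count=2000):
    def worker():
        pass  # the thread does no work; only creation/teardown is being timed
    start = time.perf_counter()
    for _ in range(count):
        t = threading.Thread(target=worker)
        t.start()
        t.join()
    elapsed = time.perf_counter() - start
    return count / elapsed

if __name__ == "__main__":
    print(f"{measure_thread_creation():.0f} threads/second")
```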
All operating systems in active development are in a constant battle for performance improvements over their rivals. The history of comparing Windows and GNU/Linux helps put this in perspective:
Careful examination of the benchmark did find some legitimate Linux kernel problems, however. These included a TCP bug, the lack of “wake one” semantics, and SMP bottlenecks (see Dan Kegel’s pages for more information). The Linux kernel developers began working on the weaknesses identified by the benchmark.
For file serving, they discovered only “negligible performance differences between the two for average workloads... [and] depending on the degree of tuning performed on each installation, either system could be made to surpass the other slightly in terms of file-sharing performance.” Red Hat Linux slightly outperformed NT on file writes, while NT edged out Red Hat Linux on massive reads. Note that their configuration was primarily network-limited; they stated “At no point were we able to push the CPUs much over 50-percent utilization - the single NIC, full duplex 100BASE-T environment wouldn’t allow it.”
They also noted that “examining the cost difference between the two licenses brings this testing into an entirely new light... the potential savings on licenses alone is eye-opening. For example, based on the average street price of $30 for a Windows NT client license, 100 licenses would cost around $3,000, plus the cost of an NT server license (around $600). Compare this to the price of a Red Hat Linux CD, or perhaps even a free download, and the savings starts to approach the cost of a low-end workgroup server. Scale that up to a few thousand clients and you begin to see the savings skyrocket.” See this paper’s section on total cost of ownership.
There are other benchmarks available, but I’ve discounted them on various grounds:
More information on various benchmarks is available from Kegel’s NT vs. Linux Server Benchmark Comparisons, SPEC, and the dmoz entry on benchmarking.
Remember, in benchmarking everything depends on the configuration and assumptions that you make. Many systems are constrained by network bandwidth; in such circumstances buying a faster computer won’t help at all. Even when network bandwidth isn’t the limitation, neither Windows nor GNU/Linux does well in large-scale symmetric multiprocessing (SMP) configurations; if you want 64-way CPUs with shared memory, neither is appropriate (Sun Solaris, which is not OSS/FS, does much better in this configuration). On the other hand, if you want massive distributed (non-shared) memory, GNU/Linux does quite well, since you can buy more CPUs with a given amount of money. If massive distribution can’t help you and you need very high performance, Windows isn’t even in the race; today Windows 2000 only runs on Intel x86 compatible chips, while GNU/Linux runs on much higher performance processors as well as the x86.
Thus, you can buy a small GNU/Linux or NetBSD system and grow it as your needs grow; indeed, you can replace small hardware with massively parallel or extremely high-speed processors or very different CPU architectures without switching operating systems. Windows CE/ME/NT scales down to small platforms, but not to large ones, and it only works on x86 systems. Many Unix systems (such as Solaris) scale well to specific large platforms but not as well to distributed or small platforms. These OSS/FS systems are some of the most scalable programs around.
Of course, not all sites are broken through their web server and OS - many are broken through exposed passwords, bad web application programming, and so on. But if this is so, why is there such a big difference in the number of defacements based on the operating system? No doubt some other reasons could be put forward (this data only shows a correlation not a cause), but this certainly suggests that OSS/FS can have better security.
Attrition.org has decided to abandon keeping track of this information, because the sheer volume of broken sites made keeping up impractical. However, defaced.alldas.de has decided to perform this valuable service. Their recent reports show that this trend has continued; on July 12, 2001, they report that 66.09% of defaced sites ran Windows, compared to 17.01% for GNU/Linux, out of 20,260 defaced websites.
OS | 1997 | 1998 | 1999 | 2000 |
---|---|---|---|---|
Debian GNU/Linux | 2 | 2 | 30 | 20 |
OpenBSD | 1 | 2 | 4 | 7 |
Red Hat Linux | 5 | 10 | 41 | 40 |
Solaris | 24 | 31 | 34 | 9 |
Windows NT/2000 | 4 | 7 | 99 | 85 |
You shouldn’t take these numbers very seriously. Some vulnerabilities are more important than others (some may provide little if exploited or only be vulnerable in unlikely circumstances), and some vulnerabilities are being actively exploited (while others have already been fixed before exploitation). Open source operating systems tend to include many applications that are usually sold separately in proprietary systems (including Windows and Solaris) - for example, Red Hat 7.1 includes two relational database systems, two word processors, two spreadsheet programs, two web servers, and a large number of text editors. In addition, in the open source world, vulnerabilities are discussed publicly, so vulnerabilities may be identified for software still in development (e.g., “beta” software). Those with small market shares are likely to have less analysis. The “small market share” comment won’t work with GNU/Linux, of course, since we’ve already established that GNU/Linux is the #1 or #2 server OS (depending on how you count them). Still, this clearly shows that the three OSS/FS OSs listed (Debian GNU/Linux, OpenBSD, and Red Hat Linux) did much better by this measure than Windows in 1999 and (so far) in 2000. Even if a bizarre GNU/Linux distribution was created explicitly to duplicate all vulnerabilities present in any major GNU/Linux distribution, this intentionally bad GNU/Linux distribution would still do better than Windows (it would have 88 vulnerabilities in 1999, vs. 99 in Windows). The best results were for OpenBSD, an OSS/FS operating system that for years has been specifically focused on security. It could be argued that its smaller number of vulnerabilities is because of its rarer deployment, but the simplest explanation is that OpenBSD has focused strongly on security - and achieved it better than the rest.
This data is partly of interest because various reporters make the same mistake: counting the same vulnerability multiple times. One journalist, Fred Moody, failed to understand his data sources - he used these figures to try to show that GNU/Linux had worse security. He took these numbers and then added the GNU/Linux ones so each Linux vulnerability was counted at least twice (once for every distribution it applied to plus one more). By using these nonsensical figures he declared that GNU/Linux was worse than anything. If you read his article, you also need to read the rebuttal by the manager of the Microsoft Focus Area at SecurityFocus to understand why the journalist’s article was so wrong.
In 2002, another journalist (James Middleton) made the same mistake, apparently not learning from previous work. Middleton counted the same Linux vulnerability up to four times. What’s bizarre is that he even reported the individual numbers showing that specific Linux systems were actually more secure by using Bugtraq’s vulnerability list through August 2001, and somehow he didn’t realize what it meant. He noted that Windows NT/2000 suffered 42 vulnerabilities, while Mandrake Linux 7.2 notched up 33 vulnerabilities, Red Hat Linux 7.0 suffered 28, Mandrake 7.1 had 27 and Debian 2.2 had 26. In short, all of the GNU/Linux distributions had significantly fewer vulnerabilities by this count. It’s not entirely clear what was being considered as being “in” the operating system in this case, which would of course make a difference; there are some hints that vulnerabilities in some Windows-based products (such as Exchange) weren’t counted while vulnerabilities in the same functionality (e.g., sendmail) were counted. It also appears that many of the Windows attacks were more dangerous (which were often remote attacks actively exploited), as compared to the GNU/Linux ones (which were often local attacks, found by looking at source and not actively exploited at the time). I would appreciate links to someone who’s analyzed these issues more carefully. The funny thing is that given all these errors, the paper gives evidence that the GNU/Linux distributions were more secure.
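The double-counting mistake is easy to illustrate. The sketch below uses made-up advisory identifiers; the point is simply that summing per-distribution counts tallies the same underlying vulnerability several times, while counting unique identifiers does not:

```python
# Illustration of the double-counting mistake (made-up advisory identifiers).
# GNU/Linux distributions ship much of the same code, so one vulnerability
# often appears in several distributions' advisory lists.
advisories = {
    "Red Hat":  {"VULN-1", "VULN-2", "VULN-3"},
    "Mandrake": {"VULN-1", "VULN-2", "VULN-4"},
    "Debian":   {"VULN-1", "VULN-3", "VULN-4"},
}

naive_total = sum(len(v) for v in advisories.values())  # counts VULN-1 three times
unique_total = len(set().union(*advisories.values()))   # counts each vulnerability once

print(naive_total, unique_total)  # 9 versus 4
```

Comparing the naive total against a single vendor’s advisory count is exactly the apples-to-oranges comparison the journalists above made.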
The September 30, 2002 VNUnet.com article “Honeymoon over for Linux Users” claims that there are more “Linux bugs” than “Microsoft bugs.” In particular, it quotes X-Force (the US-based monitoring group of security software firm Internet Security Systems), and summarizes by saying that in 2001 the centre found 149 bugs in Microsoft software compared to 309 for Linux, and in 2002 485 Linux bugs were found compared to Microsoft’s 202. However, Linux Weekly News discovered and reported serious flaws in these figures:
Indeed, as noted in Bruce Schneier’s Crypto-gram of September 15, 2000, vulnerabilities are affected by other things such as how many attackers exploit the vulnerability, the speed at which a fix is released by a vendor, and the speed at which fixes are applied by administrators. Nobody’s system is invincible.
A more recent analysis by John McCormick in Tech Republic compared Windows and Linux vulnerabilities using numbers through September 2001. This is an interesting analysis, showing that although Windows NT led in the number of vulnerabilities in 2000, using the 2001 numbers through September 2001, Windows 2000 had moved to the “middle of the pack” (with some Linux systems having more, and others having fewer, vulnerabilities). However, it appears that in these numbers, bugs in Linux applications have been counted with Linux, while bugs in Windows applications haven’t - and if that’s so, this isn’t really a fair comparison. As noted above, typical Linux distributions bundle many applications that are separately purchased from Microsoft.
Clearly this table uses a different method for counting security problems than the previous table. Of the three noted here, Sun’s Solaris had the fewest vulnerabilities, but it took by far the longest to fix security problems identified. Red Hat was the fastest at fixing security problems, and placed in the middle of these three in number of vulnerabilities. It’s worth noting that the OpenBSD operating system (which is OSS/FS) had fewer reported vulnerabilities than all of these. Clearly, having a proprietary operating system doesn’t mean you’re more secure - Microsoft had the largest number of security advisories, by far, using either counting method.
More recent examples seem to confirm this; on September 30, 2002, eWeek Labs’ article “Open Source Quicker at Fixing Flaws” listed specific examples of more rapid response. This article can be paraphrased as follows:
In June 2002, a serious flaw was found in the Apache Web server; the Apache Software Foundation made a patch available two days after the Web server hole was announced. In September 2002, a flaw was announced in OpenSSL and a patch was available the same day. In contrast, a serious flaw was found in Windows XP that made it possible to delete files on a system using a single URL; Microsoft quietly fixed this problem in Windows XP Service Pack 1 without notifying users of the problem. A more direct comparison can be seen in how Microsoft and the KDE Project responded to an SSL (Secure Sockets Layer) vulnerability that made the Internet Explorer and Konqueror browsers, respectively, potential tools for stealing data such as credit card information. The day the SSL vulnerability was announced, KDE provided a patch. Later that week, Microsoft posted a memo on its TechNet site basically downplaying the problem.
In contrast, in the article “IT bugs out over IIS security,” eWeek determined that Microsoft has issued 21 security bulletins for IIS from January 2000 through June 2001. Determining what this number means is a little difficult, and the article doesn’t discuss these complexities, so I’ve examined Microsoft’s bulletins myself to find their true significance. Not all of the bulletins have the same significance, so just stating that there were “21 bulletins” doesn’t give the whole picture. However, it’s clear that several of these bulletins discuss dangerous vulnerabilities that allow an external user to gain control over the system. I count 5 bulletins on such highly dangerous vulnerabilities for IIS 5.0 (in the period from January 2000 through June 2001), and previous to that time, I count 3 such bulletins for IIS 4.0 (in the period of June 1998 through December 1999). Feel free to examine the bulletins yourself; they are MS01-033, MS01-026, MS01-025, MS01-023, MS00-086, MS99-025, MS99-019, and MS99-003.
The Code Red worm, for example, exploited a vast number of IIS sites through the vulnerabilities identified in the June 2001 security bulletin MS01-033. In short, by totaling the number of reports of dangerous vulnerabilities (that allow attackers to execute arbitrary code), I find a total of 8 bulletins for IIS from June 1998 through June 2001, while Apache had zero such vulnerabilities for that time period. Apache’s last such report was in January 1998, and that one affected the log analyzer, not the web server itself. As was noted above, the last such dangerous vulnerability in Apache itself was announced in January 1997.
It’s time-consuming to do this kind of analysis, so I haven’t repeated the effort more recently. However, it’s worth noting eWeek’s April 10, 2002 article, which reports that ten more IIS flaws have been found in IIS Server 4.0, 5.0, and 5.1, some of which would allow attackers to crash the IIS service or allow the attacker to run whatever code he chooses.
Even this doesn’t give the full story, however; a vulnerability in IIS tends to be far more dangerous than an equivalent vulnerability in Apache, because Apache wisely follows the good security practice of “least privilege.” IIS is designed so that anyone who takes over IIS can take over the entire system, performing actions such as reading, modifying, or erasing any file on the system. In contrast, Apache is installed with very few privileges by default, so even taking over Apache gives attackers relatively few privileges. For example, cracking Apache does not give attackers the right to modify or erase most files. This is still not good, of course, and an attacker may be able to find another vulnerability to give them complete access, but an Apache system presents more challenges to an attacker than IIS.
The article claims there are four reasons for Apache’s strong security, and three of these reasons are simply good security practices. Apache installs very few server extensions by default (a “minimalist” approach), all server components run as a non-privileged user (supporting “least privilege” as noted above), and all configuration settings are centralized (making it easy for administrators to know what’s going on). However, the article also claims that one of the main reasons Apache is more secure than IIS is that its “source code for core server files is well-scrutinized,” a task that is made much easier by being OSS/FS, and it could be argued that OSS/FS encourages the other good security practices.
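To see what “least privilege” means in practice, here is a hedged sketch (generic Python, not Apache’s actual code) of a server that does the one step requiring root - binding a low-numbered port - and then permanently drops to an unprivileged account:

```python
# Sketch of the "least privilege" pattern (not Apache's code): perform the one
# step that needs root (binding a port below 1024), then permanently drop to an
# unprivileged account before handling any untrusted input.
import os
import pwd
import socket

def bind_and_drop_privileges(port=80, user="nobody"):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", port))          # this is the step that requires root
    server.listen(5)

    unprivileged = pwd.getpwnam(user)
    os.setgid(unprivileged.pw_gid)   # drop the group first, then the user;
    os.setuid(unprivileged.pw_uid)   # after this the process cannot regain root

    return server                    # requests are now handled with very few privileges
```

Even if an attacker then subverts the request-handling code, the damage is limited to what the unprivileged account can do, which is the property described above.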
Simple counts of vulnerability notices aren’t necessarily a good measure, of course. A vendor could intentionally release fewer bulletins - but since Apache’s code and its security are publicly discussed, it seems very unlikely that Apache is deliberately underreporting security vulnerabilities. Fewer vulnerability notices could result if the product isn’t well scrutinized or is rarely used - but this simply isn’t true for Apache.
Even the trend line isn’t encouraging - using the months of the bulletins (2/99, 6/99, 7/99, 11/00, three in 5/01, and 6/01), I find the time in months between new major IIS vulnerability announcements to be 4, 1, 16, 6, 0, 0, 1, and 3 as of September 2001; this compares to 12 and 44 as of September 2001 for Apache. Given these trends, it looks like IIS’s security is slowly improving, but it has little likelihood of meeting Apache’s security in the near future. Indeed, these vulnerability counts are corroborated by other measures such as the web site defacement rates.
The issue here isn’t whether or not a particular program is invincible (what nonsense!) - the issue here is which is more likely to resist future attacks, based on past performance. It’s clear that the OSS/FS Apache has a much better security record than the proprietary IIS, so much so that Gartner Group decided to make an unusual recommendation (described below).
In a background document by Gartner, they discuss Code Red’s impacts further. By July 2001, Computer Economics (a research firm) estimated that enterprises worldwide had spent $1.2 billion fixing vulnerabilities in their IT systems that Code Red could exploit (remember, Code Red is designed to only attack IIS systems; systems such as Apache are immune). To be fair, Gartner correctly noted that the problem is not just that IIS has vulnerabilities; part of the problem is that enterprises using IIS are not keeping their IT security up to date, and Gartner openly wondered why this was the case. However, Gartner also asked the question, “why do Microsoft’s software products continue to provide easily exploited openings for such attacks?” This was prescient, since soon after this the “Nimda” attack surfaced, which attacked IIS, Microsoft Outlook, and other Microsoft products.
A brief aside is in order here. Microsoft spokesman Jim Desler tried to counter Gartner’s recommendation, labeling it as “extreme” and saying that “serious security vulnerabilities have been found in all Web server products and platforms... this is an industry-wide challenge.” While true, this isn’t the whole truth. As Gartner points out, “IIS has a lot more security vulnerabilities than other products and requires more care and feeding.” It makes sense to select the product with the best security track record, even if no product has a perfect record.
The CERT Coordination Center (CERT/CC) is federally funded to study
security vulnerabilities and perform related activities such as publishing
security alerts.
I sampled their list of
“current activity” of the most frequent, high-impact security
incidents and vulnerabilities on September 24, 2001,
and found yet more evidence
that Microsoft’s products have poor security compared to others
(including OSS/FS).
Four of the six most important security vulnerabilities
were specific to Microsoft:
W32/Nimda, W32/Sircam, cache corruption on Microsoft DNS servers, and
“Code Red” related activities.
Only one of the six items primarily affected non-Microsoft products
(a buffer overflow in telnetd); while this particular vulnerability is
important, it’s worth noting that many open source systems
(such as Red Hat 7.1) normally don’t enable
this service (telnet) in the first place and thus are less likely to be
vulnerable.
The sixth item (“scans and probes”) is a general note that there is
a great deal of scanning and probing on the Internet, and that there are
many potential vulnerabilities in all systems.
Thus, 4 of the 6 issues are high-impact vulnerabilities specific to Microsoft,
1 of 6 is a vulnerability primarily affecting Unix-like systems
(including OSS/FS operating systems),
and 1 of 6 is a general notice about scanning.
Again, it’s not that OSS/FS products never have security vulnerabilities -
but they seem to have fewer of them.
The ICAT system provides a searchable
index and ranking for vulnerabilities cross-referenced by CVE.
I sampled its top ten list on December 19, 2001; this top ten list
is defined by the number of requests made for a particular vulnerability
in ICAT (and including only vulnerabilities within the last year).
In this case, 8 of the top 10 vulnerabilities only affect proprietary systems
(in all cases, Windows).
Only 2 of 10 affect OSS/FS systems (#6, CAN-2001-0001, a weakness in
PHP-Nuke 4.4, and #8, CVE-2001-0013, a new vulnerability found in an
old version of BIND - BIND 4).
Obviously, by itself this doesn’t prove that there are fewer serious
vulnerabilities in OSS/FS programs, but it is suggestive of it.
Many have noted that one reason Windows is attacked more often is
simply because there are so many Windows systems in use.
Windows is an attractive target for virus writers simply
because it is in such widespread use.
For a virus to spread, it has to transmit itself to
other susceptible computers; on average,
each infection has to cause at least one more.
The ubiquity of Windows machines makes it easier
for this threshold to be reached.
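This threshold argument can be made concrete with a toy calculation. The sketch below is my own illustration (the contact count, market shares, and infection probability are hypothetical, not measured values); it simply shows that, all else being equal, a virus targeting a widely deployed platform crosses the “one new infection per infection” threshold far more easily than one targeting a niche platform.

```python
# Toy model: a virus keeps spreading only if each infected machine causes,
# on average, more than one new infection. All numbers are hypothetical.
def expected_secondary_infections(contacts, market_share, infection_prob):
    """contacts: machines each infected host reaches;
    market_share: fraction of those running the targeted platform;
    infection_prob: chance a reached, targeted machine becomes infected."""
    return contacts * market_share * infection_prob

for share in (0.90, 0.05):  # hypothetical shares: a dominant vs. a niche platform
    r = expected_secondary_infections(contacts=20, market_share=share,
                                      infection_prob=0.1)
    outcome = "can keep spreading" if r > 1 else "tends to die out"
    print(f"market share {share:.0%}: {r:.2f} new infections per host -> {outcome}")
```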
There may be a darker reason: there are many who do not like Microsoft’s
business practices, and perhaps this contributes to the problem.
Some of Microsoft’s business practices have been proven in court to be
illegal, but the U.S. government appears unwilling to effectively punish
or stop those practices.
Some computer-literate people may be taking their frustration out
on users of Microsoft’s products.
This is absolutely wrong, and in most countries illegal.
It is extremely unethical to attack an innocent user of a Microsoft
product simply because of Microsoft’s policies, and I condemn such behavior.
At this point, although this has been speculated many times, I have
not found any evidence that this is a widespread motivator for
actual attacks.
On the other hand, if you are choosing products, do you really
want to choose a product that people may have a vendetta against?
However, the reasons given above don’t explain the
disproportionate vulnerability of Microsoft’s products.
A simpler explanation, and one that is easily proven, is that
Microsoft has made a number of design choices over the years in Microsoft’s
products that are fundamentally less secure,
and this has made their products a much easier target than many other systems.
Examples include execution of start-up macros in Word,
execution of attachments in Outlook, and lack of write protection on
system directories in Windows 3.1/95/98.
This may be because Microsoft has assumed that customers
will buy their products whether or not Microsoft secures them;
since until recently there’s been little competition, there was no
need to spend money on “invisible” attributes such as security.
It’s also possible that Microsoft is still trying to adjust to an
Internet-based world;
the Internet would not have developed as it has without Unix-like systems,
which have supported the Internet standards for decades,
while for many years Microsoft ignored the Internet and then
suddenly had to play “catch-up” in the mid-1990s.
Microsoft has sometimes claimed that they can’t secure their products
because they
want to make sure that their products are “easy to use”.
While it’s true that some security
features can make a product harder to use,
usually a secured product can be just as easy to use if the
security features are carefully designed into the product.
Besides, what’s so easy to use about a system that has to be reformatted
and reinstalled every few months because yet another virus got in?
But whatever the reason, it’s demonstrably true that
Microsoft’s designers have in the past made decisions that made
their products’ security much weaker than other systems.
In contrast,
while it’s possible to write a virus for OSS/FS operating systems, their design
makes it more difficult for viruses to spread... showing that
Microsoft’s design decisions were not inevitable.
It appears that
OSS/FS developers tend to select design choices that limit the damage
of viruses, perhaps in part because their code is subject to
public inspection and comment.
For example,
OSS/FS programs generally do not support start-up macros nor execution
of mail attachments that can be controlled by attackers.
Also, leading OSS/FS operating systems (such as
GNU/Linux and the *BSDs) have always had
write protection on system directories.
Another discussion on why viruses don’t seem to significantly
affect OSS/FS systems is available from Roaring Penguin.
OSS/FS systems are not immune to malicious code,
but they are certainly more resistant.
I agree with the authors that ideally a network vulnerability scanner
should find every well-known vulnerability,
and that “even one hole is too many.”
Still, perfection is rare in the real world.
More importantly,
a vulnerability scanner should only be part of the process to secure an
organization - it shouldn’t be the sole activity.
Still, this evaluation suggests that an organization
will be more secure, not less secure, by using an OSS/FS program.
It could be argued that this simply shows that this particular OSS/FS
program had more functionality - not more security - but in this case,
the product’s sole functionality was to improve security.
Indeed, assuming that the vulnerabilities were only counted three times
(and thus dividing by only 3) would show Linux as having a better result,
never mind the fact that there are more than 3 distributions and
the other factors noted by Linux Weekly News.
How did our contestants [fare]?
Red Hat had the best score, with 348 recess days on 31 advisories,
for an average of 11.23 days from bug to patch.
Microsoft had 982 recess days on 61 advisories,
averaging 16.10 days from bug to patch.
Sun proved itself to be very slow, although
having only 8 advisories it accumulated 716 recess days,
a whopping three months to fix each bug on average.
Their table of data for 1999 is as shown:
1999 Advisory Analysis
Vendor | Total Days, Hacker Recess | Total Advisories | Recess Days/Advisory |
Red Hat | 348 | 31 | 11.23 |
Microsoft | 982 | 61 | 16.10 |
Sun | 716 | 8 | 89.50 |
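The per-advisory figures in the table are simply the total recess days divided by the number of advisories; the following trivial Python sketch (mine, for illustration) reproduces them.

```python
# Reproduce the per-advisory averages in the table above:
# recess days per advisory = total recess days / total advisories.
data = {"Red Hat": (348, 31), "Microsoft": (982, 61), "Sun": (716, 8)}
for vendor, (recess_days, advisories) in data.items():
    print(f"{vendor}: {recess_days / advisories:.2f} recess days per advisory")
# -> Red Hat: 11.23, Microsoft: 16.10, Sun: 89.50
```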
The numbers differ in detail, but all sources agree that computer
viruses are overwhelmingly more prevalent on Windows than any other system.
There are about 60,000 viruses known for Windows,
40 or so for the Macintosh, about 5 for commercial Unix versions,
and perhaps 40 for Linux.
Most of the Windows viruses are not important,
but many hundreds have caused widespread damage.
Two or three of the Macintosh viruses were widespread
enough to be of importance. None of the Unix or
Linux viruses became widespread - most were confined to the laboratory.
Some of us were a bit skeptical of the open-source Nessus
project’s thoroughness until [Nessus] discovered the greatest
number of vulnerabilities. That’s a hard fact to argue with,
and we are now eating our words ...
[Nessus] got the highest overall score
simply because it did more things right than the other products.
One serious problem is that there are strong economic disincentives for proprietary vendors to make their software secure. For example, if vendors make their software more secure, they would often fail to be “first” in a given market; this often means that they will lose that market. Since it is extremely difficult for customers to distinguish proprietary software with strong security from those with poor security, the poor products tend to eliminate the good ones (after all, they’re cheaper to develop and thus cost less). Governments have other disincentives as well. For a discussion of some of the economic disincentives for secure software, see Why Information Security is Hard - an Economic Perspective by Ross Anderson (Proceedings of the Annual Computer Security Applications Conference (ACSAC), December 2001, pp. 358-365). It’s not clear that OSS/FS always avoids these disincentives, but it appears in at least some cases it does. For example, OSS/FS source code is public, so the difference in security is far more visible than in proprietary products.
One of the most dangerous security problems with proprietary software is that if intentionally malicious code is snuck into it, such code is extremely difficult to find. Few proprietary vendors have other developers examine all code in great detail - their testing processes are designed to catch mistakes (not malice) and often don’t look at the code at all. In contrast, malicious code can be found by anyone when the source code is publicly available, and with OSS/FS, there are incentives for arbitrary people to review it (such as to add new features or perform a security review of a product they intend to use). Thus, someone inserting malicious code into an OSS/FS project runs a far greater risk of detection. Here are two examples, one confirmed, one not confirmed:
Bruce Perens, in “Open sourcers wear the white hats”, makes the interesting claim that most of the people reviewing proprietary products looking for security flaws (aside from one or two paid reviewers) are “black hats,” outsiders who disassemble the code or try various types of invalid input in search of a flaw that they can exploit (and not report). There is simply little incentive, and many roadblocks, for someone to search for security flaws simply to improve someone else’s proprietary product. “Only a black hat would disassemble code to look for security flaws. You won’t get any ‘white hats’ doing this for the purpose of [just] closing the flaws.” In contrast, he believes many open source developers do have such an incentive. I think this article slightly overstates the case; there are other incentives (such as fame) that can motivate a few people to review some other company’s proprietary product for security. Still, he has a point; even formal reviews often only look at designs (not code), proprietary code is often either unreviewed or poorly reviewed, and there are many cases (including the entire OpenBSD system) where legions of developers review open source code for security issues. As he notes, “open source has a lot of ‘white hats’ looking at the source. They often do find security bugs while working on other aspects of the code, and the bugs are reported and closed.”
The “Alexis de Tocqueville Institute” (ADTI) published a white paper called “Opening the Open Source Debate” that purported to examine OSS/FS issues. Unfortunately, it makes a large number of wrong, specious, and poorly-argued claims about OSS/FS, including some related to security. Wired (in its article Did MS Pay for Open-Source Scare?) made some startling discoveries about ADTI, and found strong circumstantial evidence that the paper was paid for by Microsoft (a prime competitor to OSS/FS), directly or indirectly: “A Microsoft spokesman confirmed that Microsoft provides funding to the Alexis de Tocqueville Institution... Microsoft did not respond to requests for comment on whether the company directly sponsored the debate paper. De Tocqueville Institute president Ken Brown and chairman Gregory Fossedal refused to comment on whether Microsoft sponsored the report.” Politech found additional suspicious information about ADTI. ADTI apparently has a history of creating “independent” results that are apparently paid for by corporations (e.g., see the Smoke Free for Health article about ADTI’s pro-tobacco-lobby papers). Reputable authors clearly identify any potential conflict of interest, even if it’s incidental; ADTI did not when it developed this OSS/FS paper. Not surprisingly, the ADTI paper makes a number of errors and draws unwarranted conclusions. I’ll just note a few examples of the paper’s problems that aren’t as widely noted elsewhere: incorrect or incomplete quotations, rewriting web browser history, and cleverly omitting the most important data in one of their charts:
Now it should be obvious from these figures that OSS/FS systems are not magically invincible from security flaws. Indeed, some have argued that making the source code available gives attackers an advantage (because they have more information to make an attack). While OSS/FS gives attackers more information, this ignores opposing forces: having the source code also gives the defenders more information (because they can also examine its original source code), and in addition, the defenders can improve the code. More importantly, the necessary information for breaking into a program is in the binary executable of the program; disassemblers and decompilers can quickly extract whatever information is needed from executables to break into a program, so hiding the source code isn’t all that helpful for preventing attacks. It is not true that proprietary programs are always more secure, or that OSS/FS is always more secure, because there are many factors at work. For a longer description of these issues, see my discussion on open source and security (part of my book on writing secure software). However, from these figures, it appears that OSS/FS systems are in many cases better - not just equal - in their resistance to attacks as compared to proprietary software.
Indeed, whatever product you use or support, you can probably find a study to show it has the lowest TCO for some circumstance. Not surprisingly, both Microsoft and Sun provide studies showing that they have the lowest TCO (but see my comments later about Microsoft’s study). Xephon has a study determining that mainframes are the cheapest per-user (due to centralized control) at £3450 per user per year; centralized Unix costs £7350 per user per year, and a decentralized PC environment costs £10850 per user per year. Xephon appears to be a mainframe-based consultancy, though, and would want the results to come out this way. There are indeed situations where applying a mainframe makes sense... but as we’ll see in a moment, you can use OSS/FS in such environments too.
In short, what has a smaller TCO depends on your environment and needs. To determine TCO you have to identify all the important cost drivers (the “cost model”) and estimate their costs. Don’t forget “hidden” costs, such as administration costs, upgrade costs, technical support, end-user operation costs, and so on. However, OSS/FS has a number of strong cost advantages in various categories that, in many cases, will result in its having the smallest TCO.
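To make the idea of a cost model concrete, here is a deliberately oversimplified sketch (my own illustration; every figure in it is a made-up placeholder, and a real analysis would use your organization’s own cost drivers and numbers).

```python
# Hypothetical, oversimplified TCO sketch: sum a few major cost drivers over a
# planning horizon. All figures are placeholders, not data from any study.
def total_cost_of_ownership(years, seats, license_per_seat, upgrades_per_seat,
                            admin_per_year, support_per_year, training_once):
    acquisition = seats * license_per_seat
    upgrades = seats * upgrades_per_seat * years
    operations = (admin_per_year + support_per_year) * years
    return acquisition + upgrades + operations + training_once

tco = total_cost_of_ownership(years=3, seats=100, license_per_seat=300,
                              upgrades_per_seat=50, admin_per_year=40000,
                              support_per_year=10000, training_once=5000)
print(f"3-year TCO: ${tco:,}")  # -> $200,000 with these placeholder numbers
```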
OSS/FS isn’t cost-free, because you’ll still spend money for paper documentation, support, training, system administration, and so on, just as you do with proprietary systems. In many cases, the actual programs in OSS/FS distributions can be acquired freely by downloading them (linux.org provides some pointers on how to get distributions). However, most people (especially beginners and those without high-speed Internet connections) will want to pay a small fee to a distributor for a nicely integrated package with CD-ROMs, paper documentation, and support. Even so, OSS/FS is far less expensive to acquire.
For example, look at some of the price differences when trying to configure a server (say a public web server or an intranet file and email server, in which you’d like to use C++ and an RDBMS for some portions of it). This is an example, of course; different missions would involve different components. I used the prices from “Global Computing Supplies” (Suwanee, GA), September 2000, and rounded to the nearest dollar. Here’s a quick summary of some costs:
| Microsoft Windows 2000 | Red Hat Linux |
Operating System | $1510 (25 client) | $29 standard, $76 deluxe, $156 professional (all unlimited) |
Email Server | $1300 (10 client) | included (unlimited) |
RDBMS Server | $2100 (10 CALs) | included (unlimited) |
C++ Development | $500 | included |
Basically, Microsoft Windows 2000 (25 client) costs $1510; their email server Microsoft Exchange (10-client access) costs $1300, their RDBMS server SQL Server 2000 costs $2100 (with 10 CALs), and their C++ development suite Visual C++ 6.0 costs $500. Red Hat Linux 6.2 (a widely-used GNU/Linux distribution) costs $29 for standard (90 days email-based installation support), $76 for deluxe (above plus 30 days telephone installation support), or $156 for professional (above plus SSL support for encrypting web traffic); in all cases it includes all of these functionalities (web server, email server, database server, C++, and much more). A public web server with Windows 2000 and an RDBMS might cost $3610 ($1510+$2100) vs. Red Hat Linux’s $156, while an intranet server with Windows 2000 and an email server might cost $2810 ($1510+$1300) vs. Red Hat Linux’s $76.
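The arithmetic in the two configurations above is straightforward; this small sketch (mine, using only the September 2000 list prices quoted above) simply restates it.

```python
# Restating the example arithmetic above, using the September 2000 list prices.
windows = {"os": 1510, "email": 1300, "rdbms": 2100, "cpp": 500}
redhat_professional = 156   # includes web, email, RDBMS, C++, and more
redhat_deluxe = 76

public_web_server = windows["os"] + windows["rdbms"]   # Windows 2000 + SQL Server
intranet_server = windows["os"] + windows["email"]     # Windows 2000 + Exchange
print(f"Public web server: ${public_web_server} vs. ${redhat_professional} for Red Hat Linux")
print(f"Intranet server:   ${intranet_server} vs. ${redhat_deluxe} for Red Hat Linux")
# -> $3610 vs. $156, and $2810 vs. $76
```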
Both packages have functionality the other doesn’t have. The GNU/Linux system always comes with an unlimited number of licenses; the number of clients you’ll actually use depends on your requirements. However, this certainly shows that no matter what, Microsoft’s server products cost thousands of dollars more per server than the equivalent GNU/Linux system.
For another in-depth analysis comparing the initial costs GNU/Linux with Windows, see Linux vs. Windows: The Bottom Line by Cybersource Pty Ltd. Here’s a summary of their analysis (in 2001 U.S. dollars):
| Microsoft Solution | OSS/FS (GNU/Linux) Solution | Savings by using GNU/Linux |
Company A (50 users) | $69,987 | $80 | $69,907 |
Company B (100 users) | $136,734 | $80 | $136,654 |
Company C (250 users) | $282,974 | $80 | $282,894 |
Consulting Times found that as the number of mailboxes got large, the three-year TCO for mainframes with GNU/Linux became in many cases quite compelling. For 50,000 mailboxes, an Exchange/Intel solution cost $5.4 million, while the Linux/IBM(G6) solution cost $3.3 million. For 5,000 mailboxes, Exchange/Intel cost $1.6 million, while Groupware on IFL cost $362,890. For yet another study, see the Cost Comparison from jimmo.com. Obviously, the price difference depends on exactly what functions you need for a given task, but for many common situations, GNU/Linux costs far less to acquire.
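One way to compare such figures is per mailbox; the sketch below (my own arithmetic on the Consulting Times numbers quoted above) divides each three-year TCO by the number of mailboxes.

```python
# Per-mailbox arithmetic implied by the Consulting Times figures cited above
# (three-year TCO divided by the number of mailboxes).
cases = [
    ("Exchange/Intel, 50,000 mailboxes", 5_400_000, 50_000),
    ("Linux/IBM (G6), 50,000 mailboxes", 3_300_000, 50_000),
    ("Exchange/Intel, 5,000 mailboxes", 1_600_000, 5_000),
    ("Groupware on IFL, 5,000 mailboxes", 362_890, 5_000),
]
for name, tco, mailboxes in cases:
    print(f"{name}: ${tco / mailboxes:,.2f} per mailbox over three years")
```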
In Scientific American’s August 2001 issue, the article The Do-It-Yourself Supercomputer discusses how the researchers built a powerful computing platform with a large number of obsolete, discarded computers and GNU/Linux. The result was dubbed the “Stone Soupercomputer”; by May 2001 it contained 133 nodes, with a theoretical peak performance of 1.2 gigaflops.
According to Network World Fusion News, Linux is increasingly being used in healthcare, finance, banking, and retail because of its cost advantages when large numbers of identical sites and servers are built. According to their calculations for a 2,000-site deployment, SCO UnixWare would cost $9 million, Windows would cost $8 million, and Red Hat Linux would cost $180.
This report also found that GNU/Linux and Solaris had smaller administrative costs than Windows. Although Windows system administrators were individually less expensive, each Linux or Solaris administrator could administer many more machines, making Windows administration much more expensive. The study also revealed that Windows administrators spent twice as much time patching systems and dealing with other security-related issues as did Solaris or GNU/Linux administrators.
RFG also examined some areas that were difficult to monetize. In the end, they concluded that “Overall, given its low cost and flexible licensing requirements, lack of proprietary vendor goals, high level of security, and general stability and usability, Linux is worth considering for most types of server deployments.”
A survey was conducted by TheOpenEnterprise.com (a joint editorial effort between InternetWeek.com and InformationWeek) of individuals with management responsibility for IT and software, specifically in companies with more than $5 million in revenue. In this survey, 39% said “open source/standards-based software” was 25% to 50% less expensive than proprietary software, while 27% (more than 1 in 4) said it’s 50% to 75% less expensive in their experience. In context, it appears their phrase was intended to mean the same (or similar) thing as the term OSS/FS in this paper, since in many cases they simply use the term “open-source.” As they note, “Would your CFO react favorably to a 50-75% reduction in software costs?”
There are many other reports from those who have switched to OSS/FS systems; see the usage reports section for more information.
You may also want to see MITRE’s business case study of OSS, which considered military systems.
Most of these items assume that users will use the software unmodified, but even if the OSS/FS software doesn’t do everything required, that is not necessarily the end of the story. One of the main hallmarks of OSS/FS software is that it can be modified by users. Thus, any true TCO comparison should consider not just the products that fully meet the requirements, but the existing options that with some modifications could meet the requirements. It may be cheaper to start with an existing OSS/FS program, and improve it, than to start with a proprietary program that has all of the necessary functionality. Obviously, the total TCO including such costs varies considerably depending on the circumstances.
Brendan Scott (a lawyer specializing in IT and telecommunications law) argues that the long run TCO of OSS/FS must be lower than proprietary software. Scott’s paper makes some interesting points, for example, “TCO is often referred to as the total cost of ‘ownership’... [but] ‘ownership’ of software as a concept is anathema to proprietary software, the fundamental assumptions of which revolve around ownership of the software by the vendor. ... The user [of proprietary software] will, at best, have some form of (often extremely restrictive) license. Indeed, some might argue that a significant (and often uncosted) component of the cost of ‘ownership’ of proprietary software is that users don’t own it at all.” The paper also presents arguments as to why GPL-like free software gives better TCO results than other OSS/FS licenses. Scott concludes that “Customers attempting to evaluate a free software v. proprietary solution can confine their investigation to an evaluation of the ability of the packages to meet the customer’s needs, and may presume that the long run TCO will favor the free software package. Further, because the licensing costs are additional dead weight costs, a customer ought to also prefer a free software solution with functionality shortfalls where those shortfalls can be overcome for less than the licensing cost for the proprietary solution.”
Microsoft’s TCO study (mentioned earlier) is probably not useful as a starting point for estimating your own TCO. Their study reported the average TCO at sites using Microsoft products compared to the average TCO at sites using Sun systems, but although the Microsoft systems cost 37% less to own, the Solaris systems handled larger databases, more demanding applications connecting to those databases, 63% more concurrent connections, and 243% more hits per day. In other words, the Microsoft systems that did less work were less expensive. This is not a useful starting point if you’re using TCO to help determine which system to buy -- to make a valid comparison by TCO, you need to compare the TCOs of systems that both perform the job that you need to do. A two-part analysis by Thomas Pfau (see part 1 and part 2) identifies this and many other flaws in the study.
There are some studies that emphasize Unix-like systems, not OSS/FS, which claim that there are at least some circumstances where Unix-like systems are more cost-effective than Windows. A Strategic Comparison of Windows vs. Unix by Paul Murphy is one such paper. It appears that many of these arguments would also apply to OSS/FS systems, since many of them are Unix-like.
Again, it’s TCO that matters, not just certain cost categories. However, given these large differences, in many situations OSS/FS has a smaller TCO than proprietary systems. At one time it was claimed that OSS/FS installation took more time, but nowadays OSS/FS systems can be purchased pre-installed and automatic installers result in equivalent installation labor. Some claim that system administration costs are higher, but studies like Sun’s suggest that in many cases the system administration costs are lower, not higher, for Unix-like systems (at least Sun’s). For example, on Unix-like systems it tends to be easier to automate tasks (because you can use a GUI but do not need to) - thus over time many manual tasks can be automated (reducing TCO). Retraining costs can be significant - but now that GNU/Linux has modern GUI desktop environments, there’s anecdotal evidence that this cost is actually quite small (I’ve yet to see serious studies quantitatively evaluating this issue). In short, it’s often hard to show that a proprietary solution’s purported advantages really help offset their demonstrably larger costs in other categories when there’s a competing mature OSS/FS product for the given function.
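As a small example of the kind of automation meant here, the following sketch (purely illustrative; the host names are placeholders and it assumes passwordless ssh is already configured) turns a recurring manual check into a script that could be run from cron across many machines.

```python
# Illustrative only: a small script of the sort Unix-like systems make easy,
# replacing a recurring manual check with an automated one.
# The server names below are placeholders; assumes ssh keys are set up.
import subprocess

SERVERS = ["web1.example.com", "web2.example.com", "db1.example.com"]

def disk_usage(host):
    """Return the 'df -h /' output for a host, or the error output on failure."""
    result = subprocess.run(["ssh", host, "df", "-h", "/"],
                            capture_output=True, text=True, timeout=30)
    return result.stdout if result.returncode == 0 else result.stderr

if __name__ == "__main__":
    for host in SERVERS:
        print(f"=== {host} ===")
        print(disk_usage(host))
```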
Does this mean that OSS/FS always has the lowest TCO? No! As I’ve repeatedly noted, it depends on its use. But the notion that OSS/FS always has the larger TCO is simply wrong.
In fairness, I must note that not all issues can be quantitatively measured, and to many they are the most important issues. The issues most important to many include freedom, protection from license litigation, and flexibility. Another issue that’s hard to measure is innovation.
For example, many organizations have chosen to use Microsoft’s products exclusively, and Microsoft is trying to exploit this through its new “Microsoft Licensing 6.0 Program.” The TIC/Sunbelt Software Microsoft Licensing Survey Results (covering March 2002) reports the impact on customers of this new licensing scheme. 80% had a negative view of the new licensing scheme, noting, for example, that the new costs for software assurance (25% of list for server and 29% of list for clients) are the highest in the industry. Of those who had done a cost analysis, an overwhelming 90% say their costs will increase if they migrate to 6.0, and 76% said their costs would increase from 20% to 300% from what they are paying now under their current 4.0 and 5.0 Microsoft Licensing plans. This survey found that 36% of corporate enterprises don’t have the necessary funds to upgrade to the Microsoft Licensing 6.0 Program. Half indicated that the new agreement would almost certainly delay their migration initiatives to new Microsoft client, server and Office productivity platforms, and 38% say they are actively seeking alternatives to Microsoft products. In New Zealand a Commerce Commission Complaint has been filed claiming that Microsoft’s pricing regime is anti-competitive. In particular, after reading the “Open License” contract, Craig Horrocks notes that Software Assurance was in fact not assuring the purchaser of software, but merely buying the “right” to upgrade to any version Microsoft releases in the period of cover. Microsoft may levy further charges on a release, and the contract does not obligate Microsoft to deliver anything in the time period.
More generally, defining an organization’s “architecture” as being that of a single vendor is sometimes called “Vendor Lock-in” or “Pottersville”, and this “solution” is a well-known AntiPattern (an AntiPattern is a “solution” that has more problems than it solves).
Historically, proprietary vendors eventually lose to vendors selling products available from multiple sources, even when their proprietary technology is (at the moment) better. Sony’s Betamax format lost to VHS in the videotape market, IBM’s microchannel architecture lost to ISA in the PC architecture market, and Sun’s NeWS lost to X-windows in the networking graphics market, all because customers prefer the reduced risk (and eventually reduced costs) of non-proprietary products. This is sometimes called “commodification”, a term disparaged by proprietary vendors and loved by users. Since users spend the money, users eventually find someone who will provide what they want, and then the other suppliers discover that they must follow or give up the market area.
With OSS/FS, users can choose between distributors, and if a supplier abandons them they can switch to another supplier. As a result, suppliers will be forced to provide good quality products and services for relatively low prices, because users can switch if they don’t. Users can even band together and maintain the product themselves (this is how the Apache project was founded), making it possible for groups of users to protect themselves from abandonment.
Proprietary vendors also litigate against those who don’t comply with their complex licensing management requirements, creating increased legal risks for users. For example, the Business Software Alliance (BSA) is a proprietary software industry organization sponsored by Microsoft, Macromedia, and Autodesk, and spends considerable time searching for and punishing companies who cannot prove they are complying. As noted in the SF Gate (Feb. 7, 2002), the BSA encourages disgruntled employees to call the BSA if they know of any license violations. “If the company refuses to settle or if the BSA feels the company is criminally negligent and deliberately ripping off software, the organization may decide to get a little nastier and organize a raid: The BSA makes its case in front of a federal court in the company’s district and applies for a court order. If the order is granted, the BSA can legally storm the company’s offices, accompanied by U.S. marshals, to search for unregistered software.”
Software Licensing by Andrew Grygus discusses the risks and costs of proprietary licensing schemes in more detail. According to their article, “the maximum penalty is $150,000 per license deficiency; typically, this is negotiated down, and a company found deficient at around $8,000 will pay a penalty of around $85,000 (and have to buy the $8,000 in software too).” For example, information services for the city of Virginia Beach, VA were practically shut down for over a month and 50 employees were tied up trying to put its licensing in order to answer a random audit demand by Microsoft. Eventually the city was fined $129,000 for missing licenses the city had probably paid for but couldn’t match to paperwork. Temple University had to pay $100,000 to the BSA, in spite of strong policies forbidding unauthorized copying.
In contrast, OSS/FS users have no fear of litigation from the use and copying of OSS/FS. Licensing issues do come up when OSS/FS software is modified and then redistributed, but to be fair, proprietary software essentially forbids this action (so it’s a completely new right). Even in this circumstance, redistributing modified OSS/FS software generally requires following only a few simple rules (depending on the license), such as giving credit to previous developers and releasing modifications under the same license as the original program.
One intriguing example is the musical instrument company Ernie Ball (described in the May 2002 issue of World Trade). A disgruntled ex-employee turned them in to the Business Software Alliance (BSA), which then arranged to have them raided by armed Federal Marshals. Ernie Ball was completely shut down for a day, and then was required to not touch any data other than what was minimally necessary to run their business. After the investigation was completed, Ernie Ball was found to be noncompliant by 8%; Ball argued that it was “nearly impossible to be totally compliant” by their rules, and felt that they were treated unfairly. The company ended up paying a $90,000 settlement, $35,000 of which were Microsoft’s legal fees. Ball decided then and there that his company would become “Microsoft free.” In one year he converted to a Linux-based network and UNIX “mainframe” using Sun’s StarOffice (Sun’s proprietary cousin to OpenOffice); he now has no Microsoft products at all, and much of the software is OSS/FS or based on OSS/FS products.
For example, in 1998 Microsoft decided against developing an Icelandic version of Windows 95 because the limited size of the market couldn’t justify the cost. Without the source code, the Icelandic people had little recourse. However, OSS/FS programs can be modified, so Icelandic support was immediately added to them, without any need for negotiation with a vendor. Users never know when they will have a specialized need not anticipated by their vendor; being able to change the source code makes it possible to support those unanticipated needs.
This history of innovation shouldn’t be surprising; OSS/FS approaches are based on the scientific method, allowing anyone to make improvements or add innovative techniques and then make them immediately available to the public. Eric Raymond has made a strong case for why innovation is more likely, not less likely, in OSS/FS projects. The Sweetcode web site reports on innovative free software. Here’s what Sweetcode says about their site: “Innovative means that the software reported here isn’t just a clone of something else or a minor add-on to something else or a port of something else or yet another implementation of a widely recognized concept... Software reported on sweetcode should surprise you in some interesting way.”
If Microsoft’s proprietary approaches were better for research, then you would expect that to be documented in the research community. However, the opposite is true; the paper “NT Religious Wars: Why Are DARPA Researchers Afraid of Windows NT?” found that, in spite of strong pressure by paying customers, computer science researchers strongly resisted basing research on Windows. Reasons given were: developers believe Windows is terrible, Windows really is terrible, Microsoft’s highly restrictive non-disclosure agreements are at odds with researcher agendas, and there is no clear technology transition path for operating system and network research products built on Windows (since only Microsoft can distribute changes to its products). Microsoft’s own secret research (later leaked as “Halloween I”) found that “Research/teaching projects on top of Linux are easily ‘disseminated’ due to the wide availability of Linux source. In particular, this often means that new research ideas are first implemented and available on Linux before they are available / incorporated into other platforms.” Stanford Law School professor Lawrence Lessig (the “special master” in Microsoft’s antitrust trial) noted that “Microsoft was using its power to protect itself against new innovation” and that Microsoft’s practices generally threaten technical innovation - not promote it.
Given an entire site dedicated to linking to innovative OSS/FS projects, OSS/FS’s demonstrated history in key innovations, Microsoft’s failure to demonstrate innovation itself, reports from IT managers supporting OSS/FS, reports of dissatisfaction by researchers and others about Microsoft’s proprietary approaches, and Microsoft’s own research finding that new research ideas are often first implemented and available on Linux before other platforms, the claim that OSS/FS quashes innovation is demonstrably false.
While I cannot quantitatively measure these issues, they (particularly the first three) are actually the most important to many.
As an alternative, you can also get unpaid support from the general community of users and developers through newsgroups, mailing lists, web sites, and other electronic forums. While this kind of support is non-traditional, many have been very satisfied with it. Indeed, in 1997 InfoWorld awarded the “Best Technical Support” award to the “Linux User Community,” beating all proprietary software vendors’ technical support. Many believe this is a side-effect of the Internet’s pervasiveness - increasingly users and developers are directly communicating with each other and finding such approaches to be more effective than the alternatives (for more on this business philosophy, see The Cluetrain Manifesto). Using this non-traditional approach effectively for support requires following certain rules; for more on these rules, consult “How to ask smart questions”. But note that there’s a choice; using OSS/FS does not require you to use non-traditional support (and follow its rules), so those who want guaranteed traditional support can pay for it just as they would for proprietary software.
There is another legal difference that’s not often mentioned. Many proprietary programs require that users permit software license audits and pay huge fees if the organization can’t prove that every use is licensed. So in some cases, if you use proprietary software, the biggest legal difference is that the vendors get to sue you.
Also, looking only at companies making money from OSS/FS misses critical issues, because that analysis looks only at the supply side and not the demand side. Consumers are saving lots of money and gaining many other benefits by using OSS/FS, so there is a strong economic basis for its success. Anyone who is saving money will fight to keep the savings, and it’s often cheaper for consumers to work together to pay for small improvements in an OSS/FS product than to keep paying and re-paying for a proprietary product. A proprietary vendor may have trouble competing with a similar OSS/FS product, because the OSS/FS product is probably much cheaper and frees the user from control by the vendor. For many, money is still involved - but it’s money saved, not money directly acquired as profit. Some OSS/FS vendors have done poorly financially - but many proprietary vendors have also done poorly too. Luckily for consumers, OSS/FS products are not tied to a particular vendor’s financial situation as much as proprietary products are.
Joel Spolsky’s “Strategy Letter V” notes that “most of the companies spending big money to develop open source software are doing it because it’s a good business strategy for them.” His argument is based on microeconomics, in particular, that every product in the marketplace has substitutes and complements. A substitute is another product you might buy if the first product is too expensive, while a complement is a product that you usually buy together with another product. Since demand for a product increases when the prices of its complements decrease, smart companies try to commoditize their products’ complements. For many companies, supporting an OSS/FS product turns a complementary product into a commodity, resulting in more sales (and money) for them.
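A toy numeric model may help make the complement argument concrete. The demand function and every coefficient below are hypothetical (this is not from Spolsky’s essay); the point is only that cheaper complements raise demand for the product itself.

```python
# Toy linear-demand sketch of the complement argument: demand for a product
# rises when its complement gets cheaper. All coefficients are hypothetical.
def units_sold(own_price, complement_price):
    return max(0.0, 10_000 - 40 * own_price - 20 * complement_price)

server_price = 100
for complement_price in (200, 0):   # proprietary OS license vs. an OSS/FS OS at $0
    units = units_sold(server_price, complement_price)
    print(f"complement at ${complement_price}: {units:,.0f} servers sold, "
          f"revenue ${units * server_price:,.0f}")
# -> 2,000 servers ($200,000) vs. 6,000 servers ($600,000) in this toy example
```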
Although many OSS/FS projects originally started with an individual working in their spare time, and there are many OSS/FS projects which can still be described that way, the “major” widely-used projects tend to no longer work that way. Instead, most major OSS/FS projects have large corporate backing with significant funds applied to them. This shift has been noted for years, and is discussed in papers such as Brian Elliott Finley’s paper Corporate Open Source Collaboration?.
Fundamentally, software is economically different than physical goods; it is infinitely replicable, it costs essentially nothing to reproduce, and it can be developed by thousands of programmers working together with little investment (driving the per-person development costs down to very small amounts). It is also durable (in theory, it can be used forever) and nonrival (users can use the same software without interfering with each other, a situation not true of physical property). Thus, the marginal cost of deploying a copy of a software package quickly approaches zero. This explains how Microsoft got so rich so quickly (by selling a product that costs nearly nothing to replicate), and why many OSS/FS developers can afford to give software away. See “Open Source-onomics: Examining some pseudo-economic arguments about Open Source” by Ganesh Prasad, which counters “several myths about the economics of Open Source.” People are already experimenting with applying OSS/FS concepts to other intellectual works, and it isn’t known how well OSS/FS concepts will apply to other fields. However, it is clear that making economic decisions based on analogies between software and physical objects is not sensible, because software has many economic characteristics that are different from physical objects.
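A toy calculation illustrates the point about marginal cost (the development cost and per-copy cost below are hypothetical): once development is paid for, the average cost of each copy falls toward the nearly-zero cost of reproduction.

```python
# Toy illustration of software's per-copy economics: average cost per copy
# approaches the (near-zero) marginal cost as the number of copies grows.
development_cost = 1_000_000   # hypothetical one-time cost to write the software
marginal_cost = 0.01           # hypothetical cost to reproduce one more copy

for copies in (1, 1_000, 1_000_000):
    average = development_cost / copies + marginal_cost
    print(f"{copies:>9,} copies: average cost per copy ${average:,.2f}")
```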
OSS/FS doesn’t require that software developers work for free; many OSS/FS products are developed or improved by employees (whose job is to do so) and/or by contract work (who contract to make specific improvements in OSS/FS products). If an organization needs to have a new capability added to an OSS/FS program, they will need to find someone to add it... and generally, that will mean paying a developer to develop the addition. The difference is that, in this model, the cost is paid for development of those specific changes to the software, and not for making copies of the software. Since copying bits is essentially a zero-cost operation today, this means that this model of payment more accurately reflects the actual costs.
Indeed, there has been a recent shift in OSS/FS away from volunteer programmers and towards paid development by experienced developers. Again, see Ganesh Prasad’s article for more information. There’s even quantitative evidence that OSS/FS developers are experienced; the Boston Consulting Group/OSDN Hacker Survey (January 31, 2002) surveyed users of SourceForge and found that OSS/FS developers had an average age of 28 and that their programming experience averaged 11 years.
OSS/FS enables inexperienced developers to gain experience and credibility, while enabling organizations to find the developers they need. Often organizations will find the developers they need by looking at the OSS/FS projects they depend on (or on related projects). Thus, lead developers of a particular OSS/FS project are more likely to be hired by organizations when those organizations need an extension or support for that project’s program. This gives both hope and incentive to inexperienced developers; if they start a new project, or visibly contribute to a project, they’re more likely to be hired to do additional work. Other developers can more easily evaluate that developer’s work (since the code is available for all to see), and the inexperienced developer gains experience by interacting with other developers. This isn’t just speculation; one of Netscape’s presenters at FOSDEM 2002 was originally a volunteer contributor to Netscape’s Mozilla project; his contributions led Netscape to offer him a job (which he accepted).
Karen Shaeffer has written an interesting piece, Prospering in the Open Source Software Era, which discusses what she views to be the effects of OSS/FS, for example, it has the disruptive effect of commoditizing what used to be proprietary property and it invites innovation (as compared to proprietary software which constrained creativity). She believes the big winners will be end users and the software developers, because “the value of software no longer resides in the code base - it resides in the developers who can quickly adapt and extend the existing open source code to enable businesses to realize their objectives concerned with emerging opportunities. This commoditization of source code represents a quantum step forward in business process efficiency - bringing the developers with the expertise into the business groups who have the innovating ideas.”
Eric Raymond’s “The Magic Cauldron” describes a number of ways to make money with OSS/FS, and also gives some evidence that 95% of all software is not developed for sale. For the vast majority of software, organizations have to pay to develop it anyway. Thus, even if OSS/FS eliminated all shrink-wrapped programs, it would hardly affect the number of jobs available for software development.
One interesting case is the “General Public License” (GPL), the most common OSS/FS license. Software covered by the GPL can be modified, but any release of that modified software must include an offer for the source code under the same GPL license. Basically, the GPL creates a consortium; anyone can use the program, but you can’t make changes to the program or use its code in another program and make the results proprietary. Since the GPL is a legal document, it can be hard for some to understand. Here is one less legal summary (posted on Slashdot):
This software contains the intellectual property of several people. Intellectual property is a valuable resource, and you cannot expect to be able to use someone else’s intellectual property in your own work for free. Many businesses and individuals are willing to trade their intellectual property in exchange for something of value; usually money. For example, in return for a sum of money, you might be granted the right to incorporate code from someone’s software program into your own. The developers of this software are willing to trade you the right to use their intellectual property in exchange for something of value. However, instead of money, the developers are willing to trade you the right to freely incorporate their code into your software in exchange for the right to freely incorporate your code [which incorporates their code] into theirs. This exchange is to be done by way of and under the terms of the GPL. If you do not think that this is a fair bargain, you are free to decline and to develop your own code or purchase it from someone else. You will still be allowed to use the software, which is awfully nice of the developers, since you probably didn’t pay them a penny for it in the first place.
Microsoft complains that the GPL does not allow them to take such code and make changes that it can keep proprietary, but this is hypocritical. Microsoft doesn’t allow others to make and distribute changes to Microsoft software at all, so the GPL grants far more rights to customers than Microsoft does.
In some cases Microsoft will release source code under its “shared source” license, but that license (which is not OSS/FS) is far more restrictive. For example, it prohibits distributing software in source or object form for commercial purposes under any circumstances. Examining Microsoft’s shared source license also shows that it has even more stringent restrictions on intellectual property rights. For example, it states that “if you sue anyone over patents that you think may apply to the Software for a person’s use of the Software, your license to the Software ends automatically,” and “the patent rights Microsoft is licensing only apply to the Software, not to any derivatives you make.” A longer analysis of this license, and the problems it causes developers, is available at http://www.shared-source.org; the FSF has also posted a press release on why they believe the GPL protects software freedoms.
It’s true that organizations that modify and release GPL’ed software must yield any patent and copyright rights for those additions they release, but such organizations do so voluntarily (no one can force anyone to modify GPL code) and with full knowledge (all GPL’ed software comes with a license clearly stating this). And such grants only apply to those particular modifications; organizations can hold other unrelated rights if they wish to do so, or develop their own software instead. Since organizations can’t make such changes at all to proprietary software in most circumstances, and generally can’t redistribute changes in the few cases where they can make changes, this is a fair exchange, and organizations get far more rights with the GPL than with proprietary licenses (including the “shared source” license). If organizations don’t like the GPL license, they can always create their own code, which was the only option even before GPL’ed code became available.
Although the GPL is sometimes called a “virus” by proprietary vendors because of the way it encourages others to also use the GPL license, it’s only fair to note that many proprietary products also have virus-like effects. Many proprietary products with proprietary data formats or protocols have “network effects,” that is, once many users begin to use that product, that group puts others who don’t use the same product at a disadvantage. For example, once some users pick a particular product such as a proprietary operating system or word processor, it becomes increasingly difficult for other users to use a different product. Over time this enforced use of a particular proprietary product also spreads “like a virus.”
Certainly many technologists and companies don’t believe Microsoft that the GPL will destroy their businesses. Many seem too busy mocking Microsoft’s claims instead (for an example, see John Lettice’s June 2001 article “ Gates: GPL will eat your economy, but BSD’s cool”). After all, Microsoft sells a product which has GPL’ed components, and still manages to hold intellectual property (see below).
Perhaps Microsoft means the GPL “destroys” intellectual property because the owners of competing software may be driven out of business. If so, this is hypocritical; Microsoft has driven many companies out of business, or bought them up at fractions of their original price. Indeed, sometimes the techniques that Microsoft used have later been proven in court to be illegal. In contrast, there are excellent reasons to believe that the GPL is on very solid legal ground. “Destruction” of one organization by another through legal competition is quite normal in capitalistic economies.
The GPL does not “destroy” intellectual property; instead, it creates a level playing field where people can contribute improvements voluntarily to a common project without having them “stolen” by others. You could think of the GPL as creating a consortium; no one is required to aid the consortium, but those who do must play by its rules. The various motivations for joining the consortium vary considerably (see the article License to FUD), but that’s true for any other consortium too. It’s understandable that Microsoft would want to take this consortium’s results and take sole ownership of derivative works, but there’s no reason to believe that a world where the GPL cannot be used is really in consumers’ best interests.
Open source gives the user the benefit of control over the technology the user is investing in... The best analogy that illustrates this benefit is with the way we buy cars. Just ask the question, “Would you buy a car with the hood welded shut?” and we all answer an emphatic “No.” So ask the follow-up question, “What do you know about modern internal-combustion engines?” and the answer for most of us is, “Not much.” We demand the ability to open the hood of our cars because it gives us, the consumer, control over the product we’ve bought and takes it away from the vendor. We can take the car back to the dealer; if he does a good job, doesn’t overcharge us and adds the features we need, we may keep taking it back to that dealer. But if he overcharges us, won’t fix the problem we are having or refuses to install that musical horn we always wanted -- well, there are 10,000 other car-repair companies that would be happy to have our business.
In the proprietary software business, the customer has no control over the technology he is building his business around. If his vendor overcharges him, refuses to fix the bug that causes his system to crash or chooses not to introduce the feature that the customer needs, the customer has no choice. This lack of control results in high cost, low reliability and lots of frustration.
To developers, source code is critical. Source code isn’t necessary to break the security of most systems, but to really fix problems or add new features it’s quite difficult without it. Microsoft’s Bill Gates has often claimed that most developers don’t need access to operating system source code, but Graham Lea’s article “Bill Gates’ roots in the trashcans of history” exposes that Gates actually extracted operating system source code himself from other companies by digging through their trash cans. Mr. Gates said, “I’d skip out on athletics and go down to this computer center. We were moving ahead very rapidly: Basic, FORTRAN, LISP, PDP-10 machine language, digging out the operating system listings from the trash and studying those.” If source code access isn’t needed by developers, why did he need it?
See also the discussion on the greater flexibility of OSS/FS.
In many cases OSS/FS is developed with and for Microsoft technology. On June 21, 2002, SourceForge listed 831 projects that use Visual Basic (a Microsoft proprietary technology) and 241 using C# (a language that originated from Microsoft). A whopping 8867 projects are listed as working in Windows. This strongly suggests that there are many OSS/FS developers who are not “anti-Microsoft.”
Microsoft says it’s primarily opposed to the GPL, but Microsoft sells a product which has GPL’ed components. Microsoft’s Interix product provides an environment which can run UNIX-based applications and scripts on the Windows NT and Windows 2000 operating systems. There’s nothing wrong with this; clearly, there are a lot of Unix applications, and since Microsoft wants to sell its operating systems, Microsoft decided to sell a way to run Unix applications on its own products. But many of the components of Interix are covered by the GPL; see Microsoft’s ftp site to see the list of Interix components that are covered by the GPL, along with a copy of the GPL text (here is my local copy). The problem is not what Microsoft is actually doing; as far as I can tell, they’re following both the letter and the spirit of the law in this product. The problem is that Microsoft says no one should use the GPL, and that no one can make money using the GPL, while simultaneously making money using the GPL. Bradley Kuhn (of the FSF) bluntly said, “It’s hypocritical for them to benefit from GPL software and criticize it at the same time.” Microsoft is certainly aware of this use of the GPL; even Microsoft Senior Vice President Craig Mundie acknowledged this use of GPL software. Kelly McNeill reported this on June 22, 2001, and when I re-checked on April 23, 2002 Microsoft was still selling GPL’ed software. A more detailed description about this use of the GPL by Microsoft is given in The Standard on June 27, 2001. Perhaps in the future Microsoft will try to remove many of these GPL’ed components so that this embarrassing state of affairs won’t continue. But even if these components are removed in the future, this doesn’t change the fact that Microsoft has managed to sell products that include GPL-covered code without losing any of its own intellectual property rights.
That being said, there are certainly many people who are encouraging specific OSS/FS products (such as Linux) so that there will be a viable competition to Microsoft, or who are using the existence of a competitor to obtain the best deal from Microsoft for their organization. This is nothing unusual - customers want to have competition for their business, and they usually have it in most other areas of business. Certainly there is a thriving competing market for computer hardware, which has resulted in many advantages for customers. The New York Times’ position is that “More than two dozen countries - including Germany and China - have begun to encourage governmental agencies to use such “open source” software ... Government units abroad and in the United States and individual computer users should look for ways to support Linux and Linux-based products. The competition it offers helps everyone.”
Naturally, if you want services besides the software itself (such as guaranteed support, training, and so on), you’ll need to pay for those things, just as you would for proprietary software. If you want to affect the future direction of the software - particularly if you need the software changed in some way to fit your needs better - then you will need to invest to create those specific modifications. Typically these investments involve hiring someone to make those changes, possibly sharing the cost with others who also need the change. Note that you only pay to change the software - you don’t pay to use it, and there is no per-copy fee; you pay only the actual cost of the changes.
For example, when IBM wanted to join the Apache group, IBM discovered there really wasn’t a mechanism to pay in money. IBM soon realized that the primary “currency” in OSS/FS is software code, so IBM turned the money into code and all turned out very well.
This also leads to an interesting effect that explains why many OSS/FS projects stay small for years, then suddenly leap into a mode where functionality and user base increase rapidly. For any particular application, there is a minimum level of acceptable functionality; below this, there will be very few users. If that minimum level is large enough, this creates an effect similar to an “energy barrier” in physics; the barrier can be large enough that most users are not willing to pay for the initial development of the project. However, at some point, someone may decide to begin the “hopeless” project anyway. The initial work may take a while, because the initial work is large and there are few who will help. However, once a minimum level of functionality is reached, a few users will start to use it, and a few of them may be willing to help (e.g., because they want the project to succeed or because they have specialized needs). At some point in this growth, it is like passing an energy barrier; the process becomes self-sustaining and grows exponentially. As the functionality increases, the number of potential users begins to increase rapidly, until suddenly the project is sufficiently usable for many users. A percentage of the user base will decide to add new features, and as the user base grows, so does the number of developers. As this repeats, there is an explosion in the program’s capabilities.
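To make the “energy barrier” analogy concrete, here is a minimal, hypothetical sketch of this threshold dynamic in Python (it is an illustration I constructed, not a model from any study cited in this paper). All of the parameters (the minimum acceptable functionality, the fraction of users who contribute, and the output per contributor) are assumptions chosen only to show the qualitative shape: a long, nearly flat start while only the founders work, then rapidly accelerating, self-sustaining growth once the barrier is crossed.

    # Hypothetical simulation of the "energy barrier" effect in OSS/FS project growth.
    # All parameter values are illustrative assumptions, not measurements.

    MIN_FUNCTIONALITY = 100.0       # the "energy barrier": features needed before the tool is usable
    POTENTIAL_USERS = 100_000       # size of the potential user community
    CONTRIB_FRACTION = 0.01         # fraction of users who contribute code back
    EFFORT_PER_CONTRIBUTOR = 1.0    # functionality added per period by each contributor

    functionality = 0.0
    users = 0

    for period in range(1, 201):
        # Developers = the founder plus the contributing fraction of current users.
        contributors = 1 + int(CONTRIB_FRACTION * users)
        functionality += EFFORT_PER_CONTRIBUTOR * contributors

        # Below the barrier almost nobody adopts; above it, adoption grows with
        # how far functionality exceeds the minimum, capped by the potential pool.
        if functionality >= MIN_FUNCTIONALITY:
            users = min(POTENTIAL_USERS,
                        int((functionality - MIN_FUNCTIONALITY) ** 1.5))

        if period % 25 == 0:
            print(f"period {period:3d}: functionality={functionality:9.0f} "
                  f"users={users:7d} contributors={contributors:6d}")

Running this sketch (with Python 3) prints essentially no users for the first hundred periods, then a rapid climb to saturation within a few dozen more periods, which is the qualitative pattern described above.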
As discussed earlier, the City of Largo, Florida supports 900 city employees using GNU/Linux, saving about $1 million a year. A BusinessWeek online article notes that Mindbridge shifted their 300-employee intranet software company from Microsoft server products and Sun Solaris to GNU/Linux; after experiencing a few minor glitches, their Chief Operating Officer and founder Scott Testa says they now couldn’t be happier, and summarizes that “...we’re saving hundreds of thousands of dollars between support contracts, upgrade contracts, and hardware.” Amazon.com saved millions of dollars by switching to GNU/Linux. Oracle’s Chairman and CEO, Larry Ellison, said that Oracle will switch to GNU/Linux to run the bulk of its business applications no later than summer 2002, replacing three Unix servers. A travel application service provider saved $170,000 in software costs during the first six months of using GNU/Linux (for both servers and the desktop); it also saved on hardware and reported that administration is cheaper too. CRN’s Test Center found that a GNU/Linux-based network (with a server and 5 workstations) cost 93% less in software than a Windows-based network, and found it to be quite capable. The article Linux as a Replacement for Windows 2000 determined that “Red Hat Linux 7.1 can be used as an alternative to Windows 2000... You will be stunned by the bang for the buck that Linux bundled free ‘open source’ software offers.”
Educational organizations have found OSS/FS software useful. The K12 Linux Terminal Server Project has set up many computer labs in elementary, middle, and high schools in the U.S. Northwest. For example, St. Mary’s School, a 450-student Pre-K through 8th grade school in Rockledge, Florida, is applying GNU/Linux using this approach. Their examples show that kids don’t find GNU/Linux hard to use and that it is quite able to support educational goals. For example, third graders put together simple web pages about their favorite Saints using a variety of OSS/FS programs: they logged into GNU/Linux systems, typed the initial content using Mozilla Composer (an OSS/FS web page editor), drew pictures of the Saints using The Gimp (an OSS/FS drawing program), and shared the results with Windows users using Samba. The page Why should open source software be used in schools? gives various examples of educational organizations that have used OSS/FS programs, as well as linking to various general documents on why educational organizations should use OSS/FS.
Many financial organizations use OSS/FS. Online brokerage E*Trade is moving its computer systems to IBM servers running GNU/Linux, citing cost savings and performance as reasons for switching to GNU/Linux (the same article also notes that clothing retailer L.L. Bean and financial services giant Salomon Smith Barney are switching to GNU/Linux as well). Merrill Lynch is switching to GNU/Linux company-wide, and hopes to save tens of millions of dollars annually within three to five years. Adam Wiggins reports on TrustCommerce’s successful transition to Linux on the desktop. An April 22, 2002 report on ZDNet, titled “More foreign banks switching to Linux”, stated that New Zealand’s TSB bank “has become the latest institution to adopt the open-source Linux operating system. According to reports, the bank is to move all its branches to the Linux platform... in Europe, BP and Banca Commerciale Italiana feature among the big companies that have moved to Linux. According to IBM, as many as 15 banks in central London are running Linux clusters.” They also mentioned that “Korean Air, which now does all its ticketing on Linux, and motorhome manufacturer Winnebago, are high-profile examples.” The Federal Aviation Air Traffic Control System Command Center in Herndon, Virginia is currently installing a system to support 2,000 concurrent users on Red Hat Linux. The system, known as the National Log, will act as a central clearinghouse database for users in air traffic centers across the country.
Some organizations are deploying GNU/Linux widely at the point of sale. Many retail cash registers are switching to GNU/Linux, according to Information Week (“Cash Registers are Ringing up Sales with Linux” by Dan Orzech, December 4, 2000, Issue 815); on September 26, 2002, The Economist noted that “Linux is fast catching on among retailers.” According to Bob Young (founder of Red Hat), BP (the petroleum company) is putting 3,000 Linux servers at gas stations. Zumiez is installing open-source software on the PCs at all its retail locations, and expects that this will cut its technology budget by between $250,000 and $500,000 a year; this includes using Evolution for email, Mozilla for web browsing (to eliminate the need for printed brochures and training manuals), and an open source spreadsheet program. Sherwin-Williams, the number one U.S. paint maker, plans to convert the computers and cash registers (not including back office support systems) in more than 2,500 stores to GNU/Linux and has hired IBM to do the job; this effort involves 9,700 NetVista desktop personal computers.
OSS/FS is also prominent in Hollywood. Back in 1996, when GNU/Linux was considered by some to be a risk, Digital Domain used GNU/Linux to generate many images in Titanic. After that, GNU/Linux burst into prominence in the film industry as many others began using it, so much so that a February 2002 article in IEEE Computer stated that “it is making rapid progress toward becoming the dominant operating system in ... motion pictures.” “Shrek” and “Lord of the Rings” used GNU/Linux to power their server farms, and DreamWorks SKG has now switched to using GNU/Linux exclusively on both the front and back ends for rendering its movies. Industrial Light & Magic converted its workstations and render farm from SGI IRIX machines to Linux in 2001 while it was working on Star Wars Episode II, and stated, “We thought converting to Linux would be a lot harder than it was.” They also found that the Linux systems are 5 times faster than their old machines, enabling them to produce much higher quality results. They also use Python extensively (an OSS/FS language), as well as a number of in-house and proprietary tools. Disney is also shifting to GNU/Linux for film animation.
Many remote imaging systems use GNU/Linux; one in particular got some press because Linux’s mascot is a Penguin. Thus, when a remote imaging system was placed at the North Pole, it was announced that Penguins invade the North Pole.
The U.S. government has been using OSS/FS, and many have suggested expanding that use. The (U.S.) President’s Information Technology Advisory Committee (PITAC) report, Recommendations of the Panel on Open Source Software For High End Computing, recommends that the U.S. “Federal government should encourage the development of open source software as an alternate path for software development for high end computing.” See the separate discussion of MITRE’s business case study of OSS (which emphasized use by the U.S. government, particularly the U.S. military). The U.S. National Imagery and Mapping Agency (NIMA) National Technical Alliance, through the National Center for Applied Technology (NCAT) consortium, funded the Open Source Prototype Research (OSPR) project. Under the OSPR project, ImageLinks Inc., Tybrin Inc., Kodak Inc., and the Florida Institute of Technology (Florida Tech) evaluated open source software development practices and demonstrated the technological advantages of open source software. The OSPR final report includes those evaluations, a survey, and various related documents; these are actually rather extensive. The final report concludes:
Open Source Software development is a paradigm shift and has enormous potential for addressing government needs. Substantial technology leverage and cost savings can be achieved with this approach. The primary challenge will be in establishing an organizational structure that is able to employ OSS methodology...

The paper Open Source and These United States by C. Justin Seiferth summarizes that:
The Department of Defense can realize significant gains by the formal adoption, support and use of open licensed systems. We can lower costs and improve the quality of our systems and the speed at which they are developed. Open Licensing can improve the morale and retention of Airmen and improve our ability to defend the nation. These benefits are accessible at any point in the acquisition cycle and even benefit deployed and operational systems. Open Licensing can reduce acquisition, development, maintenance and support costs and increased interoperability among our own systems and those of our Allies.

NetAction has proposed more OSS/FS use and encouragement by the government; see The Origins and Future of Open Source Software by Nathan Newman and The Case for Government Promotion of Open Source Software by Mitch Stoltz for their arguments.
Such benefits have not escaped the eyes of other governments. Germany intends to increase its use of OSS/FS. The Korean government announced that it plans to buy 120,000 copies of Hancom Linux Deluxe this year, enough to switch 23% of its installed base of Microsoft users to open source equivalents; by standardizing on GNU/Linux and HancomOffice, the Korean government expects savings of 80% compared with buying Microsoft products (HancomOffice isn’t OSS/FS, but GNU/Linux is). Taiwan is starting a national plan to jump-start the development and use of OSS/FS. A Linux Journal article notes many interesting international experiments and approaches; for example, Pakistan plans to install 50,000 low-cost computers running GNU/Linux in schools and colleges across the country. Finnish MPs are encouraging the use of GNU/Linux in government systems. A June 14, 2002 article in PC World also lists actions various governments are taking.
In 2002, the European Commission published an independent study, “Pooling Open Source Software”, financed by the Commission’s Interchange of Data between Administrations (IDA) programme. It recommends creating a clearinghouse to which administrations could “donate” software for re-use. This facility would concentrate on applications specific to the needs of the public sector. More specifically, the study suggests that software developed for and owned by public administrations should be issued under an open source license, and states that sharing software developed for administrations could lead to across-the-board improvements in the efficiency of the European public sector.
Peru is even contemplating passing a law requiring the use of OSS/FS in public administration (government); the rationale for doing so, besides saving money, includes supporting “Free access to public information by the citizen, Permanence of public data, and the Security of the State and citizens.” Dr. Edgar David Villanueva Nuñez (a Peruvian Congressman) has written an interesting letter supporting this law. Marc Hedlund has written a brief description of the letter; an English translation is available (from GNU in Peru, UK’s “The Register”, and Linux Today); there is a longer discussion of this available at Slashdot. Whether or not this law passes, it is an interesting development.
There have been many discussions about the advantages of OSS/FS in less developed countries. Heinz and Heinz argue in their paper Proprietary Software and Less-Developed Countries - The Argentine Case that the way proprietary software is brought to market has deep and perverse negative consequences regarding the chances of growth for less developed countries. Danny Yee’s Free Software as Appropriate Technology argues that Free Software is an appropriate technology for developing countries, using simple but clear analogies.
Librarians have also found many advantages to OSS/FS.
One interesting usage story is that of James Burgett’s Alameda County Computer Resource Center, one of the largest non-profit computer recycling centers in the United States. Its plant processes 200 tons of equipment a month in its 38,000-square-foot warehouse. It has given thousands of refurbished computers to disadvantaged people and organizations all over the world, including human rights organizations in Guatemala, the hard-up Russian space program, schools, and orphanages. All of the machines have GNU/Linux installed on them.
Summaries of government use in various countries are available from Infoworld and IDG.
Several organizations collect reports of OSS/FS use, and these might be useful sources for more information. Linux International has a set of Linux case studies/success stories. Mandrakesoft maintains a site recording the experiences of business users of the Mandrake distribution. Red Hat provides some similar information. Opensource.org includes some case studies.
Here are some other related information sources:
More recently, according to the Washington Post article Open-source Fight Flares at Pentagon, “Microsoft Corp. is aggressively lobbying the Pentagon to squelch its growing use of freely distributed computer software and switch to proprietary systems such as those sold by the software giant, according to officials familiar with the campaign... But the effort may have backfired; a MITRE report prepared for the Department of Defense (DoD) dated May 10, 2002, concluded that open source “often results in more secure, less expensive applications and that, if anything, its use should be expanded. ‘Banning open source would have immediate, broad, and strongly negative impacts on the ability of many sensitive and security-focused DOD groups to protect themselves against cyberattacks,’ said the report...” MITRE also noted that OSS “plays a more critical role in the DOD than has been generally recognized,” and it identified “249 uses of open-source systems and tools, including running a Web portal for the Defense Intelligence Agency, running network security for the Army command in Europe and support for numerous Air Force Computer Network Defense tools.” The Post article also notes that “at the Census Bureau, programmers used open-source software to launch a Web site for obtaining federal statistics for $47,000, bureau officials said. It would have cost $358,000 if proprietary software were used.”
... But Microsoft’s statements Friday suggest the company has itself been taking advantage of the very technology it has insisted would bring dire consequences to others. “I am appalled at the way Microsoft bashes open source on the one hand, while depending on it for its business on the other,” said Marshall Kirk McKusick, a leader of the FreeBSD development team.

More recently Microsoft has particularly targeted the GPL license rather than all open source licenses, claiming that the GPL is somehow anti-commercial. But this claim lacks evidence, given the large number of commercial companies (e.g., IBM, Sun, and Red Hat) who are using the GPL. Also, see this paper’s earlier note that Microsoft itself makes money by selling a product with GPL’ed components. The same article closes with this statement:
In its campaign against open-source, Microsoft has been unable to come up with examples of companies being harmed by it. One reason, said Eric von Hippel, a Massachusetts Institute of Technology professor who heads up a research effort in the field, is that virtually all the available evidence suggests that open source is “a huge advantage” to companies. “They are able to build on a common standard that is not owned by anyone,” he said. “With Windows, Microsoft owns them.”

Other related articles include Bruce Perens’ comments, Ganesh Prasad’s How Does the Capitalist View Open Source?, and the open letter Free Software Leaders Stand Together.
For general information on OSS/FS, see my list of Open Source Software / Free Software (OSS/FS) references at http://www.dwheeler.com/oss_fs_refs.html
OSS/FS has significant market share in many markets, is often the most reliable software, and in many cases has the best performance. OSS/FS scales, both in problem size and project size. OSS/FS software often has far better security, perhaps due to the possibility of worldwide review. Total cost of ownership for OSS/FS is often far less than proprietary software, particularly as the number of platforms increases. These statements are not merely opinions; these effects can be shown quantitatively, using a wide variety of measures. This doesn’t even consider other issues that are hard to measure, such as freedom from control by a single source, freedom from licensing management (with its accompanying risk of audit and litigation), and increased flexibility.
Realizing these potential OSS/FS benefits may require approaching problems in a different way. This might include using thin clients, deploying a solution by adding a feature to an OSS/FS product, and understanding the differences between the proprietary and OSS/FS models. Acquisition processes may need to change to include specifically identifying OSS/FS alternatives, since simply putting out a “request for proposal” may not yield all the viable candidates. OSS/FS products are not the best technical choice in absolutely all cases, of course; even organizations which strongly prefer OSS/FS generally have some sort of waiver process for proprietary programs. However, it’s clear that considering OSS/FS alternatives can be beneficial.
I believe OSS/FS options should be carefully considered any time software or computer hardware is needed. Organizations should ensure that their policies encourage, and not discourage, examining OSS/FS approaches when they need software.
David A. Wheeler is an expert in computer security and has a long history of working with large and high-risk software systems. His books include Software Inspection: An Industry Best Practice (published by IEEE CS Press), Ada 95: The Lovelace Tutorial (published by Springer-Verlag), and the Secure Programming for Linux and Unix HOWTO. Articles he’s written include More than a Gigabuck: Estimating GNU/Linux’s Size and The Most Important Software Innovations. Mr. Wheeler’s web site is at http://www.dwheeler.com; you may contact him at dwheeler@dwheeler.com, but you may not send him spam (he reserves the right to charge fees to those who send him spam).
You may reprint this article (unchanged) an unlimited number of times and distribute local electronic copies. You may not “mirror” this document to the public Internet or other public electronic distribution systems; mirrors interfere with ensuring that readers can immediately find and get the current version of this document. Copies clearly identified as old versions and not included in normal searches as current Internet data are fine; examples of acceptable copies are Google caches and the Internet archive’s copies. Please contact David A. Wheeler if you’d like to translate this article into another (human) language; I would love to see more freely-available translations of this document, and I will help you coordinate with others who may be translating the document into that language. This is a personal essay and not endorsed by David A. Wheeler’s employer. This article is a research article, not software nor a software manual.