Version 4/5/06
Support Material: Hackers, Hits and Chats
Keyterms: affiliate marketing, average cost, business model, cash flow, collaborative filtering, cookie file, diminishing returns, first mover advantage, fixed cost, (media) flexibility, free software, game theory, hackers, learning curve, lifetime value, lock-in, marginal cost, Metcalfe's law, network effects, open source, opportunity cost, positive spiral, purchasing process, (media) reach, (media) richness, standard, sunk cost, switching cost, tipping point, variable cost, winner takes all

Digital Economics and Strategy Issues

As the internet began to evolve as a medium for business, there was much hype that traditional economics no longer made sense. Business models were developed that focused entirely on growth and did not consider cash flow. Those times have now changed and things are certainly beginning to normalize. What is clear, however, is that the internet and digital products have some unique economic characteristics that are not as pronounced for traditional, tangible products. This material covers some of these issues.

Reach, Richness and Flexibility

Traditionally there is a trade-off between the richness, reach and flexibility of a communication. For instance, if one wants to communicate a message to a large audience quickly, then perhaps running a TV commercial makes sense (high reach, low richness). TV is, however, fairly limiting in the quality of message that can be conveyed in a 30-second commercial. At the other end of the scale, a salesperson is able to spend a significant amount of time with each customer, tailoring the marketing message to the individual customer's needs. The salesperson is also able to execute a transaction with the customer. However, the salesperson is only able to communicate with one person at a time (high richness, low reach). This trade-off in communication is widely accepted.

The internet is the first medium that allows for rich, unique and flexible communication with many people simultaneously. Each customer controls the type of information he or she receives, based on his or her needs. The unique path each customer takes through a web-site presents a unique set of information for that person. Complementing this, a site is able to tailor its presentation based on the individual's previous experiences with the site, using information from the cookie file and from server-side databases. Thus a website is able to offer a rich set of information to the marketer's entire audience, simultaneously, at little marginal cost to the marketer.

Not only can a web-site handle simultaneous interactions with target audiences that have different information needs, it is also able to facilitate many activities that are vital to the marketing of a product or service (high flexibility). This includes not only the marketing literature, but also customer service, transactions and community building. A site can thus serve a marketer's customers throughout the purchasing process.

The Long Tail

A second characteristic of the internet that follows from its reach, richness and flexibility is the notion of "The Long Tail". While the web is useful for large businesses with large audiences, much of web activity actually focuses on niche products, services and activities. While the physical world suffers from the limitations that scarcity of space creates, the abundance of space provided by the web, and the ability to find content within that large space via search engines and collaborative filtering techniques, is attractive for hard-to-find products. These products are typically discontinued from traditional retail environments, if they find their way there in the first place. More than half of Amazon's sales come from book titles outside the top 130,000, and thus books that would not be stocked at a traditional bricks-and-mortar store. These sales result from the needs of a few, but those few, aggregated across the entire internet, make a compelling business proposition. If Amazon relied on the reach of a traditional bookstore (perhaps a 20-mile radius), local demand would not justify giving such titles scarce shelf space. Collaborative filtering, in this case, helps highlight books a customer may be interested in that are related to more popular books being purchased. Thus there is a mechanism in place, through the recommendation system, that highlights books that otherwise would have been forgotten.
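To make the recommendation mechanism concrete, the following is a minimal sketch of item-based collaborative filtering of the kind described above, written in Python. The order data and title names are invented purely for illustration; this is not Amazon's actual method or data.

    # Minimal item-based collaborative filtering sketch (illustrative data only).
    from collections import defaultdict
    from itertools import combinations

    orders = [
        {"popular_novel", "obscure_history_title"},
        {"popular_novel", "obscure_history_title", "travel_guide"},
        {"popular_novel", "travel_guide"},
    ]

    # Count how often each pair of titles is purchased together.
    co_counts = defaultdict(int)
    for basket in orders:
        for a, b in combinations(sorted(basket), 2):
            co_counts[(a, b)] += 1
            co_counts[(b, a)] += 1

    def recommend(title, top_n=2):
        """Return the titles most often co-purchased with `title`."""
        related = [(other, n) for (t, other), n in co_counts.items() if t == title]
        related.sort(key=lambda pair: pair[1], reverse=True)
        return [other for other, _ in related[:top_n]]

    print(recommend("popular_novel"))
    # The obscure title surfaces because it is co-purchased with a popular one.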

In terms of using the web as a marketing medium, consider that more than half the keyword terms searched on Google in any given day were not searched the prior day. Many of those search terms are only ever searched once.

This makes the web a great medium for niche products with small but focused audiences. By using programs like Google AdWords or Google Local (you can now have your restaurant advertisement displayed when someone does a local search), or by signing up for an affiliate marketing program, small companies are able to convert their marketing budgets into a variable cost, paying only for those who show interest in their product by clicking on an advertisement.
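As a simple illustration of this shift from fixed to variable marketing cost (using entirely hypothetical figures), compare a flat-fee placement with a pay-per-click campaign:

    # Toy comparison of a fixed-cost placement versus pay-per-click advertising.
    # All figures are hypothetical.
    fixed_campaign_cost = 5000.0   # flat fee for a print or banner placement
    cost_per_click = 0.40          # assumed price paid per click
    clicks = 2000                  # interested visitors actually delivered

    ppc_cost = cost_per_click * clicks
    print(f"Fixed campaign: ${fixed_campaign_cost:,.0f} regardless of response")
    print(f"Pay-per-click:  ${ppc_cost:,.0f}, incurred only when someone clicks")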

source: The Long Tail

Cost Structure

Cost structures of digital products are very different from those of traditional manufactured goods. Costs are typically made up of fixed costs and variable costs. Fixed costs (such as rent on a manufacturing site) are fixed regardless of the number of units produced; therefore, the more units that are sold, the lower the average cost per unit. Variable costs are incurred for each unit produced. Examples include the materials for the product.
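The relationship can be illustrated with a short calculation (the figures are illustrative only, not drawn from any particular product):

    # Average cost per unit: fixed costs are spread over the units produced,
    # while the variable cost is incurred for every unit.
    def average_cost(fixed, variable_per_unit, units):
        return fixed / units + variable_per_unit

    for q in (1_000, 10_000, 100_000):
        print(q, round(average_cost(fixed=100_000, variable_per_unit=2.0, units=q), 2))
    # 1000 -> 102.0, 10000 -> 12.0, 100000 -> 3.0: average cost falls as volume grows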

Digital products' cost structures typically differ from traditional goods in two key ways:

  1. Fixed costs are sunk

  2. Variable costs are close to zero

Sunk costs are non-recoverable fixed costs, including research and development and human capital. Since these costs are sunk costs, they should not be considered in future decisions about the marketing of the product.

Digital products typically have small variable costs, and can have zero variable costs, assuming the product (software for instance) is being marketed directly from the web-site, with no distribution or packaging costs.

Thus, with a cost structure that is mostly fixed (and sunk) rather than variable, the decisions one can make about marketing the product are affected. Factors that support the ability to have near-zero marginal costs include digital delivery directly from the web-site, with no packaging, physical distribution or inventory costs.

In competitive markets, where more than one firm/product competes for the same consumer needs, competition tends to drive the price of the products close to their marginal cost, since any revenue above marginal cost contributes directly to recovering the sunk costs and, beyond that, to profit.

Cost Structures and Versioning

The unique cost structure of digital products (high fixed costs, close-to-zero marginal costs) allows for some interesting possibilities for differentiating products in the marketplace. If a firm producing automobiles wanted to introduce multiple products into the marketplace to satisfy the different needs of different customers, there would be significant fixed and variable costs associated with each style of product introduced. For digital products this is not the case. The research and development is applied to developing the core product, and this product is then altered to satisfy different markets. For the most part, the core product will be the most sophisticated product offered to the market (a high-end spreadsheet package for tax professionals, say), and the low-end product uses the same code-base, with limitations added. This is interesting to note, as the low-end product (sold at the lower price) is actually the most expensive of the products to produce, given the additional work required to limit its capabilities. Clearly, this work could be avoided to increase the margins on the low-end product, but if the functionality were not reduced, the high-end market would simply purchase the low-end product.

A simple example can illustrate how this works. The goal is to maximize the revenue generated for the product, therefore maximizing the profits (assuming zero marginal costs):

Software version casual user: Sold at $50, size of market 10,000
Software version student user: Sold at $30, size of market 50,000
Software version professional user: Sold at $150, size of market 50,000

This gives a total revenue of: $500,000 + $1,500,000 + $7,500,000 = $9.5M
Assume zero variable costs and fixed costs of $1M
Thus gross profit = $9.5M - $1M = $8.5M

Assume instead that the firm decided to launch only the high-end product at $150, without differentiation. What are the consequences? Fixed (sunk) costs will be reduced because there are lower development costs associated with developing only one version; fixed costs fall to $900,000. However, due to the high price point, there is only demand from the professional users, generating revenue of $7,500,000 (150 x 50,000). Thus the product generates a profit of $6.6M.

Now assume the firm introduces the product at $30, but again with all the functionality in place. This saves $100,000 in fixed costs (as in the previous example), so fixed costs are $900,000. The entire market will demand the product, but will have access to it at only $30. Revenue = 30 x 110,000 = $3.3M. Profit = $3.3M - $900,000 = $2.4M.

Thus both undifferentiated scenarios leave the company less profitable than the versioned approach.
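The arithmetic of the three scenarios above can be reproduced with a short calculation (a sketch of the worked example, using the same figures):

    # The versioning example above, reproduced as a quick calculation.
    versions = {                      # price, market size (from the example)
        "casual":       (50,  10_000),
        "student":      (30,  50_000),
        "professional": (150, 50_000),
    }
    fixed_versioned = 1_000_000       # fixed (sunk) cost with three versions
    fixed_single    = 900_000         # lower development cost with one version

    revenue_versioned = sum(price * size for price, size in versions.values())
    profit_versioned  = revenue_versioned - fixed_versioned              # $8.5M

    # Only the high-end product at $150: just the professional market buys.
    profit_high_only  = 150 * 50_000 - fixed_single                      # $6.6M

    # Only the full-featured product at $30: everyone buys, but cheaply.
    profit_low_only   = 30 * (10_000 + 50_000 + 50_000) - fixed_single   # $2.4M

    print(profit_versioned, profit_high_only, profit_low_only)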

Cost Structures and Product Differentiation

Given that digital products have small (or zero) marginal costs, it is possible to give away (or significantly discount) a version of the product in order to entice people to purchase a paid version in the future, as they become more sophisticated users. In the above scenario, if the marketer reduced the price of the student version to zero, it would lose $1.5M in revenue, but still realise a profit of $7M. If this strategy increases the number of student users, which in turn increases the likelihood of conversion to professional users, then in the long term this may make sense.
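Continuing the sketch above, the free-student-version scenario works out as follows:

    # Give the student version away for free, keep the other two prices.
    revenue = 50 * 10_000 + 0 * 50_000 + 150 * 50_000   # student tier now free
    profit  = revenue - 1_000_000                       # $7.0M, as stated above
    print(profit)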

The only thing the company gives up with this strategy is the opportunity cost of the forgone student revenue. It works if the company can establish lock-in and the consumers upgrade to paid versions in time.

Examples of this strategy include Blogspot (the free version of Blogger) and Zoomerang (which offers a limited version of its online survey tool for free).

Installed Base

Installed-base is the term used to reflect the marketer's customer base. The value of the installed-base to the marketer is essentially the value that can be placed on the company. Thus as marketers build their installed-base, they will consider the following issues:

Lock-in refers to the strategies/tactics put in place to more closely align the customer with a particular product in the marketplace. A marketer locks customers into its product when costs (switching costs) are associated with the customer selecting another product. In its most basic form, lock-in can come from the marketer holding data on the customer, such that the marketer can use this data to better present its products to the customer on return visits. eBay is a great example of establishing lock-in for its customer base. Its customers are essentially both the buyers and the sellers that trade on eBay. Once you have performed a transaction on eBay, you have a historical record that other users can use to determine whether they want to trade with you (building a reputation). Once you have built up a good reputation on eBay, you have created a cost associated with moving to another online auction service. Your established reputation, which in turn translates into dollars, becomes your cost to switch to another online auction system.

Switching costs refer to the costs of a customer (or an entire customer base) switching from one competing product to another. Telephone companies are a good example of companies that estimate the switching costs of consumers and then develop incentive programs to encourage consumers to switch from one competitor to another. (Surely you have experienced the phone calls encouraging you to switch!) Once the telephone company estimates that switching cost and calculates the life-time value of a customer, it will try to grow its installed base by encouraging customers to switch. As long as the switching cost is less than the life-time value, there is an incentive available to encourage a switch. If lock-in can be established within the customer base after the switch, this clearly increases the life-time value of the customer, as it increases the switching costs.

The life-time value of a customer is the value, to the company, of that customer over the time he or she remains with the company. This calculation is often used to estimate the overall value of a company. It increases as alternative revenue streams are established from the installed-base. For example, if the telephone company is able to sell additional services to its installed-base, on top of the traditional phone service, then it is able to increase this value. It may also sell data about its installed-base to third-party marketers who are interested in selling additional products.
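A sketch of how the switching-cost and life-time-value logic fits together is shown below; the discounting approach and all of the numbers are assumptions for illustration, not figures from the text:

    # Life-time value as the present value of the margin a customer generates,
    # and the maximum incentive a marketer could offer to induce a switch.
    def lifetime_value(margin_per_year, years, discount_rate=0.10):
        return sum(margin_per_year / (1 + discount_rate) ** t
                   for t in range(1, years + 1))

    ltv = lifetime_value(margin_per_year=120, years=5)   # roughly $455
    switching_cost = 150                                 # estimated cost borne by the customer

    # The marketer can spend up to (LTV - switching cost) on incentives
    # and still come out ahead on the switched customer.
    max_incentive = ltv - switching_cost
    print(round(ltv, 2), round(max_incentive, 2))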

Network Effects

Digital products that benefit from connectivity may experience network effects. The idea behind network effects is that as the number of users of the network increases, the value of the network to each user also increases. This is also known as increasing returns to scale, or demand-side economies of scale. This is clearly a different phenomenon from what we are used to from the industrial age, namely diminishing returns to scale. Diminishing returns hold that as the number of users increases, the value to each user diminishes. (Imagine if everyone owned a Porsche: the prestige of being a Porsche owner would lose its value for each owner.)

The fax effect is an excellent illustration of network effects (and increasing returns). When the fax machine was first introduced, it was an expensive product, and it needed additional owners to be useful. Thus the buyer of the first fax machine had no use for it, since there was no one to communicate with (no value). As more fax machines were purchased, the value of each fax machine to each user increased. This also drove down the costs of producing fax machines (economies of scale and the learning curve effect), which reduced their price, which increased demand, which increased the value of each machine for each user (and so it goes on). The positive spiral that resulted enabled the fax machine to become an industry standard for transmitting documents.

This is also known as Metcalfe's Law: the value of the network to each user is proportional to the number of other users. The total value of the network is therefore proportional to n x (n - 1) = n^2 - n, where n is the number of users in the network; for large n this grows approximately as n^2.
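A quick calculation, following the formula above, shows how total network value scales with the number of users:

    # Metcalfe's Law: total network value grows as n * (n - 1), roughly n^2.
    def network_value(n, value_per_link=1.0):
        return value_per_link * n * (n - 1)   # each of n users can reach (n - 1) others

    for n in (10, 100, 1000):
        print(n, network_value(n))
    # 10 -> 90, 100 -> 9,900, 1000 -> 999,000:
    # a 10x growth in users yields roughly a 100x growth in total value.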

Network effects are one of the reasons first-mover advantage can occur. As the first player in a new market, if the marketer can take advantage of network effects and create a positive spiral, it can make it very difficult for other marketers to enter the market. The first mover will also move down the learning curve very quickly, and this will reduce the average cost of the product, creating margins that later entrants will find difficult to compete with. It is important that the first mover tries to establish lock-in. Clearly this does not always work effectively. Microsoft has done a wonderful job of never being first to a market! (Netscape, the Mac.)

The combination of network effects (the more users, the more valuable the product to each user) and the learning curve effect (the more experience developing the product, the lower its average cost) leads to a "winner takes all" scenario, where markets work more effectively if one company (or standard) controls the entire market. In fact, as competing companies go head to head, when one company reaches its tipping point and experiences increasing returns due to positive spirals, the competing company will experience a negative spiral as it loses customers and its average cost per unit rises. One can also argue that it is economically inefficient to have more than one company compete in a marketplace for digital products, where all costs are fixed and sunk: each competing firm adds to the total investment made in development costs of creating the product.
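The tipping dynamic can be illustrated with a toy simulation; the reinforcement rule (each new user joins a network with probability proportional to the square of its current share, a crude stand-in for network effects) and all of the parameters are assumptions for illustration only:

    # Toy "winner takes all" simulation: with super-linear reinforcement,
    # a small early lead usually tips the whole market to one network.
    import random

    random.seed(1)
    users = {"A": 55, "B": 45}        # A starts with a slight lead

    for _ in range(10_000):           # new users arrive one at a time
        wa, wb = users["A"] ** 2, users["B"] ** 2
        pick = "A" if random.random() < wa / (wa + wb) else "B"
        users[pick] += 1

    print(users)   # typically ends heavily skewed toward one of the two networks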

Recent examples of products on the web that have benefited from network effects include MySpace, ...

Standards

Digital marketplaces become much more efficient once a standard for the marketplace is established. Standards increase the overall size of the market because the market's products become more useful to each consumer.

Standards typically evolve in different ways:

WINTEL (Windows and Intel) is an example of a proprietary standard that has helped consumers communicate with each other. It is comforting to know that the receiver of a document will be able to read the document a sender emails. Imagine if there were several competing software vendors marketing incompatible software in the office suite market. It would create a fragmented market, which would undoubtedly reduce our ability to communicate efficiently. While traditional market thinking suggests several competing players would create a better marketplace with more choice for consumers, here we see that developing a standard actually creates a more robust market (a function of a networked marketplace, where connectivity between consumers is a function of the product). In the above case, the standard is proprietary (controlled by commercial organizations). This does not have to be the case.

TCP/IP is a communications standard that allows any computer to talk to any other computer on the internet. Without this communications protocol, we would not have the internet. Thus TCP/IP is a non-proprietary standard that has enabled the worldwide internet. A non-proprietary standard (unlike the WINTEL proprietary standard) is open to the public and not owned by one company.

A couple of examples of open standards overseen by industry bodies include the language of the web, HTML, overseen by the W3C (w3c.org), and the development of 3D on the internet, overseen by the Web3D Consortium. The Internet Engineering Task Force oversees much of the standard setting related to internet architecture.

A couple of historical examples of standard setting to establish markets can help illustrate some interesting points.

The railway gauge in the United States was not always standard. Many years ago, for goods to pass all the way across the country, they had to travel on one rail track, then be unloaded and reloaded onto another train to continue the journey, due to the different rail gauges in use (at one point, there were seven different gauges!). Clearly this limited trade, and it created industries within communities whose work relied on transferring goods from one railway line to another. Once the railway gauges were made consistent, trade immediately increased.

The QWERTY example is an interesting case study of an entrenched standard. The keyboard layout that we are used to was developed many years ago in order to slow down our ability to type (it also allowed early salespeople to impress their customers, since they could type the brand name, TYPEWRITER, using the top row of the keyboard only!). Typewriter keys at the time were prone to jam if the typist typed too fast. Typewriters soon overcame this technical glitch, but QWERTY has stayed. In fact, a more effective keyboard design (the Dvorak layout) was introduced, but never had a chance to become a standard. Even if it is easier to teach new users to type using the new layout, companies would not be encouraged to adopt the new design since their current employees were familiar with the old design, and new users would rather learn a keyboard layout that gave them skills they could transfer from one organization to another. Thus, while learning the new design may have been easier, it had much less value to the user: the collective switching cost of moving users from one standard to another was too high. A detailed review of the evolution of the QWERTY standard appears in the essay Clio and the Economics of QWERTY.

The development of the internet is another interesting case study in standards. Many companies attempted to establish a proprietary standard for online networks. These included CompuServe, GEnie and Prodigy (no longer living!) as well as AOL and Microsoft (now gateways to the internet). From the early 1980s to the mid 1990s these companies competed with proprietary standards to develop their online communities, each with a blind eye turned to the internet. This changed in the mid 1990s when the realization set in that the open-standard internet was going to become the network. Even subsequent to this, Microsoft believed that it had the power to force Word to become the standard language of the web (replacing HTML). This clearly did not happen. This is a good example of an open standard becoming more robust, more quickly, than many competing proprietary standards.

When developing a new product in a new marketplace, a significant consideration a company faces relates to standard setting for the market. Should the company try to establish a proprietary standard (as FrontPage did for web authoring tools) or develop an open standard to which competing companies would have access? The second strategy assumes that this will allow the marketplace to grow faster, and that the pioneer is able to take first-mover advantage. Clearly such decisions will be a function of the current nature of the marketplace (does a standard already exist? does it make sense to join a current standard or develop a competing one?). What is clear, however, is that the success of an individual firm in a networked marketplace depends on its relationships with other firms within that marketplace. It is unlikely that one firm can provide a full end-to-end solution for a consumer that does not rely on other vendors. This dependence on other companies means there is a considerable likelihood of needing to develop alliances in order to help establish standards and interfaces across products that improve interoperability. Much work in game theory can be applied to the formation of business alliances: the incentives of the different companies need to be aligned.

Patents will need to be contributed to develop common standards, and a firm must decide whether to dominate its niche or compete in a larger marketspace. Examples of such standards battles and alliance decisions include:

  1. Sony's PlayStation 2 versus Microsoft's Xbox

  2. Instant messaging: AOL, Microsoft, Yahoo! and Google

  3. RSS versus Atom

  4. Skype

Open Source

The nature of digital goods allows the source of the product (its building blocks) to be distributed along with the product itself (the binary). Before software became a commercially viable market, this was the traditional means of development and distribution, with hackers building off each other's work. This has more recently manifested itself in the free software movement, which has subsequently renamed itself (or at least a faction of the movement) the open source software movement.

Example success stories to date:

Open Source ... A Revolution?

Before reading this, please review The Cathedral and the Bazaar.

The following looks at the development of the Open Source Initiative. Open source, as defined by the Open Source Definition, is software that is:

Given these unusual requirements, it is worth understanding the principles that allow this to happen, the business models that may make open source a better alternative, and the reasons why a company might choose open source software over proprietary alternatives. Before we do this, a brief history can establish context.

History Behind the Open Source Revolution

The following is a brief timeline of major events in the evolution of Open Source:
  1. UNIX Hacker culture, from the 1960s

  2. The GNU Project, announced in 1983

  3. The Free Software Foundation (FSF), established in 1985

  4. Linux development, established in 1991

  5. The Cathedral and the Bazaar paper, 1997

  6. Netscape's Mozilla project, 1998

  7. Firefox launched, 2004

Briefly, the UNIX hacker culture, as described in A Brief History of Hackerdom, was the birthplace of the movement that would later become known as the Open Source movement. Hackers enjoyed the pure creativity of developing software and sharing that code with fellow hackers, in order to expand the overall knowledge-base of the hacker community (collective invention). This was therefore a very cooperative environment. The evolution at this time was somewhat limited with respect to the available platforms to work with, and the ability of the community to communicate with each other. Typically, small communities would evolve around specific proprietary technologies (e.g. Digital's PDP-10 series). Eventually the technology would be discontinued and new knowledge development was required.

In 1985, Richard Stallman, tiring of this state of affairs, determined that he wanted to develop a set of software tools, and an operating system, that were freely available for anyone to use, based on open standards. He established the Free Software Foundation, which supported the GNU Project (GNU's Not Unix --- a recursive acronym, typical of the community!). This project is still thriving today, and the GPL license (known as copyleft) was developed for software written for the GNU project. The GPL license stipulates that the source code must remain available and that derivative works must be distributed under the same license. The GNU project has been very successful in most respects, but it had not been able to deliver a working operating system. The UNIX operating systems had fragmented (forked) and were not a viable proposition for the developing PC market, which was becoming increasingly controlled by Microsoft and the Windows operating system. This was a very frustrating situation for the hacker community, which did not appreciate the dominant position of the market leader (it used to hold disdain for IBM for the same reason).

Some factions of the culture actually believed that software should fundamentally be free for all, and that it was morally wrong to charge for it (Stallman's GNU philosophy, hence the term "free software"). This was not the case for the entire community, however. (Eric Raymond's paper The Magic Cauldron establishes viable business models, and hence the Open Source movement.) The fundamental issue was the correct use of the term "free." While people assume it refers to the cost of the software, it is actually used to highlight freedom of use. Thus the user is free to use the software as he or she desires, by accessing the source code and making any necessary changes. The user can then share these changes with anyone who wants to adopt them. This type of sharing and development leads to rapid development cycles for products (sometimes daily, in the case of the early days of Linux!).

This climate created an opportunity for someone to initiate the development of an operating system that would clearly get tremendous support from the hacker community. Linus Torvalds, a student in Finland at the time, decided he wanted to develop an operating system that allowed him to do more than the Minix operating system he was working with would allow. Minix was developed by professor Andrew Tanenbaum for use in teaching computer science classes. Torvalds developed his first beta version of Linux under Minix, initially relying on some Minix components; these were soon replaced, however, as the Minix license was more limiting than the GPL license (of the GNU project). Linus announced his project on a Minix discussion group (hosted at udel at the time!) in 1991. The Linux operating system is the outcome of that early beginning. The excellent essay In the Beginning was the Command Line highlights the differences between the major operating systems.

Pre-conditions for the recent success of Linux include:

As the Linux operating system has become more popular, and the writings of Eric Raymond (a major hacker) have allowed us to understand the phenomenon more clearly, other companies have paid close attention and considered the commercial opportunities of open source development. With its announcement of the Mozilla project, Netscape became the first mainstream company (a previous Wall Street darling!) to switch its product to open source. This allowed the movement to gain additional momentum (and subsequently manifested itself in the Firefox browser).

How does Open Source Work?

Successful open source projects require, above all, leadership and management. While this may seem obvious, it is critical if one wishes to get volunteer contributions of effort in developing the project, when that effort is a finite and scarce resource. The project also needs to be very interesting and useful to the hacker community (accomplishing something that is important to others, not simply to the project leader). Leadership involves communicating with the community and rewarding those in the community for the work they provide. Unlike traditional work, where the reward is purely economic, in the open source community the reward relates to ego (being recognized in the readme file), seeing work implemented quickly (instant gratification) and being able to take advantage of the network effects of open source development (i.e. each person contributes a little, but gains from everyone else's contributions). This is also a political process, as developers essentially form alliances, and thus it has game theory implications.

The developer community, as a volunteer resource, is finite. This leads to markets that can effectively support only one open source project. We see the winner-takes-all phenomenon at play: if there are two open source projects, the market will tip towards one of them as a positive spiral takes effect; the market leader will survive and become robust, and the others will diminish (but not die, since they are open source! Closed source projects die; open source projects lie dormant until they are picked up by someone else. This is an important distinction.)

The Advantages of Open Source

The network effects of developing in an open source environment can be very powerful. By engaging many developers, each developer benefits from all the other developers on the project. Thus, as more developers work on a single open source project, that project becomes more rewarding for each developer. The outcome of this effect is that it can lead to rapid development cycles for the software. This development model, if implemented effectively (as in the Linux process), is more robust and leads to a greater evolution of the product than the closed source development model. It can allow a market leader to move down the learning curve more rapidly, increasing its market leadership. It can also allow a market follower to gain momentum on a closed-source market leader, something that is very hard to accomplish for a closed source competitor in a networked economy!

Open source also guarantees you (if you are considering an open source alternative) or your customers (if you are marketing an open source solution) against lock-in. Open source provides alternative vendor options, as well as guaranteeing the life of the software. If you purchase a closed source commercial solution, that solution is only workable as long as the vendor that provided it remains in business, and the vendor's business goals remain aligned with that software solution. An open source solution allows you to develop and maintain the code yourself, or to switch to another open source vendor of the code. As a company marketing an open source solution, you can use this to your advantage by stipulating that you are allowing your customers freedom of choice. This issue also relates to being dependent on a closed source platform for your software solution to function: if the platform were open source (Linux vs. Windows), software developers would have much more control over the development of their own software!

Microsoft is not passive about the threat of the open source initiatives, as highlighted in the Halloween Memo, annotated by Raymond.

One of the often-cited disadvantages of open source is the free-rider effect. This argument has been put forth along with the essay Tragedy of the Commons. Clearly, however, in the case of open source development the code-base is not a finite resource that diminishes in quality with additional users, whether they are users that contribute to the evolution of the code or users that simply use the product for free without contributing any resources (economic or developmental). The latter, the free-riders, do not have a negative effect on the resource, assuming they are not a burden to the community (asking questions etc.). In fact, it is likely that this group will adopt a commercial version of an open source product, thus paying for customer service and after-sales support and helping broaden the market for the open source solution. Those that use the product for free without contributing to the development effort will become more expert in the product, and are potential contributors to the knowledge-base at some point in the future when additional needs for the product arise. This group is therefore likely to get locked in to the product at the initial stages, and perhaps become contributors at a later stage.

Open vs. Closed: An Economic Perspective

Open source software is available for free; commercial versions of the same open source software may also be available at a price. These versions include customer service, packaging, detailed instructions and free upgrades (Red Hat's version of Linux, for example). The question is, does this pricing structure make economic sense? Should software be sold on a per-unit basis to recover development costs, or sold on the basis of charging for ongoing support?

The factory, industrial-age model would suggest charging on a per-unit basis for the intellectual property of the code (closed source). While this makes clear sense for automobiles and houses, which carry significant variable costs per unit, software and other digital products tend to have very small variable costs (close to zero marginal cost). The costs associated with these products are fixed and sunk (development costs). Thus the costs associated with sales of additional units are typically those for product support after the sale. This support is important if the product is to be effective for its users in the medium to long term. Charging on a per-unit basis, to try to recover sunk costs, therefore creates an incentive to develop software that is purchased but not used (no need for customer support). Because the customer support center is considered a cost center, after-sales support will be limited, which in turn leads to under-served customers. A product that is given away for free, but has a paid alternative that supports the customer service infrastructure (the open source license requires that a free alternative remain available from those offering commercial versions), makes perfect economic sense. This is the strategy adopted by Red Hat, and it actually extends the market for the software beyond its traditional base of hackers. This argument supports the economics of pricing open source software, but it can also be applied to ALL types of software.

Life Cycle of the Open Source Process

It is important to consider the product life cycle when deciding when to "open source" a project. Clearly some context has to be established, such that external developers will be interested enough to contribute their finite resources to the project; thus an alpha version of the product needs to be complete. This was the case before Linus announced the Linux operating system to the Minix news group. On the other hand, the project should not be so mature that it is no longer interesting for external developers (their ability to contribute becomes marginal). Since they were not part of the evolutionary process of earlier development, it is hard to engage them at later stages.

An argument can be made for offering a product open source, at a later stage in the life cycle, to extend the product's life while shifting internal development resources to new development efforts. This will help guarantee the life of the product for the current installed base until they switch to the new product.

Workable Business Models for Open Source

As highlighted in The Magic Cauldron, there are a number of business models that rationalize the open source motive. The major models are highlighted below; refer to The Magic Cauldron for others.

Cost-Sharing Approach: Where companies share common needs with respect to technology, it can make sense to share resources in order to develop common technologies that help each business. Clearly this should be done in areas that do not offer competitive advantages to individual businesses, but many business processes fall into this category. The Apache server is a very good example. The Apache web server is the leading server in terms of market share according to the Netcraft survey of web servers. A web server is clearly critical to the running of a business, and the options available to those implementing a server are essentially a commercial closed source product or an open source alternative.

While choosing an open source alternative may appear foolhardy, the market suggests this is what many are doing, and it does make sense. The code was initially developed by the NCSA team that developed the Mosaic browser. As many of the team left to join Netscape, the code was not maintained, and those using the server were not getting any support. They decided to collaborate and continue developing the product, using the open source model. They have proven very successful. Each contributing participant helps improve the code-base, so the product is clearly able to take advantage of network effects as each participant benefits from the others' participation. Since the Apache server includes the source code, each user is able to modify the code to their specific needs, and the lifetime of the server is guaranteed beyond any one development team.

The Apache Software Foundation provides support for the Apache community of open source software projects.

Risk-Sharing Approach: Similar to the cost-sharing approach, for areas that are critical to the effective functioning of the organization, but not deemed a competitive advantage, it makes sense to pool resources with other companies to develop a technology that is then available for all to use. This is a better alternative than simply developing it internally, and then (potentially) losing the internal developers and realising the code does not live beyond the lifetime of the original developers. This occurs more frequently than we would care to admit!

Market-Positioning Approach: This is the strategy Netscape adopted with the Mozilla project. Netscape was losing significant market share in the client-side browser market and was in jeopardy of losing its client-side franchise altogether. While this did not have much impact on revenue (since most of its browsers were given away), it was important to have a stake in the client side to protect its server-side market. If Microsoft could own the client-side market, it could start dictating specifications that would force Microsoft's server-side products to become proprietary industry standards. This is clearly not a good situation for Netscape, or for the market as a whole. By open sourcing the client side, Netscape guaranteed the future of the browser, without regard for any revenue generation (since the browser was not generating revenue anyway)!

Free Product, Pay for Service: This is the strategy adopted by Red Hat with its Linux operating system. Red Hat sells its operating system on a CD, with customer support, instructions and upgrade options. It also offers the source code for the product for free on its website (a requirement of the open source definition). Thus the price of the product is associated with the additional support that Red Hat offers. Services such as Red Hat's essentially broaden the market for the Linux operating system by making it available to people beyond the hacker community.

The Ecology of an Open Source Marketplace

Looking at the Linux market is instructive in understanding how an open source marketplace can work. The Linux operating system is available at no cost, for all to download, from the internet (check freshmeat and linux.com). Many hackers have freely contributed to the initial code developed by Linus Torvalds in 1991. They have essentially built an operating system that is more stable and robust than any commercial competitor (Windows, Mac) in a much shorter timeframe. Unfortunately it is only available, under these conditions, to fellow hackers, as the complexity involved in using the freely available version is too much for the average PC user. (Author included!)

This creates a market space for commercial Linux providers (like Red Hat) who can charge for a commercial-grade version of the software that targets more typical PC users, helping broaden the Linux franchise (since it competes with Microsoft, this is a good thing for the hacker community). Since the software license for Linux (the GPL) allows anyone to charge for the product, hackers do not feel this behavior discriminates against them (since they could do the same). The license also requires each commercial provider to make a version available at no cost, which they do, without the support.

The commercial providers, on top of providing customer support, upgrades and packaging, can also target additional resources at parts of the Linux system that are not as appealing to the hacker community. Clearly one area that needs work at this point is the development of an effective graphical user interface (GUI). Since this is particularly important to the commercial market (and less important to the hacker community), resources can be supplemented by the commercial providers. The GNOME and KDE projects are focused on some of these issues. Commercial providers also hire some of the hacker community, and therefore provide an economic incentive to develop the technology.

An Ideal Marketplace?

In a networked marketplace we know that large enterprises can take advantage of network effects, reduce their average costs and speed their progression down the learning curve. All of these issues suggest that the larger the market share, the more efficiently a company can operate. The company with the largest market share is able to offer better (more useful) products at lower costs. In fact, if the market were a monopoly, the monopolist would be able to maximize these advantages. This is unlike traditional markets (the auto industry etc.), which do not scale as well and where, at some point, size creates inefficiencies (decreasing returns to scale, a function of problems with communications etc.). The problem with a monopolist marketplace is: who can control the behavior of the monopolist?

One can then argue that an "ideal" market, in a networked marketplace, is one where the technology innovation takes full advantage of network effects, learning curves and so on, but the marketplace for the consumer product remains competitive. Thus we would have an open source development model, where all development effort is focused on a single code-base (clearly, if all energy is focused on one solution, we get a better product than if there were multiple simultaneous, fragmented, proprietary efforts), with commercial competition between the vendors (as in the Red Hat Linux market) who develop commercial distributions of the product.

Issues to resolve in the Open Source development community

Five issues to resolve:
  1. User interface design

  2. Documentation

  3. Feature-centric development

  4. Programming for self

  5. Religious blindness

source: The Cathedral and the Bazaar; Fundamental issues with open source software development