First Monday

Management and Virtual Decentralised Networks: The Linux Project by George N. Dafermos

Abstract
This paper examines the latest of paradigms - the Virtual Network(ed) Organisation - and whether geographically dispersed knowledge workers can virtually collaborate on a project with no central planning. Co-ordination, management and the role of knowledge arise as the central areas of focus. The Linux Project and its development model are selected as the case for analysis, and the critical success factors of this organisational design are identified. The study proceeds to formulate a framework that can be applied to all kinds of virtual decentralised work, and concludes that value creation is maximized when there is intense interaction and uninhibited sharing of information between the organisation and the surrounding community. Therefore, the potential success or failure of this organisational paradigm depends on the degree of dedication and involvement by the surrounding community.

Contents

Introduction
From Hierarchies to Networks
Researching an Emerging Paradigm
The Linux Project
Microsoft vs. Linux
The New Paradigm
Conclusions
 
 

++++++++++

Introduction

The last century had a great impact on organisational structure and management. During this period, organisations gradually evolved from 'bureaucratic dinosaurs' into more flexible and entrepreneurial designs. As a consequence, organisations have revised their management practices to cope with the constantly growing complexity of the business landscape and to take advantage of a unique competitive advantage - knowledge.

At the same time, technological breakthroughs in connectivity have extended the reach of organisations and individuals alike, to the point where access to an almost unlimited wealth of resources, without the intervention of any central authority, is feasible.

These technological achievements have enabled organisations to become more centralised or more decentralised according to their strategic orientation, and have further enhanced the efficiency of managing global business processes. However, centralisation is still the prevailing mode of managing, despite the increased desirability of decentralised operations.

In the light of the volatility and competitiveness that the new world of business has brought with it, new perceptions of the organisation and management have flourished. These perceptions are termed paradigms, and this study examines the latest: the 'Virtual or Network(ed) Organisation'.

The Linux Project is an example of this emerging paradigm as it has defied the rules of geography and centralisation and has been growing organically under no central planning for the last ten years.

It is being co-developed by thousands of globally dispersed individuals who are empowered by electronic networks to jointly co-ordinate their efforts, and it has recently gained the attention of the business world for a business model that represents a serious threat to leading software companies, especially Microsoft Corporation.

Rationale

To date, the existing organisational and management theory that examines the "virtual - network(ed) organisation" is not clear: it provides little more than a basic account of how rapid technological developments create business opportunities to be seized by flexible organisations in a global, volatile marketplace.

Similarly, no in-depth analysis has been carried out regarding the management of "virtual organisations" and the key success factors that play a decisive role in the viability and potential success or failure of this fluid organisational structure.

Objectives

This primary research focuses on the management of decentralised network structures and on whether virtual and decentralised collaboration is feasible, particularly in the absence of central planning.

It attempts to analyse the Linux Project, to identify the crucial success factors behind this novel organisational model with emphasis on its management, and to investigate whether the adoption of this model in other industries is likely to be successful.

It also seeks to provide a managerial framework that can theoretically be applied to industries other than the software industry. The prospective opportunities and limitations of the framework's adoption are analysed.
 
 

++++++++++

From Hierarchies to Networks

The Evolution of the Organisation

Science seeks to solve problems

The concept of hierarchy is built on three assumptions: the environment is stable, the processes are predictable and the output is given. Obviously, these assumptions no longer apply to today's business landscape.

Hierarchies were first developed to run military and religious organisations, but hierarchies with many layers only became the accepted design of business organisations in the 20th century. In 1911, F.W. Taylor published The Principles of Scientific Management, arguing that efficiency and productivity are maximized by applying scientific methods to work. When he started working, he realised that the most crucial asset of doing business - knowledge, and particularly technical know-how about production - was locked in the heads of the workers of the time. He was the first to develop a methodology for converting tacit knowledge into explicit knowledge, with the intention of enabling managers to understand the production process. Armed with a stopwatch, he embarked on his 'time-and-motion' studies: by observing skilled workers, he showed that every task, once broken down into many small steps, could easily be disseminated as knowledge throughout the organisation. As learning no longer required months of apprenticeship, power - knowledge about production - passed from workers to managers. Ironically, the man who grasped the significance of communicating knowledge throughout the organisation had formulated a framework that regarded the organisation as a machine and the workers as cogs.

Enter the organisation
Shortly after Taylor, H. Fayol (1949) elaborated a managerial framework. He focused on the efficiency of the production process and reinforced Taylor's view that specialisation is essential, along with constant supervision, and that no organisation can prosper without a set of rules to 'control and command'. That was the part of his 'story' that was well accepted at the time.

The other side of his 'story' was heretical for its time, but utterly prophetic. He rejected the abuse of managerial power, since authority is not to be conceived of apart from responsibility. Moreover, he was the first to identify the main weakness of hierarchy - breakdown in communication (see Figure 1) - and pointed out that employees should not be seen as cogs in a machine. Despite his insight that hierarchy does not (always) work, he concluded that a "scalar chain" (hierarchical chain) of authority and command is inevitable as long as mass production and stability are the objectives.
 
 

Section E needs to contact section O in a business whose scalar chain is the double ladder F-A-P. By following the line of authority, the message must climb the ladder from E to A and then descend from A to O, stopping at each rung; the reply must then climb from O to A and descend from A to E.
Evidently it is much easier and faster to go directly from E to O, but bureaucracy does not allow that to happen very often [1].

Figure 1: The Scalar Chain of Authority & Breakdown in Communication



Bureaucracy is the inevitable organisational design

Taylor showed the way, Fayol provided a set of rules, and Weber evangelised the adoption of bureaucracy as the rational organisational design. His writings were so influential that modern management theory is founded on Weber's account of bureaucracy. First, he drew the distinction between 'power-force' and authority: 'power-force' implies that management forces employees to act, whereas 'authority' implies that managers give directions on reasonable grounds, based on well-known legitimate rules. Weber was convinced of the superiority of bureaucratic authority (legal authority with a bureaucratic administrative staff, as he termed it). He based his analysis on the evidence that long-lived, successfully stable organisations like the army are held together by clear rules delivered by 'officers'.

To deal with the complexity of increasingly larger organisations, an 'administration system' should be enforced to control the flow of knowledge and the employees, and in this way, trigger unprecedented efficiency in (mass) production. In his words:
 
 

"[bureaucracy] is capable of the highest degree of efficiency ... as inevitable as the precision machinery in the mass production of goods ... it makes possible the calculability of results for the heads of the organisation ... is the most important mechanism for the administration of everyday profane affairs."
Ironically, he replied to the critics of bureaucracy by arguing that any other structure is an illusion, and that only by a reversion in every field - political, religious, economic - to more flexible structures would it be possible to escape the influence of 'bureaucracy'. History has since proven him right. It was now the turn of 'industry men' to shape the organisation and managerial minds along Weber-Taylorist lines.

The American Revolution

The 1840s and the U.S. railroads marked the beginning of a great wave of organisational change that has evolved into the modern corporation (Chandler, 1977). A. Chandler was the first "business historian". His study Strategy and Structure (1962) shed light on the American corporation, focusing on General Motors (run by A. Sloan in the 1930s) and du Pont. Chandler analysed the defects of the centralised, functionally departmentalised structure and argued that the bigger a company grows, the more inefficient its hierarchy becomes, because management can no longer deal with the increasing complexity of co-ordinating people [2]. He concluded that decentralisation would flourish, as it allows large companies to establish an organisational platform for better communication and co-ordination.

A few years later, Chandler laid emphasis on control and on economising on transaction costs, and explained how decentralisation and hierarchy could fit together. He characterised this as the 'decentralised line-and-staff concept of the organisation', in which line managers were responsible for directing the men involved in the basic functions of the enterprise, and functional managers (the staff executives) were responsible for setting standards [3]. Express delegation of authority was of paramount importance [4].

The Corporate Man

Henry Ford adored the idea of organisations modelled on machines, with workers regarded as 'cogs', and introduced the 'assembly line': a system of assembly-line production (known as Fordism) based on divided labour linked together mechanistically. These 'cogs' were to be steered in a systematic way to boost efficiency and had no scope to innovate, think or improvise. If every cog was assigned a specific, repetitive task, everything was supposed to go well. Everything was organised through a pyramid of control designed in a purely bureaucratic fashion. The key word was mass: mass production, mass markets.

The Beginning of the End

By the 1970s this model had begun to falter. A slowdown in productivity, international competition and upward pressure on wages squeezed profits (Clarke and Clegg, 1998). These bureaucracies decided to expand and embark on a process of 'internationalisation', following Chandler's guidelines on decentralisation. The aim was cost reduction, but economising on costs by using cheaper labour was not enough to satisfy consumers and respond to overseas competition. In a saturated market with no expanding consumer demand, the corporate mantra "any colour they want as long as it is black" no longer worked. Consumers started complaining about low quality, workers were crying out for more rights and even sabotaged the production process, and competitors from abroad - particularly Japanese carmakers and consumer electronics firms - started invading the U.S. and European markets. Management writers attributed the success of the Japanese threat to a different management paradigm.

Several analysts argued that managers had paid no attention to organisational thinkers since the revolution brought about by the assembly line. In 1963, T. Burns recognised that the hierarchy of command is maintained by the assumption that the only man who knows - or should know - all about the company is the man at the top, and that this assumption is entirely mistaken [5]. He categorised organisations according to two opposite management systems (Table 1): the mechanistic and the organismic.
 
 
 

A mechanistic management system is appropriate to stable conditions. It is characterised by:
- Hierarchic structure of control, authority and communication
- A reinforcement of the hierarchic structure by the location of knowledge of actualities exclusively at the top of the hierarchy
- Vertical interaction between the members of the concern, ie. between superior and subordinate

The organismic form is appropriate to changing conditions. It is characterised by:
- Network structure of control
- Omniscience no longer imputed to the head of the concern; knowledge may be located anywhere in the network; the location becoming the centre of authority
- Lateral rather than vertical direction of communication through the organisation
- A content of communication which consists of information and advice rather than instructions and decisions

Table 1: Mechanistic and Organismic Style of Management.
Source: [6]

 

The management of a successful electronics company struck him as 'dangerous thinking': written communication was discouraged and any individual's job was defined as little as possible, so that it would 'shape itself' to his special abilities and initiative (Burns and Stalker, 1961). This management style is obviously the organismic.

What he realised is that turbulent times call for different structures [7]. He concluded that a mechanistic system is appropriate to stable conditions, whereas the organismic form suits changing conditions, which give rise to fresh problems and unforeseen requirements for action that cannot be broken down or distributed automatically among the functional roles defined within a hierarchic structure.

Similarly, Lawrence and Lorsch suggested that managers can no longer be concerned about the one best way to organise (Lawrence and Lorsch, 1967). All they suggested was that the more complex the environment becomes, the more flexible the structure should be to allow for rapid responses.

The Fall of the Old Order

These organisations made standardised products for relatively stable national markets. Their aim was consistency and control; creativity and initiative were frowned upon. But with the liberalisation of world trade, competition became fierce and consumers started demanding products tailored to their needs (Leadbeater, 2000).

This signalled the fall of the command-and-control hierarchy. There were just two problems. First, mass-production oriented processes had been 'stove-piped' into non-communicating business functions. Second, "workers told to 'check your brain at the door', were ill-equipped for the dynamic changes about to wreak havoc on the corporation" [8]. The bureaucratic firm was not qualified to generate knowledge and continuous learning to adapt to these turbulent times.

The Japanese Threat

Eiji Toyoda visited the Ford plant in the 1950s. He realised that much would have to be done differently in Japan (Cusumano, 1985), because the Japanese market demanded many types of cars and thus a more flexible manufacturing system was needed.

Toyota understood that such flexibility and speed could only be delivered by establishing close relationships with suppliers on the basis of mutual benefit. Suppliers became involved in critical decisions, and instead of vertically integrating with them, Toyota preferred to use 'just-in-time' (JIT) systems. JIT systems establish complex relations with component subcontractors so that supplies arrive exactly when needed. The benefits are minimisation of inventory costs and acceleration of innovation, achieved because personnel and ideas are freely exchanged between the partners that make up the subcontracting network (Clarke and Clegg, 1998). Toyota had come up with a method of creating, sharing and disseminating knowledge.

Toyota also gave rise to significant innovations in production (e.g. jidoka). However, the most distinctive innovation was managerial. Toyota was built on Taylor's principles and had a hierarchy with many layers; what it did differently was to empower its employees. It introduced radical methods such as job rotation and the project form of organising, so that new skills were built into employees as they became more flexible and mobile. Workers were encouraged to develop more skills, and work content was not inexorably simplified as in the typical organisation under Fordism. Toyota used self-managing teams in which 'team members' allocated tasks internally without any intervention from higher management, and when a team had reached the limits of improvement, its members would move to other areas to pick up new skills. It was the first time that workers could stop the machines and make crucial decisions. Of course, a sense of trust developed among the network of partners; after all, workers, management and suppliers were a 'family'.

The Japanese had the same objective as their competitors: continuous improvement in production. But they realised the strategic importance of the human element and encouraged their employees to become kaizen (continuous improvement) conscious by developing as many skills as possible. They also seized the opportunities provided by networking between suppliers and the firm to become faster and more flexible and to reduce costs. In addition, this model does not take for granted that customers will buy whatever they are offered: the whole production system seeks to ensure quality. This management style is called "lean production".

Quality is everything

In the 1980s, quality was the buzzword. Managers thought that if the lean production model worked for the Japanese, then it would work everywhere. This signalled the era of the Quality Movement. At its centre is a managerial philosophy that seeks to increase organisational flexibility, enabling companies to adapt to changes in the marketplace and swiftly adjust business processes [9]. Quality became synonymous with change, employee empowerment and customer focus.

The new philosophy was termed TQM (total quality management). Dawson and Palmer (1995) identified TQM as "a management philosophy of change which is based on the view that change is necessary to keep pace with dynamic external environments and continually improve existing operating systems. Those organisations embracing this new philosophy support an ideology of participation and collaboration through involving employees in decision-making" [10]. From the late 1970s onwards, all (Western) corporations jumped on the TQM bandwagon, evangelising change. The only sad thing was that the emphasis was on incremental innovation instead of radical change (Clarke and Clegg, 1998). Depending on the organisation and how the TQM approach was implemented, it worked reasonably well until about 1990.

Learning means evolving

In 1990, P. Senge put into context what was already known (Senge, 1990): the role of knowledge is so crucial that no organisation can afford not to extend its existing knowledge and create new knowledge. He dwelled upon social systems theory concepts and turned them upstream. The organisation as a social system is an information model whose viability depends upon its capacity for self-design [11]. The difference between 'learning' and TQM was that the emphasis was now on organisational rather than individual learning. The point was that organisations should learn to do different things in different ways [12]. Characteristically, Hodgetts, Luthans and Lee (1994) conceptualise this as the transition from an adaptive organisation to one that keeps ahead of change (Figure 2).
 
 

Figure 2: New Paradigm Organisation
Source: [13]



The 'learning paradigm' suggested that 'power' tends to shift towards smaller firms as they learn relatively faster (knowledge flows more freely within small than large firms due to absence of bureaucratic impediments) (Rothwell, 1992).

Furthermore, G. Morgan argued that learning is maximized in flexible, decentralised modes of operation. He insisted that a decentralised networked organisation is the "best design" as long as the network is fostered rather than managed, and he likened this design to a spider web. He stressed that it is pure risk, but that modern times demand embracing risk in order to bring innovation (Morgan, 1993).

The Networked Organisation

The network structure reigns

Organisations have been forming stable or elastic networks for a long time, and their reasons for doing so vary greatly. Some attribute networking to harsh competition in deregulated global consumer markets; others claim it is the only way to gain access to new markets, new technologies and know-how. Owing to environmental forces and the emerging opportunities that firms could exploit, being close to one's environment (suppliers, customers and competitors) came to be recognised as a unique competitive advantage.

Management gurus long ago evangelised the advantages of networking and 'condemned' the disadvantages of bureaucracy and command-and-control hierarchies.

J. Naisbitt prophesied the shift to a decentralised, networked, global organisation (Table 2).
 
 
 

From -> To:
- Industrial society -> Information society
- National economy -> World economy
- Centralisation -> Decentralisation
- Hierarchies -> Networks

Table 2: Original Megatrends.
Source: [14]

 

Organisations controlled by hierarchies, in which the functional departments are kept separate, will be replaced by organisations based on teamwork, with cross-functional teams that treat people as assets [15]. Hames emphasised that this paradigm relies on open and adaptive systems that promote learning, co-operation and flexibility, and that it takes the form of networks of individuals rather than individuals or structures alone (Table 3).
 
 
 

Industrial Age -> Information Age:
- Focus on measurable outcomes -> Focus on strategic issues using participation and empowerment
- Individual accountability -> Team accountability
- Clearly differentiated-segmented organisational roles, positions and responsibilities -> Matrix arrangement - flexible positions and responsibilities
- Hierarchical, linear information flows -> Multiple interface, 'boundaryless' information networking
- Initiatives for improvement emanate from a management elite -> Initiatives for improvement emanate from all directions

Table 3: Transition from Industrial to Information Age Organisations.
Source: [16]

 

Tapscott and Caston (1993) argued that control by hierarchies is declining in efficiency as rapid technological advances favour open networked organisations (Table 4).
 
 
 
 
 

 
Closed Hierarchy vs. Open Networked Organisation:
- Structure: hierarchical -> networked
- Scope: internal/closed -> external/open
- Resource focus: capital -> human, information
- State: stable -> dynamic, changing
- Direction: management commands -> self-management
- Basis of action: control -> empowerment to act
- Basis for compensation: position in hierarchy -> competency level

Table 4: From Closed Hierarchies to Open Networked Organisations.
Source: [17]

 

The 'boundaryless' networked organisation envisages the ideal flexible production system serving niche markets, as a response to a society characterised by the decline of the ideas of mass society, mass markets and mass production: people no longer want to be identified as part of the mass (Limerick and Cunnington, 1993).

By the late 1990s, management thinkers had embraced the 'networked organisation paradigm' and had similarly dismissed the authoritative model. Typical is the comeback of Hames (1997) who again proclaimed "what hierarchy was to the 20th century, the distributed network will be to the 21st ... the network is the only organisational type capable of unguided, unprejudiced growth ... the network is the least structured organisation that can be said to have any structure at all" [18].

Mergers, Acquisitions & Strategic Alliances

Ansoff (1965) was the first to propose the notion of 'synergy' - that 2+2=5 - implying that companies could attain a competitive advantage by joining forces. Mergers, acquisitions and strategic alliances were the first wave of networking, disguised as 'internationalisation' and 'expansion'. However, most did not deliver, simply because: a) there was no 'strategic fit' between the new partners (they did not operate in the same or complementary markets and thus could not add any value); b) they were built on bureaucratic structures that impeded the flow of information, or were shattered by corporate politics; c) they appealed to managers only because it was more "fun and glamorous" to run bigger firms (Nordström and Ridderstråle, 2000); and, d) it was the expensive way to get networked (Häcki and Lighton, 2001). Nevertheless, the drivers behind strategic alliances show that in such a competitive marketplace, the only way to compete is through a network (Figure 3).
 
 

Figure 3: Alliances Driven by Economic Factors (Environmental Forces)
Source: [19]


Nowadays, it has become increasingly common to pursue organisational sustainability and economic self-interest by establishing some kind of alliance, especially in information-intensive industries that are at the forefront of upheaval and are therefore 'galvanised' by technological uncertainty (Figure 4).
 
 

Figure 4: Alliances in Technologically Unstable, Knowledge-Intensive Markets
Source: [20]



Economic Webs

An economic web [21] is a dynamic network of companies whose businesses are built around a single common platform and deliver independent elements of an overall value proposition that strengthens as more companies join (Hagel, 2000). Webs are not alliances: there is no formal relationship among the web's participants, who are independent and free to act in whatever way maximizes their profits. These features drive them into weblike behaviour. A typical example is the Microsoft-Intel ("Wintel") web, composed of companies that produce Windows-Intel-based software applications and related services for PC users. Unlike alliance networks, in which companies are invited to join by the dominant company, economic webs are open to all, and numbers equal power. The purpose of a network platform is to draw participating companies together by facilitating the exchange of knowledge among them.

The platform (a technical standard) of an economic web does not affect participating companies' relationship with the shaper (the company that owns the standard) and enables them to provide complementary products and services [22]. Two conditions must be present for a web to form: a technological platform and increasing returns [23].

The technological standard reduces the risk that companies face when making heavy investments in R&D amid technological turbulence, while the increasing returns create a dependency among participants by attracting more producers and customers (Hagel, 2000).

Outsourcing & Software

J. B. Quinn explained why software increases in strategic importance and enables networked structures, and introduced the concept of strategic outsourcing: "Concentrate the firm's resources on its core competences where it can achieve pre-eminence and provide unique value for customers" (Quinn and Hilmer, 1994). The advantages are: a) by outsourcing, the firm concentrates its resources on what it does best; b) well-formed core competences present perfect barriers to entry against potential competitors; c) a company can mobilize the ideas, the innovations and the specialist skills of its suppliers, which it would never be able to replicate itself; and, d) in rapidly changing markets with shifting technologies, this collaborative strategy reduces risk, shares know-how, speeds learning and shortens development cycles.

In 1990, he visualised the firm as a package of service activities, arguing that services, not manufacturing activities, provide the major source of value to customers. According to this view, bureaucracy has to be dismantled, as it was developed for an era in which manufacturing was the primary platform for delivering added value. He suggested that " ... there is no reason why organisations cannot be made 'infinitely flat' guided by a computer system" [24]. He investigated the transition to a "spider web" organisation, a non-hierarchical network, and stressed that innovative organisational forms depend on software [25].

The apotheosis came in 1996, when he argued that software is so pervasive that it is the primary element in all aspects of innovation, from basic research to product introduction, and that software is both the facilitator of the organisational learning that innovation requires and an excellent platform for collaboration (Quinn, Baruch and Zien, 1996).

Unbundling outsourcing

Outsourcing focuses the key resources of an organisation on its core value-adding processes. "This is not vertical integration within an enterprise but vertical and horizontal integration across organisations, including alliance partners, sales and distribution agencies, key suppliers, support organisations, and other divisions within their own company" [26].

"Focus on what you excel at and outsource the rest". Nike has struck gold by not applying its slogan (Davis and Meyer, 1998). Timberland no longer makes shoes and in the case of Dell we are talking about a factoryless company. M. Dell realised that "IBM took $700-worth of parts, sold them to a dealer for $2000 who sold them for $3000. It was still $700-worth of parts". But what he actually realised is that information can replace inventory. As D. Hunter, Chief of Dell's Supply Chain Management says: "Inventory is a substitute for information: you buy them because you are not sure of the reliability of your supplier or the demand from your customer" [27].

Virtualness & the virtual organisation

The term virtual means "not physically existing as such but made by software to do so" [28] and management literature identifies the virtual organisation as the extreme form of outsourcing or the exact opposite of Weber's bureaucracy (Table 5).
 
 
 

Modern Organisation vs. Virtual Organisation:
- Functionality in design structure -> Defunctionalised project-based design held together by network capabilities
- Hierarchy governing formal communication flows and managerial imperative the major form and basis of formal communication -> Instantaneous remote computer communication for primary interaction; increase in face-to-face informal interaction; decrease in imperative actions and increased governance through accountability in terms of parameters rather than instructions or rules
- The files -> Flexible electronic immediacy through IT
- Impersonal roles -> Networking of people from different organisations such that their sense of formal organisation roles blurs
- Specialised technical training for specific careers -> Global, cross-organisational computer-mediated projects

Table 5: Modern and Virtual Organisation Compared on Weber's Criteria.
Source: [29]

But the literature is not clear. Davidow and Malone's (1992) classic - The Virtual Corporation - identifies the virtual organisation as an organisational design enabled by technology to supersede the information controls inscribed in bureaucracy and to allow employees to communicate electronically from any location.

Charles Handy comments on the "virtual organisation" by citing a man who uses a laptop, a fax and a phone to transform his car into a mobile office, and argues that the organisation of the future may outsource all processes, with its employees communicating like that man (Handy, 1989). Elsewhere, he cites the Open University as an example, owing to the university's non-existent physical assets (Handy, 1995).

Tapscott, for his part, points out that the value chain becomes a value network as new (digital) relationships become possible (Figure 5).
 
 

Figure 5: From the Value Chain to the Digital Value Network
Source: [30]



Not surprisingly, what you mean by referring to a "virtual organisation" is subjective. Some imply an organisation that is electronically linked to its environment; others refer to a firm without offices that exists only in the dimension of computer networks.

Project-centric perspective

Moreover, few have explored the 'networking paradigm' from a project-centric perspective. Hollywood is famous for forming a 'crew' - a network of people - to make a movie (a specific project); as soon as the film is completed, the 'temporary company' is terminated (Malone and Laubacher, 1998).

Another example is Silicon Valley, where the average job tenure is two years. Individual firms come and go - temporary alliances of people pursuing specific, narrowly defined projects. In some ways, Silicon Valley performs as a large, decentralised organisation. The Valley, not its constituent firms, owns the labour pool. The Valley, through its venture capital community, starts projects, terminates them, and allocates capital among them [31].

It is apparent that an organisation or a network can be formed on a temporary basis in order to undertake a project and once this is done, the whole organisation or network will disband. Eventually, all supply chains (or organisations) might become ad hoc structures, assembled to fit the needs of a particular project and disassembled when the project ends (Malone and Laubacher, 1998).

For the purposes of this paper, the "virtual organisation" we refer to is a collection of people who interact only via electronic networks during the knowledge-intensive phases of production (ie. design).
 
 

++++++++++

Researching an Emerging Paradigm

Qualitative research

It is traditional in the social and organisational research field to apply qualitative research techniques to the study of large social units (ie. organisations) and quantitative techniques to the study of individuals (Etzioni, 1969). Researchers have tried to bridge this gap by applying quantitative techniques to the study of organisations (Lazarsfeld and Menzel, 1969; Coleman, 1969; Zelditch, 1969) but even today most researchers that focus on organisational trends/changing paradigms choose to undertake a qualitative approach, particularly the 'case study' analysed below.

Moreover, qualitative research facilitates quantitative research by acting as a precursor to the formulation of problems and to the development of instruments for quantitative research. Qualitative research may act as a source of hypotheses to be tested by quantitative research (Sieber, 1973). It is hoped that this research will trigger further investigations to test the concluding hypotheses.

Case study approach

Researchers still rely on this basic approach (in-depth case study) when they encounter a 'non thoroughly examined group phenomenon' and it can be deployed to examine a wide variety of groups (or organisations) (Festinger, Riecken and Schachter, 1956; Radloff and Helmreich, 1968; Hare and Naveh, 1986; Stones, 1982; White, 1977; Bennett, 1980).

One of the reasons behind the lack of extensive research and literature on 'virtual organisations' is simply that they represent an emerging phenomenon and organisational structure.

Therefore, the 'case study' is the only approach that can shed light and provide a profound view in the absence of a spectrum of proper examples to draw upon, as it allows a significant amount of data to be gathered and examined thoroughly. The selection of a suitable 'case study' was obvious - the last ten years have provided us with a perfect example of virtual collaboration: the constantly evolving Linux (or GNU/Linux) operating system project, which is being co-developed and maintained by thousands (the exact number is unknown) of geographically dispersed people.

It certainly fulfils all the prerequisites for our research, since all phases of the development process take place exclusively on the Internet, without any physical interaction among the developers.

To enhance the validity of the case study approach, we decided to compare the chosen 'virtual organisation' with an organisation that: a) operates in the same industry; b) is highly competent and competitive; c) has access to, and deploys, a large amount of resources; and, d) relies on centralised decision-making, management and development.

Again, the selection was apparent - the Microsoft Corporation. There could not have been a more appropriate organisation, since MS is the archetypal centralised model in the industry [32]. However, due to lack of space, the comparison between Linux and Microsoft that appears later in this paper focuses solely on the critical differences between the two models and does not offer a complete overview of Microsoft [33].

Advantages of the method

Case studies allow in-depth understanding of the group or groups under study, and they yield descriptions of group events and processes often unsurpassed by any other research procedure. Also, and at a more pragmatic level, case studies can be relatively easy to carry out and they make for fascinating reading. But the real forte of the case study approach is its power to provide grist for the theoretician's mill, enabling the investigator to formulate hypotheses that set the stage for other research methods (Forsyth, 1990).

Disadvantages

Primary Sources of Data

Observation

Because the group has not yet disbanded, we decided to observe it as it carries out its functions (through the various Linux mailing lists, Web sites, bulletin boards).

The main strength of observation is its capacity to reveal covert and hidden activities. Studies of groups in large organisations (Dalton, 1959), output regulation in industrial work groups (Roy, 1960) and pilferage at work (Ditton, 1977) all demonstrate the capacity to look behind the scenes and bring to the centre of the stage aspects of these milieux which would otherwise be inaccessible or possibly not even uncovered in the first place (Bryman, 1988).

This study, though, is peculiar in terms of the observation technique used. It does not fall into any of the categories of observation (see Scott, 1969; and, Adler and Adler, 1998 for the observation categories).

The key difference is that I observed the Linux community not physically, but virtually [34]. It is not participant observation or action research since I did not get involved in the actual development process and I did not engage in any conversation that took place in the various mailing lists and (virtual) discussion forums.

Furthermore, major issues such as authorisation to enter and explore the particular organisation, whether I should reveal my 'research identity' (overt observation/open researcher) or keep it secret (covert observation) (Schwartz and Schwartz, 1955; Schwartz and Jacobs, 1979; Whyte, 1943; Landsberger, 1958; Mayo, 1945; Roethlisberger and Dickson, 1939; Bramel and Friend, 1981; Franke, 1979; Franke and Kaul 1978; Barnard, 1938; McGregor, 1960), how to present myself (Becker, 1956; Spradley, 1979; Fontana, 1977; Malinowski, 1922; Wax, 1957; Johnson, 1976) and how to gain trust (Frey, 1993; Cicourel, 1974) are irrelevant in this study, because access to the organisation (through mailing lists, etc.) is open to the public without asking for permission. There is no question, then, of invasion of privacy or of using a deceptive observation method (Cook, 1981; Douglas, 1976; Reynolds, 1979).

This unique attribute adds to the overall objectivity and validity of our findings, as individual or group behaviour could not have changed because of our 'virtual presence'. Hence, the possibility of observer-induced bias is largely eliminated.

Interviews

Personal, in-depth, semi-structured interviews were used, conducted by phone and e-mail (the choice was left to the interviewees).

Semi-structured interviews can produce both qualitative and quantitative data (Crozier, 1964), but the reason we chose this form of interviewing was to make the interview pleasing to the persons interviewed [35]. Semi-structured interviews also provide rich, detailed data of greater value than straight question-and-answer sessions, especially when the research aim is to explore a phenomenon (Zweig, 1948). Moreover, they do not put the interviewer in an unnatural relationship with those who are researched (Roberts, 1981; Wakeford, 1981).

Overall, field researchers use a semi-structured (or unstructured) approach that is based on developing conversations with informants [36] and this strategy follows a long tradition in social research where interviews have been perceived as 'conversations with a purpose' [37]. They are usually used to complement observation and other techniques and they achieve the flexibility needed when dealing with a complex phenomenon (Roberts, 1981; Finch, 1984; Burgess, 1984).

The interviewees were selected on the basis of their involvement in the Linux project and the Open Source - Free Software movement. We contacted individuals who are recognised by the hacker community, the press and previous research on the topic as key figures. In addition, we contacted several 'commercial' GNU/Linux distributors and leading software/telecommunications companies (Intekk communications and Lucent Technologies) so as to gain a broader picture of the overall management of the 'economic web'.

This approach is dictated by the fact that this is cutting-edge research: the interviewees have to be at the forefront of technological and organisational transition/evolution and have influence over their 'surrounding community'.

Secondary Sources of Data

A large amount of secondary data have been used. Earlier in this paper, we have drawn upon the organisational literature that identifies the most significant management paradigms whilst tracing back the evolution of the organisation and management. The 'linking thread' is knowledge (access to, sharing, creation, codification, exploitation and dissemination) and its impact that explains the transition to the virtual-network(ed) organisational structure.

In the second part of the literature review, we have analysed important aspects of the software industry that help to understand the behaviour of firms and individuals in the software industry.

As far as the Linux project is concerned, we have used members' biographical writings, previous interviews with key members and descriptions of the group written by other researchers and key figures.

It should be mentioned that Microsoft is covered mostly by secondary data for the following reason: Cusumano and Selby's Microsoft Secrets (London: HarperCollins, 1995) is acknowledged to be the best research on MS so far, it is academic-oriented, entirely based on primary research and offers a complete cultural and technical overview of the company. Also, to have an alternative view, we selected an insider's book, Drummond's Revolutionaries of the Empire (New York: Three Rivers Press, 1999).

Framework of Analysis

The framework (criteria) of analysis draws upon key features of the major organisational paradigms (ie. the TQM paradigm focuses on continuous improvement, the 'learning paradigm' focuses on organisational learning) and how these are managed. This framework aims at a critical analysis of both Microsoft and the Linux project based not on a single criterion (ie. ability to innovate or flexibility) but on a complete investigation of both entities as it stems from the development process. This decision is based on the fact that "the business literature is rife with stories of performance indicators that failed to capture important aspects of a complex setting. These misattributions may occur because of causal connections that no one understands" [38].

Only through a 'total approach' like this is it possible to fully explore the management strategies and their efficiency, and to come to realise which management processes can support or empower the development of a virtual organisation and facilitate virtual, decentralised collaboration.
 
 

++++++++++

The Linux Project

Free Software & Open Source

To understand the workings of the software industry and the Linux project, we need to briefly analyse the ideology of hackers, the advent of free software and some events of historic significance [39]. The term 'hacker' is used in computer science circles to describe a talented computer programmer.

In 1971, Richard Matthew Stallman (RMS) started working at MIT's AI Lab. At the time, the AI (artificial intelligence) Lab was regarded as heaven by most computer programmers, as it gave rise to many innovations and contributed to the development of ARPAnet in 1969 (the ancestor of the Internet). Around MIT, an entire culture was built that defined programmers as scientists and as members of a networked tribe, because ARPAnet enabled researchers everywhere to exchange information, collaborate and hence accelerate technological innovation (Raymond, 1999a). After all, one of the motivations for launching ARPAnet was to connect communities of computer programmers so that they could share their programs and knowledge [40].
Programming for 'hackers' was the ultimate pleasure and nothing but programming mattered. The AI Lab had become their home. They defined themselves through programming and the hacker culture was their religion (Levy, 1984). They were scientists, and as such their creations-discoveries (software) should be available to everyone to test, justify, replicate and build upon so as to boost further scientific innovation. Software is code (or source code), and the code they developed was available to everyone. They could not conceive that software would one day become proprietary.

RMS was fascinated by this culture and spent nearly ten years at the AI Lab, until in 1981 a company called Symbolics hired away all the AI Lab programmers apart from two - one of them was RMS. The era of proprietary software had begun, and increasingly more hackers entered the payroll to develop proprietary software whose source code was well guarded as a trade secret. RMS was deeply frustrated that his community had been destroyed by Symbolics and proprietary software. He decided to embark on a crusade that has not been matched before or since: to re-build the hacker community by developing an entirely free operating system. For him, "free" means that the user has the freedom to run the program, modify it (thus the user needs to be provided with the source code), redistribute it (with or without a fee is irrelevant, as long as the source code is provided with the copy) or redistribute a modified version of it. The term "free software" has nothing to do with price. It is about freedom [41].

In 1984, he started his GNU project (GNU stands for "GNU's Not Unix"), which was meant to become a free alternative to the Unix operating system, and in 1985 he founded the Free Software Foundation (FSF). To ensure that GNU software would never be turned into proprietary software, he created the GNU General Public License (GNU GPL) [42].

The GNU GPL outlines the distribution terms for source code that has been "copylefted" with guidelines from the FSF. Copylefting involves copyrighting a program and then adding specific distribution terms that give everyone the right to use, modify and redistribute the code. The distribution terms of the GPL are 'viral' in the sense that derivative works based on GPL'd source code must also be covered by the GPL. As a result, other programs that use any amount of GPL'd source code must also publicly release their source code. Therefore, GPL'd code remains free and cannot be co-opted by proprietary development [43]. In fact, the Linux OS is licensed under the GNU GPL and uses most of the GNU programs; that is why it is often referred to as GNU/Linux.
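
In practice, 'copylefting' a program amounts to attaching a licence notice to every source file. The abridged sketch below shows the sort of header a GPL'd C file typically carries; the file name, description, year and author are placeholders rather than text from any actual GNU program:

    /*
     * example.c - <one line describing what the program does>
     * Copyright (C) <year>  <name of author>
     *
     * This program is free software; you can redistribute it and/or modify
     * it under the terms of the GNU General Public License as published by
     * the Free Software Foundation; either version 2 of the License, or
     * (at your option) any later version.
     *
     * This program is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
     * GNU General Public License for more details.
     */

Any file carrying such a notice - and, by extension, any work derived from it - remains under the GPL's terms, which is precisely the 'viral' property described above.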

During his crusade, he wrote powerful, elegant software that programmers could use both as programming tools and as 'pieces' that, when combined, would eventually make up his dream: the GNU operating system. His software appeals to many programmers, and so the pool of GPL'd software grows constantly bigger. He remains a strong advocate of all aspects of freedom, and he sees free software as the area where he can contribute the most [44].

However, at a historic meeting in 1998, a group of leaders of the free software movement came together to find a way to promote the ideas surrounding free software to large enterprises, having concluded that large companies with large budgets were the key drivers of the software industry. "Free" was felt to have too many negative connotations for the corporate audience (Figure 6), and so they came up with a new term to describe the software they were promoting: Open Source (Figure 7) [45]. Stallman was excluded from this meeting because "corporate friendly" is no compliment in his book [46].
 
 

Figure 6: Free Software
The ovals at the top represent the outward face of the movement - the projects or activities that the movement considers canonical in defining itself.
The ovals at the bottom represent guiding principles and key messages.
The dark ovals represent undesirable messages that others might be creating and applying to free software: negative connotations for the corporate audience.
Source: [47]



They concluded that an Open Source Definition and license were necessary, as well as a large marketing-PR campaign. The "Open Source Definition and license [48] adhere to the spirit of GNU (GNU GPL), but they allow greater promiscuity when mixing proprietary and open source software" [49].
 
 

Figure 7: Open Source
The new 'map' puts Strategic Positioning in a much clearer context: Open Source is about making better software through sharing the source code and using the Internet for collaboration. And User Positioning speaks of user empowerment instead of moral issues.
The list of core competences is more focused and the negative messages are replaced with directly competing messages that counter them.
Source: [50]



Apparently this strategy has worked wonders since then: key players of the industry (IBM, Netscape, Compaq, HP, Oracle, Dell, Intel, RealNetworks, Sony, Novell and others) have shown great interest in the Open Source Movement, its development and business model and open source products. These days, an equivalent economic web around Open Source exists, in which for example, many companies offer support services and complementary products for Open Source products [51]. In addition, there is plenty of favourable press coverage and even Microsoft regards Open Source as a significant competitive threat (Valloppillil, 1998).

Linux

In 1991, Linus Torvalds made a free Unix-like kernel (the core part of an operating system) available on the Internet and invited all interested hackers to participate. Within the next two months, improved versions had already been released. From that point, tens of thousands of developers, dispersed globally and communicating via the Internet, contributed code, so that early in 1993 Linux had grown into a stable, reliable and very powerful operating system.

The Linux kernel is 'copylefted' software, licensed under the GNU GPL, and thus nobody actually owns it. But more significantly, Linux is sheltered by the Open Source (hacker) community. From its very birth, Linux as a project has mobilised an incredible number of developers offering enhancements, modifications/improvements and bug fixes without any financial incentive. Despite the fact that an operating system is supposed to be developed only by a closely-knit team in order to keep complexity and the communication costs of co-ordination in check (Brooks' Law: among n developers, the number of communication channels grows as n(n-1)/2), Linux is being developed in a massively decentralised mode under no central planning - an amazing feat, given that it has not collapsed into chaos.

Innovation

Release early and often: Linus put into practice an innovative and paradoxical model of developing software. Frequent releases and updates (several times a week) have been typical throughout the entire development of Linux. In this way, Linus kept the community constantly stimulated by the rapid growth of the project and provided an extraordinarily effective mechanism for psychologically rewarding his co-developers for the contributions implemented in the latest version. On top of this, every released version includes a file listing all those who have contributed (code). Neglecting credit attribution is a cardinal sin that breeds bitterness within the community and discourages developers from contributing further to the project.

According to conventional software-building wisdom, early versions are by definition buggy and you do not want to wear out the patience of your users. But in the Linux development process, the developers are themselves the users, and this is where most innovation is created (Figure 8). "The greatest innovation of Linux is that treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging" (Raymond, 1998a).
 
 

Figure 8: Innovation skyrockets when Users and Producers overlap
(as during the Linux development process).
Source: [52]



Similarly important was Linus's decision to create a highly portable system. Whenever new hardware is introduced, Linux development focuses on adapting Linux to it. "Linux therefore quickly appreciates technological elements and turns them into resources for the Linux community" [53]. Its capability to adapt to environmental changes and continuously learn is unprecedented.

Structure & Decentralised Development

Linus cultivated his base of co-developers and leveraged the Internet for collaboration. E. Raymond, the Open Source movement's self-appointed anthropologist, explains that the leader-coordinator of a 'bazaar style of effort' does not have to be an exceptional design talent himself, but has to be able to leverage the design talent of others (Raymond, 1998a).

However, "the Linux movement did not and still does not have a formal hierarchy whereby important tasks can be handled out ... a kind of self-selection takes place instead: anyone who cares enough about a particular program is welcomed to try" [54]. But if his work is not good enough, another hacker will immediately fill the gap. In this way, this 'self-selection' ensures that the work done is of superb quality. Moreover this "decentralisation leads to more efficient allocation of resources (programmers' time and work) because each developer is free to work on any particular program of his choice as his skills, experience and interest best dictate" (Kuwabara, 2000). In contrast, "under centralised mode of software development, people are assigned to tasks out of economic considerations and might end up spending time on a feature that the marketing department has decided is vital to their ad campaign, but that no actual users care about" [55].

Linus Torvalds' authority is restricted to having the final word on implementing any changes (code that has been sent to him). On the other hand, he cannot be totalitarian, since everything is done under perfect transparency. Most communication takes place on the Linux mailing list (which serves as a central discussion forum for the community and is open to the public), and Linus has to justify all his decisions with solid technical arguments. The management's accountability is essential, and only by earning the community's respect can leadership be maintained [56].

There is only one layer between the community of Linux developers and Linus: the "trusted lieutenants", a dozen or so hackers who have done considerable work on particular parts of the kernel and have earned Linus' trust. The "trusted lieutenants" are each responsible for maintaining a part of the Linux kernel, and many developers send their patches (their code) directly to them instead of to Linus. Apart from being encouraged by Linus, this informal mechanism represents a natural selection by the community, since the "trusted lieutenants" are recognised [by the community] not as owners but simply as experts in particular areas [57], and thus their 'authority' can always be openly challenged. Nor does this mean that Linus has more influence than they have: recently, "Alan Cox (one of the 'trusted' ones) disagreed with Linus over some obscure technical issue and it looks like the community really does get to judge by backing Alan and making Linus acknowledge that he made a bad choice" [58].
 
 

Figure 9: Structure of Linux.



Modularity
What made this parallel, decentralised development feasible is the highly modular design of the Linux kernel. "It meant that a Unix-like operating system could be built piecemeal, and others could help by working independently on some of the various components" [59]. Modularity means that changes can be implemented without risk of a negative impact on any other part of the kernel. Modularity makes Linux an extremely flexible system, propels massive development parallelism [60], and decreases the total need for coordination [61].
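As a rough illustration of this point (a sketch in Python with hypothetical module names, not actual kernel code), the essence of modularity is that the core depends only on a small, agreed interface; any implementation honouring that interface can be rewritten or replaced by an independent developer without touching the rest of the system.

    # Illustrative sketch only: hypothetical module names, not kernel code.
    # The core depends only on the Scheduler interface, so a module's internals
    # can be rewritten independently without changes elsewhere.
    from abc import ABC, abstractmethod

    class Scheduler(ABC):
        """Agreed interface between the core and a pluggable module."""
        @abstractmethod
        def pick_next(self, runnable: list) -> str: ...

    class RoundRobinScheduler(Scheduler):
        def __init__(self):
            self._last = -1
        def pick_next(self, runnable: list) -> str:
            self._last = (self._last + 1) % len(runnable)
            return runnable[self._last]

    def run_core(sched: Scheduler, tasks: list, steps: int) -> list:
        # The core never looks inside the module; it only calls the interface.
        return [sched.pick_next(tasks) for _ in range(steps)]

    print(run_core(RoundRobinScheduler(), ["editor", "compiler", "mailer"], 5))

Any number of alternative schedulers could be written in parallel against the same interface, which is precisely what keeps independent changes from rippling through the rest of the code.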

Motivation

Open Source programmers write software to solve a specific problem they are facing, 'scratching their personal itch' as E. Raymond points out (Raymond, 1998a). Once they have done so, they can choose either to sit on the patch or give it away for free and hope to encourage reciprocal giving from others too (Raymond, 1999c). But it is the 'hacker ethic' and the community dynamics that provide strong incentives to contribute. First of all, the process of developing such software is quite pleasurable to them. "They do not code for money, they code for love" [62] and they tend to work with others that share their interest and contribute code. They enjoy a multiplier effect from such cooperation (Prasad, 2001) and "when they get the opportunity to build things that are useful to millions, they respond to it" [63].

In addition, the Linux community has dealt with the lack of centralised organisation through an implicit reputation system that characterises the Open Source (OS) community (Kuwabara, 2000). OS programmers have always been keen, if only unconsciously, on gaining the community's respect ("seek peer repute"), and the best way to do so is by contributing to an already existing project that is interesting enough or has created enough momentum within the community (Raymond, 1998b).

Interestingly, there are also people motivated to contribute because of indirect monetary returns. These programmers fall into four main categories [64]:

a) they work for a commercial Linux distributor (e.g., Red Hat) or a company that provides complementary services and products;
b) their work involves occasional Linux programming (e.g., their employer's information systems and computer infrastructure rely on Linux);
c) their involvement with Linux makes them more attractive (employable) to potential employers; and,
d) they develop applications that run on Linux in the hope of encouraging venture capitalists to fund their start-ups or of receiving ownership rights (e.g., shares) in start-ups.

However, these financial motives are indirect since no one is paid by the Linux project or by its leader - Linus Torvalds.
 
 

Figure 10: The Linux Development Model Maximizes Learning.



Learning

Massive parallel development is evident in the case of Linux. "In a conventional software development process, because of economic and bureaucratic constraints, parallel efforts are minimized by specifying the course of development beforehand, mandated by the top. In Linux, such constraints are absent since Linux is sustained by the efforts of volunteers. In the 'Cathedral', parallelism translates to redundancy and waste, whereas in the 'Bazaar', parallelism allows a much greater exploration of a problem-scape for the global summit and the OSS process benefits from the ability to pick the best potential implementation out of the many produced". Therefore (apart from quality and innovation), social learning is maximized in a 'bazaar style' of development (Figure 10) [65].

Linux has a parallel release structure: one version (1.2.x and 2.0 series) is stable and 'satisfies' the users that need a stable and secure product, and another version (2.1.x series) is experimental and 'cutting-edge' and targets advanced developers [66].
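Historically, the kernel's version number itself signalled the branch: an even second number (e.g., 1.2.x, 2.0.x) marked a stable series and an odd one (e.g., 2.1.x) a development series. The following trivial Python sketch (illustrative only) classifies a version string under that convention.

    # Illustrative only: classify a kernel version under the historical
    # even/odd convention (even second number = stable, odd = development).
    def release_branch(version: str) -> str:
        parts = [int(p) for p in version.split(".")]
        return "stable" if parts[1] % 2 == 0 else "development (experimental)"

    for v in ["1.2.13", "2.0.36", "2.1.44"]:
        print(v, "->", release_branch(v))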

The significance of the parallel release structure should not be underestimated. All decision-making situations face a trade-off: exploration versus exploitation. This principle states that the testing of new opportunities comes at the expense of realising the benefits of those already available, and vice versa (Holland, 1975; March, 1991; Axelrod and Cohen, 1999). The obvious organisational implication is that whenever resources are allocated to the search for future innovation, resources dedicated to the exploitation of existing capabilities are sacrificed, and vice versa. This trade-off is fundamental to all systems and vital to their ability to evolve and survive; a favourable balance is therefore critical (Kauffman, 1993).

Surprisingly, this trade-off is not applicable to Linux: the parallel release structure ensures that both present and future (potential) capabilities are harnessed and this renders Linux extremely adaptive to its environment and also ensures the highest level of innovation and organisational learning.

This massive parallelism (and parallel learning) has resulted in a more secure, reliable, defect-free operating system. One of the most critical stages of software development is characterising the bugs (detecting defects by testing) and then debugging the software. The massive parallelism has rendered Linux 'untouchable', as all bugs are quickly identified. E. Raymond describes this phenomenon as "Given enough eyeballs, all bugs are shallow". Security is an aspect of reliability, and such reliability can only be achieved through massive and parallel peer review. The paradox is that security is directly dependent upon the transparency of the source code, and is undermined by keeping the code closed (Schneier, 2000).
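One way to see why parallel review makes bugs "shallow" is a toy probability model (the figures below are purely illustrative and are not drawn from the paper's sources): if each independent reviewer has only a small chance p of spotting a given defect, the chance that the defect escapes N reviewers is (1 - p)^N, which collapses rapidly as N grows.

    # Toy model with illustrative numbers: probability that a defect survives
    # review by n independent reviewers, each spotting it with probability p.
    def escape_probability(p: float, n: int) -> float:
        return (1.0 - p) ** n

    p = 0.05  # assumed per-reviewer detection probability
    for n in (1, 10, 100, 1000):
        print(f"{n:>4} reviewers -> defect escapes with probability {escape_probability(p, n):.6f}")

With these assumed numbers, a defect that a single reviewer misses 95% of the time survives a thousand reviewers with practically zero probability, which is the intuition behind Raymond's aphorism.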

Management of the economic web

In the last few years, many organisations have been built around the GNU/Linux operating system and other Open Source software. They mostly offer support services (i.e., maintenance, training), complementary products, and packaged distributions. Those in a favourable financial position spend money on R&D related to Open Source projects and contribute in any way they can, both in order to remain dominant players in the Open Source software market and out of goodwill towards the Open Source community that provides them with a quality product [67]. Nowadays, more and more 'big players' are joining the web: IBM, Dell, Oracle, Intel, HP, SAP and others have been tantalised by Linux and its Open Source development model and have started investing heavily in the 'Linux platform'.

The management of this web depends heavily on the fact that no member of the web places restrictions or rules on the other participants. Linux as a project community focuses on the software development process and does not interfere in any way with the businesses built around it. The key factor may be the open source code, which allows a common computing and communications infrastructure to be established more easily than with a 'closed source' technological platform. E. Raymond attributes that to the issues of trust and symmetry - "potential parties to a shared infrastructure can rationally trust it more if they can see how it works all the way down, and will prefer an infrastructure in which all parties have symmetrical rights to one in which a single party is in a privileged position to extract rents or exert control" [68].

In all, the massive network effects that pervade the software industry trigger further growth and adoption of the GNU/Linux OS (Figure 11) which in turn might lead to lock-in of the market under this platform and establish Linux as the dominant operating system [69].
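The self-reinforcing character of these network effects can be sketched with a toy feedback model (assumed coefficients, illustrative only): adoption raises the platform's value (more applications, drivers and support), and higher value in turn attracts more adopters, so past a certain point the loop snowballs towards lock-in.

    # Toy positive-feedback model with assumed coefficients (illustrative only):
    # each period the platform's value rises with its installed base, and new
    # adoption rises with that value - a crude network effect.
    def simulate(adopters: float, periods: int, k_value: float = 0.001, k_adopt: float = 0.5) -> None:
        for t in range(periods):
            value = k_value * adopters              # value grows with installed base
            adopters += k_adopt * value * adopters  # adoption grows with value
            print(f"period {t + 1}: adopters ~ {adopters:,.0f}")

    simulate(adopters=10_000, periods=5)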
 
 

Figure 11: Positive Network Effects Driving Ongoing Growth/Adoption of GNU/Linux OS.





++++++++++

Microsoft vs. Linux

Business Processes

The critical business process in both Microsoft and Linux is development. Microsoft is characterised by centralised development, since most activities take place at one site: Redmond, Washington. Brooks's Law [70] holds strongly in every physical software development model, and MS is no exception. Therefore, the cost of co-ordination is high.
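Footnote [70] states the law; a short, purely illustrative calculation makes the asymmetry concrete: with n developers there are n(n - 1)/2 potential communication channels, so co-ordination overhead grows quadratically while raw effort grows only linearly.

    # Illustrative arithmetic for Brooks's Law: communication channels grow as
    # n*(n-1)/2, while the team's raw effort grows only linearly with n.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 10, 50, 100):
        print(f"{n:>3} developers: {n:>3} units of raw effort, {channels(n):>5} communication channels")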
 
 
 
 
Each row lists the Microsoft (Physical) value, then the Linux (Virtual) value:

Business processes (development): Linear | Parallel
Cost of development: High | Low
Cost of coordination: High (Brooks's Law holds) | Low (Brooks's Law does not hold)
Mode of organisation: Centralised | Decentralised
Management: Hierarchical | Collaborative-community, emergent leadership
Hierarchical layers: Several | Four, but not in a bureaucratic fashion
Modularity: Low | High
Knowledge functions (access, sharing, diffusion, creation, exploitation): Low | Massive
Organisational learning: Linear | Parallel
System: Closed | Open
Users-producers: Separated | Overlap
Number of participants: Limited | Unlimited
Product transparency: Absent (copyright) | Massive (copyleft)
Decision-making transparency: Low | Massive
Product innovation: Low | High
Organisational innovation: High | Massive
Management of the economic web: Offensive - successful | Collaborative - successful
Cost of platform: High | Low
Flexibility of platform: Low | High
Use of standards: Low | High

Table 6: Microsoft vs. Linux.

 

On the other hand, Linux is synonymous with decentralisation, since the project is developed by thousands of dispersed people who collaborate under no central planning. It defies Brooks's Law because of its parallel release structure, extreme modularity and "trusted lieutenants" structure; as a consequence, co-ordination costs are almost negligible.

Microsoft has elaborated a development model (synch-and-stabilize) that may be regarded as parallel when compared with the conventional 'sequential' model (Figure 12), but it is still far from comparable to the massively parallel development model of Linux. This is partly attributable to the Linux parallel release structure and mainly to Linux being an open system that encourages anyone to participate. This is not the case with MS, which naturally 'encourages' (employs) only a tiny fraction of the programmers willing to participate (to be employed), in the face of financial constraints and increasing co-ordination costs.
 
 

Figure 12: Synch-and-Stabilize Life Cycle for Program Management, Development and Testing
(12-24 Months from Milestone 0 to Manufacturing).
Source: [71]



The massive parallelism that characterises the Linux Project allows for an ideal allocation of resources towards both the exploitation of existing advantages and the exploration of potential future opportunities. It results in skyrocketing organisational learning and innovation (both product and organisational) which, in turn, renders Linux extremely adaptive to environmental changes.

Management, Structure and Knowledge

Microsoft's management is hierarchical: all the strategic decisions are made by Gates and 'the President's Office', and employees are discouraged from bypassing their superiors. Microsoft's structure is built on this premise (Figure 13), and political manoeuvring within the company is not rare. Despite the fact that frequent job rotation and project organising are the norm, information flow is restricted, mostly because of politics, and therefore the only value extracted from the knowledge functions is the twenty years of experience that MS has of the industry. Nor does MS maintain 'positive' relationships with its surrounding environment, since its arrogant management of the economic web has seriously damaged the exchange of information between them. The effect on organisational learning and innovation is certainly negative.
 
 

Figure 13: Microsoft Scalar Chain of Command.
Source: [72]



In total contrast, Linux relies on four structural layers comprising the leader (Torvalds), the "trusted lieutenants", the "pool of developers" and the Open Source community. Even though the community layer does not appear to be part of the Linux structure at first glance, it is in fact incorporated in it and has significant influence over the Project. If a hierarchical classification has to be made, the pattern of the layers is horizontal rather than vertical.

This structure is not a paradox if we consider the technical objectives of Linux as a project: quality and flexibility. For such quality and flexibility to be achieved, a mechanism ensuring them had to be in place. This mechanism takes the shape of emergent leadership and emergent task ownership, which implies that all decisions and hierarchical positions can be openly challenged so that the best decision is reached and the most capable individual is chosen. Naturally, the prerequisites for such a style of managing are product and decision-making transparency, and Linux has embraced both from its inception.

Consequently, this transparent 'collaborative-community centred' management is by far the greatest innovation of Linux and the key enabler of limitless value extraction from the 'knowledge functions', since all organisational members, including the surrounding community, are encouraged to share information freely.

Finally, the nature of Linux renders it attractive to the economic web because the platform (on which the economic web is based) is flexible (easily customisable), transparent (non-proprietary), low in cost, and built on common standards. These features of Linux as a technological platform reduce uncertainty in the face of technological upheaval and enable investment in Linux (by the economic web) to take off. Evidently, the non-proprietary platform presents an opportunity for 'collaborative management of the economic web' and, to date, the Linux Project reaps the full benefit of this opportunity.
 
 

++++++++++

The New Paradigm

Transformations of Management

Management can be digital and networked

Because of enabling technologies, management can be successfully networked as long as the human intelligence that makes up the organisation is networked too. "The intellectual power generated through networking minds for collective vision will far surpass the intellectual prowess of the smartest boss. Equally important, strategies developed collectively have higher probability of being implemented" (Tapscott, 1996). Collective thinking leads to collective action.

If the people who participate in a project are supported by an appropriate technological infrastructure, management can also become entirely non-physical. With the advent of tele-immersion and related technologies, time and space will no longer be important. However, organisations do not have to wait for the wide-scale rollout of advanced, complex technologies. Despite the technical prowess of Linux developers, the use of simple and reliable technologies (e.g., e-mail, file transfer, newsgroups) was common in the Linux Project. This clearly shows that management does not have to hold a "PhD in computer science"; it simply has to deploy already existing tools in the most efficient way to facilitate communication between large groups. The most progressive technologies will not deliver if they are not simple to handle and based on common protocols and standards (to ensure interoperability and compatibility) [73].

Of course, organisational design has to change. In a hierarchical organisation, the employees are vertically or horizontally linked to each other and information rarely bypasses more than one layer, and therefore there is no fertile ground for the management to be networked.

Let's consider the Linux structure again (Figure 14). It is evident that the flow of information is networked. Whenever a developer makes an inquiry, every other member has access to the inquiry and can answer back. Or whenever someone engages in a conversation, the entire conversation is so transparent that anyone can intervene and participate. The Linux mailing list serves as the disseminator of information and links all the members together.

Obviously, for this structure to be efficient, management and decision-making should be based on merit (and not on status) and be transparent throughout the entire organisation.

In the case of Linux, the selection of developers and of the changes to be implemented is based purely on solid technical grounds (merit). This indirectly achieves control over product development (and eliminates the need for strict control over the employees) and ensures the highest level of quality. In addition, every decision made (by Torvalds) has to be clearly and logically justified and, where there is no universal consent, the community is able to challenge it.

It is increasingly the role of management to break free from any restraints related to bureaucratic regimes, and to empower the customers (both internal and external) of the organisation to participate in all crucial decisions. Issuing orders and the direct exercise of control must be replaced by the communication of vision and direction. Management becomes leadership. This explains why a project such as Linux, which operates and grows organically under no central planning, needs a leader like Linus Torvalds - to initiate change, communicate vision and "create an organisational mindset" where network communication is fostered.

Moreover, the product should be 'transparent' too. Products under development should be accessible to all organisational members to use, test, fix and build upon. Product specs that are guarded in secrecy do not add any value to this purpose. The organisational equivalent of 'copyleft' (the GNU GPL) will enable distributed collaboration. All organisational systems should reward and encourage the sharing of information [74].
 
 

Figure 14: Linux Structure Depicted as Flows of Information Between Value Streams.
This structure shows that the "trusted lieutenants" form a support value stream/function and not an additional layer, since the developers (can) go directly to Torvalds, and the developers are the primary value stream. This is a pure network structure, as all the participants have access to all the nodes of the system. Accordingly, the flow of information is 'networked', since all stakeholders have access to every occurring communication through the mailing list.
Source: [75]



Management should ensure that the organisational and project design maximizes organisational learning and empowers big teams to collaborate digitally.

The organisational design must enable the human intelligence that resides within the organisation and its environment to get networked. With creative use of the technology, knowledge will be disseminated throughout the network towards all the participants (organisational learning), and not just exchanged among few nodes (individual learning). As the Linux Project proves, massive parallel learning is not just a matter of technology but of design and management. The project design must be as modular and simple as possible to facilitate digital collaboration (between an incredibly big project team). It is more important that the design is simple and modular than error-free.

Task decomposition is established as an intelligent organisational principle (March and Simon, 1958; Baldwin and Clark, 1997), and modularity is an extreme form of task decomposition. Modular task decomposition reduces coordination costs to the minimum. It also allows multiple groups to work on the same module independently; in the end, the best work is adopted (Moon and Sproull, 2000), as the sketch below illustrates. Caesar's 'Divide and Conquer' strategy has not lost its effectiveness.
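The 'several independent implementations, best one adopted' mechanism can be sketched as a simple selection step (the patch names and the scoring rule below are invented for illustration; they are not the kernel's actual tooling): competing implementations of the same module are evaluated against a shared test suite and only the strongest is merged.

    # Illustrative sketch with made-up patch data: independent implementations
    # of the same module are scored against a shared test suite and the best
    # one is merged.
    candidate_patches = {
        "patch-by-developer-a": {"tests_passed": 96, "regressions": 1},
        "patch-by-developer-b": {"tests_passed": 99, "regressions": 0},
        "patch-by-developer-c": {"tests_passed": 99, "regressions": 2},
    }

    def score(result: dict) -> int:
        # Hypothetical criterion: reward passing tests, punish regressions.
        return result["tests_passed"] - 10 * result["regressions"]

    best = max(candidate_patches, key=lambda name: score(candidate_patches[name]))
    print("merged:", best)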

Hence, it is equally important for organisations to embrace parallel development. By initiating a parallel release structure (in projects), both stability and rapid development are ensured. Usually, organisations face a trade-off between exploration and exploitation, and this is why linear structures have been preferred. But "parallel release structures could generate more innovation and continuous improvement" (Moon and Sproull, 2000).

Let's consider what could be accomplished in this way. In many business situations and in software development, Pareto's Law applies: 90% of the value/functionality takes 10% of the effort/time. "One excellent thing about Linux is that the originator spent the 10% to produce a working product (highly modular and simple), released it, and has the community do the 90% to finish it off- a massive saving" [76] (Figure 15).
 
 

Figure 15: Pareto's Law & the Linux Project.



Management focus shifts from Organisational Dynamics to Economic Web (Network) dynamics

Managers should focus on the mechanisms (network effects) that govern the market their organisations are in. The first step is to identify the economic web(s) since success or failure is often decided not just by the company but by the success or failure of the web it belongs to. Active management of the economic web can be a decisive factor in an organisation's future (Hagel, 2000).
The conventional strategy requires organisations first to define their own strategies and then to proceed to alliances. But in industries that are built around technological platforms, the management confronts two basic choices: which economic web(s) to participate in (or form) and what role (shaper - that is, the owner of the technological platform - or adopter - built around the technological platform) to play within them. Corporate strategy follows web strategy (Hagel, 2000).

Identifying the New Paradigm

The Emergence of a new paradigm?

So far, this paper has highlighted factors and structures that support decentralised development. The critical questions, though, still remain: under which circumstances is the Virtual Organisation a rational organisational model, capable of unlimited growth and competitiveness, and how can it be created and sustained? This part will attempt to answer these questions.

We should mention that virtual organisations do not yet exist and maybe never will. The Linux Project - the basis for the model of a virtual organisation in this paper - shows why: the first problem is our perception of the organisation. The conventional perception is that an organisation aims at high profits by selling large quantities of products or providing as many services as possible. To achieve these goals, it deploys a variety of tactics such as cost reduction, political manoeuvring, marketing campaigns, expansion of the product portfolio, economies of scale and scope, and so on. This paradigm relies on companies to deliver profits, and as long as the company is profitable, it is sustainable. The emerging paradigm is about creative people enacting creative projects that in turn mobilize creative people. The emerging organisational paradigm is not about a company but about a project.

No boring project will mobilize the necessary "talent pool" - the creative human resources [77] - and one of the crucial success factors of the virtual organisation model is massive decentralised collaboration, which demands abundant brainpower: a 'legion of workers'.

For a project to be creative - so as to mobilize a critical mass of resources - four conditions must be met:

a) The users and the implementers overlap or communicate directly without interference of a filtering layer-mechanism - implementers and users have common objectives;
b) The organisational 'copyleft' is put into place to reward and encourage transparency and sharing of information;
c) Deeply rooted trust within the surrounding community-network; and,
d) Fluid networked organisational structure that gives rise to positive hyper competition and meritocratic self-selection of task owners (leadership is not imposed from the top but emerges from the bottom): motivation to attract creative people.

Motivation is the source of sustainability

Traditional organisational structures are based on deadlines, payroll roles, fixed positions, etc., and this is unlikely to attract creative people [78]. Creative people are excited by fluid structures that provoke competition among peers. The incentive is to differentiate yourself by rising above the standard level (through achieving excellence) and gaining the recognition of your peers, who come to regard you as a leader. This is the ultimate form of this motive; recognition by your boss in a bureaucratic environment is the weakest.

This motive runs in parallel with achieving a communal goal (your peers are striving to achieve the same goal). Even though this motive is partially fuelled by the egoistic, competitive nature of creative people, it also ensures the highest level of quality (only the best work is selected, in a democratic process where all organisational members have an 'equal vote' and excellence is the highest goal) and leads to continuous and rapid organisational evolution (since leadership is constantly challenged by equally creative people).

It is this positive hyper competition, not profits, that sustains the project. It also enables the "benevolent dictator" and "trusted lieutenants" structures to emerge. The next question is how, when and where these preconditions will come together to create the creative project - the virtual organisation. The answer is obvious if we consider the Linux Project from a different angle (Figure 16).
 
 

Figure 16: The Linux Project & The Virtual Roof.
The lines and the arrows represent information flow and specific project functions (transfer of patches and source code), whereas information flow pervades the virtual roof and is diffused in all directions.



The Virtual Roof

The Linux Project is characterised by a flow of information within the entire organisation, meaning among the thousands of developers, Linus Torvalds and the "trusted lieutenants". All of them have access to, and can engage in, all occurring conversations and can communicate directly with every other member-implementer through discussion forums, mailing lists, and newsgroups. But the flow of information is not restricted to the implementers; it extends to the global community, reaching virtually everyone interested, including commercial companies (e.g., Cygnus Solutions, SuSE, Red Hat, VA Linux) that provide support services and packaged distributions, computer scientists who may not have been involved directly (as implementers), companies that consider adopting open source software for their own internal use, users that need help from experts, and anyone interested or curious enough to observe or even participate in any given conversation. Access to the various open source-related Web sites, discussion forums, etc. is open to the public and all interested parties. Figure 16 shows that communication and information flow is so pervasive that it spreads equally in all directions and is diffused throughout the virtual roof.

Throughout the research for this paper, it became evident that the element encapsulating the view of the interviewees about the Linux Project and its abstract virtual boundaries is what the author identifies as the 'Virtual Roof'.

The virtual roof is the common denominator of the users, the implementers and the surrounding community, and acts as an intermediary (a trusted third party) to ensure trust in a largely anonymous virtual marketplace by providing an electronic platform whereby network communication is nurtured.

However, the virtual roof has no power to delegate authority or abuse its power in favour of either the implementers, the users or the surrounding community by establishing asymmetries of information [79].

The main reason is its transparent nature: it is open to the public and all "transactions" are visible. It is based on the mutual realisation (among the various members) of the long-term benefits that would be forfeited by any abuse of power or asymmetries of information. This is the organisational 'copyleft'. Deep down, it is the system of 'lean production' modified to create trust in the public; hence it extends its reach to include the users and the global community as well.

All the virtual roof can do is enforce common standards and rules of practice (that facilitate collaboration and decentralised development) approved by the community.

In a virtual roof, people can gather, talk, share ideas, spread information, announce projects and get community support [80]. These online communities, once formed, make new relationships possible: direct user-implementer interaction. The 'implementers talking to the users' mechanism is possible in any project, depending on one thing: the level of dedication of the surrounding community to the project [81]. This is why virtual roofs exist: to establish trust, to enable critical relationships that would otherwise be impossible on any significant scale, and to orchestrate decentralised efforts towards a centralised goal.

Virtual roofs dedicated to global (user-oriented) communities are essential to the life of projects whose success depends on the dedication of the surrounding global community and on utilising geographically dispersed resources under no central planning. In such places, leadership is emergent and has a dual objective: to give direction and momentum to the project.

The last question deals with the unique competitive advantage of this organisational model and the circumstances under which it is a rational organisational design.

Knowledge is the competitive advantage

To date, organisational thinkers have been mainly concerned with how the 'virtual organisation' can carry out exactly the same functions and activities as the 'physical organisation'. Literally: how can the virtual organisation duplicate exactly what the physical organisation does?

This is the wrong question to start with. A more appropriate question is: what can the virtual organisation do that the physical organisation cannot (replicate)?

The answer lies in the fact that, by bringing together the organisation (implementers), the surrounding global community - the industry - and the end users - the customers - it can generate massive knowledge and exploit it in the most effective way (Figure 17).
 
 

Figure 17: Creation and Exploitation of Massive Knowledge.



It should be noted that the functions of knowledge (sharing, creation, diffusion and storage) are achieved through the high mobility and interaction of the human resources within the virtual roof, and not within the organisation (the core project group of implementers) (Figure 17). The virtual roof is the device for the generation of knowledge, and the organisation is the device for the most efficient exploitation of the knowledge created. Thus, separating the organisation from the virtual roof would destroy this competitive advantage.

Rational organisational design

This model is likely to be the most rational organisational design when there is need or willingness:

  1. To avoid potential fragmentation of the market (vendors offering incompatible versions of the same product, which is likely to slow down the pace of innovation in the industry or harm the convenience or quality sought by users);
  2. To dethrone a product or vendor that has come to control the market through increasing returns mechanisms and is having negative effects on the evolution of the industry and its long-term profitability;
  3. To manage the 'economic web' in a way that leaves all participants well equipped to seize benefits or profits and keeps the market highly competitive;
  4. When the users' contribution (e.g., knowledge, testing of the product) is invaluable, if not necessary, during the development phase or even before, so that direct interaction between implementers and users is important;
  5. To bring together a global community-industry on equal terms, discouraging the political manoeuvring and corporate backstabbing that would otherwise prevail; and,
  6. When the issue (the project under development) is so complex and has such far-reaching consequences for the market that it requires co-operation and/or the generation of knowledge that cannot be created by a single organisation.

Applicability of the 'Linux Model' to Other Industries

Implications

"Standardisation within industries is about making (virtual) decentralised development possible, provided the 'interfaces' between the different modules are well specified" [82].

But there are differences and various obstacles to overcome.

"Many parts of Linux function completely separately, which makes the task much simpler. Equally, it is much easier to 'hack' a piece of software code than to 'hack' a wheel, so making even small modifications is always possible. This renders distributed software development a much greater flexibility than is probably possible for other industries" [83].

Another difference lies in the nature of software. "Software can be copied endlessly and perfectly at negligible cost. With the advent of nanotechnology such problems might evaporate but the energy problem still remains, whereas the only energy software requires is a programmer's time and effort" [84].

In the case of Linux, the users were the producers themselves. Instead of having the information from the users filtered through five or six layers before it reached the implementers, the implementers talked to the users and could find out exactly what it was they needed. "The open source communities may be unique in the closeness of the developers and the users and this is probably an argument for adopting slightly different approaches towards openness, but in some ways it is more a matter of attitude or respect rather than anything else" [85].

Similarly, "the "trusted lieutenants" structure is highly unlikely to be viable or successful in a traditional corporate setting at which the one who pleases the boss is the one who stands the best chances of climbing up the hierarchy whereas the most zealous but 'less connected' employees will be passed over in the absence of meritocracy" [86].

"Attitude or respect" drops a hint about the tectonic politics that characterise the majority of organisations and the issue of funding for a project to get started. "When projects get big enough to need dedicated management, whoever is holding the purse strings is going to reject the notion that it is a loosely-organised anarchy" [87]. In contrast, when it comes to Linux, the necessary investment is just a person's time.

Perhaps, the greatest problem is the nature of the organisation. "An organisational copyleft will be extremely hard to be accepted by shareholders in the era of well-guarded "intangibles" as a potential competitive advantage. On the contrary, organisations answer to shareholders and shareholder pressure is not likely to result in co-operative values and intra-industry sharing of information" [88].

Another restraining factor may be the regulatory arrangements that often mandate that an enterprise operate as a "centralised hierarchy". These 'arrangements' may stem from a body like the FAA telling the organisation that it must follow certain procedures to comply with the law, or from a union telling management that it needs clearly defined job descriptions and a hierarchy showing the exact delegation of authority and responsibilities within the organisation [89].

Also the adoption of a new organisational model must be considered with regard to the project's level of complexity. "For a project so complex yet so focused as Linux, decentralisation turned out to be a major driving force behind its success. But when it comes to a much simpler task, decentralisation might only undermine organisational coordination" [90].

Critics claim that non-virtual, centralised organisations are better at storing know-how - they serve as a repository of knowledge - and that 'social capital' is what keeps an organisation or a network together [91]. As far as Linux is concerned, the ideal reservoir of knowledge is the freely available source code, and the social capital (i.e., trust, common values, reciprocal giving) that serves as the 'integration link' tying the entire Linux community network together is abundant, since all the developers share the hacker mentality-ideology.

"In virtually-networked organisations, suitable mechanisms for storing and extracting value (out of knowledge) will emerge for undertaking these two functions. Film production is a good example where know-how is embodied in individuals' skills and their high mobility makes sure that know-how is quickly disseminated throughout the network. Thus, value is extracted out of their interaction with the network" [92].

So far and until these mechanisms emerge, apart from a few examples (i.e. Linux, Silicon Valley, Hollywood), "knowledge creation is best done in an evolutionary (decentralised) fashion, while knowledge storage is best done in a centralised mode" [93].

The issue of social capital and mostly trust also presents an obstacle. "One of the interesting trust models - which is how Open Source works - consists of webs of trust. You trust the people whom the people you trust trust. After all, centralised trust systems have always been inherently risky" [94]. And the absence of trust is not an option as it destroys this model.
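The 'web of trust' idea quoted above has a direct computational reading (the names and the plain graph walk below are invented for illustration; real systems such as PGP add signatures and depth limits): trust is extended transitively along the edges of people one already trusts.

    # Illustrative web-of-trust sketch with invented names: trust is extended
    # transitively by walking the graph of "X trusts Y" edges from yourself.
    from collections import deque

    trust_edges = {
        "you":   ["alice", "bob"],
        "alice": ["carol"],
        "bob":   ["dave"],
        "carol": ["eve"],
        "dave":  [],
        "eve":   [],
    }

    def trusted_by(start: str) -> set:
        seen, queue = {start}, deque([start])
        while queue:
            person = queue.popleft()
            for friend in trust_edges.get(person, []):
                if friend not in seen:
                    seen.add(friend)
                    queue.append(friend)
        seen.discard(start)
        return seen

    print(sorted(trusted_by("you")))  # alice, bob, carol, dave, eve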
 
 

++++++++++

Conclusions

Epilogue

The virtual organisation in this study is modelled on the Linux Project. Thus, the virtual organisation cannot be a company; it is instead a project that is inclusive enough to embrace an entire global community, including the end users, the implementers of the core project and the organisations that compete within the boundaries of the industry.

The key technological enabler is 'virtuality' - what we have dubbed as a 'virtual roof'. The key organisational enabler is the mutual understanding from the global community of the long-term benefits that will arise from co-operation and uninhibited sharing of information.

Beyond the shadow of a doubt, the greatest problem seems to be the esoteric (exclusive) and short-sighted nature of the organisation, at least in the form that prevails today. Today's dominant organisational design is based on the centralised mindset, and this makes it unlikely that the adoption of this model will unveil its full potential. Even worse, when centralisation is bundled with a bureaucratic delegation of authority and responsibility in a status-driven organisation, the model cannot be implemented and flourish.

Commentary on the objectives of this research

Overall, this paper's objectives have been fulfilled.

Firstly, through an analysis of the Linux Project and its management, we have come to understand this novel organisational model and identified the crucial success factors enabling virtual decentralised collaboration under no central planning.

We have elaborated a managerial framework that can be theoretically applied to any type of virtual decentralised development and also discussed the strengths and limitations that might stem from the adoption of the organisational model proposed by the Linux Project in other industries.

However, this research is case-specific and may turn out to describe an exception rather than the rule. But in the absence of other appropriate examples and previously tested research, the findings presented in this paper can be invaluable in the form of hypotheses for the conduct of additional research.

The author hopes that this study will trigger further research to test the above assumptions.

This paper comes to an end by citing a visionary statement:

"In the automobile industry, each company does its own R&D. Every innovation is patented before it ever reaches the public, which may take five years for the improvements to be incorporated in an actual car after they were originally developed. If the automobile industry started taking on an open source development model with sharing across companies and countries, the cost and prices would eventually drop, innovation and development would speed up and exceptional features would be shared across many makers and models. The auto industry could finally come up with the safe, clean energy car. The problem is that the car companies do not seem likely to support something that they perceive could put them out of business, even though this would not happen since nothing stops them from developing on their own and incorporating developments from their "open design shop" into their own products" [95].
Ultimately, it is a question of developing a new way of looking at competition and customers. End of article
 

About the Author

George N. Dafermos has just completed a master's programme in Management at Durham Business School and is currently continuing his postgraduate studies in E-Commerce at the David Goldman Informatics Centre in the U.K.
E-mail: georgedafermos@bungo.com
 
 

Acknowledgements

This paper is dedicated to my parents, Nikolaos and Maria, my friend Mike and to the search of excellence through innovative work arrangements.

This paper was submitted as part requirement of the degree MA in Management of Durham Business School, 2001.

I would like to thank my supervisor, Dr. Joanne Roberts, for all the help I received during the research. Had it not been for her guidance, this paper would not have materialised. I also wish to thank all those who shared their experience and valuable insight with us by agreeing to be interviewed. They are, in alphabetical order (they are also mentioned in Appendix IV: Interviewees): Dan Barber, Chris Browne, Chris Dibona, Matt Haak, Philip Hands, Ikarios, Ko Kuwabara, Robert Laubacher, Michael McConnel, Glyn Moody, Ganesh Prasad, and Richard Stallman.
 
 

Notes

1. Source: Adapted from H. Fayol, General and industrial Management, chapter 4, 1949.

2. Chandler, 1962, pp. 382-383.

3. Chandler, 1977, p. 106.

4. Chandler, 1977, p. 102.

5. Burns, 1963, p. 18.

6. Ibid.

7. Ibid.

8. Levine, 1999, p. 25.

9. Dawson and Palmer, 1995, pp. 3-4.

10. Dawson and Palmer, 1995, pp. 29-30.

11. Gherardi, 1997, p. 542.

12. Hayes, Wheelwright and Clark, 1988, p. 252.

13. Hodgetts, Luthans and Lee, 1994.

14. Naisbitt, 1982.

15. Hall, 1993, p. 281.

16. Hames, 1994, The Management Myth.

17. Tapscott and Caston, 1993, Paradigm Shift, p. 22.

18. Hames, 1997, p.141.

19. Krubasik and Lautenschlager, 1993, p. 56.

20. From www.orgnet.com; the figure originally titled 'Internet Industry: Strategic Alliances, Joint Ventures and other Partnerships'.

21. Even though the use of the term 'economic web' is not established and commonly used, Brandenburger and Nalebuff (1996) and Shy (2001) have reached similar conclusions regarding the importance of the 'economic web' in strategy formulation. Their 'conception' is based on game theory applications.

22. Hacki and Lighton, 2001, p. 33.

23. See Appendix II: Increasing Returns.

24. Quinn, Doorley, and Paquette, 1990, pp. 79-87.

25. Quinn, Anderson and Finkelstein, 1996, pp. 71-80.

26. Tapscott and Caston, 1993, p. 9.

27. The Economist, 2000a, p. 36.

28. Concise Oxford Dictionary, s.v. Virtual.

29. Nohria and Berkely, 1994.

30. Adapted from Tapscott, 1996, The Digital Economy, pp. 86-87.

31. Evans and Wurster, 2000, p. 211.

32. See also the analysis of MS in Appendix III: Microsoft - The Cathedral.

33. For this purpose, we have included Appendix I which aims at facilitating the reader's inquiry by providing a complete analysis of Microsoft. Appendix I also serves to support the assumptions that appear in comparisons in the course of the paper.

34. For a long time, social researchers have been concerned with the impact of computer-mediated communication technologies (i.e. newsgroups, e-mail, IRC, etc) on the conduct of primary social research (i.e. observation). For a more elaborate discussion, see Belson (1994); Reid (1991); Rheingold (1993); and, Hiltz and Turoff (1978; 1991).

35. Webb and Webb, 1932, p. 139.

36. Burgess, 1984, p. 121.

37. Ibid., p. 102.

38. Axelrod and Cohen, 1999, p. 139.

39. The history is of necessity highly abbreviated and we do not offer a complete explanation of the origins of Open Source. For more detailed treatments, see Moody, 2001; Browne, 1998; DiBona, Ockman, and Stone (editors), 1999; Levy, 1984; and, Raymond, 1999a.

40. David and Fano, 1965, pp. 36-39; Abbate, 1999; and, Naughton, 2000.

41. Stallman, 1999, p. 45.

42. Accessible at www.gnu.org/licenses/gpl.html.

43. Hood and Hall, 1999, p. 9.

44. Moody, 2001, p. 29.

45. DiBona, Ockman, and Stone (editors), 1999, p. 4; Hood and Hall, 1999 p. 24.

46. Lewis, 2001.

47. T. O'Reilly, 2001, p. 42.

48. Accessible at www.opensource.org/docs/definition.html.

49. DiBona, Ockman, and Stone (editors), 1999, p. 4.

50. T. O'Reilly, 2001, p. 44.

51. For a discussion of the business models based on Open Source, see Hood and Hall, 1999; Raymond, 1999c; and, DiBona, Ockman, and Stone (editors), 1999.

52. Peters and Austin, 1985, p. 157.

53. Tuomi, 2001.

54. Moody, 2001, p. 62.

55. Interview with Philip Charles.

56. Interview with C. Dibona.

57. Moody, 2001, pp. 81, 84.

58. Interview with G. Moody.

59. Moody, 2001, pp. 14, 82.

60. Torvalds, 1999a, pp. 38-39; Torvalds, 1999b, pp. 101-111.

61. Moon and Sproull, 2000.

62. Interview with G. Moody.

63. Interview with R. Stallman.

64. Lee and Cole, 2000, p. 21.

65. Kuwabara, 2000; Nadeau, 1999a.

66. Browne, 1998; Moon and Sproull, 2000.

67. Young, 1999, p. 105.

68. Raymond, 1999c, p. 15; interview with M. McConnel.

69. Interview with C. Dibona.

70. Co-ordination costs and complexity grow with the square of developers but work done rises linearly.

71. Adapted from Cusumano and Selby, 1997 - "Interview with Dave Moore, Director of Development," (17 March 1993); and, Microsoft Corporation, Office Business Unit, "Scheduling Methodology and Milestones Definition," unpublished internal document, 1 September 1989.

72. Based on Cusumano and Selby, 1995, pp. 41-52.

73. Interview with K. Kuwabara.

74. See Constant, Kiesler and Sproull, 1996, pp. 119-135.

75. Adapted from Martin, 1996, p. 93.

76. Interview with Philip Hands.

77. Interview with G. Prasad, G. Moody and P. Hands.

78. Interview with G. Prasad.

79. Interview with C. Dibona.

80. Interview with D. Barber.

81. Interview with D. Barber.

82. Interview with G. Moody.

83. Interview with G. Moody.

84. Ibid.

85. Ibid.

86. Interview with D. Barber.

87. Interview with C. Browne.

88. Interview with G. Prasad.

89. Interview with C. Browne.

90. Interview with K. Kuwabara; also see Appendix III: Communication Networks.

91. See Leadbeater (2000) for a more elaborate discussion.

92. Interview with R. Laubacher.

93. Interview with K. Kuwabara.

94. Interview with G. Moody.

95. Dan Barber, 2001, "The Open Source Development Model," at http://mojolin.com/articles/open_source_model.php?session=vTVi4tc1GfTb.

96. The term 'Cathedral' refers to centralisation and Raymond (1998a) identifies Microsoft as 'The Cathedral'.

97. Cusumano and Selby, 1995, pp. 58-71.

98. Cusumano and Selby, 1995, p. 40.

99. Cusumano and Selby, 1995, p. 402.

100. Stephenson, 1999, pp. 109-117.

101. Drummond, 2001, p. 95.

102. Cusumano and Selby, 1995, p. 52.

103. Cusumano and Selby, 1995, pp. 190-320.

104. Cusumano and Selby, 1995, pp. 406-433.

105. Adapted from Cusumano and Selby, 1997 - "Interview with Dave Moore, Director of Development," (17 March 1993); and, Microsoft Corporation, Office Business Unit, "Scheduling Methodology and Milestones Definition," unpublished internal document, 1 September 1989.

106. Drummond, 2001, p. 39.

107. Cusumano and Selby, 1995, p. 70.

108. Cusumano and Selby, 1995, p. 407.

109. Cusumano and Selby, 1995, pp. 41-52.

110. Cusumano and Selby, 1995, p. 405.

111. Cusumano and Selby, 1995, pp. 69-71.

112. Drummond, 2001, p. 68.

113. Drummond, 2001, p. 212.

114. Drummond, 2001, p. 35.

115. Cusumano and Selby, 1995, p. 424.

116. Cusumano and Selby, 1995, p. 403.

117. Drummond, 2001, p. 69.

118. Drummond, 2001, pp. 286-287.

119. Cusumano and Selby, 1995, p. 402.

120. www.orgnet.com.

121. Arthur, 1996, p. 1.

122. Arthur, 1996, p. 6.

123. Adapted from Kelly (1998).

124. The Economist (UK), 23-29 September 2000

125. For a complete discussion of the study and Leavitt's experiment, see Leavitt (1951) and Forsyth (1990).

126. Shaw, 1964, p. 126.

127. See Jablin (1979) for a detailed review of communication in hierarchical organisations.

128. This is the description given in DiBona, Ockman, and Stone (editors), 1999 and no modifications have been made.

129. Malone and Laubacher (1998).

130. This is the description given in DiBona, Ockman, and Stone (editors), 1999 and no modifications have been made.

131. 'Debian' is the most technical-oriented, stable, 'hardcore open source' version of the GNU/Linux OS and in fact the 'Open Source Definition' is based on the 'Debian Definition'.
 
 

Appendix I: Microsoft: The Cathedral

Microsoft [96] is a 'product development' company and the key principles for managing product development are:

Hire smart people and have small teams: Microsoft employs only 2-3% of all programmers who apply for a job; it is important to have bright people with excellent technical skills, including all managers. In this way, managers both create the products and make technical decisions. However, MS admits that management skills seem to be non-existent among managers, as the late shipments attest, despite the corporate focus on shipping products [97].

Having small teams not only introduces flexibility but is necessary in order to cope with the complexity of projects. "Brooks's Law" indicates that "the complexity and communication costs of a (software) project rise with the square of the number of developers, while work done only rises linearly" (Brooks, 1975); hence software should be developed by a closely knit team.

Product architectures that reduce interdependencies among teams: The software architecture at MS is modular, which means that software is developed like a "Lego" toy. Such Lego-like pieces of code enable the fast assembly of software, cut down on the overall product complexity and make coordination easier. In contrast, buggy (defective) MS products have proven not to be highly modular [98]. "MS Windows (in its various versions) enforces relatively little separation between the different system components. This has the unfortunate result that if any system component is changed, all programs need to be changed to conform" (Browne, 1998).

Nearly all product development done at one site (Redmond, Wash.): Having everything located at the same place aims at speeding up the information flow and facilitating face-to-face communication. Ironically, the famous trial (part of which was based on internal MS e-mails) proved that even matters of strategic importance were communicated via e-mail. E-mail is the norm at MS (Drummond, 2001).

An enormous feedback loop from customers: MS phones a few thousand people to check whether everything (the product) is working properly, and its 'support services' provide an efficient way to get customer feedback (MS receives thousands of calls daily) [99]. On the other hand, lots of people get irritated and confused when struggling to use MS' support services [100].

Organise around business units and constantly move people around: "Business Units aim at having the programmers more focused on specific customers and competitors, and moving people from one project to another (internal reorganisations) maximizes learning or/and allows burned-out employees a chance to get recharged in new areas, which stokes competitive fires and keeps the giant company nimble". However, "the real reason might be that MS reorgs are the result of massive political tectonics and power grabbing between the Windows and Windows NT groups" [101] and between the Systems and Applications Division [102].

A development process that allows large teams to work like small teams: For this purpose, MS has developed the "Synch and Stabilize" process (Figure 12 and Table 7). The Synch-and-stabilize process was born in 1990 to deal with the quality and project management problems MS had encountered, which had resulted in a series of late and buggy (defective) product shipments. The main ideas behind this process are: a) breaking up a project into subprojects (milestones); b) doing daily builds of products; and, c) documenting everything to avoid making the same mistake twice [103].
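The 'daily build' idea can be sketched as a minimal loop (the steps below are simulated stand-ins, not Microsoft's actual build tooling): the whole product is rebuilt and smoke-tested every day, so integration breakage surfaces within a day instead of at the end of the project.

    # Minimal, hypothetical daily-build loop; the steps are simulated stand-ins,
    # not Microsoft's real tooling. The point is that the product is rebuilt and
    # smoke-tested every day, so breakage is caught immediately.
    from datetime import date

    def checkout() -> bool:   return True   # stand-in for fetching today's code
    def build() -> bool:      return True   # stand-in for compiling the product
    def smoke_test() -> bool: return True   # stand-in for a short test pass

    def daily_build() -> None:
        for name, step in [("checkout", checkout), ("build", build), ("smoke test", smoke_test)]:
            if not step():
                print(f"[{date.today()}] build broken at '{name}' - fix before adding features.")
                return
        print(f"[{date.today()}] daily build is green.")

    daily_build()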

Synch-and-stabilize: The Development model

MS has adopted the Synch-and-stabilize development model instead of the conventional Sequential development model (Table 8). Sequential development treats development and testing as separate phases that are done one after the other, whereas at Synch-and-Stabilize, development and testing are done in parallel.

This approach has several advantages: a) learning is enhanced as work is done in parallel; b) greater flexibility arises when dealing with 'problems' (like defects and architectural issues); c) better quality and control are ensured because of "daily builds" and "milestones stabilizations"; and, d) large teams are enabled to work like small teams. MS also claims that this model adds to the overall corporate responsiveness/adaptation because this development approach is an 'evolving system', and that it strengthens the focus on shipping products [104].
 
 
 

Planning stage: Define product vision, specification, and schedule.
   Vision statement: Product and program management use customer input to identify and prioritise product features.
   Specification document: Based on the vision statement, program management and the development group define feature functionality, architecture, and component interdependencies.
   Schedule and feature team formation: Based on the specification document, program management coordinates the schedule and arranges feature teams that each contain approximately 1 program manager, 3-8 developers, and 3-8 testers (who work in parallel, 1:1, with developers).

Development phase: Feature development in 3 or 4 sequential subprojects, each resulting in a milestone release. Program managers coordinate the evolution of the specification; developers design, code and debug; testers pair up with developers for continuous testing.
   Subproject I: First 1/3 of features - most critical features and shared components.
   Subproject II: Second 1/3 of features.
   Subproject III: Last 1/3 of features - least critical features.

Stabilization phase: Comprehensive internal and external testing, final product stabilization, and ship. Program managers coordinate "the economic web" and monitor customer feedback; developers perform final debugging and code stabilisation; testers recreate and isolate errors.
   Internal testing: Thorough testing of the complete product within the company.
   External testing: Thorough testing of the complete product outside the company by "beta" sites such as OEMs, ISVs, and end-users.
   Release preparation: Prepare final release of "golden master" disks and documentation for manufacturing.

Table 7: Overview of the Synch-and-Stabilize Development Approach.
Source: [105]

 

Structure, Culture and Management

The corporate chain of control resembles a pyramid (Figure 13). Generally, MS' structure is fluid; changes in structure are not rare, but employees are discouraged from bypassing (upwards) the hierarchical layers (because their superiors will take it as a serious offence) - especially from going directly to Gates [106]. Gates has the final word on every decision made, perhaps because, so far, he is the only one who combines business vision and insight with technical understanding [107].
 
 
 

Synch-and-Stabilize vs. Sequential Development:

   Product development and testing done in parallel vs. separate phases done in sequence.
   Vision statement and evolving specification vs. complete "frozen" specification and detailed design before building the product.
   Features prioritised and built in 3 or 4 milestone subprojects vs. trying to build all the pieces of a product simultaneously.
   Frequent synchronizations (daily builds) and intermediate stabilisations (milestones) vs. one late and large integration and system test phase at the project's end.
   "Fixed" release and ship dates and multiple release cycles vs. aiming for feature and product "perfection" in each project cycle.
   Product and process design so that large teams work like small teams vs. working primarily as a large group of individuals in separate functional departments.

Table 8: Synch-and-Stabilize vs. Sequential Development.
Source: [108]

MS's culture is anti-bureaucratic, and developers are given large amounts of freedom (but only over evolving features and experimenting with designs, not over strategic decisions). Control is almost non-existent, and what control exists is mostly enforced by the Synch-and-stabilize process [110]. "Adhering to rules and regulations, respecting formal titles, or cultivating skills in political infighting are not regarded as so important by the administration and employees (as incredibly smart and arrogant individuals) also do not want to be told what to do but are willing to be allowed to discover what to do" [111].

"MS is a company where titles often don't mean as much as credibility, and thus, being blunt is a way to establish dominance. The company is rife with pecking-order gamesmanship, such as not answering e-mail or chronically arriving late to meetings" [112] and in all, politics reign (at software development) in MS. "Opportunistic predators and competitors often "kill or kidnap sick or newborn" technologies. Survival of the fittest is systemic Sinternecine backstabbing did not evaporate in the presence of great intelligence and wealth, it became more brutal" [113]. Insiders argue that Gates himself is responsible for this culture of conflict in two ways: by being arrogant ("Gates is famous for ridiculing someone's idea just to see how he or she defends a position") and by employing the brightest people and inducing them to grow arrogant and assertive (Drummond, 2001).

Learning

New employees do not go through a formal training programme; they learn on the job. Usually they are appointed to work alongside an experienced developer, so that the tacit knowledge residing in the latter's head is effectively transferred to the newcomer (Cusumano and Selby, 1995).

MS takes advantage of the knowledge it has accumulated by exploiting emerging mass markets and establishing its products as standards. But at an organisational level, learning is restricted. "Communication frequently suffers as a result of the inner corporate politics and even privileged employees have trouble getting information from inside Microsoft, a reflection of the long-standing schism between the company's marketing staff and its legion of programmers" [114]. MS even blocks widespread sharing of its own source code within the company (Valloppillil, 1998; Nadeau, 1999a, 1999b, 1999c).

Learning from customers is also limited since there is no effective two-way communication between developers and customers. Many people who have used MS' help and support services have found them problematic and of limited help.

Innovation

MS is an innovative company in many respects. Gates played a key role in establishing software as a "copyrighted" good (Open Letter to Hobbyists, 1976). MS has dominated the market by licensing the rights to its operating system, which has proven to be an innovative move. But as far as product innovation is concerned, MS is a laggard.

Analysts claim that MS finds it difficult to balance being technology-driven with being consumer-driven, and that this results in great difficulty in moving from incremental innovation to truly radical innovation or invention [115].

After all, MS's competitive strategy is to design products for mass markets and then improve them incrementally by enhancing existing features or adding new ones [116]. Perhaps it is this 'incremental evolution' product approach that impedes radical innovation: "The company has a very dramatic focus on its profitable business. I'm not saying that's bad. But it does preclude you from doing any dramatic thinking, doing any dramatic innovation" [117] ... "to the extent that several employees manipulate their inferiors to be given a chance to create something really novel" [118].
 
 

Management of the economic web

MS's management of its economic web is contradictory. On the one hand, it is so successful that MS has achieved a lock on the market. The Windows operating system family has dominated the desktop market and until recently had been the unquestionable leader in the corporate and server market. All 'software manufacturers' make sure that their products can be ported to the MS platform (run on Windows) without any additional effort from MS' side. Figure 18 shows MS as the central hub in one of the largest and most powerful networks of organisational partnerships.

On the other hand, it was this authoritarian management of the economic web that led to the infamous trial, as MS sought to conquer as many software-related markets as possible (by providing complementary products) and to force the other participants of the web to drop their own products and adopt MS'.

MS' aggressiveness - "a relentless pursuit of future markets" - [119] is harmful to the overall economic web. By leveraging its technology to capture other software-related markets, it has irritated many big industry players like Sun Microsystems, Netscape, HP and Oracle to name a few.
 
 

Figure 18: Microsoft's Network of Corporate Partnerships.
Source: [120]


Appendix II: Increasing Returns

Conventional economic understanding of how markets and businesses operate is based on the assumption of diminishing returns: products or companies that get ahead in a market eventually run into limitations, so that a predictable equilibrium of prices and market share is reached. But history has proven that this is not always the case.

W.B. Arthur identifies 'increasing returns' as the tendency for that which is ahead to get further ahead, and for that which loses advantage to lose further advantage. In other words, "if a company or a technology - one of many competing in a market - gets ahead by chance or clever strategy, increasing returns can magnify this advantage, and the product or company or technology can go on to lock in the market" [121]. Under increasing returns, value flows from abundance rather than from scarcity.

For some time economists have accepted the notion of increasing returns, but they restricted its scope to the good produced, with emphasis on the per-unit cost of reproduction: since the cost of reproducing software is almost zero, software is not subject to diminishing returns. W.B. Arthur went further and showed that increasing returns (or positive feedback) dominate knowledge-intensive industries and form the inner mechanisms of these industries.

He cites the market for operating systems for PCs in the early 1980s as a characteristic example: DOS was born when Microsoft locked up a deal to supply an operating system for the IBM PC. Although DOS was derided by computer experts and faced fierce competition from other operating systems (mainly Macintosh and CP/M), the growing base of DOS/IBM users encouraged software developers to write for DOS.

IBM PC/DOS's prevalence bred further prevalence, and the platform eventually came to dominate a large portion of the market. This is where the 'economic web' comes in. DOS did not lock in the market because of technical superiority, but because IBM passively allowed other companies to join its PC web as clone makers, whereas Apple erred in the other direction by closing its Macintosh system to outsiders and hence opted not to create such a web [122].
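A toy simulation helps illustrate the dynamic Arthur describes. In the sketch below (an illustration under simple assumptions, not Arthur's formal model), each new adopter weighs a random intrinsic preference against a small 'network benefit' that grows with a technology's installed base; once either technology pulls sufficiently far ahead, the network term dominates and the market locks in:

    import random

    def adoption_run(n_adopters=1000, network_weight=0.05, seed=None):
        """Each adopter picks the technology whose intrinsic appeal plus
        installed-base benefit is highest; early luck therefore compounds."""
        rng = random.Random(seed)
        installed = {"A": 0, "B": 0}
        for _ in range(n_adopters):
            payoff = {tech: rng.random() + network_weight * installed[tech]
                      for tech in installed}
            winner = max(payoff, key=payoff.get)
            installed[winner] += 1
        return installed

    for seed in range(3):
        print(adoption_run(seed=seed))
    # Typically one technology captures almost the whole market, but which one
    # differs from run to run: the outcome is path-dependent, not merit-based.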

According to Arthur, there are three reasons why knowledge-intensive industries operate under increasing returns:

a) high up-front (ie. R&D) costs;
b) learning effects: high tech (or knowledge-intensive) products are difficult to use and require training. Therefore, once users have spent time to learn how to use a specific product, technology or platform, it becomes harder for them to switch to another alternative as they will have to devote additional effort and time again; and,
c) Network effects (or network externalities): according to "Metcalfe's Law", the utility experienced by members of a network grows roughly with the square of the number of users (Figure 19; a simple numerical illustration follows the figure). If two people out of ten have a telephone, their experienced utility is not significant, but if all ten have one, the telephone becomes far more useful. The more diffused the DOS/IBM PC platform became, the more value users attributed to it.
 
 

Figure 19: Value Increases With the Number of Users-Members.
Source: [123]
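A back-of-the-envelope calculation shows why each additional user adds more value than the one before. One common formalisation of Metcalfe's Law counts the distinct user-to-user links a network makes possible, n(n-1)/2 for n users; the figures below are illustrative only:

    def metcalfe_value(users):
        """Number of distinct user-to-user links: grows roughly with the square of n."""
        return users * (users - 1) // 2

    for n in (2, 10, 100, 1000):
        print(n, metcalfe_value(n))
    # 2 users share 1 link, 10 share 45, 100 share 4,950 and 1,000 share 499,500,
    # so the network's usefulness rises far faster than its membership.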



The crucial points are that a technology does not need to be the best to become the 'dominant design', and that all technologies follow an S-shaped adoption path (Figure 20). They are slow to get going, but once they reach critical mass the technology spreads fast. Technologies typically improve as more people adopt them and gain experience with them. This link is a positive feedback loop: the more people adopt a particular technology, the more it improves, and the more incentive there is for further adoption (Arthur, 1990).
 
 

Figure 20: The S-Curve.
Source: [124]
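The S-shaped path can be reproduced from the positive feedback loop alone: if each period's new adoption is proportional both to the current installed base and to the part of the market not yet reached, diffusion starts slowly, accelerates past critical mass and then saturates. The parameter values in the sketch below are arbitrary and serve only to make the shape visible:

    def s_curve(initial_share=0.01, growth=0.5, periods=20):
        """Logistic diffusion: growth needs both existing adopters (feedback)
        and a remaining pool of potential adopters."""
        share, path = initial_share, []
        for _ in range(periods):
            share += growth * share * (1 - share)
            path.append(round(share, 3))
        return path

    print(s_curve())
    # Slow start, rapid take-off around the midpoint, then saturation near 1.0.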


Appendix III: Communication Networks

Regular patterns of information exchange among group members are called communication networks. Communication networks are either deliberately implemented when the group is organised (many organisations, for instance, adopt a hierarchical communication network that dictates the flow of information among employees, especially from superiors to subordinates and horizontally among peers), or they emerge spontaneously, without central planning, even when there is no formal attempt to specify and organise communication flows.

In addition, the communication network tends to parallel role, status and attraction patterns. Studies indicate that those with higher status usually initiate and receive more information, as do those who are better liked within the group (Aiken and Hage, 1968; Bacharach and Aiken, 1979; Jablin, 1979; Shaw, 1964).

The original analyses of communication networks focused on small groups and were conducted as laboratory experiments. More recently, researchers have extended the scope of their research to investigate networks in many settings, including large business organisations, families, research and development units, university departments and military units (Craddock, 1985; Friedkin, 1983; Keller and Holland, 1983; Monge, Edwards and Kirste, 1983; Tutzauer, 1985). The findings of these studies attest to the powerful impact of networks on overall group performance and efficiency and on members' level of satisfaction. They are summarised below:

Centralisation and Performance

The earliest systematic studies of communication networks were carried out by the Group Networks Laboratory at MIT in the 1950s. A characteristic study in this tradition is Leavitt's (1951) [125], which showed that one of the most important features of a network is its degree of centralisation, a finding that later research has confirmed (Leavitt, 1951; Bavelas and Barrett, 1951; Shaw, 1964; 1978).

A centralised network is a structure in which one position sits at the "crossroads" of communication and therefore controls the flow of information by acting as an intermediary between the other nodes. In a decentralised network, the number of channels at each position is roughly equal, so no position is more central than another (all positions can 'reach' the same number of other positions).

These early studies argued that centralised networks are more efficient than decentralised networks when the task is simple, but that decentralised networks outperform centralised ones when the task is more complex [126].
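The distinction can be made precise with Freeman's degree-centralisation index, a standard graph measure that equals 1.0 when a single position mediates all communication (Leavitt's 'wheel') and 0.0 when every position has the same number of channels (the 'circle'). The five-position example below is this study's own illustration, not data from the experiments cited above:

    def degree_centralisation(edges, nodes):
        """Freeman's index: how much the best-connected position dominates the rest."""
        degree = {n: 0 for n in nodes}
        for a, b in edges:
            degree[a] += 1
            degree[b] += 1
        d_max, n = max(degree.values()), len(nodes)
        return sum(d_max - d for d in degree.values()) / ((n - 1) * (n - 2))

    nodes = list(range(5))
    wheel = [(0, i) for i in range(1, 5)]            # one hub relays everything
    circle = [(i, (i + 1) % 5) for i in range(5)]    # every position is equal
    print(degree_centralisation(wheel, nodes))   # 1.0, fully centralised
    print(degree_centralisation(circle, nodes))  # 0.0, fully decentralised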

Satisfaction

Since the number of peripheral positions in a centralised network exceeds the number of central positions, the overall level of satisfaction is lower than in a decentralised network (Shaw, 1964; Eisenberg, Monge and Miller, 1983; Krackhardt and Porter, 1986).

Communication in Hierarchical Networks

For reasons of efficiency and control, many organisations adopt hierarchical communication networks. In such networks, information passes either horizontally among peers or vertically, up and down between superiors and subordinates (Jablin, 1979). However, upward communications differ greatly from downward communications (Browning, 1978; Katz and Kahn, 1978): they are fewer in number, briefer and much more guarded.

Evidence shows that in large organisations the upward flow of information may be impeded by the reluctance of low-status members to send information that might reflect unfavourably on their skills and performance (Bradley, 1978; Browning, 1978).

In practice, this means that good news will travel quickly up the hierarchy, whereas the top management will be the last to learn bad news [127]. Further studies indicate that employees are likely to distort information when they are not satisfied with their job because they are not interested in helping the organisation to fulfil its objectives (O'Reilly, 1978).
 
 

Appendix IV: Interviewees

Dan Barber is the main co-ordinator/maintainer of "Mojolin", a 'virtual roof' for the Open Source community and its projects. He is probably the only person in the Open Source community who proposes adopting the Open Source development model in industries other than software.

Chris Browne is the author of "Linux and Decentralized Development", a highly regarded analysis of the strengths and weaknesses of decentralisation in the software industry.

Chris DiBona volunteers as the Linux International Webmaster and is also the Linux International grant development fund coordinator. He is proud to work as the Director of Linux Marketing for VA Research Linux Systems (http://www.varesearch.com) and is the Vice President of the Silicon Valley Linux Users Group (the world's largest, at http://www.svlug.org).

His writings have been featured in the Vienna Times, Linux Journal, Tech Week, Boot Magazine (now Maximum PC) and a number of online publications. Additionally, he has edited (together with Sam Ockman and Mark Stone) the book Open Sources: Voices From the Open Source Revolution which is a key book on the Open Source movement [128].

Matt Haak works for Intekk Communications which is a leading provider of Internet-based data storage solutions and web applications.

Ko Kuwabara has provided a fascinating sociological account of Linux and the Open Source community in his "Linux: A Bazaar at the Edge of Chaos". It is one of the first (and certainly the best known) attempts to explain the 'Linux phenomenon' with complexity theory.

Robert Laubacher is a leading researcher for Massachusetts Institute of Technology's Sloan School of Management's "Initiative on Inventing the Organizations of the 21st Century". The paper "The Dawn of the E-lance Economy" [129] was one of the main influences for this study.

Glyn Moody is an eminent journalist-researcher who has written extensively on Linux and Open Source for the Financial Times, O'Reilly Network, Guardian, Economist, Wired, Computer, and New Scientist, and writes a column in Computer Weekly. His book Rebel Code: Linux and the Open Source Revolution (2001) is regarded as the best history of the Open Source movement and an excellent example of research, as he has interviewed all the key figures of the movement.

Ganesh Prasad is a leading Open Source figure; his "Examining some pseudo-economic arguments about Open Source" and "The Manager's Practical Guide to Linux" are among the most popular and economically sound efforts to bring corporate managers closer to Open Source software and particularly the Linux OS.

Richard Stallman (RMS) started the GNU project fifteen years ago to protect and foster the development of free software. A stated goal of the project was to develop an entire operating system and complete sets of utilities under a free and open license so that no one would ever have to pay for software again.

In 1991, Stallman received the prestigious Grace Hopper Award from the Association for Computing Machinery for his development of the Emacs editor. In 1990 he was awarded a MacArthur Foundation fellowship. He was awarded an honorary doctorate from the Royal Institute of Technology in Sweden in 1996. In 1998 he shared, with Linus Torvalds, the Electronic Frontier Foundation's Pioneer Award.

He is now more widely known for his evangelism of free software than the code he helped create.

Like anyone utterly devoted to a cause, Stallman has stirred controversy in the community he is a part of. His insistence that the term "Open Source software" is specifically designed to quash the freedom-related aspects of free software is only one of the many stances he has taken of late that have caused some to label him an extremist. He takes it all in stride, as anyone who has seen him don the garb of his alter ego, Saint IGNUcius of the Church of Emacs, can testify.

Many have said, "If Richard did not exist, it would have been necessary to invent him." This praise is an honest acknowledgment of the fact that the Open Source movement could not have happened without the Free Software movement that Richard popularizes and evangelizes even today.

In addition to his political stance, Richard is known for a number of software projects. The two most prominent are the GNU C compiler (GCC) and the Emacs editor. GCC is by far the most ported, most popular compiler in the world. But far and wide, RMS is known for the Emacs editor. Calling Emacs an editor is like calling the Earth a nice hunk of dirt. Emacs is an editor, a web browser, news reader, mail reader, personal information manager, typesetting program, programming editor, hex editor, word processor, and a number of video games. Many programmers use a kitchen sink as an icon for their copy of Emacs. There are many programmers who enter Emacs and don't leave to do anything else on the computer. Emacs, you'll find, isn't just a program, but a religion, and RMS is its saint [130].

'Debian' [131] distributors:

Michael McConnel of Eridani Star System (UK)
Philip Hands of Philip Hands (UK)
Kajetan Hinner of Hinner EDV (Germany)
Ikarios (France)
 
 

Bibliography

J. Abbate, 1999. Inventing the Internet. Cambridge, Mass.: MIT Press.

P.A. Adler and P. Adler, 1998. "Observational techniques," In: N.K. Denzin and Y.S. Lincoln (editors). Collecting and Interpreting Qualitative Materials. Thousand Oaks, Calif.: Sage.

M. Aiken and J. Hage, 1968. "Organizational interdependence and intraorganizational structure," American Sociological Review, volume 33, pp. 912-930.

H.I. Ansoff, 1965. Corporate Strategy. London: Penguin.

W.B. Arthur, 1996. "Increasing returns and the new world of business," Harvard Business Review, volume 74, number 4 (July-August), pp. 101-109.

W.B. Arthur, 1990. "Positive feedbacks in the economy," Scientific American, volume 262, number 2 (February), pp. 92-99.

R.M. Axelrod and M.D. Cohen, 1999. Harnessing Complexity: Organizational Implications of a Scientific Frontier. New York: Free Press.

S.B. Bacharach and M. Aiken, 1979. "The Impact of alienation, meaninglessness, and meritocracy on supervisor and subordinate satisfaction," Social Forces, volume 57, pp. 853-870.

C.Y. Baldwin and K.B. Clark, 1997. "Managing in an age of modularity," Harvard Business Review, volume 75, number 5 (September-October), pp. 84-93.

D. Barber, 2001. "The Open Source development model - is it applicable to other industries?," (3 March) at http://mojolin.com/articles/open_source_model.php?session=vTVi4tc1GfTb, accessed 26 October 2001.

C.I. Barnard, 1938. The Functions of the Executive. Cambridge, Mass.: Harvard University Press.

A. Bavelas and D. Barrett, 1951. "An Experimental approach to organization communication," Personnel, volume 27, pp. 367-371.

H.S. Becker, 1956. "Interviewing medical students," American Journal of Sociology, volume 62, pp. 199-201.

D. Belson, 1994. "The Network nation revisited," at http://www.stevens-tech.edu/~dbelson/thesis/thesis.html, accessed 26 October 2001.

H.S. Bennett, 1980. On Becoming a Rock Musician. Amherst: University of Massachusetts Press.

A.M. Brandenburger and B.J. Nalebuff, 1996. Co-opetition. New York: Doubleday.

P.H. Bradley, 1978. "Power, status, and upward communication in small decision-making groups," Communication Monographs, volume 45, pp. 33-43.

D. Bramel and R. Friend, 1981. "Hawthorne, the myth of the docile worker, and class bias in psychology," American Psychologist, volume 36, pp. 867-878.

F. Brooks, 1975. The Mythical Man-Month: Essays on Software Engineering. Reading, Mass.: Addison-Wesley.

C.B. Browne, 1998. "Linux and decentralized development," at http://www.firstmonday.org/issues/issue3_3/browne/, First Monday, volume 3, number 3 (March), accessed 27 October 2001.

L. Browning, 1978. "A Grounded organizational communication theory derived from qualitative data," Communication Monographs, volume 45, pp. 93-109.

A. Bryman, 1988. Quantity and Quality in Social Research. London: Unwin Hyman.

R.G. Burgess, 1984. In the Field: An Introduction to Field Research. London: Allen and Unwin.

T. Burns and G.M. Stalker, 1961. The Management of Innovation. London: Tavistock.

T. Burns, 1963. "Industry in a new age," New Society (31 January), pp. 17-20.

A.D. Chandler, 1977. The Visible Hand: The Managerial Revolution in American Business. Cambridge, Mass.: Belknap Press.

A.D. Chandler, 1962. Strategy and Structure: Chapters in the History of the Industrial Enterprise. Cambridge, Mass.: MIT Press.

A.V. Cicourel, 1974. Theory and Method in a Study of Argentine Fertility. New York: Wiley.

T. Clarke and S. Clegg, 1998. Changing Paradigms: The Transformation of Management Knowledge for the 21st Century. London: HarperCollins Business.

J.S. Coleman, 1969. "Relational analysis: The Study of organizations with survey methods," In: A. Etzioni (compiler). A Sociological Reader on Complex Organizations. New York: Holt, Rinehart and Winston.

D. Constant, S. Kiesler, and L. Sproull, 1996. "The Kindness of strangers: On the usefulness of weak ties for technical advice," Organization Science, volume 17, number 2, pp. 119-135.

S.W. Cook, 1981. "Ethical implications," In: L.H. Kidder (editor). Selltiz, Wrightsman, and Cook's Research Methods in Social Relations. 4th edition. New York: Holt, Rinehart and Winston.

A.E. Craddock, 1985. "Centralised authority as a factor in small group and family problem solving," Small Group Behavior, volume 16, pp. 59-73.

M. Crozier, 1964. The Bureaucratic Phenomenon. Chicago: University of Chicago Press.

M.A. Cusumano, 1985. The Japanese Automobile Industry: Technology and Management at Nissan and Toyota. Cambridge, Mass.: Harvard University Press.

M.A. Cusumano and R.W. Selby, 1995. Microsoft Secrets: How the World's Most Powerful Software Company Creates Technology, Shapes Markets, and Manages People. London: HarperCollins.

M. Dalton, 1959. Men Who Manage: Fusions of Feeling and Theory in Administration. New York: Wiley.

E.E. David, Jr. and R.M. Fano, 1965. "Some thoughts about the social implications of the accessible computing," excerpts reprinted in IEEE Annals of the History of Computing, volume 14, number 2, pp. 36-39, and at http://www.multicians.org/fjcc6.html, accessed 27 October 2001.

W.H. Davidow and M.S. Malone, 1992. The Virtual Corporation: Structuring and Revitalizing the Corporation for the 21st Century. London: HarperCollins.

S.M. Davis and C. Meyer, 1998. Blur: The Speed of Change in the Connected Economy. Oxford: Capstone.

P. Dawson and G. Palmer, 1995. Quality Management: The Theory and Practice of Implementing Change. Melbourne: Longman.

C. DiBona, S. Ockman, and M. Stone (editors), 1999. Open Sources: Voices from the Open Source Revolution. Sebastopol, Calif.: O'Reilly.

J. Ditton, 1977. Part-time Crime: An Ethnography of Fiddling and Pilferage. London: Macmillan.

J.D. Douglas, 1976. Investigative Social Research: Individual and Team Field Research. Beverly Hills, Calif.: Sage.

M. Drummond, 2001. Renegades of the Empire. New York: Three Rivers Press.

A. Duncan and S. Hull, 2001. Oracle & Open Source. Sebastopol, Calif.: O'Reilly.

The Economist (UK), 2001. "Survey: Software," (14-20 April).

The Economist (UK), 2000. 18-24 November.

The Economist (UK), 2000. 23-29 September.

E.M. Eisenberg, P.R. Monge, and K.I. Miller, 1983. "Involvement in communication networks as a predictor of organizational commitment," Human Communication Research, volume 10, pp. 179-201.

A. Etzioni (compiler), 1969. A Sociological Reader on Complex Organizations. New York: Holt, Rinehart and Winston.

P. Evans and T.S. Wurster, 2000. Blown to Bits: How the New Economics of Information Transforms Strategy. Boston: Harvard Business School Press.

H. Fayol, 1949. General and Industrial Management. London: Pitman.

L. Festinger, H.W. Riecken, and S. Schachter, 1956. When Prophecy Fails. Minneapolis: University of Minnesota Press.

J. Finch, 1984. ""It's great to have someone to talk to": the ethics and politics of interviewing women," In: C. Bell and H. Roberts, (editors). Social Researching: Policies, Problems, Practice. London: Routledge and Kegan Paul.

A. Fontana, 1977. The Last Frontier: The Social Meaning of Growing Old. Beverly Hills, Calif.: Sage.

D.R. Forsyth, 1990. Group Dynamics. Second edition. Pacific Grove, Calif.: Brooks/Cole.

R.H. Franke, 1979. "The Hawthorne experiments: Re-view," American Sociological Review, volume 44, number 5 (October), pp. 861-867.

R.H. Franke and J.D. Kaul, 1978. "The Hawthorne experiments: First statistical interpretation," American Sociological Review, volume 43, number 5 (October), pp. 623-643.

J.H. Frey, 1993. "Risk perception associated with a high level nuclear waste repository," Sociological Spectrum, volume 13, pp. 139-151.

N.E. Friedkin, 1983. "Horizons of observability and limits of informal control in organizations," Social Forces, volume 62, pp. 54-77.

S. Gherardi, 1997. "Organisational learning," In: A. Sorge and M. Warner, (editors). The IEBM Handbook of Organizational Behavior. London: International Thompson Business Press.

R. Häcki and J. Lighton, 2001. "The Future of the networked company," McKinsey Quarterly, number 3, at http://www.mckinseyquarterly.com, accessed 27 October 2001.

J. Hagel, III, 2000. "Spider versus Spider," McKinsey Quarterly, number 3, at http://www.mckinseyquarterly.com, accessed 27 October 2001.

R.W. Hall, 1993. The Soul of the Enterprise: Creating a Dynamic Vision for American Manufacturing. New York: HarperCollins.

R.D. Hames, 1997. Burying the 20th Century: New Paths for New Futures. Warriewood, N.S.W., Australia: Business & Professional Publishing.

R.D. Hames, 1994. The Management Myth: Explaining the Essence of Future Organisations. Chatswood, N.S.W., Australia: Business & Professional Publishing.

C.B. Handy, 1995. "Trust and the virtual organization," Harvard Business Review, volume 73, number 3 (May-June), pp. 40-50.

C.B. Handy, 1989. The Age of Unreason. Boston: Harvard Business School Press.

A.P. Hare and D. Naveh, 1986. "Conformity and creativity: Camp David," Small Group Behavior, volume 17, pp. 243-268.

R.H. Hayes, S.C. Wheelwright, and K.B. Clark, 1988. Dynamic Manufacturing: Creating the Learning Organisation. New York: Free Press.

S.R. Hiltz and M. Turoff, 1993. The Network Nation: Human Communication via Computer. Revised edition. Cambridge, Mass.: MIT Press.

S.R. Hiltz and M. Turoff, 1978. The Network Nation: Human Communication via Computer. Reading, Mass.: Addison-Wesley.

R.M. Hodgetts, M.F. Luthans, and S.M. Lee, 1994. "New paradigm organizations: From total quality to learning to world-class," Organization Dynamics, volume 22, number 3 (Winter), pp. 5-19.

J.H. Holland, 1975. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Implications to Biology, Control, and Artificial Intelligence. Cambridge, Mass.: MIT Press.

P. Hood and D. Hall, 1999. "Open source software: Lighthouse case study," Toronto: Alliance for Converging Technologies.

F.M. Jablin, 1979. "Superior-subordinate communication: the state of the art," Psychological Bulletin, volume 86, pp. 1201-1222.

I.L. Janis, 1989. Crucial Decisions: Leadership in Policymaking and Crisis Management. New York: Free Press.

I.L. Janis, 1985. "International crisis management in the nuclear age," Applied Social Psychology Annual, volume 6, pp. 63-86.

I.L. Janis, 1983. "Groupthink," In: H.H. Blumberg, A.P. Hare, V. Kent, and M.F. Davis (editors). Small Groups and Social Interaction. volume 2, New York: Wiley, pp. 39-46.

I.L. Janis, 1972. Victims of Groupthink: A Psychological Study of Foreign-policy Decisions and Fiascoes. Boston: Houghton Mifflin.

I.L. Janis, 1963. "Group identifications under conditions of external danger," British Journal of Medical Psychology, volume 36, pp. 227-238.

I.L. Janis and L. Mann, 1977. Decision Making: A Psychological Analysis of Conflict, Choice, and Commitment. New York: Free Press.

J.M. Johnson, 1976. Doing Field Research. New York: Free Press.

D. Katz and R.L. Kahn, 1978. The Social Psychology of Organizations. Second edition. New York: Wiley.

S.A. Kauffman, 1993. The Origins of Order: Self-organization and Selection in Evolution. New York: Oxford University Press.

R.T. Keller and W.E. Holland, 1983. "Communications and innovators in research and development organizations," Academy of Management Journal, volume 26, pp. 742-749.

K. Kelly, 1998. New Rules for the New Economy: 10 Radical Strategies for a Connected World. London: Fourth Estate.

D. Krackhardt and L.W. Porter, 1986. "The Snowball effect: Turnover embedded in communication networks," Journal of Applied Psychology, volume 71, pp. 50-55.

E. Krubasik and H. Lautenschlager, 1993. "Forming successful strategic alliances in high-tech businesses," In: J. Bleeke and D. Ernst (editors). Collaborating to Compete: Using Strategic Alliances and Acquisitions in the Global Marketplace. New York: Wiley.

K. Kuwabara, 2000. "Linux: A Bazaar at the edge of chaos," First Monday, volume 5, number 3 (March), at http://www.firstmonday.org/issues/issue5_3/kuwabara/, accessed 28 October 2001.

H.A. Landsberger, 1958. Hawthorne Revisited. (Cornell Studies in Industrial and Labor Relations, volume 9). Ithaca, N.Y.: Cornell University.

P.R. Lawrence and J.W. Lorsch, 1967. Organization and Environment: Managing Differentiation and Integration. Boston: Division of Research, Graduate School of Business Administration, Harvard University.

P.F. Lazarsfeld and H. Menzel, 1969. "On the relation between individual and collective properties" In: A. Etzioni (editor). A Sociological Reader on Complex Organizations. New York: Holt, Rinehart and Winston, pp. 499-516.

C. Leadbeater, 2000. Living on Thin Air: The New Economy. London: Penguin.

H.J. Leavitt, 1951. "Some effects of certain communication patterns on group performance," Journal of Abnormal and Social Psychology, volume 46, pp. 38-50.

G.K. Lee and R.E. Cole, 2000. "The Linux development as a model of open source knowledge creation," Haas School of Business, University of California, at http://www.haas.berkeley.edu/~pierce/leecole.pdf, accessed 28 October 2001.

R. Levine, 1999. Cluetrain Manifesto: The End of Business as Usual. Cambridge, Mass.: Perseus, see also http://www.cluetrain.com/book.html, accessed 28 October 2001.

S. Levy, 1984. Hackers. London: Penguin.

M. Lewis, 2001. "Free spirit in a capitalist world: Interview with Richard Stallman," Computer Weekly (20 April), at http://www.cw360.com/, accessed 27 October 2001.

D.C. Limmerick and B. Cunnington, 1993. Managing the New Organisation: A Blueprint for Networks and Strategic Alliances. Chatswood, N.S.W., Australia: Business & Professional Publishing.

B. Malinowski, 1922. Argonauts of the Western Pacific: An Account of Native Enterprise and Adventure in the Archipelagoes of Melanesian New Guinea. London: Routledge and Kegan Paul.

T.W. Malone and R.J. Laubacher, 1998. "The Dawn of the e-lance economy," Harvard Business Review, volume 76, number 5 (September-October), pp. 145-152.

J.G. March, 1991. "Exploration and exploitation in organizational learning," Organization Science, volume 2, number 1, pp. 71-87.

J.G. March and H.A. Simon, 1958. Organizations. New York: Wiley.

J. Martin, 1996. Cybercorp: The New Business Revolution. New York: Amacom.

E. Mayo, 1945. The Social Problems of an Industrial Civilization. Boston: Division of Research, Graduate School of Business Administration, Harvard University.

D. McGregor, 1960. The Human Side of Enterprise. New York: McGraw-Hill.

P.R. Monge, J.A. Edwards, and K.K. Kirste, 1983. "Determinants of communication network involvement: Connectedness and integration," Group and Organization Studies, volume 8, pp. 83-111.

G. Moody, 2001. Rebel Code: the Inside Story of Linux and the Open Source Revolution. Cambridge, Mass.: Perseus.

J.Y. Moon and L. Sproull, 2000. "Essence of distributed work: The Case of the Linux kernel," First Monday, volume 5, number 11 (November), at http://www.firstmonday.org/issues/issue5_11/moon/, accessed 28 October 2001.

G. Morgan, 1993. Imaginization: The Art of Creative Management. Newbury Park, Calif.: Sage.

T. Nadeau, 1999a. "Learning from Linux: OS/2 and the Halloween Memos," Part 1 - Halloween I, at http://www.os2hq.com/archives/linmemo1.htm, accessed 28 October 2001.

T. Nadeau, 1999b. "Learning from Linux: OS/2 and the Halloween Memos," Part 2 - Halloween II, at http://www.os2hq.com/archives/linmemo2.htm, accessed 28 October 2001.

T. Nadeau, 1999c. "Learning from Linux: OS/2 and the Halloween Memos," Part 3 - Halloween III, at http://www.os2hq.com/archives/linmemo3.htm, accessed 28 October 2001.

J. Naisbitt, 1982. Megatrends: Ten New Directions Transforming Our Lives. New York: Warner Books.

J.J. Naughton, 2000. A Brief History of the Future: From Radio Days to Internet Years in a Lifetime. Woodstock, N.Y.: Overlook Press.

N. Nohria and J.D. Berkely, 1994. "The Virtual organization: Bureaucracy, technology, and the implosion of control," In: C. Heckscher and A. Donnellon (editors). The Post-Bureaucratic Organization: New Perspectives on Organizational Change, Thousand Oaks, Calif.: Sage, pp. 108-128.

K. Nordström and J. Ridderstråle, 2000. Funky Business: Talent Makes Capital Dance. London: ft.com.

C.A. O'Reilly, 1978. "The Intentional distortion of information in organizational communication: A Laboratory and field investigation," Human Relations, volume 31, pp. 173-193.

T. O'Reilly, 2001. "Remaking the P2P Meme Map," In: A. Oram (editor). Peer-to-Peer: Harnessing the Power of Disruptive Technologies. Sebastopol, Calif.: O'Reilly, see also http://www.openp2p.com/p2p/2000/12/05/images/800-p2p2.jpg, accessed 28 October 2001.

T. Peters and N. Austin, 1985. A Passion for Excellence. London: HarperCollins.

G. Prasad, 2001. "Open Source-onomics: Examining some pseudo-economic arguments about open source," at http://www.freeos.com/printer.php?entryID=4087, accessed 28 October 2001.

J.B. Quinn, P. Anderson, and S. Finkelstein, 1996. "Managing professional intellect: Getting the most out of the best," Harvard Business Review, volume 74, number 2 (March-April), pp. 71-80.

J.B. Quinn, J.J. Baruch, and K.A. Zien, 1996. "Software-based innovation," Sloan Management Review, volume 37, number 4 (Summer), pp. 11-24.

J.B. Quinn and F.G. Hilmer, 1994. "Strategic outsourcing," Sloan Management Review, volume 35, number 4 (Summer), pp. 43-55.

J.B. Quinn, T.L. Doorley, and P.C. Paquette, 1990. "Technology in services: Rethinking strategic focus," Sloan Management Review, volume 31, number 2 (Winter), pp. 79-87.

R. Radloff and R. Helmreich, 1968. Groups Under Stress: Psychological Research in SEALAB II. New York: Appleton-Century-Crofts.

E.S. Raymond, 1998a. "The Cathedral and the bazaar," First Monday, volume 3, number 3 (March), at http://www.firstmonday.org/issues/issue3_3/raymond/, accessed 28 October 2001.

E.S. Raymond, 1998b. "Homesteading the noosphere," First Monday, volume 3, number 10 (October), at http://www.firstmonday.org/issues/issue3_10/raymond/, accessed 28 October 2001.

E.S. Raymond, 1999a. "A Brief history of hackerdom," at http://www.tuxedo.org/~esr/writings/hacker-history/, accessed 28 October 2001.

E.S. Raymond, 1999b. "A response to Nikolai Bezroukov," First Monday, volume 4, number 11 (November), at http://www.firstmonday.org/issues/issue4_11/raymond/, accessed 28 October 2001.

E.S. Raymond, 1999c. "The Magic cauldron," at http://www.tuxedo.org/~esr/writings/magic-cauldron/, accessed 28 October 2001.

E.M. Reid, 1991. "Electropolis: Communication and community on Internet Relay Chat," University of Melbourne, Department of History, Honours Thesis, at http://eserver.org/cyber/reid.txt, accessed 28 October 2001.

P.D. Reynolds, 1979. Ethical Dilemmas and Social Science Research. San Francisco: Jossey-Bass.

H. Rheingold, 1993. The Virtual Community: Homesteading on the Electronic frontier. Reading, Mass.: Addison-Wesley.

H. Roberts, 1981. "Interviewing women: A Contradiction in terms," In: H. Roberts (editor). Doing Feminist Research. London: Routledge and Kegan Paul.

F.J. Roethlisberger and W.J. Dickson, 1939. Management and the Worker: An Account of a Research Program Conducted by the Western Electric Company, Hawthorne Works, Chicago. Cambridge, Mass.: Harvard University Press.

R. Rothwell, 1992. "Successful industrial innovation: Critical factors for the 1990s," R&D Management, volume 22, number 3, pp. 221-239.

D. Roy, 1960. "Banana Time: Job satisfaction and informal interaction," Human Organization, volume 18, number 4, pp. 158-168.

B. Schneier, 2000. Secrets and Lies: Digital Security in a Networked World. New York: Wiley.

M.S. Schwartz and C.G. Schwartz, 1955. "Problems in participant observation," American Journal of Sociology, volume 60, pp. 343-354.

H. Schwartz and J. Jacobs, 1979. Qualitative Sociology: A Method to the Madness. New York: Free Press.

R.W. Scott, 1969. "Field methods in the study of organizations," In: J.G. March, (editor), 1965. Handbook of Organizations. Chicago: Rand McNally, pp. 272-282; and, reprinted in A. Etzioni (compiler), 1969. A Sociological Reader on Complex Organizations. New York: Holt, Rinehart and Winston, pp. 558-576.

P.H. Senge, 1990. The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday/Currency.

M.E. Shaw, 1978. "Communication networks fourteen years later," In: L. Berkowitz (editor). Group Processes. New York: Academic Press.

M.E. Shaw, 1964. "Communication networks," In: L. Berkowitz (editor). Advances in Experimental Social Psychology. New York: Academic Press, pp. 111-147.

O. Shy, 2001. The Economics of Network Industries. Cambridge: Cambridge University Press.

S.D. Sieber, 1973. "The integration of fieldwork and survey methods," American Sociological Review, volume 78, number 6, pp. 1335-1339.

J.P. Spradley, 1979. The Ethnographic Interview. New York: Holt, Rinehart and Winston.

R.M. Stallman, 1999. "The GNU operating system and the free software movement," In: C. DiBona, S. Ockman, and M. Stone (editors). Open Sources: Voices from the Open Source Revolution. Sebastopol, Calif.: O'Reilly.

N. Stephenson, 1999. In the Beginning ... Was the Command Line. New York: Avon Books.

C.R. Stones, 1982. "A Community of Jesus people in South Africa," Small Group Behavior, volume 13, pp. 264-272.

D. Tapscott, 1996. The Digital Economy: Promise and Peril in the Age of Networked Intelligence. New York: McGraw-Hill.

D. Tapscott and A. Caston, 1993. Paradigm Shift: Promise and Peril in the Age of Networked Intelligence. New York: McGraw-Hill.

F.W. Taylor, 1911. The Principles of Scientific Management. New York: Harper & Brothers.

L. Torvalds, 1999a. "The Linux edge," Communications of the ACM, volume 42, number 4, pp. 38-39.

L. Torvalds, 1999b. "The Linux edge," In: C. DiBona, S. Ockman, and M. Stone (editors). Open Sources: Voices from the Open Source Revolution. Sebastopol, Calif.: O'Reilly, pp. 101-119.

I. Tuomi, 2001. "Internet, innovation, and open source: Actors in the network," First Monday, volume 6, number 1 (January), at http://www.firstmonday.org/issues/issue6_1/tuomi/, accessed 28 October 2001.

F. Tutzauer, 1985. "Toward a theory of disintegration in communication networks," Social Networks, volume 7, pp. 263-285.

V. Valloppillil, 1998. "Open source software: A (new?) development methodology," also referred to as the Halloween document, unpublished working paper, Microsoft Corporation.

J. Wakeford, 1981. "From methods to practice: a critical note on the teaching of research practice to undergraduates," Sociology, volume 15, number 4, pp. 505-512.

R. Wax, 1957. "Twelve years later: An Analysis of field experiences," American Journal of Sociology, volume 63, pp. 133-142.

S. Webb and B. Webb, 1932. Methods of Social Study. London: Longmans, Green.

M.J. White, 1977. "Counternormative behavior as influenced by deinviduating conditions and reference group salience," Journal of Social Psychology, volume 103, pp. 73-90.

W.F. Whyte, 1943. Street Corner Society: The Social Structure of an Italian Slum. Chicago: University of Chicago Press.

R. Young, 1999. "Giving it Away," In: C. DiBona, S. Ockman, and M. Stone (editors). Open Sources: Voices from the Open Source Revolution. Sebastopol, Calif.: O'Reilly.

M. Zelditch, Jr., 1969. "Can you really study an army in the laboratory?," In: A. Etzioni (compiler). A Sociological Reader on Complex Organizations. New York: Holt, Rinehart and Winston.

F. Zweig, 1948. Labour, Life and Poverty. London: Gollancz.


Editorial history

Paper received 19 September 2001; accepted 16 October 2001.


Copyright ©2001, First Monday

Management and Virtual Decentralised Networks: The Linux Project by George N. Dafermos
First Monday, volume 6, number 11 (November 2001),
URL: http://firstmonday.org/issues/issue6_11/dafermos/index.html