Speak Freely
End of Life Announcement

by John Walker
August 1st, 2003


The time has come to lower the curtain on Speak Freely. As of August 1st, 2003, version 7.6a of Speak Freely (Unix and Windows) is declared the final release of the program. A banner will be added to the general Speak Freely page and those specific to the Unix and Windows versions on the www.fourmilab.ch site announcing the end of life and linking to this document. No further development or maintenance will be done, and no subsequent releases will be forthcoming.

On January 15th, 2004 all Speak Freely documentation and program downloads, along with links to them on the site navigation pages, will be removed from the www.fourmilab.ch site. Accesses to these files will be redirected to a revised version of this document. On that date the speak-freely and speak-freely-digest mailing lists will be closed and their archives copied to off-line storage and deleted from the site. In addition, the Speak Freely Forum will cease operation, along with the Echo and Look Who's Listening servers previously running at www.fourmilab.ch. Ports 2074 through 2076 will be firewall blocked for the fourmilab.ch domain, with incoming packets silently discarded. After the January 15th, 2004 end of life date, all queries, in whatever form, regarding Speak Freely will be ignored. An historical retrospective on the program may eventually be published on the site.

Questions and Answers

Why are you doing this?
The time has come. Speak Freely is the direct descendant of a program I originally developed and posted to Usenet in 1991. The bulk of Speak Freely development was done in 1995 and 1996, with the Windows version designed around the constraints of 16-bit Windows 3.1. Like many programs of comparable age which have migrated from platform to platform and grown to encompass capabilities far beyond anything envisioned in their original design, Speak Freely shows its age. The code is messy, difficult to understand, and very easy to break when making even small modifications. The Windows and Unix versions, although interoperable, have diverged in design purely due to their differing histories, almost doubling the work involved in making any change which affects them both.
To continue development and maintenance of Speak Freely, the program requires a top-to-bottom rewrite, basing the Unix and Windows versions on an identical "engine," and providing an application programming interface (API) which permits other programs to be built upon it. I estimate the work involved in this task, simply to reach the point where a program built with the new architecture is 100% compatible with the existing Speak Freely, would require between 6 and 12 man-months. There is no prospect whatsoever that I will have time of that magnitude to devote to Speak Freely in the foreseeable future, and no indication that there exists any other developer qualified to do the job and sufficiently self-motivated and self-disciplined to get it done. In fact, the history of Speak Freely constitutes what amounts to a non-existence proof of candidate developers.
Even if I had the time to invest in Speak Freely, or another developer or group of developers volunteered to undertake the task, the prospects for such a program would not justify the investment of time.
What do you mean--isn't the Internet still in its infancy?
If you say so. The Internet, regardless of its state of development, is in the process of metamorphosing into something very different from the Internet we've known over the lifetime of Speak Freely. The Internet of the near future will be something never contemplated when Speak Freely was designed, inherently hostile to such peer-to-peer applications.
I am not using the phrase "peer to peer" as a euphemism for "file sharing" or other related activities, but in its original architectural sense, where all hosts on the Internet were fundamentally equal. Certainly, Internet connections differed in bandwidth, latency, and reliability, but apart from those physical properties any machine connected to the Internet could act as a client, server, or (in the case of datagram traffic such as Speak Freely audio) neither--simply a peer of those with which it communicated. Any Internet host could provide any service to any other and access services provided by them. New kinds of services could be invented as required, subject only to compatibility with the higher level transport protocols (such as TCP and UDP). Unfortunately, this era is coming to an end.
One need only read discussions on the Speak Freely mailing list and Forum over the last year to see how many users, after switching from slow, unreliable dial-up Internet connections to broadband, persistent access via DSL or cable television modems discover, to their dismay, that they can no longer receive calls from other Speak Freely users. The vast majority of such connections use Network Address Translation (NAT) in the router connected to the broadband link, which allows multiple machines on a local network to share the broadband Internet access. But NAT does a lot more than that.
A user behind a NAT box is no longer a peer to other sites on the Internet. Since the user no longer has an externally visible Internet Protocol (IP) address (fixed or variable), there is no way (in the general case--there may be "workarounds" for specific NAT boxes, but they're basically exploiting bugs which will probably eventually be fixed) for sites to open connections or address packets to his machine. The user is demoted to acting exclusively as a client. While the user can contact and freely exchange packets with sites not behind NAT boxes, he cannot be reached by connections which originate at other sites. In economic terms, the NATted user has become a consumer of services provided by a higher-ranking class of sites, producers or publishers, not subject to NAT.
There are powerful forces, including government, large media organisations, and music publishers who think this situation is just fine. In essence, every time a user--they love the word "consumer"--goes behind a NAT box, a site which was formerly a peer to their own sites goes dark, no longer accessible to others on the Internet, while their privileged sites remain. The lights are going out all over the Internet. My paper, The Digital Imprimatur, discusses the technical background, economic motivations, and social consequences of this in much more (some will say tedious) detail. Suffice it to say that, as the current migration of individual Internet users to broadband connections with NAT proceeds, the population of users who can use a peer to peer telephony product like Speak Freely will shrink apace. It is irresponsible to encourage people to buy into a technology which will soon cease to work.
But isn't the problem really the lack of static port mapping, not NAT?
(If you don't understand this question, please skip to the next.) Correct, but experience has shown that a large number of installed NAT boxes either cannot map an externally accessible port to an internal IP address and port, or those who install the boxes do not provide their customers adequate information to permit them to do this. Given the trend, discussed in the last question, toward confining individual Internet users to a consumer role, I believe fewer and fewer users will have the ability to statically map ports as time goes on.
Isn't there some clever way to work around these limitations?
Not as far as I can figure out. As long as a NAT box only maps an inbound port to a local IP address when an outbound connection is established, I know of no way an Internet user can initiate a connection to a user behind a NAT box. With sufficient cleverness, such as the "NAT fix" in the 7.6a Unix version of Speak Freely, a user behind a NAT box can connect to one who isn't, but if both users are NATted (and that's the way things are going), the only way they could communicate would be through a non-NATted server site to which both connected, which would then forward packets between them.
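
To make the forwarding idea concrete, here is a minimal sketch of such a relay (in Python, my choice of language for illustration; Speak Freely itself is written in C). The two-party limit and the use of Speak Freely's data port 2074 are assumptions of the sketch, not a description of any existing server:

    # A toy relay: each NATted client first sends a datagram here
    # (opening an outbound mapping in its own NAT box); thereafter the
    # relay forwards each side's packets to the other.
    import socket

    RELAY_PORT = 2074                    # Speak Freely's data port (assumed)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", RELAY_PORT))

    peers = []                           # (ip, port) of the two callers

    while True:
        data, addr = sock.recvfrom(2048)
        if addr not in peers and len(peers) < 2:
            peers.append(addr)           # register a new caller
        if len(peers) == 2 and addr in peers:
            other = peers[1] if addr == peers[0] else peers[0]
            sock.sendto(data, other)     # forward audio to the other party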
So why don't you just set up such a server?
Because no non-commercial site like mine could possibly afford the unlimited demands on bandwidth that would require. It's one thing to provide a central meeting point like a Look Who's Listening server, which handles a packet every five minutes or so from connected sites, but a server that's required to forward audio in real-time between potentially any number of simultaneously connected users is a bandwidth killer. The www.fourmilab.ch site has 2 Mbit/sec bidirectional bandwidth, about 50-75% of which (outbound) is typically in use serving Web pages. If we assume 1 Mbit/sec free bandwidth, then fewer than 70 simultaneous Speak Freely half-duplex GSM conversations would saturate this bandwidth, half that number if they're full-duplex. Besides, as soon as you set up such a server, within hours it would come under denial of service attacks mounted by malicious children and their moral and intellectual adult equivalents which would render the server unusable to legitimate users. Further, the existence of such server(s) would represent a single-point vulnerability which is the very antithesis of the design of the Internet and Speak Freely. Anybody who thinks through the economics and logistics of operating such a server on a pro bono basis will, I am confident, reject it on the same grounds I have. If you disagree, go prove me wrong!
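
For those who want to check the arithmetic, here it is in a few lines of Python; the 14.4 kbit/sec per one-way GSM stream (codec plus packet overhead) is my rough assumption, not a measured Speak Freely figure:

    free_bandwidth = 1_000_000    # bits/sec assumed free on the 2 Mbit link
    gsm_stream     = 14_400       # bits/sec per one-way GSM audio stream

    half_duplex = free_bandwidth // gsm_stream         # one stream per call
    full_duplex = free_bandwidth // (2 * gsm_stream)   # two streams per call

    print(half_duplex, "half-duplex calls")            # 69: "fewer than 70"
    print(full_duplex, "full-duplex calls")            # 34: about half that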
But won't NAT go away once we migrate to IPv6?
(If you don't know what IPv6 is, please skip ahead to the next question.) First of all, any bets on when IPv6 will actually be implemented end-to-end for a substantial percentage of individual Internet users? And even if it were, don't bet on NAT going away. Certainly it will change, but once the powers that be have demoted Internet users from peers to consumers, I don't think they're likely to turn around and re-empower them just because the address space is now big enough. Besides, the fraction of users who care about such issues, while high among those interested in programs such as Speak Freely, is minuscule among the general public.
Why January 15th, 2004?
January 1st would have made more sense, but I expect to be out of town then, and I don't like to make major changes to the site while I'm on the road. There's no special significance to the date.
Can I go on using Speak Freely after January 15th?
Certainly, Speak Freely is in the public domain; you can do anything you like with it, before or after January 15th, 2004. But as of that date I am not going to have anything more to do with it.
After January 15th, can I distribute copies to other people?
Certainly; see the previous answer. You're free to distribute Speak Freely in source or binary form to anybody you like, post it on your Web site, etc., subject only to whatever governmental restrictions may apply to distribution of the encryption technology Speak Freely employs.
How will I be able to find people once your Look Who's Listening server shuts down?
You can exchange IP addresses with people you wish to call via ICQ, instant messages, E-mail, chat systems, etc. If somebody wants to start a public Look Who's Listening server they're welcome, but history is not encouraging. While the Fourmilab LWL server has run continuously for 8 years, no other public server has lasted more than a year before disappearing.
How can I test without a public echo server?
Beats me. If the need is sufficient, perhaps somebody will set one up, but, as with LWL servers, they never seem to last very long.
Why all the dramatics of an "end of life" announcement?
The fate of most free software projects is "abandonware"--the developer loses interest, burns out, or becomes occupied with other projects and simply leaves the software as-is. In fact, this happened with Speak Freely a few years ago, and the consequences were distasteful. Unix workstation vendors routinely issue end of life announcements to inform customers that as of a given date, or software release, or new hardware platform, an existing product will no longer be supported. This gives those using that product time to weigh the alternatives and decide how best to proceed. Given that the Internet is in the midst of a structural change (widespread adoption of broadband with NAT) which destroys the 30-year-old Internet architecture on which Speak Freely (and other true peer to peer programs) relies, I thought it more responsible to withdraw the program in this manner (while, as with a workstation end of life announcement, permitting satisfied users to continue to use it indefinitely) rather than let it wilt and die as the dark pall of NAT falls upon the Internet.
Certainly, as the day grows near, you'll change your mind, won't you?
Perhaps, but bear in mind I'm as stubborn as a room full of cats. For almost two years I've observed and pondered the trends in the evolution of the Internet which seem to make the demise of Speak Freely and similar programs inevitable. I don't see any flaw in the analysis, nor any objective change in the situation, likely to justify reversing the decision regarding Speak Freely.
Don't you have an obligation to whatever?
Nope. Writing software and giving it away doesn't incur any obligation of any kind to any person. I've been working on this program off and on for more than 12 years. At my age (don't ask, but if I live as long as Bob Hope did, I'm more than halfway to the checkered flag), the prospect of spending another five or ten years dreaming up clever countermeasures to an Internet that's evolving to make programs like Speak Freely impossible--in a climate where creating a tool some people find useful and giving it away only invites incessant malicious attacks motivated solely by nihilism--for a shrinking user community forced to master the ever-growing complexity all of this requires, does not appeal to me. Programs, like people, are born, grow rapidly, mature, and then eventually age and die. So it goes. If somebody disagrees and wants to step in, they're more than welcome, but such a person has yet to appear over the entire history of Speak Freely.
By taking down the Speak Freely site, aren't you throwing away all the work invested in the program?
No. While I cannot in good conscience encourage people to become new users of Speak Freely nor developers to invest time in working on it, the entire state of the program as of the final release will remain available indefinitely on SourceForge as separate CVS archives for the Unix and Windows versions. I will make no further additions to these archives, but others are free to download them for their own private development purposes and/or create new projects on SourceForge to develop derivative programs in whatever form they like.
Anything more to add?
It's been fun. Take care.



The Digital Imprimatur

How big brother and big media can put the Internet genie back in the bottle.

by John Walker
September 13th, 2003
Revision 4 -- November 4th, 2003

imprimatur 1. The formula (= "let it be printed"), signed by an official licenser of the press, authorizing the printing of a book; hence as sb. an official license to print.
The Oxford English Dictionary (2nd. ed.)

Introduction

Over the last two years I have become deeply and increasingly pessimistic about the future of liberty and freedom of speech, particularly in regard to the Internet. This is a complete reversal of the almost unbounded optimism I felt during the 1994-1999 period when public access to the Internet burgeoned and innovative new forms of communication appeared in rapid succession. In that epoch I was firmly convinced that universal access to the Internet would provide a countervailing force against the centralisation and concentration in government and the mass media which act to constrain freedom of expression and unrestricted access to information. Further, the Internet, properly used, could actually roll back government and corporate encroachment on individual freedom by allowing information to flow past the barriers erected by totalitarian or authoritarian governments and around the gatekeepers of the mainstream media.

So convinced was I of the potential of the Internet as a means of global unregulated person-to-person communication that I spent the better part of three years developing Speak Freely for Unix and Windows, a free (public domain) Internet telephone with military-grade encryption. Why did I do it? Because I believed that a world in which anybody with Internet access could talk to anybody else so equipped in total privacy and at a fraction of the cost of a telephone call would be a better place to live than a world without such communication.

Computers and the Internet, like all technologies, are a double-edged sword: whether they improve or degrade the human condition depends on who controls them and how they're used. A large majority of computer-related science fiction from the 1950's through the dawn of the personal computer in the 1970's focused on the potential for centralised computer-administered societies to manifest forms of tyranny worse than any in human history, and the risk that computers and centralised databases, adopted with the best of intentions, might inadvertently lead to the emergence of just such a dystopia.

The advent of the personal computer turned these dark scenarios inside-out. With the relentless progression of Moore's Law doubling the power of computers at constant cost every two years or so, in a matter of a few years the vast majority of the computer power on Earth was in the hands of individuals. Indeed, the large organisations which previously had a near monopoly on computers often found themselves using antiquated equipment inferior in performance to systems used by teenagers to play games. In less than five years, computers became as decentralised as television sets.

But there's a big difference between a computer and a television set--the television can receive only what broadcasters choose to air, but the computer can be used to create content--programs, documents, images--media of any kind, which can be exchanged (once issues of file compatibility are sorted out, perhaps sometime in the next fifty centuries) with any other computer user, anywhere.

Personal computers, originally isolated, almost immediately began to self-organise into means of communication as well as computation--indeed it is the former, rather than the latter, which is their principal destiny. Online services such as CompuServe and GEnie provided archives of files, access to data, and discussion fora where personal computer users with a subscription and modem could meet, communicate, and exchange files. Computer bulletin board systems, FidoNet, and UUCP/USENET store and forward mail and news systems decentralised communication among personal computer users, culminating in the explosive growth of individual Internet access in the latter part of the 1990's.

Finally the dream had become reality. Individuals, all over the globe, were empowered to create and exchange information of all kinds, spontaneously form virtual communities, and do so in a totally decentralised manner, free of any kind of restrictions or regulations (other than already-defined criminal activity, which is governed by the same laws whether committed with or without the aid of a computer). Indeed, the very design of the Internet seemed technologically proof against attempts to put the genie back in the bottle. "The Internet treats censorship like damage and routes around it." (This observation is variously attributed to John Gilmore and John Nagle; I don't want to get into that debate here.) Certainly, authoritarian societies fearful of losing control over information reaching their populations could restrict or attempt to filter Internet access, but in doing so they would render themselves less competitive against open societies with unrestricted access to all the world's knowledge. In any case, the Internet, like banned books, videos, and satellite dishes, has a way of seeping into even the most repressive societies, at least at the top.

Without any doubt this explosive technological and social phenomenon discomfited many institutions who quite correctly saw it as reducing their existing control over the flow of information and the means of interaction among people. Suddenly freedom of the press wasn't just something which applied to those who owned one, but was now near-universal: media and messages which previously could be diffused only to a limited audience at great difficulty and expense could now be made available around the world at almost no cost, bypassing not only the mass media but also crossing borders without customs, censorship, or regulation.

To be sure, there were attempts by "the people in charge" to recover some of the authority they had so suddenly lost: attempts to restrict the distribution and/or use of encryption, key escrow and the Clipper chip fiasco, content regulation such as the Communications Decency Act, and the successful legal assault on Napster, but most of these initiatives either failed or proved ineffective because the Internet "routed around them"--found other means of accomplishing the same thing. Finally, the emergence of viable international OpenSource alternatives to commercial software seemed to guarantee that control over computers and the Internet was beyond the reach of any government or software vendor--any attempt to mandate restrictions in commercial software would only make OpenSource alternatives more compelling and accelerate their general adoption.

This is how I saw things at the euphoric peak of my recent optimism. Like the transition between expansion and contraction in a universe with Ω greater than 1, evidence that the Big Bang was turning the corner toward a Big Crunch was slow to develop, but increasingly compelling as events played out. Earlier I believed there was no way to put the Internet genie back into the bottle. In this document I will provide a road map of precisely how I believe that could be done, potentially setting the stage for an authoritarian political and intellectual dark age global in scope and self-perpetuating, a disempowerment of the individual which extinguishes the very innovation and diversity of thought which have brought down so many tyrannies in the past.

One note as to the style of this document: as in my earlier Unicard paper, I will present many of the arguments using the same catch phrases, facile reasoning, and short-circuits to considered judgment which proponents of these schemes will undoubtedly use to peddle them to policy makers and the public. I use this language solely to demonstrate how compelling the arguments can be made for each individual piece of the puzzle as it is put in place, without ever revealing the ultimate picture. As with Unicard, I will doubtless be attacked by prognathous pithecanthropoid knuckle-typers who snatch sentences out of context. So be it.

The Emerging Consumer Internet

The original design of the ARPANET, inherited by the Internet, was inherently peer to peer. I do not use the phrase "peer to peer" here as a euphemism for "file sharing" or other related activities, but in its original architectural sense, that all hosts on the network were logically equals. Certainly, Internet connections differed in bandwidth, latency, and reliability, but apart from those physical properties any machine connected to the Internet could act as a client, server, or neither--simply a peer of those with which it communicated. Any Internet host could provide any service to any other and access any service provided by them. New kinds of services could be invented as required, subject only to compatibility with the higher level transport protocols (such as TCP and UDP).

This architecture made the Internet something unprecedented in the human experience, the first many-to-many mass medium. Let me elaborate a bit on that. Technological innovations in communication dating back to the printing press tended to fall into two categories. The first, exemplified by publishing (newspapers, magazines, and books) and broadcasting (radio and television) was a one-to-many mass medium: the number of senders (publishers, radio and television stations) was minuscule compared to their audience, and the capital costs required to launch a new publication or broadcast station posed a formidable barrier to new entries. The second category, including postal mail, telegrams, and the telephone, is a one-to-one medium; you could (as the technology of each matured) communicate with almost anybody in the world where such service was available, but your communications were person to person--point to point. No communication medium prior to the Internet had the potential of permitting any individual to publish material to a global audience. (Certainly, if one creates a Web site which attracts a large audience, the bandwidth and/or hosting costs can be substantial, yet are still negligible compared to the capital required to launch a print publication or broadcast outlet with comparable reach.)

This had the effect of dismantling the traditional barriers to entry into the arena of ideas, leveling the playing field to such an extent that an individual could attract an audience for their own work, purely on the basis of merit and word of mouth, as large as that of corporate giants entrenched in earlier media. Beyond direct analogues to broadcasting, the peer to peer architecture of the Internet allowed creation of entirely new kinds of media--discussion boards, scientific preprint repositories, web logs with feedback from readers, collaborative open source software development, audio and video conferences, online auctions, music file sharing, open hypertext systems, and a multitude of other kinds of spontaneous human interaction.

A change this profound, taking place in less than a decade (for despite the ARPANET's dating to the early 1970s, it was only as the Internet attracted a mass audience in the late 1990s that its societal and economic impact became significant), must inevitably prove discomfiting to those invested in or basing their communication strategy on traditional media. One needn't invoke conspiracy theories to observe that many news media, music publishers, and governments feel a certain nostalgia for the good old days before the Internet. Back then, there were producers (publishers, broadcasters, wire services) and consumers (subscribers, book and record buyers, television and radio audiences), and everybody knew their place. Governments needn't fret over mass unsupervised data flow across their borders, nor insurgent groups assembling, communicating anonymously and securely, and operating out of sight and beyond the control of traditional organs of state security.

Despite the advent of the Internet, traditional media and government continue to exercise formidable power. Any organisation can be expected to act to preserve and expand its power, not passively acquiesce in its dissipation. Indeed, consolidation among Internet infrastructure companies and increased governmental surveillance of activities on the Internet are creating the potential for the imposition of "points of control" onto the originally decentralised Internet. Such points of control can be used for whatever purposes those who put them in place wish to accomplish. The trend seems clear--over the next five to ten years, we will see an effort to "put the Internet genie back in the bottle": to restore the traditional producer/consumer, government/subject relationships which obtained before the Internet disrupted them.

A set of technologies, each already in existence or being readied for introduction, can, when widely deployed and employed toward that end, reimpose the producer/consumer information dissemination model on the Internet, restoring the central points of control which traditional media and governments see threatened by its advent. Each of the requisite technologies can be justified on its own as solving clamant problems of the present day Internet, and may be expected to be promoted or mandated as so doing. In the next section, we'll look at these precursor technologies.

Technological Precursors

The dark future I dread will be the consequence of the adoption, by marketing or mandate, of a collection of individual technologies, each of which can be advocated as beneficial in its own right. But these technologies, taken together, have consequences less apparent to many yet, I believe, quite evident to some now promoting them. Each of the following technologies is either currently in existence or the object of an active development effort. These items necessarily interact with one another, so it is impossible to entirely avoid forward references in discussing them. If something doesn't seem clear on the first reading, you may benefit from re-reading this section after you've digested the essentials the first time through.

The Firewalled Consumer

Note: this item discusses a phenomenon, already underway, which is effectively segmenting Internet users into two categories: home users who are consumers of Internet services, and privileged sites which publish content and provide services. The technologies discussed in the balance of this document are entirely independent of this trend, and can be deployed whether or not it continues. If you aren't interested in such details or take violent issue with the interpretation I place upon them, please skip to the next heading. I raise the issue here because when discussing the main topics of this document with colleagues, a common reaction has been, "Users will never put up with being relegated to restricted access to the Internet." But, in fact, they already are being so relegated by the vast majority of broadband connections, and most aren't even aware of what they've lost or why it matters.

When individuals first began to connect to the Internet in large numbers, their connection made them logical peers of all other Internet users, regardless of nature and size. While a large commercial site might have a persistent, high bandwidth connection and a far more powerful server than the home user, there was nothing, in principle, such a site could do that an individual user could not--any Internet user could connect to any other and interchange any form of data on any port in any protocol which conformed to the underlying Internet transport protocols. The user with a slow dial-up connection might have to be more patient, and probably couldn't send and receive video in real-time, but there was no distinction in the ways they could use the Internet.

Over time, this equality among Internet users has eroded, in large part due to technical workarounds to cope with the limited 32-bit address space of the present day Internet. I describe this process in detail in Appendix 1, exploring how these expedients have contributed to the anonymity and lack of accountability of the Internet today. With the advent of broadband DSL and cable television Internet connections, a segmentation of the Internet community is coming into being. The typical home user with broadband access has one or more computers connected to a router (perhaps built into the DSL or cable modem) which performs Network Address Translation, or NAT. This allows multiple computers to share a single fast Internet connection. Most NAT boxes, as delivered, also act as a rudimentary Internet firewall, in that packets from the Internet can only enter the local network and reach computers connected to the broadband connection in reply to connections initiated from the inside. For example, when a local user connects to a Web site, the NAT router allocates a channel (port) for traffic from the user's machine to the Web site, along with a corresponding inbound channel for data returned from the Web site. Should an external site attempt to send packets to a machine on the local network which has not opened a connection to it, they will simply be discarded, as no inbound channel will have been opened to route them to the destination. Worms and viruses which attempt to propagate by contacting Internet hosts and exploiting vulnerabilities in software installed on them will never get past the NAT box. (Of course, machines behind a NAT box remain vulnerable to worms which propagate via E-mail and Web pages, or any other content a user can be induced to open.)
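
A toy model may make the connection-tracking behaviour just described concrete (my illustration in Python; real NAT implementations are considerably more elaborate):

    # Outbound packets create a mapping; inbound packets are delivered
    # only if a mapping already exists. Otherwise they are discarded.
    class NatRouter:
        def __init__(self, public_ip):
            self.public_ip = public_ip
            self.table = {}          # public port -> (local_ip, local_port)
            self.next_port = 40000

        def outbound(self, local_ip, local_port):
            """A local machine opens a connection; allocate a public port."""
            port = self.next_port
            self.next_port += 1
            self.table[port] = (local_ip, local_port)
            return self.public_ip, port

        def inbound(self, public_port):
            """A packet arrives from the Internet."""
            if public_port in self.table:
                return self.table[public_port]  # reply traffic: deliver it
            return None                         # unsolicited: silently drop

    nat = NatRouter("203.0.113.1")
    ip, port = nat.outbound("192.168.1.10", 2074)       # local user calls out
    assert nat.inbound(port) == ("192.168.1.10", 2074)  # reply gets through
    assert nat.inbound(2074) is None                    # unsolicited: dropped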

The typical home user never notices NAT; it just works. But that user is no longer a peer of all other Internet users as the original architecture of the network intended. In particular, the home user behind a NAT box has been relegated to the role of a consumer of Internet services. Such a user cannot create a Web site on their broadband connection, since the NAT box will not permit inbound connections from external sites. Nor can the user set up true peer to peer connections with other users behind NAT boxes, as there's an insuperable chicken and egg problem creating a bidirectional connection between them.

Sites with persistent, unrestricted Internet connections now constitute a privileged class, able to use the Internet in ways a consumer site cannot. They can set up servers, create new kinds of Internet services, establish peer to peer connections with other sites--employ the Internet in all of the ways it was originally intended to be used. We might term these sites "publishers" or "broadcasters", with the NATted/firewalled home users their consumers or audience.

Technically astute readers will observe, of course, that NAT need not prevent inbound connections; a savvy user with a configurable router can map inbound ports to computers on the local network and circumvent the usual restrictions. Yet I believe that as time passes, this capability will become increasingly rare. It is in the interest of broadband providers to prevent home users from setting up servers which might consume substantial upstream bandwidth. By enforcing an "outbound only" restriction on home users, they are blocked from setting up servers, and must use hosting services if, for example, they wish to create a personal home page. (With consolidation among Internet companies, the access supplier may also own a hosting service, creating a direct economic incentive to encourage customers to use it.)

In addition, it is probable that basic broadband service will be restricted to the set of Internet services used by consumers: Web, FTP, E-mail, instant messages, streaming video, etc., just as firewalls are configured today to limit access to a list of explicitly permitted services. Users will, certainly, be able to obtain "premium" service at additional cost which will eliminate these restrictions, just as many broadband companies will provide a fixed IP address as an extra cost option. But the Internet access market has historically been strongly price sensitive, so it is reasonable to expect that over the next few years the majority of users connected to the Internet will have consumer-grade access, which will limit their use to those services deemed appropriate for their market segment.

In any case, the key lesson of the mass introduction of NAT is that it demonstrates, in a real world test, that the vast majority of Internet users do not notice and do not care that their access to the full range of Internet services and ability to act as a peer of any other Internet site has been restricted. Those who assert that the introduction of the following technologies will result in a mass revolt among Internet users bear the burden of proof to show why those technologies, no more intrusive on the typical user's Internet experience than NATted broadband, will incite them to oppose their deployment.

Certificates

A certificate is a digital identification of a physical or abstract object: a person, business, computer, program, or document. A certificate is simply a sequence of bits which uniquely identifies the object it pertains to. In most cases it is guaranteed that there is a one-to-one mapping between certificates and objects. To make this less abstract, consider a non-computer analogue: passports. A passport (or, more precisely, a passport number, as individuals may, in certain circumstances, obtain multiple physical passports bearing the same number), uniquely identifies a person as a citizen of the issuing country. No two people are given the same passport number, and one person's attempting to obtain two different passport numbers is considered a crime involving a fraudulent declaration. A digital certificate is much like a passport. It is issued by a certificate authority, which vouches for its authenticity. (In the case of a passport, the certificate authority is the issuing government.) The certificate authority trades on its reputation for probity--to obtain high-grade personal certificates from recognised authorities, documentation equal to or better than that required to obtain a passport is necessary. As with passports, certificates issued by obscure or disreputable authorities will engender less trust than those from the big names.

Certificates are in wide use today. Every time you make a secure purchase on the Web, your browser retrieves a certificate from the e-commerce site to verify that you're indeed talking to whom you think you are and to establish secure encrypted communications. Most browsers and E-mail clients allow you to use personal certificates to sign and encrypt mail to correspondents with certificates, but few people avail themselves of this capability at present, opting to send their E-mail in the clear where anybody can intercept it and you-know-who routinely does.

When you obtain a personal certificate, the certificate authority that signs it asserts that you have presented them adequate evidence you are who you claim to be (usually on the basis of an application validated by a notary, attorney, or bank or brokerage officer), and reserves the right to revoke your certificate should they discover it to have been obtained fraudulently. Certificate authorities provide an online service to validate certificates they issue, supplying whatever information you've chosen to disclose regarding your identity. Having obtained a certificate, you're obliged to guard it as you would your passport, credit cards, and other personal documents. If another person steals your certificate, they will be able to read your private E-mail, forge mail in your name, and commit all the kinds of fraud present-day "identity theft" encompasses. While stolen certificates can be revoked and replacements issued, the experience is as painful as losing your wallet and worth the same effort to prevent.

A certificate comes in two parts: private and public. The private part is the credential a user employs to access the Internet, sign documents, authorise payments, and decrypt private files stored on their computer and secure messages received from others. It is the private part of the certificate a user must carefully guard; it may be protected by a pass phrase, be kept on a removable medium like a smart card, or require biometric identification (for example, fingerprint recognition) to access. The public part of the certificate is the user's visible identification to others; many users will list their public certificate in a directory, just as they list their telephone number. Knowing a user's public certificate allows one to encrypt messages (with that person's public key, a component of the public certificate) which can only be decoded with the secret key included in the private certificate. When I speak of "sending the user's certificate along with a request on the Internet" or tagging something with a certificate, I refer to the public certificate which identifies the user. The private certificate is never disclosed to anybody other than its owner.
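
As a concrete sketch of the public/private split, here is signing and verification using the Python "cryptography" library (my choice of tooling for illustration; this document prescribes no particular implementation):

    # The private key signs; anyone holding the public half can verify
    # that the message came from the certificate's owner, unaltered.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)
    public_key = private_key.public_key()  # the part listed in directories

    message = b"I authorise payment of 0.05 euros to site X"
    signature = private_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # verify() raises InvalidSignature if the message or signature
    # has been forged or tampered with; otherwise it returns silently.
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )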

The scope of objects certificates can identify is unlimited. The sections which follow show some of the ways certificates identifying people, computers, programs, and documents may be expected to be employed in the near future.

Trusted Computing

"Trusted Computing," in the current jargon, has little or nothing to do with traditional concepts of software reliability or data security. Instead, it refers to an effort to embed end-to-end validation of the origin and integrity of data into computing hardware and system software. One key component is the identification of each computer by a unique certificate, but the ramifications go far beyond this. In addition to protecting computer users from insecure software (software not signed with a recognised vendor's certificate and verified unmodified by its digital signature), users are also protected against corruption of data on their own computers. Data on a user's own hard drive is encrypted and signed, permitting access rights and data integrity to be verified every time a file is loaded into memory. This will completely eliminate the risk of viruses corrupting installed programs or data files. It permits a software vendor to block the execution of any program deemed harmful, even retroactively (since certificates will be verified online). If a vulnerability is found in a software product installed on millions of users' machines worldwide, it may be instantly disabled before it puts them at risk, forcing them to immediately upgrade to a new, secure version. In many cases this will occur automatically--the user need do nothing, nor even be aware of the upgrade to the system.

On a Trusted Computing system, the ability to back up, mirror, and transfer data will be necessarily limited. Hardware and compliant operating systems will restrict the ability to transfer data from system to system. For example, software bound to a given machine's certificate will refuse to load on a machine with a different certificate. Perforce, this security must extend to the most fundamental and security-critical software of all--the ROM BIOS and operating system kernel. Consequently, a trusted computing platform must validate the signature of an operating system before booting it. Operating systems not certified as implementing all the requirements of Trusted Computing will not be issued certificates, and may not be booted on such systems.
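
Schematically, the boot-time validation chain might look like the following sketch; the component names and manifest structure are invented for illustration, and a real system would also verify the manifest's own signature against a vendor certificate:

    # Each stage refuses to hand control to the next unless its hash
    # matches an approved, signed manifest.
    import hashlib

    approved = {
        "bios":   hashlib.sha256(b"bios-image-v1").hexdigest(),
        "kernel": hashlib.sha256(b"kernel-image-v7").hexdigest(),
    }

    def verify_and_boot(name, image):
        digest = hashlib.sha256(image).hexdigest()
        if digest != approved[name]:
            raise SystemExit(name + ": validation failed, refusing to boot")
        print(name + ": verified, transferring control")

    verify_and_boot("bios", b"bios-image-v1")
    verify_and_boot("kernel", b"kernel-image-v7")
    # verify_and_boot("kernel", b"patched-kernel") would refuse to boot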

Micropayment

Today, buying stuff on the Internet is a big deal--something which many people remain hesitant to do, being well aware of the risks of having their credit card hijacked and the myriad distasteful sequelæ thereof. With the advent of certificates and Trusted Computing, these fears will dissipate. With one's personal certificate (bound, perhaps, to one or more computers to which one has exclusive access, and secured by a pass phrase, smart card, or biometric identification) guaranteeing the security of the connection, and certificates on the other end validating the identity of the vendor, much of the tedious process of present-day Internet commerce can give way to a seamless surfing and shopping experience.

A micropayment exchange permits payments to be made between any two certificate holders. A user makes a payment by sending a message to the exchange, signed with the user's private certificate, identifying the recipient by their public certificate and indicating the amount to be paid. Upon verification of the payer's and recipient's certificates and that sufficient funds are available in the payer's account, the specified sum is transferred to the recipient's account and a confirmation sent of arrival of the funds. Micropayment transactions can be performed explicitly by logging on to the exchange's site, but will usually be initiated by direct connection to the exchange's server when the user makes an online purchase.
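
An illustrative model of the exchange's core transaction follows; the class and its simplified signature check are my own invention for the sketch:

    # Check the payer's signed order, check funds, move the balance,
    # confirm. Certificates are represented as opaque strings here.
    class MicropaymentExchange:
        def __init__(self):
            self.balances = {}           # public certificate -> balance

        def pay(self, payer_cert, payee_cert, amount, signature_valid):
            # In a real exchange, signature_valid would be the result
            # of verifying the signed order against payer_cert.
            if not signature_valid:
                raise ValueError("bad signature on payment order")
            if self.balances.get(payer_cert, 0) < amount:
                raise ValueError("insufficient funds")
            self.balances[payer_cert] -= amount
            self.balances[payee_cert] = (
                self.balances.get(payee_cert, 0) + amount)
            return "confirmed"           # confirmation sent to both parties

    exchange = MicropaymentExchange()
    exchange.balances["cert:alice"] = 1.00
    exchange.pay("cert:alice", "cert:magazine", 0.0001, signature_valid=True)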

Micropayment differs from existing online payment services such as PayPal and e-gold in that transaction costs are sufficiently low that extremely small payments can be made without incurring exorbitant processing fees; with micropayment it will be entirely practical for Web sites to charge visitors a ten-thousandth of a Euro to view a page; credit cards or existing online payment services have far too high overhead to permit such minuscule payments. Note that there need be no upper limit on payments made through micropayment exchanges, and hence "micropayment" simply implies that tiny payments are possible, not that larger payments aren't routinely made as well. The first broadly successful micropayment exchange is likely to be technology driven, but as micropayments become a mass market and begin to encroach on other payment facilities, pioneers in the market are likely to be acquired by major players in the financial services industry.

No more e-commerce paranoia . . . when you do business with vendors with certificates you consider trustworthy, you needn't enter any sensitive personal information. Just click "buy", select which of the credit cards or bank accounts linked to your certificate you wish to pay with (the number itself is never transmitted), and your purchase will be shipped to the specified address linked to your certificate. Even if your certificate is stolen, a thief can only order stuff to be shipped to you.

Each user can set their own personal default maximum price per page, per item purchased, per session, per day, per week, and per month. I call this their "threshold of paying." No need to subscribe to a magazine's site to read an article--just click on it and, if it costs less than your €0.05 per-item threshold and all of the other totals are within limits, up it pops--your account is debited and the magazine's is credited. If you're a subscriber, your certificate identifies you as one and you pay nothing . . . and all of this happens in an instant without your needing to do anything. The magazine gets paid for what you read, so they'll put their entire content online, not just a teaser to induce you to subscribe to the printed edition. And if you like what you read, you'll return and spend more money there.
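
A sketch of how the "threshold of paying" might work, simplified to three of the limits above (the limit values are examples, not recommendations):

    # A page is paid for silently only if its price is under the
    # per-item limit and every running total stays within its cap.
    class ThresholdOfPaying:
        def __init__(self, per_item=0.05, per_day=1.00, per_month=10.00):
            self.limits = {"item": per_item, "day": per_day,
                           "month": per_month}
            self.spent = {"day": 0.0, "month": 0.0}

        def approve(self, price):
            if price > self.limits["item"]:
                return False                 # ask the user explicitly
            if self.spent["day"] + price > self.limits["day"]:
                return False
            if self.spent["month"] + price > self.limits["month"]:
                return False
            self.spent["day"] += price
            self.spent["month"] += price
            return True                      # pay silently, show the page

    wallet = ThresholdOfPaying()
    print(wallet.approve(0.0003))   # True: a fraction of a centime
    print(wallet.approve(0.50))     # False: over the per-item threshold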

Want to start your own magazine? Decided your blog is worth €0.001 per day to read? No problem . . . tag it with your certificate, set up a "pay to read" link to it, and listen to the millieuros tinkle into your virtual cookie jar.

Certified micropayment exchanges will, of course, be required to comply with "know your customer" and disclosure regulations, adhere to international conventions against money laundering, terrorism, and drug trafficking, and disclose transactions to the fiscal authorities of the jurisdiction of the buyer and seller for purposes of tax assessment. This will largely put an end to the use of the Internet for financial crimes and eliminate the need for further regulations or constraints on Internet commerce.

Micropayment and Funding Internet Resources

Micropayment provides a new business model to support Internet sites which attract large numbers of visitors but which have so far failed to fund themselves with subscriber or advertiser models. Micropayment permits a site to make access available to whomever chooses to visit the site on a per-page basis (or, as discussed below, even for excerpts from pages). There is no need for a user to open an account or establish a commercial relationship with the site. As long as the per page fee is less than the individual's threshold of paying, the per-page charge is debited automatically from the user's account and credited to the site's.

Funding Fourmilab:
A Worked Example

Just for fun, and to ground this discussion in reality, I worked the numbers for simplistic pay-per-page at my www.fourmilab.ch site. All things considered, this site costs about €5000 per month to operate: the bulk being communications and ISP charges, the rest depreciation on server hardware, utilities, and other overhead. (I don't count the value of my time, which amounts to many hours but, as one declares on the customs form, is of "no commercial value".) That works out to about €167 per day. I get about 500,000 page hits a day, so to fully fund the site, I'd need to charge about €0.0003--that's three hundredths of a centime per page. These "hits" are generated by about 30,000 daily visitors (see the Fourmilab Access Statistics for details), so the average cost to each of these users for the pages they view at Fourmilab would be about €0.005, half a centime per visit. Now it's certainly possible folks who visit Fourmilab would run away screaming at the thought of paying half a centime for the stuff they find here, but somehow I doubt it; or at least I'm confident enough in the value of my scribblings to think that, after the initial shock, they'd come back. By way of comparison, a user paying €50 per month for cable modem or DSL access presently spends about €0.07 (seven centimes) per hour for Internet access, whether they're using it or not.

A naïve calculation of the average duration of a visit to the site from the number of visits per day suggests the average visitor lingers here about 20 minutes. So, if they spend €0.005 for documents retrieved during their visit, they'd spend only about 20% more for content than they're already paying for Internet access. And that would more than defray the costs of operating this site. I invite proprietors of subscription and advertiser supported sites to work these numbers based on their own access profiles.
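
For anyone who wishes to check or adapt these figures, here is the sidebar's arithmetic in a few lines of Python:

    monthly_cost = 5000.0                 # euros/month to run the site
    daily_cost   = monthly_cost / 30      # about 167 euros/day
    page_hits    = 500_000                # page hits/day
    visitors     = 30_000                 # visitors/day

    per_page    = daily_cost / page_hits  # ~0.00033: 3/100 centime/page
    per_visitor = daily_cost / visitors   # ~0.0056: about half a centime

    access_per_hour = 50.0 / (30 * 24)    # ~0.07 euros/hour for broadband
    ratio = per_visitor / (access_per_hour / 3)   # cost of a 20-minute visit
    print(per_page, per_visitor, access_per_hour, ratio)
    # ratio comes out near 0.24: on the order of the "20% more" above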


There's no question that if many present-day sites started to charge, say, €0.001 per page, their traffic would collapse. But what about the sites you read every day? Is it worth a tenth of a centime per page? Have you compared what you'd pay for pages with what you're paying now for access to the Internet?

Micropayment, Excerpts, and "Deep Linking"

The emergence of Weblogs ("blogs") and other forms of independent Internet journalism has raised a variety of issues regarding free use of copyright protected material. To what extent may a blog excerpt a document published on the Web (with or without a link to the original source)? Is it permissible for a Web document on one site to link directly to a document deep within another site's archives, potentially bypassing advertisements on the site's main page which fund its operation?

Micropayment provides solutions for many of these problems. As envisioned by Ted Nelson almost 40 years ago in his original exposition of Xanadu, the problem with copyright isn't the concept but rather its granularity. (I'd add, in the present day, the absurd notion that copyright should be eternal, but that's another debate for a different document.) Once micropayment becomes as universal as E-mail, a blog will simply quote content from a Web site using an "excerpt URL" (I'll leave the design as an exercise for the reader) or provide a link to the entire document. Readers of the blog will, if the excerpt is below their threshold of paying (and the total of all excerpts in the blog is also below the threshold), see it automatically. Otherwise, they'll have to click on an icon to fetch it, approving the payment, before it is displayed. Similarly, when following a link to a document licensed under one of the Digital Rights Management (see below) terms of use, you'll automatically pay the fee and see the document unless it exceeds your threshold, in which case you'll have to confirm before retrieving it.

Micropayment and Ubiquitous Wireless Internet Access

Micropayment will greatly facilitate the deployment of wireless Internet access (Wi-Fi and its descendants). Wireless access today has an unsettled business model; some coffee shops and bookstores provide free access to their clients (and, constrained by Maxwell's equations, those in the parking lot outside) as an added value, while hotels, airline lounges, and soon long distance flights en-route provide access for a fee. With micropayment, your wireless network interface will simply listen for bids of access and choose based on bandwidth and cost, normally accepting the best offer below the cost threshold you set. If it's higher than your threshold, or there's an extreme tradeoff between cost and performance, you may be asked to choose, but usually you'll just light up your laptop, wait a few seconds, and you're online. No mess, no fuss, and it's guaranteed to cost less than your "threshold of paying".
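
A sketch of that selection rule, with an invented offer format:

    # Accept the cheapest offer under the user's cost threshold,
    # breaking ties in favour of higher bandwidth.
    def choose_access(offers, max_price):
        # offers: list of (network_name, price_per_mb, bandwidth_kbps)
        affordable = [o for o in offers if o[1] <= max_price]
        if not affordable:
            return None                   # extreme case: ask the user
        return min(affordable, key=lambda o: (o[1], -o[2]))

    offers = [("cafe-wifi", 0.002, 1500), ("hotel-net", 0.010, 4000)]
    print(choose_access(offers, max_price=0.005))   # ('cafe-wifi', ...)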

Micropayment and Internet Taxation

According to folklore, Michael Faraday, who discovered the principle of electromagnetic induction in the 1830's, was asked by a British politician to what conceivable use electricity might be put. Faraday replied, "Sir, I do not know what it is good for. But of one thing I am quite certain--someday you will tax it." This quotation is, in all likelihood, a myth, but nonetheless there is truth therein applicable to our times. For electricity, a laboratory curiosity in Faraday's time, was eventually taxed and, in many unfortunate jurisdictions, made a government monopoly or regulated to such an extent it was indistinguishable from one, inevitably becoming scarce, expensive, and unreliable.

Like electricity, the Internet will eventually be taxed. As long as there are governments, this is inescapable. While taxation is never without pain, micropayment can at least eliminate most of the bookkeeping headaches for both merchants and customers, with taxes due for Internet use and commerce collected automatically and remitted electronically to the jurisdiction they are owed to.

Digital Rights Management

Microsoft also warned today that the era of "open computing," the free exchange of digital information that has defined the personal computer industry, is ending.
Microsoft Tries to Explain What Its .Net Plans Are About
by John Markoff, The New York Times, July 24, 2002.

Digital Rights Management (DRM) is the current buzzword for the technological enforcement of intellectual property rights in digital media.

Do you own the data on your hard drive?

Many computer owners would be astonished to discover the answer to this question is, in large part, no. A present day mass market computer is typically delivered with more than 1 GB of software pre-installed, virtually all covered by a vendor's copyright and licensed to the computer's owner subject to an End User License Agreement, to which the user must assent, implicitly or explicitly, before using the software. While the owner of a computer may be permitted by the hardware and software to copy such files, whether making copies is permissible depends upon the terms of the licensing agreement and governing copyright law.

Many files downloaded from the Internet are similarly subject to copyright even if no explicit assent to a license is required on the part of the user; documents of all kinds are "born in copyright", and absent an explicit declaration to the contrary by the publisher or evidence that copyright has expired, may be used only as permitted by law.

The only files a user can be said to own outright are those containing content exclusively created by the user, and/or from the public domain; these files a user is free to copy, modify, and distribute at their sole discretion.

Digital Rights Management makes no changes whatsoever to the laws governing use of copyright or licensed material; it simply embeds enforcement of the already applicable rules into the hardware and software of users' computers.

DRM will implement several categories of right to use content, some of which have no direct analogues in traditional publishing.

Pay Per Copy

This is the traditional model of books, recorded music, videos, and shrink-wrapped software. You pay a fee for a copy and usually assent to an implicit license not to copy and redistribute it. However, there is no technological prohibition against your doing so and, in some cases, your purchase entitles you to lend the original document to others without paying additional fees to the publisher.

Pay Per Instance

This is a phrase I've coined to denote the concept of a document sold to a given individual which is either not transferable or, if so, cannot be used to create additional copies. When you purchase a pay per instance document, it's "bound" to your personal certificate and possibly that of the computer on which you intend to view it. If you copy the document you've downloaded (assuming your Trusted Computing platform even permits this) to somebody else's system, they won't be able to read it because they don't have your certificate. Giving them your certificate is equivalent to handing them copies of all of your credit cards and identity documents . . . unlikely. If the document is, in addition, bound to a given computer system, you can read it on that system but, in order to transfer it to another (for example, from your desktop computer at home to your PDA when going on holiday), you'll need to perform a transfer which will render it readable on the PDA but no longer on the desktop. You can always, upon your return, transfer it back in the other direction.

Pay per instance also permits, publisher permitting, transfers similar to lending a printed book to a friend. Suppose you've downloaded a book to your computer, read it, and now wish to send it to your daughter at college. No problem--just re-encode the book with her public certificate and E-mail it to her. Of course, once you've done that, you won't be able to read the book any more on your own system. There may be a small fee associated with passing on the book but, hey, micropayment makes it painless and you'd probably have to pay a lot more to mail a printed book anyway. Publishers can sell library editions, perhaps at a premium, which can be transferred any number of times but, just like a book, the library can't check out a volume to another person until a borrowed copy is returned.

Pay Per Installation

Pay Per Installation is similar to Pay Per Instance, except the content is bound to the certificate of the computer on which it's installed, as opposed to the personal certificate of an individual. Any person who uses that computer is authorised to access content bound to its certificate, but such content cannot be used on a different computer. This category will primarily be used for commercial software installed on a computer. Pre-installed software will, of course, already be bound to the computer's factory-installed certificate. When you purchase software, whether off the shelf or by downloading from the Internet, you will receive a copy which, before it can be used, must be activated online, which will bind it to the certificate of the machine on which it is installed. The purchase of a copy of the software will usually entitle the customer to a single activation; additional licenses for other computers may, of course, be bought as needed.

Just as with Pay Per Instance, the publisher of a Pay Per Installation product may permit you to transfer the product to a different computer. If, for example, you replace your old clunker with a TurboWhiz 40 GHz box, you may be able to move your existing programs to it, going through an activation procedure which will render them unusable on the old machine and bound to the new one. Or, on the other hand, the publisher may not permit this; it's up to the specific terms of the license.

Pay Per View

This is how movies worked when I was a kid. If you wanted to see the movie, you went to the box office, plunked down your fifty cents (I was a kid a long time ago), and received a ticket which entitled you to see the movie (plus the newsreel, the cartoon, etc.) once. When the show was over they turned on the lights and chased everybody out. If you just had to see it again . . . another four bits, thank you very much. This is the golden age media barons dream of while sleeping off the diverse intoxicants they've ingested at sybaritic Hollywood parties.

As with Pay Per Instance, the content you download is bound to your personal certificate or that of your computer but, in addition, it's limited to being played a maximum number of times, for instance, once. Now, instead of struggling to find a song on a music sharing service under constant attack by music moguls, you can simply visit your favourite online music store, find the song that's been going through your head for the last few hours, download it for a small fee and listen to it . . . once. If, having listened to it, you'd like to play it over and over or put it on a CD for your own use, pay a little more and buy a Pay Per Instance copy. No more need to buy an album to get one or two hit singles--of course the singles cost more than the filler. And no, you can't give a copy of the CD you made to your friends, since the songs on it are bound to your certificate and machine. You can make as many copies as you like of your "killer tracks" CD and give them to your friends or sell them on the Net, but everybody who receives one will have to pay the license fee for each track in order to obtain the right to play it.

Note that pay per view has applications outside traditional entertainment media; evaluation copies of software can be licensed to a user for a maximum number of trial runs, after which the user must either purchase a license permitting unlimited use, or some number of additional runs. Software vendors offering evaluation copies on this basis are protected, since they record the user's certificate when issuing evaluation copies, and refuse to issue more than one evaluation copy to any user. This application of pay per view to software closes the loopholes which have made shareware a difficult business model.
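
The counting logic behind pay per view and metered trial runs is trivial; the difficulty, as noted above, is keeping the counter out of the user's reach. Here is a toy Python illustration of the license record such a system might maintain--every name is invented, and a real system would keep the record signed and in tamper-resistant storage:

    from dataclasses import dataclass

    @dataclass
    class UseLicense:
        holder_certificate: str   # fingerprint of the purchaser's certificate
        item_id: str
        uses_remaining: int       # e.g. 1 for a single play, 10 for a trial

        def authorise_use(self) -> bool:
            # Refuse once the purchased plays/runs are exhausted.
            if self.uses_remaining <= 0:
                return False
            self.uses_remaining -= 1
            return True

    trial = UseLicense("sha256:ab12...", "song-4711", uses_remaining=1)
    assert trial.authorise_use() is True    # first play succeeds
    assert trial.authorise_use() is False   # second play refused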

Circumvention Prevention

Earlier attempts to protect intellectual property in the digital age have sparked an arms race between copyright owners and those who wish to freely copy protected works. There are reasons to believe a comprehensive implementation of Digital Rights Management on a Trusted Computing platform will be a much tougher nut to crack, evolving in time toward effectively complete security (defined as the point at which losses due to copying are negligible compared to the cost to reduce them further), much as has happened with digital satellite television broadcasting. In the United States, the Digital Millennium Copyright Act, enacted in 1998, criminalises reverse engineering and circumvention of copyright protection mechanisms, and has been interpreted as applying to even the dissemination of information regarding the design and implementation of copy protection technologies. Given the political consensus which enacted this law, the stakes involved for media companies, and the investment now being made in Digital Rights Management technologies by computer hardware and software vendors, there is every reason to expect the near-term deployment of a highly secure system implementing all the varieties of right to use described above, which will not be widely circumvented.

Trusted Internet Traffic

Once Trusted Computing platforms which protect intellectual property rights are in place, this security can be extended to the Internet itself. The ARPANET, precursor of the Internet, was designed to explore highly fault-tolerant networks for military communications. In such networks, all communications links could be secured and the identity of all nodes on the network was known. In today's global, open access Internet neither of these conditions obtains, and many of the perceived problems of the present-day Internet are their direct consequences.

Tomorrow's Secure Internet will be implemented in Trusted Computing platforms, in conjunction with Internet Service Providers and backbone carriers. Today, any computer on the Internet can connect to any other connected computer, sending any kind of packet defined by Internet protocols. This architecture means that any system on the Internet, once found vulnerable to some kind of attack, can be targeted by hundreds of millions of computers around the world and, once compromised, be enlisted to attack yet other machines.

The Secure Internet will change all of this. Secure Internet clients will reject all connections from machines whose certificates are unknown (this will be configurable by service; a user may decide to receive mail from people whose certificates aren't known to them, but choosing otherwise will block all junk mail--it's up to the user). On the Secure Internet, every request will be labeled with the user and machine certificates of the requester, and these will be available to the destination site. There will be no need to validate login and password, as the Secure Internet will validate identity, and, if registered, a micropayment account will cover access charges and online purchases. Internet Service Providers will maintain logs of accesses which will be made available to law enforcement authorities pursuant to a court order in cases where the Internet is used in the commission of a crime.
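
Schematically, the per-service acceptance policy described above might look like the following Python sketch; the request fields and the notion of a simple set of known sender certificates are assumptions made for illustration:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SecureRequest:
        user_cert: str      # fingerprint of the sender's personal certificate
        machine_cert: str   # fingerprint of the sending computer's certificate
        payload: bytes

    KNOWN_SENDERS = {"sha256:alice...", "sha256:bob..."}

    def accept(request: SecureRequest, require_known: bool) -> bool:
        # Per-service policy: mail might accept unknown senders, while
        # a stricter service insists on a certificate it already knows.
        return (not require_known) or (request.user_cert in KNOWN_SENDERS)

    msg = SecureRequest("sha256:carol...", "sha256:laptop...", b"hello")
    accept(msg, require_known=True)    # False: certificate unknown, rejected
    accept(msg, require_known=False)   # True: user chose to accept strangers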

In addition, the Secure Internet will protect the intellectual property of everybody connected to it. Consumers will be able to download any document on the terms defined by its publisher, which will be enforced by Digital Rights Management. Publishers will serve documents, each identified by a certificate which identifies its publisher and its terms of use, and includes a signature which permits verification that the document has not been corrupted subsequent to publication.

The Secure Internet

The technological precursors discussed above provide the foundation for the Secure Internet. A typical individual Internet user visiting Web sites, performing searches, buying products and services online, sending and receiving E-mail and instant messages, participating in chat rooms, news groups, discussion boards, and online auctions will notice little change from the present-day Internet except, perhaps, fewer of the irritations which currently detract from these activities. But the Secure Internet will be a very different kind of place, due to fundamental changes in the way those connected to it interact. This section discusses each of these changes in detail. The following section will sketch the consequences for various kinds of activity on the Internet once they have all been implemented.

The End of Anonymity

Many of the problems of the present-day Internet, which engender numerous, mostly ill-considered, proposals for legal remedies, are due to the fundamental lack of accountability on the Internet. The Internet, as presently implemented, affords its users a rather high degree of anonymity which permits them, if so inclined, to engage in various kinds of mischief with relative impunity.

Historical Precedent: Telephone Caller ID

The Internet isn't the first communications technology to suffer from accountability problems. In the years when telephones went from curiosity to ubiquity, the technology of the time was hopelessly inadequate to create a log of all local calls. Whether calls were completed through an operator or dialed directly, it was more economical not to meter or log them than to go to the expense of keeping records and billing by call. Trunk (long distance) calls were relatively rare in those years (I was a teenager before it became possible to directly dial another city, and I am younger than Benjamin Franklin), and they could be individually logged on paper, and later electromechanically, for individual billing.

Notwithstanding a century's tradition that local calls were anonymous (unless one recognised the voice), most developed countries undertook the substantial cost of retrofitting accountability, in the form of Caller ID, to the telephone system in the 1970's and 80's. This rendered movie plots of earlier years which hinged on anonymous telephone calls as difficult to understand for those who grew up with Caller ID as old movies which prompt present-day teens to inquire, puzzled, "Why didn't she just call 911 on her cell phone?"

Caller ID is an excellent model for the evolution of the Internet. Its origin was in remediation of the perceived social shortcomings of anonymity created by technological constraints which no longer existed. And it was implemented and woven into the social fabric in a way which balanced competing priorities. Individuals can block Caller ID when calling others, permitting anonymous calls to crisis centres and support groups; emergency services can override the block to identify those who call them, and law enforcement, with a court order, can obtain telephone logs for crime investigations. So it will be with the Internet.

Providing, or rather restoring, accountability to the Internet is the key technological foundation for fixing a large majority of its current problems. The present-day anonymity of the Internet wasn't designed in--it is largely an accident of how the Internet evolved in the 1990's; see Appendix 1 for details.

Let us explore how accountability will be restored to the Internet.

User Certificates: No ID, no IP

The first step in restoring accountability to the Internet will be the introduction of the Internet User Certificate. This certificate, without which no packets will be transferred across the Internet, uniquely identifies the person (individual or legal entity) responsible for sending them. The best analogy to this certificate is not a telephone number, but rather the call sign with which radio and television stations, including amateur radio operators, identify their transmissions. The Internet User Certificate is simply the credential which uniquely identifies the person responsible for sending a packet across the Internet.

Compared to contemporary Internet access accounts, access by certificate has gravitas. First of all, one may expect that, given the legal ramifications which certificates will have, sanctions against obtaining or using a certificate under false pretenses will be akin to those for obtaining a passport with forged credentials or presenting a forged driver's license to a policeman in a traffic stop. Accessing the Internet with a false certificate is equivalent to driving on public highways with a bogus number plate on your vehicle or crossing a border with a fake passport and will be subject to comparable penalties.

When you connect to the Secure Internet, your certificate will be transmitted to the point of access, which will then validate your certificate. If its issuing authority fails to confirm its validity, or it has been revoked by its owner due to a compromise, or has been blocked pursuant to a court order, access will be denied. Once your certificate is validated, you'll be granted full Internet access, precisely as at present. Your certificate will be logged along with the connections you make and furnished, on demand, to all sites to which you connect. This will make e-commerce painless and secure. Once you've registered with a merchant, all subsequent communications are secured with your certificate. You needn't memorise a user name and password for each site, nor worry about a merchant's site being compromised threatening your security. As long as you protect your certificate as you would your wallet or credit cards, you're secure and, in the worst case, should your certificate be compromised, you can always revoke it and replace it with another.
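
The access decision just described reduces to a handful of checks. In this Python sketch the revocation and blocking sets stand in for the records of the issuing authority and the courts; all names are hypothetical:

    REVOKED_BY_OWNER = {"cert-123"}    # revoked after a compromise
    COURT_BLOCKED = {"cert-666"}       # blocked pursuant to a court order

    def grant_access(cert_id: str, issuer_confirms: bool) -> bool:
        # Deny if the issuing authority cannot confirm the certificate,
        # or if it has been revoked or blocked; otherwise grant access.
        if not issuer_confirms:
            return False
        if cert_id in REVOKED_BY_OWNER or cert_id in COURT_BLOCKED:
            return False
        return True

    grant_access("cert-789", issuer_confirms=True)    # True: full access
    grant_access("cert-123", issuer_confirms=True)    # False: revoked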

Computer Certificates

In addition, the computer you're using to access the Internet will be identified by its own certificate, which will also be provided on demand to sites you access. While the most commonly used credential is your personal certificate, the computer's certificate can be used to validate access to remote software components you've licensed or, for example, to secure remote backups of files from the computer against access from any other computer. Computer certificates will eventually be built in by the manufacturer, much like the processor serial number introduced with the Pentium III or, as is common in Unix workstations, in the form of an identity ("hostid") chip which can be transferred from one machine to another in case of hardware failure. The machine's certificate will become the primary means of licensing commercial software installed on the computer. Unlike present-day ad hoc machine signature schemes or serial number checks in Unix workstation software, programs licensed to a machine's certificate will be stored in encrypted form and decrypted with the machine's private key from its certificate when loaded into memory. This decryption will be performed in hardware or by the kernel of the Trusted Computing operating system, which itself will be locked to the machine certificate.
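
As a rough illustration of load-time decryption, the following Python sketch uses a symmetric key as a stand-in for the key material of the machine certificate; on actual Trusted Computing hardware that key would never leave the chip, and the decryption would be done by hardware or the kernel rather than by ordinary user code:

    from cryptography.fernet import Fernet

    machine_key = Fernet.generate_key()   # stand-in for a factory-installed key

    # The vendor's activation step: the program image is stored on disk
    # encrypted so that only this machine can load it.
    program_on_disk = Fernet(machine_key).encrypt(b"\x7fELF...program image...")

    def load_program(stored: bytes, key: bytes) -> bytes:
        # Succeeds only on the machine holding the key; the plaintext
        # image exists only in memory, never on disk.
        return Fernet(key).decrypt(stored)

    image = load_program(program_on_disk, machine_key)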

The large installed base of computers without certificates or hardware support for Trusted Computing operating systems will necessitate a protracted period of transition during which computer certificates are implemented in software and consequently less secure. Users could, for example, obtain certificates for their own computers by presenting their personal certificate to the issuing authority. The certificate would be delivered as a file to be installed on the machine to identify it. Users may revoke machine certificates when a computer is scrapped or sold.

Once a machine's certificate is embedded in hardware, computer theft becomes a less attractive criminal enterprise since a stolen machine will report its identity, and the personal certificate of its user, at the moment it connects to the Secure Internet. If a machine is stolen, its owner may revoke its certificate, rendering it incapable of connecting to the Internet. Even with certificates implemented in software, revocation (or, in the case of theft where one hoped to eventually recover the computer, suspension) of the certificate would block all software licensed to that computer at the moment it next connected to the Internet and performed a certificate validation. Personal data on the hard drive of a stolen computer would be inaccessible to a thief because it is encrypted with the personal certificate of the owner.

Everything is Encrypted

With the advent of certificates for individual Internet users and computers, the Internet will go dark to snoopers. Those who use the Internet will finally have grounds for confidence that their private data, messages, and online financial transactions are secure.

Secure Internet Commerce

Today, when you connect to an Internet commerce site, your browser receives and validates a certificate from the site which it uses to determine you are, in fact, connected to the site you think you are, not a false storefront put up by a crook intent, say, on collecting credit card numbers. Your browser then negotiates a session key to encrypt the balance of your transaction with the site. Typically then, if you're already a customer, you log in with a user name and password you've chosen for the site, which are protected against interception by the session's encryption key. If you're a new customer, the user name and password you select, and your address, credit card number, etc. are similarly protected against interception.

With the advent of the Secure Internet, both parties to the transaction, you and the merchant you're doing business with, will be uniquely identified by their certificates. When you connect to the merchant's site, an encrypted channel will automatically be established based on your certificate, your computer's certificate, the merchant's certificate, and that of the merchant's computer. Compromise of all four certificates would be required to intercept the data you send during the connection. There will be no need for user names or passwords--your certificate will identify you. If you've decided to permit such disclosure, the merchant can even obtain information such as your shipping address, privacy preferences, and the like while validating your certificate. If you prefer to keep such information private, you'll have to enter it as you do now, or authorise its transmission to merchants on a case-by-case basis when first doing business.
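
One plausible shape for such a negotiation, sketched in Python with the cryptography package: an ephemeral Diffie-Hellman exchange whose derived key is also bound to all four certificate fingerprints. This is an assumed mechanism for illustration, not a description of any deployed protocol; a real design would, at minimum, also sign the ephemeral keys with the certificates to prevent impersonation.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import x25519
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def session_key(my_ephemeral, peer_public, cert_fingerprints):
        # Mix the ephemeral shared secret with all four certificate
        # fingerprints, so the key is bound to both identities and
        # both machines.
        shared = my_ephemeral.exchange(peer_public)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"|".join(cert_fingerprints)).derive(shared)

    customer = x25519.X25519PrivateKey.generate()
    merchant = x25519.X25519PrivateKey.generate()
    certs = (b"user", b"user-machine", b"merchant", b"merchant-machine")

    k1 = session_key(customer, merchant.public_key(), certs)
    k2 = session_key(merchant, customer.public_key(), certs)
    assert k1 == k2   # both ends derive the same connection key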

But certificate-based encryption will extend well beyond Internet commerce. On the Secure Internet, everything will be end-to-end encrypted in this manner. When you establish a connection to any site at all, in any protocol, the four certificates involved (yours, your computer's, the site's, and its computer's) will be validated and used to negotiate a key for the connection, which will be used to encrypt all data exchanged: E-mail, instant messages, Internet telephony audio, Web pages, everything. What you exchange with a site while connected is entirely between you and the site. Snooping by third parties is impossible. Not only will you not need to worry about somebody reading your mail or snatching your credit card number; a snoop won't even be able to tell which pages you request from Web sites you visit, since the URLs of the pages you request and the content you receive will be encrypted. (It will remain possible to determine which sites you visit by snooping packets and looking up the IP addresses of those you connect to.)

Private File Storage

Certificate-based encryption will protect data on your computer even when you're not connected to the Internet. The file system in a Trusted Computing platform will automatically and transparently encrypt all files which belong to you with your certificate. If multiple people share one computer, each will be able to read only their own files; without the certificate of the other user, files belonging to that person, even if physically readable, will be gibberish. If a computer is stolen or an unauthorised person gains physical access to it, users' files cannot be read unless the criminal has also managed to obtain the certificates of their owners.
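
A toy model of such a per-user encrypting file store follows, in Python; each user's files are encrypted under that user's own key, so one user's key simply cannot decrypt another's files. The class and its methods are invented for illustration:

    from cryptography.fernet import Fernet

    class EncryptingStore:
        """Stores every file encrypted under its owner's key."""
        def __init__(self):
            self.files = {}   # path -> ciphertext

        def write(self, user_key: bytes, path: str, data: bytes):
            self.files[path] = Fernet(user_key).encrypt(data)

        def read(self, user_key: bytes, path: str) -> bytes:
            # Raises InvalidToken if the key is not the file owner's.
            return Fernet(user_key).decrypt(self.files[path])

    alice, bob = Fernet.generate_key(), Fernet.generate_key()
    store = EncryptingStore()
    store.write(alice, "/home/alice/diary", b"private notes")
    store.read(alice, "/home/alice/diary")     # works for the owner
    # store.read(bob, "/home/alice/diary")     # raises InvalidToken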

Files stored on removable media will be encrypted in the same fashion. Compromise of private data by scanning backup media (remarkably, many security-conscious people fail to ponder this threat) cannot occur since the backed up files are encrypted with the certificates of their owners. When sending a file to another person on a physical volume such as a floppy disc or recordable CD, it can be signed with the sender's certificate and encrypted with the public key of the intended recipient who can thereby verify the identity of the sender. Should the shipment be intercepted by a third party, its contents cannot be read without the intended recipient's certificate.
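
The signing half of that exchange might look like the following Python sketch, with an RSA key pair standing in for the sender's certificate; the encryption half would use the same wrapped-key pattern as the Pay Per Instance sketch earlier. Again, this is an illustrative assumption, not a specified format:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    contents = b"files written to the removable disc"

    # The sender signs the contents with their certificate's private key.
    signature = sender.sign(contents, PSS, hashes.SHA256())

    # The recipient verifies the sender's identity with the public key;
    # verify() raises InvalidSignature if the contents were altered.
    sender.public_key().verify(signature, contents, PSS, hashes.SHA256())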

We Know What You've Read

With every Internet transaction tagged with the personal certificate of the requester and that of the computer where the request originated, operators of Web sites and other Internet services will be able to "know their customers". For the first time, Web sites will be able to compile accurate readership statistics, subject to audit by circulation bureaux, as for print publications. This, in turn, may restore the viability of the advertiser-supported business model for popular Web sites.

Internet traffic can be logged and audited by others, for their own purposes, as well. The ability to potentially recover a list of certificates of those who accessed a site containing prohibited content such as child pornography will deter those who now rely on the anonymity of the Internet to shield them from prosecution. Sites indulging in hate speech and/or material of interest to terrorists will find their regular visitors scrutinised by the authorities concerned with such matters. Societies which wish to control the flow of information across their borders can monitor the activity of their nationals to determine whether they are violating imposed restrictions. Parents will be able to monitor the activities of their minor children using certificates they've obtained for them which are linked to the parent/guardian's certificate.

Intellectual Property Protection

Digital Rights Management, secured technologically by Trusted Computing systems and legally by sanctions against reverse engineering and circumvention, will provide comprehensive protection for intellectual property of all kinds. Items downloaded from the Internet--Web pages, books and magazines, music or video files, and all other forms of content--will bear certificates which define the terms under which they are licensed to the user, terms which will be enforced in hardware and software. Only data created entirely by the user (for example, documents they've written, pictures they've taken) and content in the public domain may be freely copied, modified, transmitted, published, and used in other ways. Naturally, users may apply Digital Rights Management themselves to content they create, specifying the terms under which others may use it. For example, when circulating an E-mail draft of a scientific paper to a group of colleagues for comment, you may wish to "license" it exclusively to their certificates to prevent further dissemination should one of them prove indiscreet.

Intellectual property protection can be applied at a fine-grained level. A Web page may include images and citations from other Web content licensed on various terms; when the page is viewed, each inclusion is retrieved subject to its own license. If an included item requires payment, confirmation will be required before downloading it or, if the fee is below the reader's designated payment threshold, the fee will be transferred automatically via micropayment.
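
The threshold rule is a one-line policy decision; a hypothetical Python rendering, with the threshold value and all names invented:

    AUTO_PAY_THRESHOLD = 0.05   # reader's chosen limit per included item

    def fetch_included_item(item_fee: float, confirm) -> bool:
        # Free and cheap items are paid silently by micropayment;
        # larger fees require the reader's explicit confirmation.
        if item_fee <= AUTO_PAY_THRESHOLD:
            return True
        return confirm(item_fee)

    fetch_included_item(0.02, confirm=lambda fee: False)   # True: auto-paid
    fetch_included_item(1.50, confirm=lambda fee: False)   # False: declined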

Document Certificates: The Digital Imprimatur

With all parties on the Internet identified by certificates which they must present to gain access and which accompany every transaction, and hence mutually accountable for their online interactions, and with Trusted Computing platforms guarding against fraudulent credentials and misuse of intellectual property, the foundation will be laid to apply certificates to content itself: every document transmitted across the Internet.

The first application of document certificates is already in use: signed applets downloaded by Web browsers, which are run only if the certificate is verified as belonging to a trusted supplier and contains a signature which matches the content of the downloaded code. (The MD5 checksums or PGP/GPG signatures posted for many open source software distributions can be thought of as a crude kind of document certificate, manually validated by the user against a checksum or signature published on the Web site whence the package is downloaded.)
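
That manual validation is easy to automate. The following Python function checks a downloaded file against a published checksum; MD5 is the default only because the text names it--it is no longer considered collision-resistant, and SHA-256 would be preferred today:

    import hashlib

    def verify_download(path: str, published_hex: str, algo: str = "md5") -> bool:
        # Hash the file in chunks and compare with the published digest.
        h = hashlib.new(algo)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest() == published_hex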

Trusted Computing systems will require all software they run to be signed with certificates, will verify the signature of each program before executing it and, when online, will (periodically) re-validate the certificates of installed programs with their suppliers. If a program's certificate has been revoked (for example, if a critical security flaw has been found in it which requires an update to correct), the Trusted Computing platform will refuse to run the program, informing the user of the reason for the certificate's revocation. The computer's operating system will bear its own certificate, which will be validated by the BIOS before the system is booted, protecting against un