Editors’ Note: The following essay is an excerpt from Ruling the Root: Internet Governance and the Taming of Cyberspace, forthcoming from MIT Press.

During the past six years, Internet domain names have generated global controversy. The business of registering domain names has swollen into a $2 billion annual revenue stream and appears to be doubling every year. The rise of the domain name industry has produced a number of policy debates. Organizations are competing for the right to operate new top-level domains such as .BIZ, .WEB, or .UNION. There are also major disputes over trademark rights in second-level names. The long-term impact of the “domain name wars” is significant: they have created a new international regulatory organization, the Internet Corporation for Assigned Names and Numbers (ICANN), and a new, global system of domain name-trademark dispute resolution dominated by the World Intellectual Property Organization.
All of this would have greatly surprised the Internet engineers who developed the Domain Name System (DNS) protocol between 1982 and 1985. To them, domain names were not commercial commodities and had no relationship to trademarks. The purpose of DNS was to decentralize the assignment of names to computers. The practice of naming computers goes back to the early 1970s: names were easier for people to use than numerical addresses, and interposing a name between users and a machine’s address also provided a more stable identifier. To function as addresses on the Internet, names must be globally unique character strings. Uniqueness requires some form of coordination, which usually implies some kind of central authority. On the pre-Internet ARPANET (1969-1981), names were assigned to computers by a central registry known as the DDN-NIC.
But by the early 1980s, there were too many computers on the Internet for a single central authority to feasibly keep track of all computer names and their associated numerical addresses. So DNS created a hierarchical name space and distributed the authority for the assignment and translation of names down the levels of the hierarchy. Only the top level of the hierarchy (currently, a mere 257 names) had to be maintained centrally. The designers of DNS thought of domain names as names for machines (computers on the Internet), not for products or documents, and had no intention of using domain names as an index or directory of the resources on the Internet.
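To make the hierarchy concrete, here is a minimal sketch in Python (an illustration of the concept, not of any actual resolver) showing how a single name decomposes into a chain of zones, each of which can be delegated to a different administrative authority; only the top level requires central coordination:

```python
def delegation_chain(domain_name: str) -> list:
    """Return the zones implied by a name, from the root down to the full name.

    Illustrative only: a real DNS resolver follows referrals over the
    network, but the administrative decomposition is the same.
    """
    labels = domain_name.rstrip(".").split(".")
    chain = ["."]  # the root zone, the only point needing central coordination
    zone = ""
    for label in reversed(labels):
        zone = label if not zone else f"{label}.{zone}"
        chain.append(zone)  # each level can be delegated to a new authority
    return chain

print(delegation_chain("www.example.com"))
# ['.', 'com', 'example.com', 'www.example.com']
```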
So how did we get from there to today’s global policy controversies?
A dramatic change in the status of domain names came from the emergence of the World Wide Web between 1990 and 1995. The World Wide Web was a software application that made the Internet easier to navigate and more fun to use by linking and displaying documents (or other objects stored on networked computers) by means of a graphical user interface. The software code for Web servers and the first portable browser were created by European physicists at CERN in 1990. The application was popularized in early 1993, when the National Center for Supercomputing Applications (NCSA) in the United States publicly released Mosaic, a graphical browser that exploited the new WWW capabilities.1
In January 1994, only a year after Mosaic’s release, there were 20 million WWW users, and the World Wide Web’s hypertext transfer protocol (HTTP) had become the second most popular protocol on the Net, measured in terms of packet and byte counts on the NSFNET backbone.2 By early 1995, the World Wide Web had passed the venerable file transfer protocol (FTP) as the application generating the most traffic on NSFNET. Browser software became a commercial industry with the founding of Netscape in 1994 and the freeware release of the first Navigator browser at the beginning of 1995. Microsoft followed with Internet Explorer at the end of the year. With user-friendly, point-and-click navigation freely available, the Internet was able to attract a much broader market of household consumers and small businesses.
The rise of the Web produced a qualitative change in the status of domain names. The Web had its own addressing standard, known as Uniform Resource Locators (URLs). URLs were designed to work like a networked extension of the familiar computer file name. Web documents or other resources were given names within a hierarchical directory structure, with directories separated by forward slashes. To take advantage of the global connectivity available over the Internet, URLs used a domain name as the top-level directory. The basic syntax of a URL could be represented thus: http://<domain name>/<directory or resource name>/<directory or resource name>/etc. The hierarchy to the right of the domain name could be as shallow or as deep as the person naming items within the web site wanted.
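The division of labor in this syntax is easy to see with Python’s standard urlparse function; the URL below is a made-up illustration:

```python
from urllib.parse import urlparse

# A URL factors into a scheme, a domain name, and a hierarchical path
# chosen entirely by the site's owner.
parts = urlparse("http://www.example.com/cars/buick/index.html")

print(parts.scheme)  # 'http'
print(parts.netloc)  # 'www.example.com'        (the domain name)
print(parts.path)    # '/cars/buick/index.html' (the site-chosen hierarchy)
```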
By embedding domain names in URLs, the Web altered the function of domain names in profound and unanticipated ways. As the term “Resource Locator” suggests, Web addresses were names for resources, meaning any kind of object that might be placed on the Web: documents, images, downloadable files, services, mailboxes, and so on. Domain names, in contrast, had originally been intended to name host computers. And URLs were not just addresses but locators of content that would be automatically displayed to humans. A user only needed to type a name into the URL window of a browser and (if it was a valid address) the HTTP protocol would fetch the corresponding resource and display it in the browser. A URL included “explicit instructions on how to access the resource on the Internet” (Berners-Lee, 1993). Domain names, in contrast, were originally conceived as locators of IP addresses or other resource records of interest to the network, not things that humans would be interested in seeing.
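That fetch-and-display step can be reduced to a few lines of standard-library Python; this is a bare-bones sketch of what happens behind the URL window (assuming network access; example.com is a placeholder host), not how any particular browser is implemented:

```python
from urllib.request import urlopen

# The library resolves the domain name via DNS, opens a connection,
# and retrieves the named resource over HTTP.
with urlopen("http://example.com/") as response:
    document = response.read()

print(document[:80])  # the first bytes of the fetched resource
```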
The Web made it so easy to create and publish documents or other resources on the Internet that the number of Web pages began to grow even faster than the number of users. It did not take users long to discover that shorter, shallower URLs were easier to use, remember, and advertise than longer ones. The shortest URL of all, of course, was an unadorned domain name. Thus, if one wanted to post a distinct set of resources on the Web, it made sense to register a separate domain name for it, rather than creating a new directory under a single domain name. For example, a car manufacturer like GM with many different brand or product names such as Buick or Oldsmobile eventually learned to just register buick.com and use that as the URL rather than gm.com/cars/buick/ — even if all the information resided on a single computer. The DNS protocol made it fairly easy to point multiple domain names at the same computer, so there was not much waste of physical resources. Thus, domain names began to refer to content resources rather than just network resources.
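The mechanics can be pictured with a toy, in-memory stand-in for DNS address records; the names and the address below are hypothetical, not real registration data:

```python
# A toy stand-in for DNS address ("A") records. Several domain names can
# map to one machine's IP address, so registering many names costs
# little in physical resources. (192.0.2.x is a reserved documentation
# address range.)
A_RECORDS = {
    "gm.com":    "192.0.2.10",
    "buick.com": "192.0.2.10",  # a second name for the same host
}

def resolve(name: str) -> str:
    return A_RECORDS[name]

print(resolve("gm.com") == resolve("buick.com"))  # True: one computer, two names
```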
As more and more users began to type domain names into their browsers’ URL windows, yet another fateful transformation of domain names’ function occurred. Many novice users did not understand the hierarchical structure of DNS and simply typed in the name of something they wanted. The Internet would interpret this simple name as an invalid address and return an error message. As a “user-friendly” improvement in Web browser software, the browser manufacturers began to use .com as the default value for a name typed in with no top-level extension, instead of returning an error message. If the user typed “cars” into the URL window, for example, the browser would automatically append .com to the end and www. to the beginning, and display the Web site at http://www.cars.com. In doing so, the browser manufacturers reinforced the naïve end user’s tendency to treat domain names as a kind of directory of the Internet. This practice also massively increased the economic value of domain names registered under the .com top-level domain. For millions of impatient or naïve users wary of search engines and other more complicated location methods, the default values turned the DNS into a search engine exclusively devoted to words registered under the .com domain.
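Reconstructed as pseudo-logic, the default looked roughly like this (my sketch of the behavior described above, not any vendor’s actual code):

```python
def complete_address(typed: str) -> str:
    """Approximate the mid-1990s browsers' 'friendly' URL completion."""
    if "://" in typed:
        return typed                 # already a full URL; leave it alone
    if "." not in typed:
        typed = f"www.{typed}.com"   # bare word: assume a .com Web site
    return f"http://{typed}"

print(complete_address("cars"))      # http://www.cars.com
print(complete_address("cars.com"))  # http://cars.com
```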
Although it took several years for the full economic effects to be felt, the “webification” of domain names was the critical step in endowing the name space with economic value. It massively increased the demand for domain name registrations, and gave common, famous, or generic terms under the .com space the commercially valuable capacity to deliver, effortlessly, thousands if not millions of Web site “hits.”
This serendipitous intersection of distinct technologies, resulting in a transformation of one technology’s function, exemplifies what sociologists like to call the “social construction of technology.” The constructionist method of analyzing technology history, however, often downplays the rationality of technical and economic influences upon a technical system’s specific configuration, emphasizing instead cultural factors, power relationships, and historical accidents. In this case, by contrast, the elevation of domain names was driven by very rational economic concerns about visibility in an emerging global marketplace. In the early days of the Web, a simple, intuitive name in the .com space might generate millions of viewers with very little investment. If someone else controlled “your” name in that space, your reputation or customer base might be eroded. Thus, for economic and legal reasons, DNS policy has ever since been fixated upon the use of domain names as locators of Web sites, and obsessed with their intersection with trademarks. The forms of regulation and administration being imposed on DNS by ICANN are largely based on the assumption that DNS is used exclusively for that purpose.
Technologists who object that “DNS was never designed to be used this way” are correct in a narrow sense, but they miss the larger point. Many technologies end up being used in ways their designers never intended or visualized. These unanticipated uses, in turn, can generate inflection points in a technology’s evolution by provoking new forms of economic activity and, with them, new forms of regulation. Such changes can reward certain technological capabilities and effectively foreclose others.
1 Mosaic was the outgrowth of a program written by the Software Development Group at NCSA called Collage, designed to enable researchers to collaborate over networks. As the project neared completion, programmers in the group got wind of the World Wide Web project and quickly realized that Web compatibility could turn Collage into something much broader than a collaboration tool. http://www.webhistory.org/historyday/abstracts.html
Milton Mueller is Associate Professor at the Syracuse University School of Information Studies. He does research on the history of telecommunication technologies and industries. His publications include Universal Service: Competition, Interconnection, and Monopoly in the Making of the American Telephone System (Cambridge, Mass.: MIT Press; Washington, D.C.: AEI Press, 1997). The father of Antenna, Milton has just become a father in a far more important sense. We are delighted to report that little Maxwell and his parents are doing well.