Special Reports Archive
- Written by Gordon Cook
- Category: Special Reports Archive
- Published: 03 July 2008
Our draft was left with the US Congress Office of Technology Assessment on February 28, 1992.
Gordon Cook, Editor and Publisher, COOK Report on Internet
If private gain is not to outstrip the public good, we must prioritize whom the National Research and Education Network (NREN) is to serve. Business or education? All education or just an elite? And within those campuses that have the network: all disciplines or just the "techies"? Making informed decisions will be impossible without identifying the stakeholders and understanding the development and interplay of their interests. In late November of 1991, time to do this appeared to have been purchased by the decision of the National Science Foundation to rebid the cooperative agreement for the operation of the NSFnet backbone. However, given the amount of acrimony displayed by the stakeholders since the rebid announcement, the matter may yet be in doubt.
Some suggest that thanks to the rebid decision, the NREN may become an important step on the way toward a national information infrastructure. Others point to the bickering that broke out between ANS and its competitors within two weeks of the decision to rebid. They complain that the NSF seems to have become too biased towards the interests of ANS. This study summarizes the complex events leading to this important decision - one that unfortunately seems to have produced only a fresh policy muddle. It also identifies key policy issues that need to be addressed during the next five years and suggests some possible solutions.
The NREN is a planned nationwide computer data network that is also expected to have voice and video capability. By the end of the decade the NREN may become, on a national level, the most powerful tool ever created for finding, manipulating and disseminating information. Its existence is unknown and unimagined by most Americans. Yet, to the extent that information is power, the network's existence will shape their lives. How it does this depends on whom the NREN will serve. The choices are stark: all Americans, on the basis of universally available and affordable telephone service, or a comparative few - those at our wealthiest universities and richest corporations who can afford the access costs of a system created to serve elites? In tracing the development and evolution of the NREN, this short study focuses on the divergent policy goals of service to an elite through technology transfer and rapid commercialization and privatization, and service to research and education through wide availability. As the National Science Foundation (NSF) has allowed the network to slide, perhaps prematurely, towards privatization, the conflicts inherent in these increasingly divergent policy goals have complicated the development of the network to the point where hearings or legal action may yet be required to unravel the conflicting interests involved.
The NREN is a collection of significant, confused and sometimes conflicting interests. It encapsulates:
A search for a means of justifying the addition of new technologies to the network.
The definition of a high-tech "nirvana" initially focused on the perceived needs of a supercomputing elite.
However, by the time the legislation creating the network became law, supercomputers were increasingly common and for the most part did not require the high speeds of the network to be used effectively.
A top down imposition of new technologies on users who are not adequately coping with the old technologies.
A performance gap between the speeds of the backbone and the speeds available on most campuses. An awareness is just beginning that billions of dollars of investment in campus networks will be needed to bring even 45 megabit per second use into the hands of students and faculty.
An infusion of federal money into the hands of segments of the telecommunications and computer industry to use for furthering their own strategic agendas.
The network does represent a tool of immense power and immense potential value to every American. The complexities behind its development have, however, been oversimplified and occasionally misrepresented.
Some of the unanswered questions are:
The extent to which the network will serve to further technology transfer for the development of new communications industries, versus the extent to which it serves as a tool enabling improved research and education.
Whether it will emphasize speed at the expense of ease of use.
Whether it will be implemented as a public or a private good.
The NREN was first proposed in a November 1987 report to the Congress by the Federal Coordinating Council for Science, Engineering and Technology (FCCSET), an entity within the executive branch Office of Science and Technology Policy (OSTP). It was offered as a means of increasing the speed of the nation's academic computer networks, known collectively as the Internet, from 56,000 bits per second to over a billion bits per second (a gigabit) by the mid 1990s.
The NREN Vision
Over the next several years, in a series of reports, the NREN was presented as making possible a startling variety of "information age" applications. These ranged from enabling researchers at sites remote from supercomputers to take full advantage of these powerful machines, to traveling through "electronic" libraries, to engaging in video conferencing as a normal means of communication.
Carried to an extreme, the NREN was presented as a "vision" of a network that would become a means of "virtual transport" for its users through video, sound and data within a decade. To achieve this, some of the additional functions that supporters suggested would be built into existing networks were:
Transfer of electronic mail and data files in multi-media formats, including high resolution graphics, full motion video, and sound.
Multi-media computer conferencing capabilities.
Development and use of utilities that rely on resources distributed across the network - for example, creation of a single digital library that could automatically draw on geographically dispersed collections.
Facilities that would enable scholars to build, flexibly and collaboratively, data bases containing comprehensive collections of knowledge on chosen subjects.
Knowledge management systems that would provide standard, consistent and intuitive interfaces to network services and resources.
Intelligent programs called "knowbots" (knowledge robots) that could be sent out by users to travel through the network in search of specified kinds of information.
While some of these services could require, for a single user, network speeds close to Broadband ISDN (B-ISDN, or 155 megabits per second), none would require gigabit speeds. Still, providing this broad range of services across a national network that could be called on to serve tens of millions of users would be very costly. Therefore, some said that the only reasonable way to achieve these capabilities would be to enable users to choose and pay for only those transmission services and speeds that met their needs.
One problem with this scenario is that, while a large number of academic and research users of the network might desire such capabilities, the tight budgets at our colleges and universities would permit very few to pay for them. Furthermore, because the current network has been established by computer specialists operating on shoestring budgets, its entire structure is based on fixed transmission speeds and single levels of service. The culture of network use has become firmly rooted in fixed-cost charges to an institution for a fixed amount of network bandwidth (speed). Actual use of the network by individuals is unmetered and uncharged. Installing individual usage-based charging capability would require a major and expensive effort and would meet with considerable resistance from present users.
An additional, more fundamental problem exists at the level of infrastructure. Federal money brings the network to the campus doorstep, not to the professor's computer. While it is possible that the NSF may be able to help the mid-level networks increase their speed, taking advantage of even some of the technologies of the NREN vision will require the replacement of most campus local area networks with FDDI LANs capable of 100 megabit per second speeds. In December 1991, the Chronicle of Higher Education estimated the cost of the necessary campus upgrades at between ten and 100 billion dollars - an amount of money that is simply unavailable to higher education in the near future.
Meanwhile, by early 1992, there was a growing awareness among industry executives that the High Performance Computing and Communications legislation that had just passed was, in the words of Apple CEO John Sculley, "focusing on the narrow problems of research and engineering" and ignoring investment "in an infrastructure that can dramatically change people's lives." The Computer Systems Policy Project, a lobbying organization formed in 1991, began to suggest that money from the HPCC legislation be allocated in such a way as to produce a network less oriented towards big science and having "more meaning to industry and individual Americans." This would be done by assuring broad access to a network that could accommodate a wide range of applications. With the HPCC legislation passed, this group was beginning to acknowledge the difference between the NREN vision and the reality of the current network.
The Reality of the Current Network
While few would deny the value of the current network, its use is much more mundane than the applications emphasized by proponents of the NREN vision. Nearly three quarters of the traffic that travels over the network expected by 1996 to become the NREN is attributable to: (1) electronic mail; (2) remote logins to distant computers; and (3) file transfers to or from data bases at remote file servers.
Almost all the remaining use of the network is devoted to the overhead functions needed to route network traffic. These conventional uses are likely to dominate the network for the foreseeable future.
While advocates of the NREN "vision" emphasize the sophisticated capabilities that would enhance the ability of the network to serve as a tool facilitating the needs of the research and education community, they generally avoid discussion of the problems faced by users of the current network. Depending on their fields of specialization, somewhere between 5 and 20% of those who should be able to profit from use of the network have actually made the considerable effort necessary to become users.
Network users currently encounter many problems that frustrate or discourage use of the network. A study by Charles McClure of Syracuse University identifies such problems as having to cope with:
complex procedures that network center managers believe to be easy and hence do not adequately explain to novices;
the difficulty of transferring materials between different networks - such as NSFnet and BITnet;
the existence of multiple editors. Files created with one editor may have to be cleaned up and reformatted by their recipients, if they happen to use a different editor;
a "savage" user interface having a large number of conflicting commands and procedures, all of which must be understood, if the network is to be used effectively;
unavailable training. A user must either be assisted by a more advanced colleague, or "gut it out" with inadequate manuals;
documentation that is either not available, out of date, or too difficult to use. Online documentation, when it exists, is generally too hard to find to be really useful.
Technological overkill becomes for some users a problem in its own right. McClure's study found that although a few users with requirements for moving vast amounts of data complained that network bandwidth was occasionally restricted, for many users the very rapid change in network capabilities became a barrier to use. Users often stated that additional applications become counterproductive when they are "unable to use and understand the network technology and applications they currently have."
For many who are supposed to be able to benefit from the NREN vision, the current, much-more-limited network is unusable. Some suggest that network applications should concentrate not on increased speed but on a few common services, such as directories -- electronic white and yellow pages that list the electronic addresses of users and services -- user-friendly interfaces and operating procedures, access to libraries and bibliographic services, documentation of network operations, training in network use, and outreach to potential users who are not familiar with the network.
While the National Science Foundation does plan to open a new and much improved National Network Information Center sometime in 1993, this will be only the first of many steps needed to make the network available to a significantly larger percentage of the user population it is ostensibly designed to serve. Some critics complain that network promoters, in a misguided emphasis on speed and technology "sizzle," are ignoring the real benefits that the current network can make available not only to higher education but to the K-12 community.
The Two "Drivers" of the NREN
The emphasis on "sizzle" is occurring, at least in part, because the story of the network is marked by two conflicting policies. First is a policy effort to bring improvements to research and education. But there is also, increasingly in control, a policy designed to enable the creation of a multi-billion-dollar-per-year commercial high-speed computer data networking industry having very little to do with the needs of the research and education community. Part One of this study tells that story.
While the NREN has been shaped by these two very different policy "drivers," it has also been marked by two implementation efforts occurring in parallel, without coordination or integration. At the same time that Congress was attempting to legislate the NREN's creation, the NSF, with backing from the President's Office of Science and Technology Policy, had begun its own administrative implementation. To summarize: from its inception to the present, the NREN has been hampered by a confusion of direction and control that could prevent it from ever becoming the national resource that Congress appears to have intended.
The NREN began in 1987 as a mission to enhance the ability of government supported research and academic networking to support high-end computational science - in other words: supercomputing. In 1989, as a strategy for gaining broader political support, plans for it to serve all of higher education were announced. By 1991, although the legislative language remained vague, many advocates had extended the mandate from high school through kindergarten. Also vague was the question of who would have ultimate control over the network. The early legislation pledged privatization after 1996. The law as passed, while vague, gives the impression that privatization is an accomplished fact.
During this same period the National Science Foundation was already building the NREN along the lines of a three-stage plan outlined in 1989 by the Federal Research Internet Coordinating Committee. The first stage was a T-1 (1.5 megabits per second) backbone. The second was a T-3 (45 megabits per second) backbone scheduled for 1992 and the third, it was pledged, would be multi-gigabit speed.
In the spring of 1990 the NSF planned for the changeover of eight of 13 backbone nodes to T-3 speeds. Three new nodes and the conversion of all 16 nodes to T-3 speeds were budgeted from FY 1991 funds. Stage two of the FRICC implementation plan would be complete by the end of 1991 - a year ahead of schedule. The network called for by the High Performance Computing and Communications (HPCC) legislation was being implemented by executive action. Until the last days of 1991 the network's implementation, and plans for its rapid privatization, were being carried out quite independently of the intent expressed by the legislation, which called for privatization of the network as soon as practicable at the end of the five-year period of the HPCC program.
The Emergence of ANS
On September 17, 1990, the NSF, MERIT, IBM and MCI announced the formation of a 501(c)(3) entity called Advanced Network and Services (ANS). With former IBM Vice President Al Weis as President and CEO, and ten million dollars from its corporate sponsors, ANS stated its commitment to use the NSFnet to build a gigabit network infrastructure for the benefit of American research and education. IBM and MCI had been "joint study appointees" to MERIT. MERIT was the Michigan Education and Research Infrastructure Triad that ran a state network in Michigan and held the "cooperative agreement" with the NSF to build and manage the T-1 backbone until November 1, 1992. MERIT contracted with ANS for operation of the T-1 backbone and the building of the new T-3 backbone. ANS obliged by contracting the day-to-day operation of the network back to MERIT.
Under the new arrangement ANS was allowed to set up and run two virtual (independent) networks over the same physical equipment. The first (NSFnet) would be for those organizations that received government subsidies to connect. The second (ANSnet) would be for those organizations that purchased direct attachments from ANS. Such organizations could use their attachments for commercial purposes, such as selling products and services to the research and education community, or for intra-company transactions having nothing to do with the academic community - activities that were barred from the government sponsored backbone. ANS consequently became the third commercial provider of internet services, joining Uunet Technologies of Falls Church, VA and Performance Systems International of Reston, VA.
There was, however, one significant difference between ANS and its two competitors. ANS had Fortune 50 corporate sponsors and, with a ten-million-dollar-a-year contribution from the NSF for the upgrade to T-3 backbone speeds, ran the backbone that connected all 32 mid-level networks providing network attachments for more than 1000 educational, governmental and commercial institutions. Its competitors ran small commercial backbones attached by gateways to the NSFnet backbone, connected primarily commercial clients, and received no Federal subsidies.
By the last days of 1991 some had begun to wonder about the "deal" that the NSF was getting in the new T-3 backbone. The new structure was embedded in 12 core nodes within MCI's national network, with switches co-located at MCI "POPs," or network "points of presence." Each of the 16 backbone nodes, referred to as end nodes, was located on a university campus. End nodes were linked to core nodes by a single T-3 connection. Consequently, if a link between an end node and a core node failed, the mid-level network (or, in several cases, networks) connected to that end node would be cut off from the rest of the nation. Within the MCI backbone core network, eleven sites had only two data paths, while one had three. The new topology was essentially a star with a ring at the center - not what network designers would call highly redundant. In the event of failures, alternate pathways would be few or nonexistent. Furthermore, the new T-3 network was continually plagued by crashes stemming from both hardware and software problems. Critics complained that the old T-1 network was not saturated. They noted that its mesh topology, in which every node except one had three paths to every other node, offered far more redundancy than the new failure-prone backbone for which the NSF was paying 330% more. As the T-1 backbone had to be kept open and continued to carry two thirds of the network's traffic, complaints about the NSF's arbitrary privatization of the T-3 backbone grew louder.
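The redundancy argument can be illustrated with a small sketch. The topologies below are hypothetical stand-ins (a four-core ring with singly attached end nodes versus a more richly meshed graph over the same nodes), not the actual NSFnet maps, but they show how a single link failure isolates an end node in a star-with-ring design while a mesh survives:

```python
from collections import deque

def connected_after_removal(nodes, edges, removed):
    """BFS from an arbitrary node; True if the graph stays connected
    after the given link is removed."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        if (a, b) == removed or (b, a) == removed:
            continue
        adj[a].append(b)
        adj[b].append(a)
    start = next(iter(nodes))
    seen = {start}
    queue = deque([start])
    while queue:
        for nbr in adj[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(nodes)

def single_link_partitions(nodes, edges):
    """Number of single-link failures that cut some node off."""
    return sum(1 for e in edges if not connected_after_removal(nodes, edges, e))

# Hypothetical "star with a ring at the center": four core nodes form a
# ring; each end node hangs off one core node by a single link.
core = ["c0", "c1", "c2", "c3"]
ends = ["e0", "e1", "e2", "e3"]
ring = [("c0", "c1"), ("c1", "c2"), ("c2", "c3"), ("c3", "c0")]
spokes = [("c0", "e0"), ("c1", "e1"), ("c2", "e2"), ("c3", "e3")]
star_ring = ring + spokes

# A fuller mesh over the same eight nodes: the ring plus cross links and
# a second attachment for every end node (illustrative only).
mesh = star_ring + [("c0", "c2"), ("c1", "c3"),
                    ("e0", "c1"), ("e1", "c2"), ("e2", "c3"), ("e3", "c0")]

nodes = core + ends
print(single_link_partitions(nodes, star_ring))  # 4: every spoke is a single point of failure
print(single_link_partitions(nodes, mesh))       # 0: no single link failure partitions the mesh
```

In the star-with-ring graph every singly attached end node depends on one spoke, so any spoke failure partitions the network; in the doubly attached mesh, no single failure does.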
In addition to running the backbone, in 1991 ANS also began to sell network attachments. This occurred much to the discomfort of both its smaller commercial competitors and many of the 32 mid-level networks, which were also free to sell attachments - having been encouraged by the NSF to do so in order to become financially self-sustaining. In December 1990 ANS gained the North Carolina state network (NCMC) as its first customer. NCMC had been a customer of SURAnet, the largest of the mid-level networks. Over the next six months ANS continued to make its presence felt by hiring, as vice presidents, the directors of three mid-level networks and the chairman of the Internet Engineering Task Force (IETF) - the Internet's standards body. In June of 1991 it established a for-profit subsidiary, ANS CO+RE (Commercial + Research). Its by-laws stated that profits from CO+RE would be used by ANS solely to build the core of the national backbone devoted to the research and education community.
What was not immediately clear about ANS' strategy was that it was seeking to attach as many of the Fortune 1000 as possible to that backbone, and that its ability to do this would depend on its being able to plow the necessary resources back into the backbone. In July 1991, Telecommunications published an interview with Al Weis, the ANS CEO. Here Weis made clear ANS' intention to market the Internet to corporate users. Weis emphasized that ANS would offer corporations running private T-1 networks the opportunity to out-source such demands to ANS, which would offer turnkey packages to increase connectivity of corporate local area networks. Nowhere was ANS' original research and education charter mentioned.
Confusion of Markets
In the meantime lawmakers were laboring under an enormous confusion about the need for and cost of high-speed data networks. The May 1991 Report on S272, the Senate version of the HPCC legislation, states that "the Federal funding called for in the legislation will guarantee a market for commercial high speed networking services, thus stimulating private sector investment in multi-gigabit networking." Two sentences later we read that the private sector is "reluctant to make the multi-billion dollar investments needed to build a national multi-gigabit network in part because the technology has not yet been demonstrated and the market has not been proven." Part One below shows that any implication that a "market for commercial high speed networking services" is in need of Federal funds to guarantee it is false.
Furthermore, in its use of the term multi-gigabit networking without defining whether aggregate or clear-channel capacity is meant, the report is confusing. Aggregate gigabit speed will be attained by multiplexing hundreds and perhaps thousands of slower data transmissions together over a single strand of fiber. The technology to do this exists now, and the expense of doing it is not huge; in fact, it is done routinely. Telephone switching and transmission on major intercity routes currently aggregates, at gigabit speeds, thousands of 64,000 bit per second "voice" channels that can also function as carriers for fax and high-speed data transmissions.
On the other hand, clear-channel gigabit speed is the subject of the NSF and DARPA funded gigabit testbeds. It is understood as giving a single network user a bandwidth of 622 megabits per second or more. The technology to do this does not now exist. If it can be acquired, it will be enormously expensive. David Farber, one of the principal investigators on the AURORA gigabit testbed, estimated in the summer of 1991 that an NREN offering such capability would cost a billion dollars a year to operate. In March of 1991 DARPA's Ira Richer, at a public meeting of the Federal Networking Council, stated that a single clear-channel gigabit line from coast to coast would cost 31 million dollars per year. Moreover, what one would do with such capacity is uncertain. Distributed supercomputing, where multiple supercomputers linked across a wide area network exchange data in "real time," is a major object of investigation in the gigabit testbeds. Unfortunately for this rationale, linking supercomputers, a worthwhile endeavor in itself, can be accomplished much more cost-effectively by means of local area networks.
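A bit of arithmetic, using only the figures quoted above, makes the aggregate versus clear-channel distinction concrete (a sketch; the comparisons are illustrative, not engineering estimates):

```python
# Aggregate vs. clear-channel gigabit capacity, using figures from the text.
VOICE_CHANNEL_BPS = 64_000       # one 64,000 bit per second "voice" channel
GIGABIT_BPS = 1_000_000_000      # one gigabit per second
T3_BPS = 45_000_000              # T-3, 45 megabits per second
B_ISDN_BPS = 155_000_000         # Broadband ISDN, 155 megabits per second
CLEAR_CHANNEL_BPS = 622_000_000  # clear channel: one user, 622 Mbps or more

# How many multiplexed voice-grade channels add up to a gigabit aggregate?
channels = GIGABIT_BPS // VOICE_CHANNEL_BPS
print(channels)  # 15625 - the "thousands" of channels the telephone network
                 # already aggregates routinely on intercity routes

# A clear channel, by contrast, hands the whole pipe to a single user:
print(round(CLEAR_CHANNEL_BPS / T3_BPS, 1))      # 13.8 - nearly fourteen T-3s' worth
print(round(CLEAR_CHANNEL_BPS / B_ISDN_BPS, 1))  # 4.0 - four B-ISDN links' worth
```

Aggregating thousands of modest streams is routine telephone engineering; delivering fourteen T-3s of capacity to one user is what the testbeds were still trying to invent.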
In 1991 the General Accounting Office undertook for the Congress two small studies whose results called into question the emphasis on high-speed uses that had come to characterize the rhetoric surrounding the NREN. The first, "Industry Uses of Supercomputers and High-Speed Networks," reported that none of the corporate users of supercomputers could envision an individual need for network transmission speed in excess of the 45 megabits (T-3) available commercially today. The second report, "High-Speed Computer Networks in the US, Europe and Japan," pointed out that European networks are in general slower than the old T-1 NSFnet and have no current plans to migrate to T-3 speeds. The fastest speed available in Japan was T-1. While it is true that NTT's plan for fiber to the home in Japan by 2015 exceeds American plans, the speeds involved are not expected to exceed the 155 megabits per second of B-ISDN. Moreover, these plans depend on NTT generating sufficient operating revenue, which in turn depends on its being able to deliver cable television to the home.
What is powering the development of the NREN? The answer is to be found in the needs of business communications. While the current market for wide area networking in the academic and research community is about $80 million per year, the market for Fortune 1000 TCP/IP wide area interconnection is several orders of magnitude larger. In fact, it is so large as to render meaningless the market creation assertions of the Senate Report on S272.
The Culmination of the Data Revolution
A decade ago, when IBM marketed its first personal computer, critical data in Fortune 1000 companies resided on IBM mainframes or, to a lesser extent, on minicomputers - smaller machines where Digital Equipment Corp. (DEC) had the largest market share. Data could be shared among these large computers by means of proprietary protocols: Systems Network Architecture (SNA) in the case of IBM and DECnet in the case of DEC. Computing was centralized and focused on large and rather expensive machines. With their proprietary protocols linking mainframes and minis together, IBM and DEC each had a stranglehold over its respective market.
Between 1981 and 1991 an explosion of personal computers and local area networks (LANs) designed to link these computers together eliminated this stranglehold and changed corporate data communications forever. Mainframes and minis lost their monopolies on corporate data to these distributed networks of PCs. According to an AT&T survey, by 1993 70% of all businesses will have LANs. The twenty million terminals and PCs currently attached to LANs will grow by then to 50 million. At that point the total number of LANs will be somewhere between five and ten million. Most will need to be interconnected in order to complete the migration of data from the mainframe to the desktop and allow the corporation both to distribute its information completely and, when needed, to interconnect it. Furthermore, as Electronic Data Interchange (EDI - electronic ordering and invoicing via computer network) grows, customer and vendor LANs will also have to be able to interconnect on demand.
But data growth does not stop there. In the past year or two, as CAD/CAM and medical use of networks has expanded, high resolution images of photographic quality have become frequent network travelers. At three hundred to one thousand times the size of an average piece of electronic mail, such images are major users of network bandwidth. In corporations like Boeing, such applications are driving local area network upgrades from 10 megabit per second Ethernet speeds to 100 megabit Fiber Distributed Data Interface (FDDI) speeds. The Eastern Research Group estimates that spending for image transmission exceeded $500 million in 1990 and is growing at 22% annually.
Extrapolation of these trends leads one to the conclusion, noted in the GAO study, that T-3 (45 megabit per second) networks will be needed by some corporations. Aggregate the traffic of enough such corporations across the trunks forming national backbones and you will soon reach combined gigabit speeds.
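That extrapolation can be checked with a back-of-the-envelope sketch. The 2 KB average e-mail size is an assumption for illustration; the 300 to 1000x image ratio, the Ethernet and FDDI speeds, and the T-3 and gigabit figures come from the text:

```python
import math

T3_MBPS = 45          # T-3 trunk speed
GIGABIT_MBPS = 1000   # one gigabit per second

EMAIL_BYTES = 2_000                # assumption: a ~2 KB average e-mail
image_bytes = 1000 * EMAIL_BYTES   # high end of the 300-1000x image range

# Seconds to move one high-resolution image over shared 10 Mbps Ethernet
# versus 100 Mbps FDDI (ignoring protocol overhead and contention).
for mbps in (10, 100):
    secs = image_bytes * 8 / (mbps * 1_000_000)
    print(f"{mbps} Mbps: {secs:.2f} s per image")

# How many corporations running full T-3 trunks does it take before a
# national backbone's combined load crosses a gigabit?
print(math.ceil(GIGABIT_MBPS / T3_MBPS))  # 23
```

On these assumptions an image that is trivial on FDDI ties up a shared Ethernet for over a second, and the combined traffic of only about two dozen T-3-class corporations already reaches gigabit aggregate speeds.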
These trends are pervasive and are changing the very foundation of telecommunications. In 1985 network traffic was 80% voice and 20% data. In 1990 the gap had narrowed to 55% voice and 45% data. By 1995 projections call for 69% data and 31% voice. What's more, in 1995 those data networks will be able to carry voice and data multiplexed together with ease. Faced with these trends, Local Exchange Carriers (LECs), the telephone companies that provide services to homes and local businesses, are gradually realizing that they must learn to become network data carriers or else see their market shares shrink and their customers' rates rise. Furthermore, in facing this challenge, they realize that they are subject to significant regulatory constraints that competitors providing private alternatives can ignore.
Rise of a Commercial TCP/IP Market
Asked in 1991 to rank their most important service problem, network managers of Fortune 1000 companies predominantly identified their need to interconnect the LANs of their corporations' scattered offices. Another survey revealed that 45% of all 1992 capital expenditures would be for LANs and LAN interconnection equipment such as bridges and routers. Telephone equipment would account for only 20%. For the first time, half of 1992 operating budgets would be spent on data communications.
In view of these trends it should be no surprise that in early 1991 three separate reports, by Forrester Research Inc., Frost and Sullivan, Inc., and International Data Corp. (IDC), found that the corporate market for connecting LANs will triple between 1990 and 1995. A Forrester report stated that $560 million in TCP/IP products and services were sold to the commercial market in 1990 and projected the 1992 figure at $1.1 billion; 1991 sales to all markets reached $1.2 billion.
                          1990 sales      Projected
Frost & Sullivan          $607 million    $1.8 billion
IDC                       $588 million    $936 million
Forrester (routers only)  $156 million    $565 million
Eric Buck, a securities analyst with Donaldson Lufkin & Jenrette Inc., remarked in May of 1991 that the router market was growing so strongly that demand exceeded supply and there were not enough companies available to serve it.
Protocols are to network transmission what programming languages are to computer software. In view of the earlier emphasis by IBM and DEC on proprietary protocols, it is ironic that what made all this dramatic growth possible was the public domain TCP/IP protocol. TCP/IP had been developed on the government-run ARPAnet during the 1970s as a protocol designed to allow computers from different manufacturers to communicate with each other across wide area networks. By the early 1980s it had become a standard for "internetworking" and had given academic local area networks the connectivity that their corporate counterparts were beginning to dream about. By 1990, 70% of the Fortune 500 had private telecommunications networks. Eighty-five percent of the Fortune 500 used TCP/IP in some portion of their networking operations. Of the remaining 15%, one third used TCP/IP corporate-wide, one third used IBM's SNA, and one third were holding out for OSI, the new European-developed international standard. TCP/IP converts included such companies as Bristol-Myers Squibb, K-Mart, and Wal-Mart.
Although TCP/IP came to dominate the market, its dominance had been expected to be only temporary. A 1989 article in Telecommunications stated: "The single most important and widespread vendor independent suite of communications protocols in commercial use today is that commonly referred to as TCP/IP. All indications are that within a few years, the single most important and widespread vendor independent suite of protocols in commercial use will be those defined by the International Standards Organization (ISO), referred to as Open Systems Interconnection (OSI)." Since then, OSI has developed more slowly than anticipated and TCP/IP, lacking any viable competitor, has become an international standard.
On June 19, 1991 IBM's Vice President and General Manager for Networking Systems, Ellen Hancock, stated that "TCP/IP has become a standard in its own right and [is] not simply an interim step to OSI." Acknowledging that many customers were adopting TCP/IP rather than OSI and that OSI would not intercept TCP/IP's growth path, she stated that IBM had "agreed to invest more than originally planned in TCP/IP, while continuing [its] OSI and SNA investments." Part of this change was also driven by the success of UNIX workstations using IBM's RISC/6000 technology. These workstations demanded TCP/IP as a means of communication between them and were beginning to create an environment where SNA-speaking mainframes could become isolated if their controllers were not promptly adapted to TCP/IP. Not surprisingly, IBM announced that this integration would be available in the third quarter of 1992.
Two weeks earlier, Digital Equipment Corp. announced that it had added TCP/IP support to its new generation of network products, which was to be OSI based. The new product line would be named Advantage Networks and would support DECnet, OSI and TCP/IP. While DEC envisioned a multi-protocol environment with OSI as the best choice, its customers showed no signs of migrating away from TCP/IP, which led OSI in worldwide sales $1.2 billion to $0.55 billion in 1991. These changes mirrored what had been happening in the marketplace during the preceding year with the development of multi-protocol routers.
Beginning in 1990 companies like Cisco, Proteon, 3Com, BBN Communications, UltraNetwork Technologies, and Wellfleet, which had developed routers for the connectionless data traffic of the Internet, began to add support for SNA, DECnet, Ethernet, token ring, FDDI, and other protocols. These products were welcome developments in the business community where network managers had been operating redundant physical networks each based on different protocols.
For the first time separate networks could be collapsed into common routes with traffic tailored to flow over reconfigurable links as network conditions demanded. The changes saved corporations money by enabling them to use (momentarily at least) less bandwidth and fewer wide area links. Network flexibility increased and downtime decreased as the average time to activate alternative network links dropped from over a minute to five seconds.
In September 1991 Wellfleet announced a new gigabit-per-second router offering four 256 megabit-per-second data paths designed to capitalize on these needs. The router offered redundancy in power supply, backbone, and interface logic. Most significant of all, failed components could be replaced without taking the router off line and interrupting network traffic. Users saw the router as the first that could accept multiple inputs from 100 megabit-per-second FDDI LANs and 45 megabit-per-second T-3 wide area links.
Meanwhile, in August, IBM, lagging, as it often had in the past, well behind the demands of the market, announced plans for a family of multiprotocol routers. Early four-port and eight-port versions of the routers, scheduled for a December 1991 announcement, would support Ethernet and token ring LANs and TCP/IP. They would route SNA via Data Link Switching, which would greatly reduce SNA overhead. In the last week of January 1992 the routers were announced for a June 1992 release. Called 6611 Network Processors, they would support DECnet, NetBIOS, TCP/IP, and Novell's Internetwork Packet Exchange (IPX). FDDI LANs would not be supported. Two higher level protocols developed in the academic Internet - Open Shortest Path First routing and the Simple Network Management Protocol - would also be supported.
The network press reported that while users welcomed IBM's entry into the internetworking marketplace, the move may have come too late. As a director of network planning and design at Pennsylvania Blue Shield observed: "few users are willing to wait a year for IBM like they did in the old days." However, an executive from Travelers Insurance said that while non-IBM shops have been investing in routers for some time, many major IBM users are just starting to examine their options. If IBM delivers its product by the promised spring 1992 release date, it may not be too far behind the curve, since about one third of the Fortune 1000 have yet to acquire routing technology.
In the 1990s and beyond, data networks and computing will merge in a way that will endanger the continued existence of any company - including IBM - that does not learn to compete in the networking arena. Al Weis, the ANS CEO, dramatized the importance to IBM of its participation in the building of the NSFnet backbone in November of 1990 when he stated that, without its participation with MCI and MERIT in the construction and management of the NSFnet T-1 backbone, IBM wouldn't have learned how to incorporate TCP/IP into the networking of its mainframes. Failure to do this would have isolated IBM's mainframes and could have endangered their continued survival.
Weis and Gordon Bell had been, in 1987, among the earliest supporters of the NREN. With IBM having been selected to participate in the building of the NSFnet T-1 backbone, Gordon Bell authored an article in the February 1, 1988 issue of the IEEE Spectrum in which, on page 57, he called for the government to select a single vendor to build the NREN and charge users for its services. When in September of 1991 ANS, as operator of the T-3 NSFnet backbone, released its plan for charging for access on November 1 of 1992, some began to see a degree of prophetic vision in Bell's three-and-one-half-year-old statement.
If IBM were going to become a significant player in the networking arena, it would need a network switch. Therefore, it was not surprising when Weis pointed out that Dr. Allan Baratz of the IBM T. J. Watson Research Center was in charge of the development of a gigabit switch that would play the major role in ANS' plans to contribute to the NREN.
IBM, long known for its interest in proprietary protocols, was beginning to embrace TCP/IP as an open standard at levels 3 and 4 of the seven layer network stack. Level 2, the data link layer, could use several options on which TCP/IP could ride. The telephony community, through the CCITT, was developing an international standard known as Asynchronous Transfer Mode (ATM - fixed length packets) to use over SONET at speeds up to multiple gigabits in fiber optic transmission. IBM's switch, named PARIS, would use Packetized Transfer Mode (PTM - variable length packets) over SONET. IBM's design philosophy for a "private integrated voice/video/data network" was articulated in an article entitled "PARIS: An Approach to Integrated High-Speed Private Networks," received on May 4, 1988 by the International Journal of Digital and Analogue Cabled Systems and published on pages 77-85 of volume one. PARIS was also designed to interact with a proprietary LAN known as MetaRing. In May of 1991 PARIS became a product and was moved from the laboratory into IBM's Educational and Multi-media Products Division.
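The ATM-versus-PTM distinction comes down to framing: ATM slices traffic into fixed-length cells, while PTM carries each message as a single variable-length packet. A minimal sketch of the difference (the 48-byte payload per 53-byte cell follows the CCITT ATM standard; the function names are ours, for illustration only, and nothing here reflects IBM's actual PARIS implementation):

```python
# Illustrative contrast between ATM framing (fixed-length cells) and
# PTM framing (one variable-length packet per message).

ATM_PAYLOAD = 48  # data bytes per 53-byte ATM cell (5-byte header not shown)

def atm_cells(message: bytes) -> list[bytes]:
    """Segment a message into fixed-size ATM cell payloads, padding the tail."""
    cells = []
    for i in range(0, len(message), ATM_PAYLOAD):
        chunk = message[i:i + ATM_PAYLOAD]
        cells.append(chunk.ljust(ATM_PAYLOAD, b"\x00"))  # pad the final cell
    return cells

def ptm_packet(message: bytes) -> bytes:
    """PTM carries the whole message as one variable-length packet."""
    return message

msg = b"x" * 100
print(len(atm_cells(msg)))   # 3 fixed-size cells (48 + 48 + 4 padded to 48)
print(len(ptm_packet(msg)))  # 1 packet of exactly 100 bytes
```

The padding in the last cell is exactly the overhead ATM accepts in exchange for hardware-friendly, constant-size switching; PTM avoids the padding but must handle packets of arbitrary length.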
The stakes with PARIS appear to be quite large. If the PARIS switch were to become extremely successful in the provisioning of high-speed networks, it could hinder the ability of the RBOCs to offer their own planned high speed data service known as Switched Multi-Megabit Digital Service (SMDS) - a service dependent on ATM-capable switching. A vice-president of Bell Atlantic put it this way: "We want at least to be able to sell SMDS services where the possibility of reasonable profit exists. If we are locked out and can sell only 'dumb' bit pipes (Level 1 service), it will be like selling hundred-pound sacks of potatoes at ten cents profit per sack." Gaining enough revenue to modernize the network upon which these services depend would depend on not being locked out of 90 to 95% of the profits. A manager at another RBOC involved with IBM in the development of PARIS as part of the AURORA gigabit testbed agreed that the dominance of such a switch could have a serious economic impact on the Public Switched Telephone Network (PSTN). He also acknowledged that by the summer of 1991 IBM had agreed to add ATM capability to PARIS. However, he was very skeptical that IBM was committed enough to the goal to ensure that the result would be successful. By late 1991 IBM was talking about an ATM "interface" for PARIS. The switch would remain a PTM-based device and gain the capability to encapsulate ATM cells inside the much larger PTM packets. What this meant in terms of the issues of importance to the RBOCs and LECs was unclear.
In early 1991 IBM announced a three-year study of its PARIS technology with Bell South Services, Inc., a Bell South subsidiary, and in October of 1991 it signed an agreement with Rogers Cable T.V. Ltd., a Toronto-based subsidiary of Rogers Communications, to use PARIS to better define the technologies needed to support interactive multi-media applications.
The timetable for the addition of ATM as an interface to PARIS is not clear. The best indication came in an IBM announcement from Geneva, Switzerland in October 1991. Further ATM development of PARIS, now renamed PLAnet, and MetaRing, renamed Orbit, would take place in La Gaude, France, with commercial release anticipated in 1993 or 1994.
IBM's long term strategic viability depends on its ability to move quickly and adroitly to stay at the leading edge of high-speed computer networking. In its partnership with MCI and MERIT it has enjoyed a center-ring seat in the development of the NSFnet, the world's largest testbed for the development of TCP/IP internetworking. As plans for the NREN matured during 1991, IBM presumably continued to think that, through its ANS subsidiary, it would inherit the backbone for the Interim Interagency NREN. It is likely that the November 26, 1991 NSF announcement of the backbone rebid (where the NSF said that it would award a new backbone contract in 1993 to at least two different service providers) came as a major shock. Still, there is nothing that would prevent ANS from being named as one of the two service providers for the new backbone. By January of 1992 Cisco had announced its intention to team with ANS in bidding on the new solicitation.
With its participation in the NSFnet backbone, MCI has gained a cooperative and very forgiving testbed in which to develop the first nationwide TCP/IP LAN-interconnect T-3 backbone, where single-application bandwidth can vary in size all the way up to and including 45 megabits per second. Because academic and research users normally gain access to the net through fees paid by their institutions and don't pay for individual use themselves, they have no choice but to be more forgiving of network glitches than their corporate counterparts would be. A bandwidth-on-demand nationwide beta test group of several hundred thousand users is nothing to scoff at. Just ask those who get network trouble reports and have seen how buggy the T-3 backbone has been since its first links were installed a year ago.
The NSFnet environment fits well with MCI's strategy of "bandwidth-on-demand - the ability for users to instantaneously call up and tear down bandwidth in increments of 64 kilobits per second." "Our strategy," says Donald Heath, MCI's Vice President of Data Networking, "is to put up a platform . . . [that lets] customers do frame relay today, SMDS tomorrow and then [move] on to asynchronous transfer mode and broadband ISDN." MCI is well on its way. As of December 1991, it is the only carrier to offer switched (on-demand) T-3 service. The rate is $127 per minute prime time.
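At the quoted rate, the economics of switched T-3 are easy to work out. A back-of-the-envelope sketch, assuming only the $127-per-minute figure above (the function name is ours, for illustration):

```python
# Cost of MCI's switched (on-demand) T-3 service at the quoted
# prime-time rate of $127 per minute.

RATE_PER_MINUTE = 127  # dollars, prime time

def session_cost(minutes: float) -> float:
    """Prime-time cost in dollars of a switched T-3 session."""
    return minutes * RATE_PER_MINUTE

print(session_cost(60))      # one prime-time hour: $7,620
print(session_cost(8 * 60))  # a full business day: $60,960
```

At over $7,000 per hour, such service only makes sense for short bursts of very high bandwidth - precisely the bandwidth-on-demand usage pattern MCI was betting on.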
According to Computerworld's annual Premier 100 rating, which ranked MCI as its number one telecommunications company, MCI spends as much on the development of software systems as it does on communications. These systems are giving MCI enormous flexibility in tailoring its services on a worldwide basis and in pushing intelligence down into the network, something that should prove useful if the NREN ever begins to bill on a usage basis.
In the late spring of 1991 MCI announced two services that, while dependent in part on what it had learned in the NSFnet cooperative agreement, were independent of its direct role in the Internet. One was a Virtual Private Data Network service using TCP/IP. Corporations could come to MCI for LAN interconnection service without being directly connected to the academic Internet. The other service was christened Infolan and offered by Infonet, a multi-national company owned 35% by MCI and 65% by 15 European and Asian PTTs.
Infolan offers TCP/IP internetworking for corporate LANs located in the United States and 115 foreign countries. Companies contracting for the service will have Cisco routers installed on their premises. These transmit over leased lines to the nearest entry points in the Infonet global network. Infonet is also a major supplier of EDI services. Infolan will enable Infonet to offer EDI to a wider customer base. Infonet also manages its customers' connections 24 hours a day using Cisco's Net-Central Station software, which is based on the Internet-developed SNMP protocol. In managing its customers' connections, unlike most LAN-to-WAN services, Infonet is offering out-sourcing to global companies with widely scattered, relatively low volume data networking needs.
As the dominance of LANs changes the technical landscape of corporate communications, major corporations are beginning to hire specialists to provide for their telecommunications requirements. This practice is known as out-sourcing. Among large corporations, most out-sourcing is focused on data network management because of management's difficulty in finding enough personnel skilled in local area networks and in network management.
The size of the movement to out-sourcing was brought home in September 1991 when General Dynamics Corp. signed a ten-year contract valued in excess of 3 billion dollars with Computer Sciences Corp. for the management of its data processing and network operations. By placing the responsibility for computer and network modernization on Computer Sciences, the move was expected to save General Dynamics large sums of money over the life of the contract. Computer Sciences, meanwhile, having taken over ownership of General Dynamics' computing facilities, was able to generate additional income by selling off the surplus capacity on its mainframes.
At the time of the CSC-General Dynamics deal, British Telecom underscored the growing importance of out-sourcing on a global scale by launching Syncordia, a new global out-sourcing company with headquarters in Atlanta, and customer service centers in London, Paris and Tokyo. Targeting a global profile of 4,000 large corporations, Syncordia will use BT's existing Global Network Services, which furnish data services to customers in 1,000 locations worldwide.
Private Versus Public Solutions
Out-sourcing is but one more example of a pervasive movement toward the provision of Private solutions that ride on top of the "dumb bit-pipes" of the Public Switched Telephone Network. Value Added Networks (VANs) provided by Enhanced Service Providers (ESPs) is the regulatory terminology for what is happening across the nation and, indeed, across the world, as computer data network traffic growing at rates of 30% or more per year meets and overwhelms a telephone and regulatory system ill prepared to comprehend what is happening to it. To the sorrow of "Aunt Millie" when she gets her phone bill five to ten years from now, the revolution in computer data networks just described may be preparing to make a mockery of the division by the Modified Final Judgment (MFJ) of our phone service into 198 local areas, or LATAs, served by Local Exchange Carriers (LECs) and long distance routes served by long distance companies known as Inter Exchange Carriers (IXCs).
For many reasons, including regulatory and rate-of-return restrictions on their lines of business, the LECs have so far failed to become viable providers of data network services. They all view Switched Multi-Megabit Digital Service (SMDS), using ATM cells over SONET carriers, as their final major opportunity. SMDS will give both single personal computers and entire local area networks the opportunity to plug into Metropolitan Area Networks with data transport rates in 64 kilobit-per-second increments up to a total of 45 megabits per second on demand. By 1995 the top SMDS speed is expected to equal the 155 megabits per second of broadband ISDN. Unfortunately for the LECs, SMDS ends at a LATA boundary. Barred by the MFJ from carrying inter-LATA traffic, an LEC must rely on an IXC to offer the inter-LATA service; the IXC, in turn, buys local SMDS access from the LEC.
Not surprisingly, most IXCs are also getting ready to do SMDS. Here the only restrictions they face are access fees to be paid to the LECs. In preparing for SMDS, meanwhile, the LECs are also inhibited by the rate setting structures of no less than 50 state Public Utility Commissions (PUCs) which, in order to protect Aunt Millie's phone bill in the short run, have tended to price such business oriented services rather high. In order to kill the LECs' offering of SMDS, all the unregulated IXCs and private bypass providers such as Metropolitan Fiber Systems have to do is set their rates 20 to 30% lower than the LECs'.
Although the price of the current regulatory mismatch of good intentions with new technologies may take a few years to be truly brought home, these trends point to Private solutions that may continue to drive technological modernization out of the PSTN, ensuring in turn the continued growth of still more Private solutions and hindering the ability of the PSTN to modernize. As the PSTN's user base is nibbled away by new wireless as well as fiber technologies, it will eventually be weakened to the point where further modernization will have to be paid for by sharp rate increases for private subscribers, ultimately endangering universal access to the network.
The stories of Metropolitan Fiber Systems and the Enterprise Integration Network offer examples of how these trends both interlink with the current development of the NREN and make for strange bedfellows as the speed of technological change overwhelms the capacity of our political and regulatory system. The likely result threatens to be the development of a two-tier communications system - a feature-rich, corporate-financed, private system and an expensive but impoverished system for those unfortunate enough not to have access to the upper tier.
Metropolitan Fiber Systems and EINET
Metropolitan Fiber Systems (MFS) was founded in January 1988 as a fiber-optic telecommunications provider running private networks in the high density business districts of Chicago, Philadelphia and Baltimore. By the summer of 1991 MFS had expanded its networks to eleven cities. In August it announced a significant new venture. Using TCP/IP and FDDI local area network technology, it would begin to offer 100 megabit-per-second LAN interconnection service in each of its city networks. In effect it was offering - with some significant differences - the equivalent of SMDS Metropolitan Area Networks (MANs) at least a year before the LECs could offer them. The network speed offered was more than twice that of SMDS. However, in the form of by-pass of the PSTN commonly referred to as cream-skimming, the private networks offered by MFS would reach only the largest and potentially most profitable subsets of urban users. The LAN interconnect service would become available first via MFS' 849-fiber-mile, 30-building network in downtown Houston - a complex characterized by a heavy concentration of oil companies and medical centers. Within the next six to twelve months these services will be extended to the remaining ten MFS networks. Late in 1991, anticipating FCC clearance, Royce Holland, the MFS CEO, said that he believed that his business could expand into a total of about 50 cities.
MFS' announcement had a missing link that was identified in conversations on the Internet mailing list known as Com-Priv during August. There a Houston area affiliate of MFS explained that it had designed its interconnect service so that the ANSnet or a similar national backbone could be used to connect its eleven metropolitan area networks into a single TCP/IP based national network. ANS Vice President Guy Almes (formerly the Director of SESQUInet, the Houston based mid-level network) attended the MFS press conference announcing the new service. While MFS would be a very significant account for ANS, signing up with ANS was by no means the only option that Holland had. The ANS backbone is embedded in twelve core nodal system locations that make up the critical links of MCI's national network backbone. Nothing would prevent MCI from offering the service directly.
Finally, at the end of the second week of November, another substantial service provider for businesses emerged. The Austin, Texas based Microelectronics and Computer Technology Corp. (MCC) announced that it would begin to spend one quarter of its annual budget of $55 million to oversee the development and operation of the Enterprise Integration Network (EINet), a nonprofit TCP/IP based network. According to Allan Schiffman, Chief Technology Officer at Enterprise Integration Technologies Corp., which is helping to design the network, EINet, architecturally similar to the Internet, would carry commercial traffic and offer better security features.
Four industrial consortiums with an emphasis on American industrial competitiveness pledged support for EINet, stating that they would use it for internal communication and encourage its use among their members and associates. These four are: Sematech, also located in Austin; the Iacocca Institute of Lehigh University in Bethlehem, PA; and the National Center for Manufacturing Sciences and the Industrial Technology Institute, both of Ann Arbor, Michigan. EINet, which will begin operation in March 1992, will offer a direct link with an X.500 directory being developed by DARPA on a pilot basis for the academic Internet. Companies will pay an annual fee to access EINet in addition to transmission charges according to the bandwidth of their connection. The fee is scaled to the user corporation's annual gross revenues:
Revenues                     Annual Fee
Greater than $500 million       $60,000
$50 to $500 million             $30,000
Less than $50 million           $10,000
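The schedule above amounts to a simple three-tier lookup. A sketch (the tier boundaries and amounts come from the published table; the function name is ours, and how the $50 million boundary itself is treated is our assumption, since the table leaves it ambiguous):

```python
# The EINet annual access fee schedule, expressed as a tiered lookup.

def einet_annual_fee(gross_revenue: float) -> int:
    """Annual EINet access fee (USD) by corporate gross revenue (USD)."""
    if gross_revenue > 500_000_000:
        return 60_000
    if gross_revenue >= 50_000_000:  # boundary treatment is our assumption
        return 30_000
    return 10_000

print(einet_annual_fee(2_000_000_000))  # 60000
print(einet_annual_fee(100_000_000))    # 30000
print(einet_annual_fee(5_000_000))      # 10000
```

Note that the fee covers access only; transmission charges, scaled to the bandwidth of the connection, are billed separately.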
Additional ripples from the EINet announcement were felt when, during the second week of December, plans for a Factory America Broadband Network were announced at the Agile Manufacturing Conference in Lake Buena Vista, Florida. The net was promoted as a part of the critical infrastructure needed "to ensure that U.S. manufacturers remain competitive 15 years from now." A six-month-long $500,000 study by the Iacocca Institute called for the establishment of the network, possibly as a part of the EINet. Finally, in January, Paul Huray of the University of South Carolina announced a version of an industrial competitiveness network. In early 1992 business use of wide area networks appeared to be growing even more rapidly than academic use.
Many observers, in view of the size of the ANSnet backbone and its strong corporate orientation, expected the EINet backbone award to go to ANS. On January 8, 1992, in a major surprise, MCC announced the award of the backbone to Uunet Technologies of Falls Church, Virginia. The contract value on an annual basis was not announced. Further questions revealed that Uunet would be paid only a percentage of the revenues generated by each EINet member. If EINet is very successful, its success is likely to strengthen the CIX (see page 13 below) and help level the playing field. This outcome is one scenario that could lead to the CIX backbone being upgraded in such a way that it emerges as a true competitor for the NSFnet-ANSnet backbone. It would not be surprising to see a significant number of mid-levels switching to the CIX backbone. Some observers predict that ANS will eventually either have to join the CIX and give up the commercial charging plans discussed in Part Two of this study, or find itself completely isolated and largely without customers.
With the arrival of what are being called the Public Data Internets (ANS, PSI and Uunet), corporations may now use not only the academic Internet for enterprise networking but also a host of purely private solutions. MFS, EINet and Infolan either have opened or are about to open private, Fortune 1000 focused efforts not directly connected to the academic Internet. Backers of the privatization of the NSFnet have expressed the hope that enough corporations would join the network to enable large economies of scale and profits that could be used to cross-subsidize academic uses of the higher speed, more expensive network. A major question, in view of these new arrivals in the marketplace, is how many corporations will opt for this solution instead of a private TCP/IP, non-Internet solution, where they are likely to find more options to ensure the security of proprietary data, greater reliability, and more services tailored specifically for their business needs.
What Is an Academic Network Doing in a World Like This?
The Evolution of ANS
Meanwhile, in the academic Internet, events during 1991 moved at a very rapid pace as ANS installed what for all practical purposes was a privatized T-3 backbone, a portion of which it provided to the NSF to connect the 32 mid-level networks at 16 sites around the country, and the remainder of which it worked assiduously at marketing to corporate users. During 1991, ANS moved somewhat imperiously into a culture that it did not understand. This culture was based on a tradition of openness and cooperation that dated back more than twenty years to the beginning of the ARPAnet. By emphasizing voluntarism and sharing, the Internet had developed as a very fertile testbed for new technologies and had created a whole series of networking standards affecting all layers of the protocol stack - standards based, as Lotus Development Corp. founder Mitch Kapor observed, on what worked best for the community, not on what one stood to profit by.
A tradition of the Internet community was its willingness to discuss issues affecting it by means of open electronic mail lists, where statements made or questions asked would be reflected in writing, dated and time stamped, to hundreds and sometimes thousands of other network users. Performance Systems International (PSI) had started one such mail list in the spring of 1990. Called Com-Priv, it was devoted to a discussion of the commercialization and privatization of the Internet. ANS read the messages sent to the list but refused to participate otherwise in the conversation.
In a parallel effort, from November 1, 1990 through April 15, 1991, the author, in his role as NREN Assessment Director at the United States Congress Office of Technology Assessment, moderated a restricted on-line conference that included over 100 policy makers involved in the creation of the NREN or in the issues that its creation raised. ANS received the traffic from this discussion but, until some questions about its role were raised in a way that it felt impossible not to answer, it did not join in. During the first two weeks of January, it provided some useful input before falling silent again. By the time an ANS technician began a regular and quite skillful series of replies to the Com-Priv list on behalf of his employer in June of 1991, the network community had decided that it neither trusted nor liked ANS. For the community this was a serious matter, because the general expectation was that ANS would inherit complete control of the backbone - and hence the network - when the cooperative agreement between the NSF and MERIT, which had subcontracted backbone operation to ANS, expired on November 1 of 1992.
In the meantime ANS attended countless academic and professional association meetings where it described its role as one of building an American research and education networking infrastructure second to none in the world. While some wondered whether investments designed to make the network easier to use should have priority over increases in speed, CEO Alan Weis emphasized that ANS would build faster network infrastructure, indicating a desire to move to 622 megabit-per-second backbone speeds by late 1992 or early 1993.
To Privatize or Not to Privatize? The Great Backbone Rebid Announcement
On November 26, 1991 the NSF announced its decision, taken the preceding Friday with the blessing of the National Science Board, to rebid the backbone. The decision was clearly arrived at reluctantly on the part of the NSF, in the face of strong opposition by FARNet members and Educom's National Telecommunications Task Force to the NSF's desire to complete the privatization of the network by giving control of the backbone to MERIT and ANS at the end of the five year cooperative agreement on November 1, 1992. The NSF's about-face left insufficient time to plan a transition to a new cooperative agreement by the time the old one expired. Therefore the NSF stated its intent to extend the current cooperative agreement with MERIT at the same rate of $10,000,000 per year for up to another 18 months. ANS would be left in its position of uncontested power until early 1994, when the cutover to new backbone operations could be made.
The agreement, which will be a complicated one to implement, called for at least two network service providers and for the contracting of network routing authority to a neutral third party. (MERIT currently has routing authority.) If routing authority were left in the hands of either service provider, it could be used to the detriment of the other provider.
The NSF mystified some observers by stating that it expected the costs for provision of the new higher speed backbone (presumably 622 megabits per second) to decline dramatically in comparison to the old one. In year one it anticipated paying $6 million, in year two $5 million, and in year three $4 million. While it did not explain why it anticipated such a bargain, some observers noted that some of the current problems with ANS appeared to stem from the fact that MERIT and its partners had apparently made a below-cost bid for the network and appeared to be able to extract significant concessions down the line for having done so. They wondered if the NSF were looking for a repeat scenario. They also wondered whether the figures cited by the NSF would effectively freeze out a successful bid by AT&T or any of the RBOCs, which as regulated carriers are severely constricted in how low they can bid on such contracts. Was the NSF's desire to get the best monetary deal tantamount to ensuring that the NREN would be deployed as a private network, and was this in the best interests of national telecommunications policy? These were questions without ready answers.
Among FARNet's November 1, 1991 recommendations to the NSF on the provision of backbone services was a plea that the mid-levels be given a choice of backbone providers. The promise made by the NSF to provide multiple providers presumably could mean dividing the network into Eastern and Western halves. However, under this scenario mid-level networks would, by virtue of geography, be likely to have a backbone provider thrust upon them. Giving them a real choice could mean that the U.S. Government would find itself in the awkward situation of providing two backbones. A third possibility exists. For $3,200,000 per year (or $100,000 per network) the 32 mid-levels could connect to both the East and West coast CIX interchanges. This would provide aggregate T-3 bandwidth to the mid-levels. The NSF could then decide whether to subsidize clear channel T-3 access to ANSnet for the nation's top 25 research universities. The NSF's decision is expected to become known when the new backbone solicitation is released on May 4, 1992.
Uneven Foundation for the Commercial Market
While the NSF decision to rebid the backbone may eventually help to level the playing field, at the beginning of 1992 the only reasonable conclusion is that it is not level. Two small commercial national providers (PSI and Uunet) and a mid-level (CERFnet) run by a commercial company (General Atomics) have formed an alliance called the Commercial Internet Exchange (CIX). They have interconnected their T-1, 1.5 megabit per second backbones, forming a commercial backbone that connects a small but growing fraction of the commercial and NSFnet community. During the first quarter of 1992 the CIX gained three new members: BARRnet (a California based mid-level), U.S. Sprint, and Unipalm of Cambridge, U.K. While the CIX remained a T-1 network with only one interconnection point, it nevertheless connected more than 3,000 commercial firms - including the top 20 computer companies in the US.
Against the CIX in early 1992 stands ANS, another small company. But ANS provides service to the NSFnet backbone, carrying 28 times the traffic of the CIX backbone and connecting all 32 mid-level networks covering the entire United States. Thinking in terms of broad connectivity, it is as though ANS were positioned to compete to become a national airline with routes that included Boston, New York, Washington, Atlanta, St Louis, Chicago, New Orleans, Pittsburgh, Philadelphia, Detroit, Denver, Salt Lake City, Seattle, San Francisco, and Los Angeles, while the CIX served only Boston, New York, Washington, Chicago, Denver, San Francisco and Los Angeles.
Figure 1 on the next page shows the current T-3 ANSnet backbone. The heavy black lines are the "core" nodes and are a part of the MCI national backbone. The names in ellipses are the 16 backbone "end" nodes of the NSFnet. RTP stands for the Research Triangle Park, where ANS connects CONCERT, the North Carolina State Network. West Lafayette identifies ANS' connection of Purdue University and Blacksburg its connection of the Virginia Polytechnic Institute. What the map does not make clear is that two of these three ellipses represent single campuses while the remaining 16 represent 32 mid-level networks with over 1000 attached institutions.
Those who say the playing field is not level point out that ANS has behind it two major corporate sponsors. One of them, IBM, appears to be investing in the network as a major strategy to revive sagging parts of its own business. The other, MCI, seems to be using the network as a testbed in order to develop and market, before AT&T and the RBOCs are able to, data services that are projected to become dominant within the telephone industry. PSI and Uunet, the competitors of ANS, are very roughly the same size. However, they do not, by virtue of a Federally-funded cooperative agreement, run a backbone connecting over 1,000 institutions, costing their parent companies, by some estimates, as much as fifty to sixty million dollars, and bringing them an annual operational cash flow of ten million dollars.
Possession of the major network backbone is strategically important for an additional reason. The network is based on the operation of trunks, known as bit pipes, of various capacities at fixed cost regardless of the traffic that goes through them. Traffic has never before been metered. Whether or not it should be is a matter of controversy within the research and education (R&E) networking community. Regardless of whether metering traffic would be a wise thing to do, from a technical point of view, the potential for installing metering in the next year or so is small. Therefore economic viability in the network community is a function of how many customers a provider can attach at fixed bandwidth.
The size and direct reach of the backbone connecting customers to each other, if all other costs were equal, would likely be a determining factor in the decisions of customers in search of a network provider. Since many network costs tend to remain fixed no matter how many customers a provider acquires, economies of scale become significant. If ANS can connect a sizable number of new organizations to the network, the foundation provided by its larger infrastructure will make it a difficult force to compete against.
Under these circumstances PSI and Uunet and the mid-levels must hope to compete with ANS primarily in terms of cost. Since, courtesy of the Federal government, ANS has a larger immediate cash flow, and, courtesy of its corporate sponsors, a core backbone network embedded within MCI's network, not to mention 32 mid-level networks with over 1000 customers dependent on it for connectivity, its competitors feel the contest is uneven. From the point of view of the mid-levels matters are even worse, since all three of their commercial competitors (ANS, PSI, and Uunet) can act both as long distance and local carriers. In other words they can provide both local and national connectivity by linking a customer directly to a backbone instead of to a mid-level.
"Appropriate use" of the government funded NSFnet backbone has been a driving factor behind the changes taking place. Commercial traffic is banned. While well over 100 commercial companies are on the network, they may not use the network to do business with universities or each other. Appropriate use is defined as communication with other network members about research and education. With most of the network's commercial members having products or services of interest to the network's non-commercial members, this creates many grey areas. For example, the manufacturer of a new workstation may send a new product announcement to a mailing list discussing engineering advancements in workstations (informing) but not to individuals (selling). To cite another example, the publisher of a major legal database makes its wares available via the network free of charge to law students, while law firms may not use the network to retrieve material even for a fee. Although commercial use restrictions are enforced primarily by an honor system (the NSF does not inspect packets), there are so many grey areas that all parties are eager to be rid of the restrictions.
Unfortunately the dual (on the one hand government subsidized while, on the other hand, commercially available) backbone run by ANS creates another problem of imbalance among the network players. With commercial traffic only legitimate over certain parts of the network, commercial customers of the CIX, or of the mid-levels, wishing to do business with academic institutions reachable only via the NSFnet backbone, would need to route their traffic via the ANSnet backbone - something for which ANS has established charging procedures. In support of its charging procedures, ANS announced that commercial customers signing up at T-1 rates will pay contributions of almost $5,000 per year to a "national infrastructure pool" designed to support research and education use of the network. Such a connection has a total cost of about $75,000 per year.
Analysis of the likely annual corporate contribution to this infrastructure pool reveals that it would be unlikely to play a major role in cross subsidizing academic use. The total number of corporations able to afford such costs would be unlikely to exceed one thousand. Of these, about 100 to 200 are already connected to the network. Many are unlikely to connect because of security concerns. (These may opt for TCP/IP based services isolated from the national network.) Given that PSI and Uunet charge considerably less for the same connection, and that Union Carbide and Abbott Laboratories, ANS' most recent customers, purchased 56 kbs connections, estimating 100 customers per year for ANS at T-1 speeds would seem exceedingly generous. This would generate less than $500,000 per year for the "national infrastructure pool." Considering that the NSF's current contribution to the network on behalf of its research and education users is about $18 million per year, one would not expect $500,000 to go very far.
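The pool-revenue estimate can be checked with a quick calculation (the 100-customer-per-year figure is the study's own deliberately generous assumption, and the ~$5,000 contribution is the announced per-customer T-1 figure):

```python
# Estimated annual revenue of ANS' "national infrastructure pool"
# compared to the NSF's direct support, using the figures in the text.
CONTRIBUTION_PER_T1_CUSTOMER = 5_000   # almost $5,000/year per T-1 commercial customer
NEW_T1_CUSTOMERS_PER_YEAR = 100        # generous estimate from the text
NSF_ANNUAL_SUPPORT = 18_000_000        # NSF's current contribution for R&E users

pool_revenue = CONTRIBUTION_PER_T1_CUSTOMER * NEW_T1_CUSTOMERS_PER_YEAR
share_of_nsf = pool_revenue / NSF_ANNUAL_SUPPORT

print(f"Pool revenue: ${pool_revenue:,} per year")      # $500,000 per year
print(f"Share of NSF support: {share_of_nsf:.1%}")      # 2.8%
```

Even under the generous assumption, the pool would amount to under three percent of what the NSF was already spending, which is the sense in which $500,000 "would not go very far."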
As the network moved towards the termination of the cooperative agreement with Merit in November of 1992 and what participants assumed would have been full operation by ANS, two camps developed. On the one hand ANS was telling each mid-level network that it could sign agreements that would permit commercial service over ANSnet for an annual fee of approximately $80,000 per year at T-1 speed and $325,000 a year at T-3 speed. (The amount that each mid-level would have to pay ANS for its connection to the backbone depends on how many commercial customers it has and at what bandwidth they connect. A mid-level with 15 commercial customers could be forced to pay about twice the basic $80,000 connect fee.)
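The doubling effect described in the parenthetical can be back-calculated, though with a caveat: ANS never published its per-customer formula, so the implied surcharge below is an inference from the single example given, not a stated rate:

```python
# Implied per-customer surcharge, inferred from the example in the text:
# a mid-level with 15 commercial customers pays roughly 2x the $80,000
# basic T-1 connect fee. (The actual ANS formula was not published.)
BASE_T1_FEE = 80_000
EXAMPLE_CUSTOMERS = 15
example_total = 2 * BASE_T1_FEE  # ~$160,000 for that mid-level

implied_surcharge = (example_total - BASE_T1_FEE) / EXAMPLE_CUSTOMERS
print(f"Implied surcharge per commercial customer: ${implied_surcharge:,.0f}")
# roughly $5,333 per commercial customer
```

Interestingly, this inferred per-customer increment is of the same order as the ~$5,000 "infrastructure pool" contribution ANS announced for direct T-1 commercial customers.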
On the other hand, at the beginning of August the CIX began to invite TCP/IP service providers to join it for a one-time fee of $10,000 at T-1 speeds plus the annual cost of a circuit to either the east or west coast CIX connection point. Those who joined would pledge to forgo settlement fees for traffic passing over each other's backbones. This condition put the approach of the CIX into stark contrast with that of ANS, which was anticipating significant income from what would, in effect, be mid-level settlement fees.
As if this were not complex enough, the decision to rebid the backbone created what participants assumed would be only a hypothetical problem. Most had assumed that the ANS charging procedures announced in September would not go into effect because of the decision to rebid. With the end-of-the-year dispute over whether the mid-levels could be forced to sign connectivity agreements, the ANS pricing agreements no longer appear to be moot. Whether the NSF has the power to prevent ANS from enforcing them is not clear. Certainly the problem is worth examining to understand the potential chaos into which the network may be heading.
If ANS is able to insist on imposing these commercial use fees, the mid-levels could attempt to escape them by joining the CIX. Unless the CIX paid an annual gateway fee to ANS, the mid-levels who did join the CIX would not be able to reach those who remained with ANS. Such a fee would be directly proportional to the number of commercial users within the CIX member networks. Some observers believed that it could have been as much as a million dollars per year.
The CIX was saying to ANS: join us. But for ANS, because of the greater cost of running its higher speed backbone and its commercial ambitions, the prospect of joining the CIX was unthinkable. At the same time the CIX let it be known that it would not accept ANS' stiff payment provisions for the right to send traffic over its backbone. By September 1991 two rather hostile camps had developed. "Flaming" on the Com-Priv List grew so intense that PSI's Bill Schrader and ANS' Al Weis felt compelled to call a truce, saying that ANS and the CIX had agreed to negotiations. By January of 1992 the acrimony had reappeared, and it was clear to all that unless someone took forceful action, the CIX and ANS would likely refuse to interconnect if the NSF let ANS' pricing agreements stand and approved its continued privatization of the network.
At Risk: The Research and Education Users
Privatization would put education and research users at risk by raising costs to pay for higher bandwidth and the reliability of a production quality network. Disruptions in service could occur due to the balkanization of the network by a possible dispute between the CIX and ANS and/or disruptions of the mid-levels. An increased possibility of the imposition of metered service could mean further disruption of a network culture based on the flexibility to experiment. Privatization could also make the improvements of the NREN vision less likely to happen, because the commercial sector would pay for these only if there is money to be made from providing them.
Before these conditions became so starkly apparent, the NSF had seemed willing to gamble that commercial growth in the network would be so great that costs would be driven down by economies of scale. Furthermore it assumed that profits would be so sizable that they could be used to cross subsidize the academic users. Lower costs and higher profits would buy the NREN vision and ensure that users would be able to afford to stay on the network. Unfortunately, while the hope is alluring and the price of T-1 connections may be declining, the overall cost of network use is unlikely to decline as new applications encourage higher speed and hence more expensive connections. Even more important, the creation of commercial TCP/IP internets like EINet, not to mention choices expected to be available from the public switched network within the next 12 to 24 months, means that the commercial customers that the academic Internet was hoping to attract are likely to have multiple means of wide area networking and are much less likely to be attracted to a privatized NSFnet.
The Policy Dynamic in 1991
In attempting to reach a final decision about the fate of the backbone, the NSF faced a very complicated choice. Examination of the context for the decision shows that the vision for a National Research and Education Network emerged out of a successful two-decade-old Federal policy of supporting the development of computer networking technology. This policy provided a neutral environment where network development could be driven by two separate but loosely synergistic policies. First, the network was treated as an enabling technology to link academic communities to each other and to remote computers. In doing this it became a tool that could be used to facilitate research and education. Second, the network facilitated technology transfer to the private sector by serving as a testbed for the development of new communications protocols and the associated computers and programs necessary to exploit them. With the network paid for largely by Federal funds and under the control first of DARPA and then of the National Science Foundation, the entity that provided the money was able to balance the goals of technology transfer and enabling technology for improving research and education should they ever threaten to conflict. With the expectation of rapid network privatization in 1991, ANS' needs began to dominate the decision making process, making this balancing act increasingly difficult. The process that is now unfolding appears to be placing academic and research users of the network in an environment structured for commercial needs and driven by commercial costs.
Despite the NSF's decision to rebid the backbone, its bias towards ANS appears to remain so strong that the building of a high speed network designed for academic and research use may yet take place as an overlay of a high speed network crafted to suit the needs of the Fortune 1000. Network policy makers should be using the time before the award of the new backbone contract in April 1993 to ascertain whether the two needs can be overlaid on each other in any cost effective manner or whether research and education networking needs are sufficiently different as to make their provision by a publicly funded networking entity desirable.
The Dichotomy of Mixed Commercial and Academic Use
Corporate uses of a commercial NREN would be significantly different from those of academia. The difference is highlighted by the list of the twelve most popular ISDN applications compiled by the North American ISDN Users' Forum in October 1991.
1. Videoconferencing
2. Telecommuting
3. Telephone and workstation integration
4. Multi-point screen sharing
5. Customer service call handling
6. ISDN access in geographically remote locations
7. Image communications
8. Automatic number identification
9. Multi-document image storage and retrieval
10. Multi-media services
11. Multiple ISDN telephones on a single BRI loop
12. ISDN interface to cellular, mobile radio & satellite systems
Of the twelve applications only the seventh, ninth and tenth are of significant importance to the academic and research community. Nowhere to be seen are such NREN dreams as multi-media electronic mail and access to vast shared library databases by intelligent user agents known as knowbots. In an environment of low communications costs and vast bandwidth, the academic community has shown some interest in video conferencing. However, such an interest would fall far behind the ones just mentioned. Finally we find no mention of supercomputing, the original raison d'etre for the NREN.
One of the problems of overlapping an academic network onto a network designed for and driven by corporate use is that the two are very different worlds and use information in very different ways. The corporate world's data is generally proprietary. While data does need to be shared within a corporation in controlled ways, management must guard, at all costs, against leakage of proprietary data to the outside. To cite but one example, security concerns would now prevent a corporate supercomputer from being placed on a publicly available network. It is true that companies increasingly will need to communicate across corporate boundaries in order to use computer networks to buy and sell goods and services among themselves. Known as Electronic Data Interchange (EDI), such services will be tightly controlled by the companies that use them and will be only marginally useful to the academic community.
In contrast to the corporate world, the academic and research community will use the computer network to share information. Within this community, the network benefits research and education in direct proportion to the extent that it enables sharing of information across institutional boundaries. Network resources created to protect and guard proprietary information are not high on the list of academic needs. By the same token, network wide white and yellow pages and universally accessible databases are not high on the list of corporate needs.
Of course, given enough Federal subsidies, the academic network planners can buy, or have created by the corporate network proprietors, anything that the academic world does want. However, the consequence of overlapping networks with such different needs is that the cost of overlaying academic tools on a network driven by the needs of corporate users may be far greater than the cost of building these tools into a network designed for academic users from the ground up. Moreover, given our chronic budget deficits, the high costs are likely to mean that the academic community may have to do without the tools it desires.
Under such conditions, it seems likely that a significant percentage of the NSF's NREN appropriations could yet be spent in support of the higher costs of a faster commercialized network. In other words, the NSF could find that providing the same level of support to network users in pursuit of science and education policy goals costs significantly more, because of the changes brought to the network by privatization.
The Mid-levels' Role as the "Glue" for the NREN in the Face of the Changing Marketplace
The NSFnet is made up of 32 mid-level networks connected to each other by the national backbone. These networks provide connectivity to over a thousand educational institutions, government agencies and corporations. If ANS' commercial pricing agreements are allowed to stand, the mid-levels' cost of access to the backbone would increase, as would their operating costs. The NSF could provide fresh subsidies to the mid-levels to offset such costs. This would be likely to anger PSI and Uunet, which would see such subsidies as yet another form of subsidization for ANS. In the absence of such subsidies, the mid-levels' most likely significant source of new revenue to offset these costs (and indeed their best chance for financial viability) would come from connecting new corporations to the network. Unfortunately they can no longer count on what, as recently as 1989, was a monopoly in providing these connections.
Their monopoly is gone because the mid-levels are no longer the only "glue" between the local campus networks and the backbone. The old tripartite division of the network into a backbone and campus networks connected by the mid-levels is no more. ANS, the backbone operator, is now competing with the mid-levels in selling direct connections, as are PSI and Uunet, the smaller commercial competitors of ANS. With such competition it is unlikely that mid-level income from new commercial connections will increase significantly.
The mid-levels provide perhaps 90% of the connectivity for academic and research users of the Internet. The NSFnet backbone connects not only the 32 mid-levels but also, through the Federal Internet Exchange (FIX), the NSI backbone with about a dozen NASA laboratories and the ESnet backbone with a similar number of DOE national laboratories. MILnet, as the open portion of the Defense Data Network is known, connects a larger number of military bases here and abroad.
If a scientist doing work for NASA, DOE or DARPA is not located at one of the respective labs or military bases, he or she will generally be found at one of the 1,000 institutions connected to the NSFnet and will obtain connectivity to other researchers and to sponsors at NASA, DOE, or DARPA through the NSFnet mid-level, the backbone, and the Federal Internet Exchange gateway at the appropriate site or sites on the other three networks. Therefore, even to the mission agencies, if the NSFnet is to become the NREN, and is to fulfill the goal of making the network available to scientists and educators, the continued existence of the mid-level networks, in the absence of some other means of affordable connection for their client institutions, is of critical importance.
However, current market forces are bringing the role of the mid-levels into question. Barring a government decision to massively subsidize the attachment of secondary schools and two- and four-year colleges, the market for new network growth is predominantly commercial. The mid-levels are predominantly run by academics, most of whom have no knowledge of the commercial market and its technical demands for standards of service. If the mid-levels are ever to become independent from NSF subsidies, they must continue to be able to attach new commercial clients - something that throughout 1991 they had been able to do, on the whole, more cheaply than ANS and PSI. (Some say that Uunet's prices are generally quite low because they assume considerable customer expertise and provide relatively little customer assistance. But others assert that Uunet has now begun to offer a full range of services.)
One of the problems to be solved by any plan for an orderly transition to a completely commercial market will be the need to decide under what conditions the mid-levels are or should be sustainable. The mid-levels' viability is currently made more difficult by ANS's aggressive pursuit of the connection of the same commercial firms. With the T-3 backbone in place as an overlay of the MCI national network backbone, ANS has considerable flexibility in attaching its commercial clients. Therefore it will be able to compete more effectively with the mid-levels on the basis of cost. The mid-levels' problem is further compounded by the fact that PSI and Uunet are also selling connections to commercial clients.
What does this situation mean when viewed from the framework of the policy goal of making the network available as a tool for the use of researchers, educators and students? If one or more mid-levels should collapse, how can one predict the impact on the users? If the policy goals that Congress chooses for user connection to the network include only the largest of our universities, the answer is that the impact on these users would probably not be serious. Such institutions would buy high-speed connections from ANS or another commercial provider. The next tier of universities would be assisted by the NSF in doing the same thing. However, at some point the NSF's ability to help would run out.
On the other hand, if Congress desires to promote broad access to the network, the value of the mid-levels begins to become apparent. Aided by approximately 40 million dollars in NSF subsidies over the past five years, the mid-levels have installed their own infrastructure that allows them to bring low cost, low bandwidth service to small schools and to small, technically oriented businesses. ANS does not offer network connection at less than 56 kbs.
Unfortunately for the small clients of PSI and Uunet, which do offer slow speed connections, all of their clients depend on connectivity through the ANS backbone to reach much of the total network. Consequently, the ability of PSI and Uunet to connect these institutions would depend on two things: first, on the willingness of ANS, a competitor, to allow them use of its backbone to offer them access to the entire network; second, on their ability to connect them as cheaply as the mid-level had done - something likely only for those located very near to one of their backbone nodes.
Consequently, unless a mid-level were taken over and its operation preserved largely intact, its demise would very likely harm its low end users. What is not possible to predict is precisely where the line would be drawn between those who could afford to connect to a commercial provider and those who could not. A reasonable estimate would be that at least 40% of the more than 1000 institutions connected to the NSFnet would be at risk if their mid-level collapsed. Until the commercial market matures to a point where it can offer ubiquitous, low cost, low speed connections to the network, the conflict between the goal of technology transfer for commercial development and the goal of expanding the network as a tool for the improvement of research and education is likely to continue.
No Clear Financial Picture by Which to Project the Financial Strength of the Mid-levels
In looking at the evolution of the mid-levels in a privatized market, policy makers could hope to be better guided in their decisions on a case-by-case basis if they had an accurate picture of the finances of each mid-level. Readily available summary histories of what kinds of grants for what purposes had been made by the NSF to each of the mid-levels on an annual basis would make it easier to see which mid-levels were fiscally the healthiest and to draw conclusions as to why. With such a picture it would be easier to make decisions about how to apply continuing subsidies with maximum effectiveness. Unfortunately this data has never been separated from individual grants and aggregated by the NSF into a form that would make such conclusions easy to draw.
In meetings in January and May of 1991, NSF officials explained that their relationship with the mid-levels has been driven by proposals originating from the mid-levels. That is to say, they do not assume to know what the communications needs are in the territory of any given mid-level and issue a request for proposals to fill them. Instead the mid-levels come to them with proposals for cooperative agreements to connect new groups of colleges or to build new and faster links between existing schools. (Colleges and universities wanting to connect to the network may apply directly to the NSF Connections Program or may negotiate their connectivity through a mid-level.)
The NSF has the proposals initiated by the mid-levels reviewed. If they are judged to be reasonable and money is available, they are funded. Records of what has been funded are kept by the NSF. However, such records have apparently never been systematically categorized and audited for each mid-level and according to the purpose of each grant. Doing this would probably be useful. Nevertheless, because the mid-levels are so diverse in their organization and management, it is difficult to predict exactly what would be learned.
For example, the 32 mid-levels fall into several groups: state networks (North Carolina, Pennsylvania, Ohio, Michigan, Texas, New York, Minnesota), very large regionals (SURAnet, Westnet, Midnet) and smaller networks focusing on parts of a state or parts of several states (BARRnet, CERFnet, NEARnet, JVNCnet, CICnet). The latter two groups are usually the result of the formation of university consortiums. For the most part the state networks are supported by state governments and are well grounded in the local economy of the state. The remaining networks are in various stages of financial independence from NSF subsidies. Some (NEARnet, CICnet, CERFnet, NYSERnet) are run by commercial operators. Others are operated by collections of academic computing center types who have other duties (SESQUInet, Michnet, BARRnet), and still others by professional staffs employed by the network and located at a university campus (JVNCnet, Northwestnet, SURAnet, Westnet).
Mid-Levels in Search of New Markets
NSF funding for the mid-levels has been uncertain. Since 1986 the NSF position has been that they were expected to become financially independent within three to five years. They have been responding in different ways with varying degrees of effectiveness. At the same time, as plans for an NREN have matured, the NSF has pushed them to connect new sites. As a result, while they have expanded further, the very act of expanding has rendered significant portions of their budgets subsidy dependent.
As Congress decides the scope of the network services that it wishes to underwrite, it will need to consider whether the mid-levels are to have a role in delivering them. Meanwhile the mid-levels, interested in their own survival, have been attempting to define new patterns of service. Some have remained low cost providers, while others are following a model described by Richard Mandelbaum of Rochester University as limited network providers. According to Mandelbaum, market maker and full service provider are two other evolutionary possibilities. An assessment of which of these roles are the most viable could prove helpful to Congress in deciding how to serve the interests of research and education users.
Some mid-levels (low cost providers) assume considerable technical sophistication on the part of their customers. Consequently they provide cheap connectivity and minimal technical assistance. Still others (limited network providers) are characterized by low prices but also include an emphasis on strong network operations services. Such services may not include a seven day a week, 24 hour-per-day network operations center, but are likely to include monitoring and trouble shooting of customer connections. While the limited network provider's customer base is broader than that of the low cost provider, it is not likely to include turnkey services that would be demanded by commercial operations increasingly interested in out-sourcing the operation of their networks.
The third pattern is that pioneered by PSI and followed by ANS and Uunet: full service providers offering a broad array of consulting services over an extensive geographic area. Organized and run as a business with attention to long range planning, strategy and marketing, the full service provider has the potential for large economies of scale. Unfortunately such economies are so far unrealized, and the prices charged remain significantly higher than those of the competing low cost or limited network providers.
Critics see some grey areas in the distinction between the limited and full service provider models. Increasingly many mid-levels are extending their range of services to the point of matching or exceeding those provided by the full service providers. Increasingly, the most significant point of distinction seems to be the national scope of the full service provider.
While most mid-level networks might want to provide a full scale national service, they generally lack the necessary capital resources. Their long term viability may depend on their ability to:
1. form buying cooperatives, comprised of a number of mid-level networks, that could create pressures for competitive pricing among the telephone companies.
2. continue to stimulate the demand for network services.
In their immediate geographic areas, the mid-levels can be expected to know a broad section of customers and potential customers, among whom they quite literally live, far better than the full service providers who must cover the entire nation. Consequently the mid-level as market-maker is in an excellent position to market the concept of networking. The mid-levels could reach small businesses, high schools, community colleges, undergraduate colleges and other public institutions and sign them up for low-cost services. Furthermore the mid-levels, as market-makers, could become involved in areas of the information technology business not currently attractive to the commercial sector, such as network yellow and white pages. In other words, the market-maker's function is to fertilize the marketplace at the grassroots level by bringing into the market those customers that are too small or have needs that are too specialized to be attractive to the three current full service providers: ANS, Uunet and PSI. Meanwhile ANS, Uunet and PSI are left in the very desirable position of serving the largest and likely the most profitable customers.
The viability of a market-maker role for the mid-levels depends to a great extent on the outcome of their relationship with ANS and the providers of the CIX backbone (PSI, Uunet, and CERFnet). If they can get adequate backbone connectivity, provide good service to their customers, and are not dramatically undercut by their commercial competition in the cost of hooking commercial users to the network, they should be able to survive. If the network is to further its education and science policy goals by extending connectivity downward to smaller and less wealthy institutions, then until and unless the commercial service providers show an ability commensurate with that of the mid-levels to serve such clientele, the survival of the mid-levels seems to be desirable.
However, the commercial telecommunications market may mature in such a way that it could provide cost-effective connectivity for these institutions a year or two hence. SMDS and frame relay are new technology options that should permit the RBOCs to also provide cost-effective connectivity for the institutions connected by the mid-levels. If the network evolves in such a way that interconnection with the backbone is affordable, and if these other services are priced competitively by state PUCs, then cost-effective alternatives will almost surely arrive. In the meantime survival of the mid-levels seems to be the most cost-effective way for the network to reach the largest possible education and research community.
Mid-Levels in Search of New Relationship with NSF: On the Horns of an Intractable Dilemma?
During the first three quarters of 1991, while ANS made clear its intentions to pursue corporate clients, PSI and Uunet continued to build up national backbones and sign up their own commercial clients. During this time, the mid-levels became very interested in what their relationship to the NSF and the rest of the network community would be after the backbone agreement with MERIT expired on November 1, 1992.
In September of 1991, ANS made its plans known through the release of what it called the ANS Plan for Commercial Services. In offering the mid-levels backbone connectivity through what it calls a Connectivity Agreement, ANS announced that it would charge in part according to the number of commercial clients each mid-level has: the more commercial connections, the higher the charge. The pricing formulas released by ANS in September 1991 were complex and not immediately understood by the parties involved.
Not surprisingly, ANS emphasized the importance it places on the commercial market by offering the mid-levels two variations on the basic Connectivity Agreement. In the first, called a Gateway Attachment Agreement, the mid-level pays ANS a fee for each of its commercial clients and in return is allowed to send their traffic over the ANSnet commercial backbone. In the second, called a Cooperative Agreement, the mid-level turns over control of all its commercial clients to ANS in return for a bargain attachment price.
Access to the backbone had been a free good for the mid-levels, provided courtesy of the NSF. With the publication of the ANS Plan all mid-levels realized that if the backbone were not rebid they would get access bills from ANS in the fall of 1992 that could range from $80,000 a year, if a small mid-level like CICnet with few commercial clients signed up to connect at T-1 speed, to over $500,000 if a large mid-level like SURAnet connected at T-3 speed. Unless they raised their customers' bills substantially, they would have no way of paying; the NSF would have to bail them out.
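The shape of such a fee schedule can be illustrated with a toy model. The actual ANS formulas were complex and never published in a form the parties clearly understood, so every parameter below is a hypothetical stand-in, chosen only to reproduce the range of figures cited above:

```python
# Purely illustrative model of an ANS-style Connectivity Agreement fee.
# The real 1991 formulas were complex and unclear; the base charges and
# the per-client surcharge here are invented for illustration, picked
# only to land in the $80,000 to $500,000+ range reported in the text.

def annual_access_fee(speed: str, commercial_clients: int) -> int:
    """Hypothetical fee: a base charge by attachment speed plus a
    surcharge for each commercial client the mid-level serves."""
    base = {"T-1": 70_000, "T-3": 350_000}[speed]  # invented base charges
    per_client = 5_000                             # invented surcharge
    return base + per_client * commercial_clients

# A small mid-level (CICnet-like): T-1 attachment, few commercial clients.
print(annual_access_fee("T-1", 2))    # 80000

# A large mid-level (SURAnet-like): T-3 attachment, many commercial clients.
print(annual_access_fee("T-3", 40))   # 550000
```

Whatever the real formula was, its structure made a mid-level's bill grow with its commercial business, which is exactly why the agreements threatened the mid-levels' pursuit of commercial clients.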
The conditions under which the commercial pricing agreements would come into effect were not made clear. With further discussion on the Com-Priv list and a response from Al Weis quoted in an article in the December 23, 1991 Communications Week, it began to look as though the NSF had given its approval to the immediate implementation of the agreements. This would mean that commercial clients of mid-levels would not be able to use the national backbone unless the mid-level paid the appropriate charges to ANS. It was also unclear when and under what conditions mid-levels might have to pay for access for their academic clients.
In the face of this uncertainty, some members of the network community had speculated that the NSF would take funds that had subsidized mid-level connectivity to the backbone and redistribute them to campuses, to help them pay the higher connect fees that the mid-levels would be forced to pass down to pay ANS. On October 1, 1991, Steve Wolff, the Director of the NSFnet, dismissed this possibility.
"Whether there will be a recompetition for the BackBone or a competitive distribution of BackBone funds to mid-levels is open, but putting that money at the campus level is a non-starter." He also asserted that the NSF would continue to respond to funding proposals from the mid-levels to the extent that its budget allowed. Noting that the mid-levels indeed faced competitive pressures, he went on to say that "the mission agencies rely on the mid-levels for access to their grantees, contractors, and other investigators. . . . Their first request to NSF as the NREN implementer was not to increase connectivity, but rather to put money into hardening the regionals [i.e. mid-levels]. And we plan to do that."
The November 26, 1991 decision to rebid the backbone begins to appear more and more like a diversion on the continuing march toward privatization. It seems likely that fresh and sizeable Federal subsidies may have to be given by the NSF to the mid-levels so that they can pay ANS' bills. One wonders if this is what was meant by putting "money into hardening the regionals."
Strategic Options for the Mid-Levels
On a long-term basis the mid-levels will have to make important decisions about their strategic direction. The connection of commercial users will remain a key to their future because without maintaining the commercial clients that they currently have and adding new ones in the future, they will be unlikely to become financially stable, self-sustaining businesses.
If the mid-levels wish to survive in order to serve as the most cost-effective vehicles for bringing network services to research and education users, they appear to have four options. First, as noted above, they could band together in buying cooperatives, pooling their purchasing power to obtain competitive pricing from the telephone companies.
Second, large mid-levels could form an alliance with an inter-exchange carrier interested in using them as a test bed to develop expertise in marketing TCP/IP based services to business.
Third, they could bow to the general force of market competition, out-source their operations to a commercial service provider, and shrink down to two or three person consulting operations that would advise academic, research, and business organizations on their various technical options for obtaining network connectivity.
A fourth possibility exists. One or two of the most successful mid-levels may grow into successful commercial providers of TCP/IP networking.
In the meantime the major policy unknowns are:
1. whether over the next five years there is likely to be any more cost-effective way of connecting institutions to the network, and
2. whether the network is being constructed in such a way as to deliver the services of most use to researchers and educators.
Because the mid-levels can be expected to focus on their communities, they are likely to be the most cost-effective means of delivering low end services to secondary schools and other public institutions, at least until the new technology options presented by Frame Relay and SMDS become clear. This is especially true when one realizes that university based mid-levels can serve as technical training grounds for commercial service. Staffed in sizable part by students, and therefore having lower operating costs, the mid-levels should be able to charge public sector institutions, for which rock solid reliability is not as mandatory as it is for commercial customers, less for network access than commercial network service providers charge.
The Ambiguous Role of Federal Subsidies for a Research and Education Network
In adding Education to the National Research Network, Congress adopted a policy goal that would permit educators to lobby for vast increases in educational access to the NREN. When a college or university joins the NSFnet, its first two or three years of access to the network have traditionally been subsidized by the NSF. After its second or third year of connection it is expected to pay the full cost of its access. Given the current fiscal state of education nationwide, it is unlikely that new universities, colleges, junior colleges and high schools seeking access to the network will be able to forgo these subsidies. Consequently continued expansion of the network within the education community will demand continued subsidies.
If the cost of using the network stays roughly the same, subsidies for those colleges and universities presently paying their own connect charges are unlikely to be necessary. However, a significant change in network costs, brought on by changed technology or changed market conditions, could change this part of the equation.
The HPCC legislation gives the NSF approximately 200 million dollars for NREN user subsidies over the next five years. It will face some interesting challenges in using them effectively. In considering their use, we should dismiss immediately the idea of subsidization of network connections with no strings attached. Paying whatever a network provider chose to bill would lead to a wasteful use of federal funds.
Further complicating the issue of subsidies is the question of whether to continue to provide access to the backbone as a free good. The NSF could have chosen to do this via a direct service purchase of backbone connectivity from ANS. But such a choice would have been perceived as favoritism toward ANS and could have invited legal action from an AT&T or a Sprint accusing the government of using public funds, without a competitive solicitation, to set up a competing service. (This possibility appears to have been rejected by the NSF with its November 1991 announcement that it would rebid the backbone.)
The NSF could also have announced that the mid-levels could compete for subsidies and use them both for the cost of the backbone connection and for connecting new institutions to the regionals. Unfortunately, doing this would have raised the problem of who could claim to be a regional. Would the NSF be enshrining the existence of the current 32 regionals? If this subsidization process were not to cause new groups of universities to sprout up claiming to be new regionals eligible for subsidies, the NSF would have to enshrine each of the current regionals as having a geographic monopoly. Such a goal would be rather difficult because the territories of some regionals, like CICnet and MERIT, overlap.
The NSF could have announced that subsidies would be available to holders of NSF research grants, that is, to the real "end users." (The HPCC legislation makes it possible for the first time for grant holders to use their grants to pay for network services.) The problem here is that this is not very practical until there is a national network grid established, making it possible to track individual use and compensate individual providers for someone's specific use of the network. Also, such a plan would divide faculty and students into two classes: those who could use the network and those who had no access because they had no grant. Such a plan would restrict rather than increase network use.
The final possibility would have been for subsidies to go to institutions for institutional connection to the network. If the NSF established criteria for being an eligible service provider, such a means of subsidization could have helped to level the playing field. The most important of these criteria would have been that all providers pledge to interconnect with all other service providers, establishing in time not one or more backbones but a national TCP/IP network grid. Another criterion should have been that eligible service providers offer competitively priced dial up capability to individuals who want to use the network but have no institutional affiliation. (This service is currently available from Uunet, PSInet, and a number of regionals.)
With these rules in place, institutions could be asked to put out RFPs for connection to the network by ANS, Uunet, PSI or the appropriate mid-level. The NSF could then have developed guidelines for what portion of the winning bid it would pay while the institution signed a service contract with the winning provider.
These subsidy problems do not end with grants to institutions that connect through network providers who agree to play by NSF rules. If the NSF serves as the major disbursing agency for increased subsidies, several questions arise:
What staff increases would it require to handle the new paper work?
Does it have the demonstrated ability to hand out sums of money to network providers involving payments on behalf of hundreds of institutions while keeping all parties accountable for what has been spent on various phases of network development?
Would it be called upon to arbitrate disputes over charges or over network connectivity?
Such a plan asks the NSF to function as a quasi public-utilities bureau. But the NSF has never been a regulatory agency. It probably doesn't want to be one, and such a role may not be appropriate for it.
Finally, when a company is running two different networks over the same physical equipment for two different constituencies, one may be justified in asking how it sets the prices it charges the government versus its commercial prices. Since ANS has far more government customers than commercial customers, critics of ANS believe that public funds could be used to cross subsidize its commercial operations. These same critics say that a national infrastructure pool of subsidies for the research and education market, unlikely to reach over $500,000 a year, would not redress the balance. But it also seems unlikely that the use of federal funds could ever be audited in such a way as to determine whether cross subsidization was occurring. The NSF should do everything that it possibly can to avoid this ambiguity in its rebid of the backbone. The fact that the NSF will make an award to at least two different service providers should lessen the likelihood that it could be seen as indirectly subsidizing a private network operation.
To conclude, in the absence of any plan for network oversight and regulation, nothing in the current scenario suggests that answers to the apparent need for indefinite subsidies are likely to be found. If commercial use of the network increased very greatly, economies of scale might make it possible for science and education users to gain affordable unsubsidized access. Unfortunately, there is no way of assuring that this will be the case. At the end of 1991, none of the NREN participants appear to have any clear knowledge of the most cost-effective way to guarantee science and education users access to the network.
At Risk: the Network Environment for Technology Development
While some say that the process of privatization alone may cause harm to the network environment important to the academic community, others say that harm will come only if the market becomes dominated by a single large company and, as a consequence, does not remain free and open. Whichever prediction is correct, if the transition is not handled well during the next five years, the following may be at risk:
The experimental cooperative environment that has enabled the Internet to be a seedbed for the startup of many commercial companies, several of which have become nearly billion-dollar-a-year operations.
The cooperative standards development process of the IETF.
The ability to change parts of the network easily to meet the developing technology needs of its user community, since the network will instead be driven by the commercial need for predictability and stability.
The ability of the community to continue the development of such information delivery systems as FTP, threatened by the likelihood that usage based cost considerations will be imposed.
The stakes are significant. The Internet/NSFnet has been a very productive data networking technology R&D proving ground. As such it has very likely paid back the government investment many times over, thanks to new products and services that have been successfully commercialized by the computer and telecommunications industry.
If, in pursuit of the market solution of a commercial network, we lose the ability to experiment and innovate, we will have made a serious mistake. Certainly we should not be looking to the current commercial players for this capability. They appear to be too small to provide anywhere near the equivalent, for wide area networking, of Bell Laboratories or Bell Communications Research. Some suggest that the CNRI Gigabit Testbeds, if they continue to be successful, may link together. But the prospect of this is very uncertain, as is the question of whether linked testbeds would provide adequate scale for continued development of the technology.
At Risk - The Integrity of the NREN
Observers took the rebid decision to be a signal that the network would not be precipitously privatized. Yet it took only two weeks for whatever good feeling had been built up by the November 26th announcement to unravel into such bitter acrimony on the Com-Priv mailing list that the charges and counter charges became the subject of a December 19th story by John Markoff in the New York Times.
On December 7th, Al Weis and MERIT President Eric Aupperle stated that on the instructions of the NSF they were placing routing filters to prevent traffic from ANS commercial clients from reaching mid-level networks that had not yet signed connectivity agreements with ANS. Most observers had thought that the agreements announced by ANS in September were rendered moot by the decision to rebid the backbone. They were wrong. When Bill Schrader published what he said (and no one else denied) were secret agreements between the NSF and ANS giving the privilege of commercial use of the network to ANS, concern mounted that the connectivity agreement would affect the use of the backbone by the commercial clients of the mid-levels. ANS stated that the connectivity agreement could be signed without further obligation by a mid-level. When asked whether the cooperative and gateway agreements that did carry obligations still existed, ANS said that they did, while declining to comment on whether mid-levels would ever be asked to sign them.
At this point, on December 11, 1991, Mitch Kapor and Dave Farber published an extraordinary joint letter to the Com-Priv mailing list in which they came close to accusing the National Science Foundation of malfeasance in creating a noncompetitive environment favoring the interests of ANS. They asked for suggestions for solving the imbroglio. Immediately the suggestion was made that ANS join the CIX. (Such a move would mean that ANS would have to give up its commercial pricing agreements.) Within hours came an announcement eerily reminiscent of the one made in September ending the last major dispute on the mailing list: ANS was indeed negotiating possible membership in the CIX.
While the Com-Priv debate temporarily cooled, when the New York Times article appeared a week later PSI's Bill Schrader complained that the Government had turned over a valued public property to a private company. "It's like taking a Federal Park and giving it to K Mart," Mr. Schrader said. "It's not right, and it isn't going to stand. As a taxpayer, I think it's disgusting." Schrader was also quoted as saying that he might ask the Internal Revenue Service to look at the business relationship between ANS' profit and non profit operations. One may doubt that this very high level of public acrimony would be conducive to ANS' joining the CIX, because such an action would be broadly perceived as an acknowledgment that Schrader's position was correct.
Between Christmas and the end of the first week of January, discussion on the Com-Priv list centered on an effort by ANS to explain what it was trying to achieve by asking the mid-levels to sign connectivity agreements. That ANS was not very successful was underscored when Dr. James Bruce, Vice President for Information Services at MIT, Director of Project Athena and former Dean of the MIT School of Engineering, and member of the National Academy of Sciences, stated that he considered the foundation of commercial charging on which ANS was operating to be broken. Saying that he did not wish to attempt to build the NREN on a broken foundation, he expressed the hope that someone would soon come up with a new foundation. As participants on the Com-Priv list continued to tear holes in the logic of the ANS commercial charging plan which the NSF had approved, some observers wondered how long the NSF would continue to maintain a stance that appeared indefensible. Other observers believed that the problem was not so much the unreasonableness of the NSF as its inability to control ANS, which allegedly took an NSF request to get the permission of each mid-level to send commercial traffic into it and turned it into a mandate that each mid-level sign the ANS agreements as a condition of being able to receive commercial traffic.
In early December, Dialog, the large commercial database information services provider, had gone on line as ANS' first commercial customer. Because approximately 75% of the mid-levels refused to sign the ANS agreements, Dialog could access only about one fourth of the network. Shortly before Christmas it asked ANS to change its status from commercial to research and education so that it could have access to all the mid-levels. On January 23, 1992 ANS complied.
The ANS "agreements" controversy centers on a dispute over whether the NSF is buying access to facilities which are owned by ANS or whether it is buying from ANS a slice of bandwidth over which it has complete control. Under the slice of bandwidth argument, the NSF could insist that networks like PSI and Uunet be linked to its infrastructure without discrimination. Most policy makers are striving to make sure that the slice of bandwidth argument prevails. They reason that a completely open network, to which anyone can connect without discrimination, is the best way to ensure a competitive environment.
The privatization issues that we have just discussed center on actions that could pose near term threats to the entire fabric of the network. They serve to illuminate only a few of the problems that will need to be addressed and solved well before the expiration of the new backbone contract in 1997. Others abound.
For example, the NSF and/or the Corporation for Public Networking -- should one be established -- will need to articulate as soon as possible a viable financial model for the future growth of the network. The role of the mid-levels in that model will need to be clarified. And above all, NSF policies on the use of subsidies for the mid-levels need to be clearly stated. Some would suggest that awards to mid-levels should be public knowledge, as should be their yearly balance sheets. Subsidies should be seen by the entire network community as rewarding excellence and never as permitting a poorly managed mid-level to survive. Since corporations with multiple national locations could find it feasible to connect to any of a number of mid-levels, care must be taken to ensure that a mid-level cannot use subsidies from academic connections to offer a corporation a bargain price for connecting to it rather than to some other mid-level.
Although care in the use of subsidies is important, it pales before a far more general and more serious problem: the potential split of the network into two hostile camps. Given the current level of mistrust and enflamed feelings, ANS is not likely to join the CIX. If it does not join, the conflicting interests of ANS, on the one hand, and PSI, Uunet and the mid-levels, on the other, are unlikely to be reconciled. Accusations are likely to continue, and what public support there is for the NREN is likely to diminish rapidly.
The Next Five Years - Where Shall We Go From Here?
The Dilemma of an Unregulated Public Resource in a Free Market Environment
As currently structured, the NSFnet and American Internet provide access to several million researchers and educators, hundreds of thousands of remote computers, hundreds of databases, and hundreds of library catalogues. Money being invested in the network as a result of the High Performance Computing and Communications initiative should considerably increase the numbers and variety behind this unprecedented collection of resources. No other computer network on earth currently comes close to providing access to the breadth and depth of people and information. If access to information is access to power, access to the national computer network will mean access to very significant power.
Furthermore access to the American Internet and NREN is also access to the worldwide Internet. According to the Director for International Programs at the NSF in February 1992, the development of the Internet over the past twelve years has been one of exponential growth:
Date            Connected Hosts
August 1981                 213
October 1985              1,961
December 1987            28,174
January 1989             80,000
January 1991            376,000
January 1992            727,000
These hosts are computers to which anyone in the world with network access can instantaneously connect and use if there are publicly available files. Any host may also be used for remote computing if the system administrator gives the user private access. These seven hundred thousand plus hosts are located in more than 38 nations. But they are only part of the picture. By system-to-system transfer of electronic mail they are linked to probably a million additional hosts. According to Dr. Larry Landweber of the University of Wisconsin, as of February 10, 1992, Internet electronic mail was available in 106 nations and territories.
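The exponential character of this growth can be checked directly from the table's endpoints. The short sketch below uses only the host counts quoted above:

```python
import math
from datetime import date

# Host counts from the table above (NSF International Programs data).
counts = {
    date(1981, 8, 1): 213,
    date(1985, 10, 1): 1_961,
    date(1987, 12, 1): 28_174,
    date(1989, 1, 1): 80_000,
    date(1991, 1, 1): 376_000,
    date(1992, 1, 1): 727_000,
}

first, last = min(counts), max(counts)
span_years = (last - first).days / 365.25
growth = counts[last] / counts[first]

# Doubling time implied by the two endpoints, in months.
doubling_months = 12 * span_years * math.log(2) / math.log(growth)
print(f"{growth:,.0f}-fold growth over {span_years:.1f} years; "
      f"hosts doubled roughly every {doubling_months:.0f} months")
```

A more than three-thousand-fold increase in under eleven years works out to a doubling time of roughly a year, which is what "exponential growth" means in practice here.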
Unfortunately our current regulatory system does not distinguish between the unique nature of the Internet and commercial systems like Prodigy and Compuserve, where perhaps a million people pay monthly fees for access to systems offering a few dozen databases run from two or three hosts and electronic mail to several hundred thousand people instead of many millions. (The picture is made somewhat fuzzy by the fact that Compuserve does provide electronic mail access to the Internet through a gateway, for an extra charge.) The Federal Communications Commission considers all three to be Value Added Networks (VANs) run by Enhanced Service Providers. All use common carriers to provide their enhanced services, and the FCC, in refusing to regulate them, reasons that all such services are roughly alike. If, for example, Compuserve charges too much, the consumer can quit Compuserve and move to Prodigy. Or, if the monthly cost of access to the Internet were to become too much, access to Prodigy or Compuserve would be basically the same thing. Here, unfortunately, the analogy fails: the Internet now, and the NREN to be, with its unparalleled resources, is not the same. Nevertheless the FCC points out that without Congressional action it is powerless to regulate NREN service providers.
Regulation is a key NREN policy issue.
Perhaps there will be no need for regulation. One may hope that the marketplace for the provision of network services will remain competitive, and that higher prices and cream skimming will not keep the national network out of the reach of the general public who wish to avail themselves of what it has to offer. However, given the scope and power of what is contemplated here, Congress should realize that there are important considerations of social and economic equity behind the question of access to the network. This is especially true since libraries and groups representing primary and secondary schools are demanding what could be considered universal access to the network without having any knowledge of how such access might be funded.
The economic stakes are huge. Other players such as US West's Advanced Communications division are entering the market, and AT&T is expected to do so by the spring. When combined with the award of the EINet backbone to Uunet, their entry should help to level the playing field. While no one company is likely to dominate such an uncontrolled, unregulated market, those concerned about widespread affordable access to the network would do well to watch unfolding events with care.
Policy makers may ask how much priority the Federal government should continue to give technology transfer in a market where the technology that allegedly still needs aiding is showing remarkable signs of maturity. As they debate the course on which they wish to take the network over the next five years, policy makers may find that one answer to the apparent disparity between the legislation's emphasis on government provision of the network and the growing number of commercial sources of network availability is that the market matured very rapidly while the HPCC legislation remained unchanged.
In view of all the remarkable commercial achievements (outlined in part one of this essay) in the four years since the NREN idea arose, perhaps the policy objective of technology transfer for economic competitiveness could be considered to be achieved! A commercially viable high speed data networking industry, with the entrance of Sprint in January 1992 and the anticipated entrance of AT&T, has reached maturity.
Therefore, having successfully achieved its technology transfer goals, the Congress must decide whether to continue to underwrite the network as a tool in support of science and education goals. It seems reasonable to assume that this support could be undertaken in a way that would not seriously undermine the commercial TCP/IP data networking market place.
The Context for Policy Setting
In order to make informed choices of goals for the network, Congress must understand the context of a rapidly commercializing network. This context is likely to have serious impacts on both the user community and the development of future network technology, and it is likely to make some goals more easily attainable than others. Given its maturity, the commercialization of TCP/IP wide area networking technology is inevitable.
Some have already begun to question whether the government should be providing backbone services where commercial alternatives are currently available and are expected to grow in number. Supporters of the NREN vision argue that the NSF is using government funds to build a leading edge network faster than the commercial alternatives, and that the use of public funds on such technology development is appropriate. Their critics state that the T-3 technology (called DS-3) is a dead end and point out that the next logical step is refining the network so that it can use ATM and SONET. For aggregate gigabit speeds along the backbone, use of ATM and SONET will be necessary. Critics claim that the T-1 backbone could be engineered to accommodate the network for a while longer, while Federal funds would be more appropriately invested now in an ATM and SONET development effort. They say that Federal policy is being used to enable IBM to have a testbed for the development of DS-3 TCP/IP routers when Network Technologies makes a comparable product that is already proven and reliable.
Whether the Federal Government should be providing backbone services or merely support for access and improved network features is a key policy issue.
Finding the best answer to the questions raised by this issue is likely to center on the ability of the Federal mission agencies involved in high speed network development to articulate a long term plan for the development of new network technology over the next decade. How we shall use what is learned in the gigabit testbeds has not yet been clearly addressed by policy makers. Continuation of the testbeds is currently uncertain, and there is no plan to apply the outcome to the production NREN. These are areas deserving of Federal involvement, and the current players seem incapable of addressing them. Some possible courses of Federal action will be identified in the discussion of a Corporation for Public Networking to follow.
In the meantime we face a period of four to five years during which the NSF is scheduled to take the NSFnet backbone through one more bid. While Federal support for the current production backbone may be questionable on technology grounds, policy makers, before setting out different alternatives,
must understand very clearly the dual policy drivers behind the NREN,
must define very clearly the objectives of the network, and
must carefully define both a plan and perhaps a governing mechanism for their achievement.
A sudden withdrawal of Federal support for the backbone would be likely to make a chaotic situation more so. However, the application of focused planning could define potentially productive alternatives to current policies that could be applied by the time of the backbone award announcement in April of 1993.
Whom Shall the Network Serve?
The HPCC legislation gives the FCCSET a year to prepare a report to the Congress on goals for the network's eventual privatization. Thanks to the NSF's decision to rebid the backbone, this task may no longer be rendered moot by premature network privatization. The FCCSET report needs to address many questions. One question is the extent to which, in the higher education environment, Congress, through the National Science Foundation or perhaps through another entity of its own choosing, will continue to underwrite networking. A related question is whether or when Congress should act in order to preserve a competitive networking provider environment. A question subsidiary to this is whether a competitive commercial environment is adequate to ensure a fertile data networking technical R&D environment. Another related question centers on what is necessary to preserve network access that is as widely available to post-secondary education as possible. Further issues center on what type of access to promote. Should Congress support the addition to the network of many of the expensive capabilities promoted by the advocates of the NREN vision? What if funds spent here mean that other constituencies such as K-12 do not get adequate support?
Access to the NREN is a key policy issue.
If network use is as important for improving research and education as its supporters allege, Congress may wish to address the issue of why, at institutions presently connected to the network, only a small minority of students and faculty are active users. If it examines the network reality carefully, Congress may sense that it is time to leverage investment in the network by improving the network's visibility and usability within the communities it is supposed to serve, through improved documentation and training rather than by blindly underwriting massive increases in speed.
How Far To Extend Network Access?
With the broadening discussion of the NREN vision, the expectations of many segments of the population not originally intended to be served by the network have been raised. An avid group of educators wishing to use the network in K-12 education has arisen. If commercialization brought significant price increases, it could endanger the very access these educators now have to the network. Native Americans have begun to ask for access to the network. How will Congress respond to them? And to the general library community, which with the Coalition for Networked Information has been avidly pressing its desires for NREN funds? And to state and local government networks?
Congress should recognize that choices about network access for these broader constituencies will be made at two levels. Access for large numbers could be purchased by the government from commercial providers at considerable expense, an unlikely development in view of the Federal budget deficit. In the meantime, given the current mix of government supported and commercial providers, the environment for these user classes is quite competitive. Those who are able to pay their own way can generally gain access to the network from a choice of providers at reasonable cost. Congress can act on behalf of these constituencies by ensuring that the market for the provisioning of network services remains open and competitive. Short of either regulating the industry or establishing a new government operated network, careful use of subsidies will have the most impact on ensuring an open and competitive network.
Congress can also choose to view access as a function of price. If Congress does opt for this course, it has several choices to ensure that prices will be affordable. It could seek to impose regulations on the network providers through the FCC at a national level or urge the state PUCs to do so at the local level. (Of course, the viability of state PUC regulation is rendered questionable by the near certainty that there would be little uniformity in how the PUCs in each state would treat a national service.) Congress also could impose a tariff on network providers' profits and use the tariff to subsidize universal access. It should, of course, understand that these courses of action would raise touchy questions of conflicts between Federal and state jurisdiction.
Congress may also have been vague in dealing with these broader network constituencies because it wishes to sidestep making these difficult choices. The origin of most of these choices may be traced to the addition of education policy goals for the Network, symbolized by the changing of its name from the National Research Network to the National Research and Education Network in the OSTP Program Plan in September 1989. While this action got the attention and support of new constituencies for the Network, it did not bring any significant shift to the science and mission agency oriented direction of network development. The legislation remained essentially unchanged: "educators and educational institutions" were as specific as the language of the bills ever got. Perhaps this was almost deliberate: having goals that were more specific might imply the need to justify with some precision why some individual segments of the networking community deserved service while others did not.
Unless Congress were able to construct a separate rationale for the needs of each of the network constituencies, from supercomputer users to grade school students, specific goal setting by Congress might imply that Congress was arbitrarily judging some network constituencies to be more worthy than others. This would be a difficult course to follow because those who were left out would want to know the basis for such a judgement. Solid answers would be difficult to come by because networking as an enabling educational technology is so new that no one is as yet quite sure how to measure its value. Without such assurances, it may be difficult for Congress to know how to justify the network's spread on any grounds other than equity of opportunity.
Indeed there is a constituency of grass roots-oriented, small-scale network builders allied with elements of the library community. This constituency suggests that computer networks will very quickly become such powerful means of access to information that lack of access to them will soon carry serious implications for social and economic equity within the nation. These groups can be expected to be very vocal in their demands that some minimal level of access to the national network be widely available and affordable. They are likely to ask that Congress turn its attention to the feasibility of establishing the goal of universal access to the national network. Although the technology and economic conditions are quite different from the conditions of the 1934 Communications Act, they are likely to demand action analogous to it.
Motivated by these concerns, Mitch Kapor has been arguing very eloquently for the building of the NREN as a National Public Network. Asked to define what he saw as being at stake, he said the following to the author in September 1991: "Information networking is the ability to communicate by means of digitally-encoded information, whether text, voice, graphics, or video. Increasingly it will become the major means for participation in education, commerce, entertainment, and other important social functions. It is therefore important that all citizens, not just the affluent, have the opportunity to participate in this new medium. To exclude some is to cut them off from the very means by which they can advance themselves to join the political, social, and economic mainstream and so consign them to second-class status forever. This argument is analogous to that which was made in favor of universal voice telephone service - full social participation in American life would require access to a telephone in the home." Kapor, through his Electronic Frontier Foundation (EFF), is working hard to make sure that Congress is compelled to address the question of universal network access. The EFF has also begun to press for the use of ISDN as a technologically affordable means of bringing the benefits of a national network to all Americans.
If Congress wishes to promote widespread access to the Network and to design a network that is amenable to widespread use, it will do well to examine carefully the position that the EFF is articulating. It would also do well to look outside the confines of the Federal Networking Council and the FNC Advisory Commission, which is made up of members similar in orientation to the FNC and is scheduled for only four meetings and a two-year-long existence. If it wishes to increase secondary and elementary school access to the network, it could investigate enlarging the very small role granted by the legislation to the Department of Education. Unfortunately, without careful planning, what would be gained by this is unclear. The Department of Education has never played a significant role in computer networking. The immediate needs of the K-12 arena are focused mainly on maintaining affordable low bandwidth access and supporting successful pioneering efforts.
When Congress states its intentions for the scope of access to the network and, as a part of doing so, sets priorities for investment in network bandwidth versus ease of use, only one other area will remain for its attention.
A Corporation for Public Networking?
Network governance and oversight are key policy issues.
If Congress has doubts about the current situation, it might want to consider the creation of an entity for NREN management, development, oversight and subsidization more neutral than the NSF. Action should be taken to ensure that any such entity be more representative of the full network constituency than is the NSF. If Congress decides to sanction network use by a community broader than the scientific and research elite, it must understand the importance of creating a forum that would bring together the complete range of stakeholders in the national network.
While such a forum would not have to be a carbon copy of the Corporation for Public Broadcasting, given the half billion dollars to be spent on the network over the next five years and the very confused and contentious policy picture, it might make sense to spend perhaps a million dollars a year on the creation of an independent oversight and planning agency for the network. Such an entity could report its findings to the Congress and respond to goals formulated by the Congress.
Congress could declare the development and maintenance of a national public data network infrastructure a matter of national priority. It could make it clear the government will, as it does in issues of national transportation systems, the national financ ial system, and national communications systems, maintain an interest in the development and control of a system that serves both the goals of improved education and new technology development.
To carry out such a mandate, a Corporation for Public Networking could have fifteen governors nominated by the members of the network community and subject to the approval of the Congress. Each governor would represent a network constituency.
1. The NSF
2. DOE
3. NASA
4. DARPA
5. Corporate Users
6. K-12
7. Higher Education
8. Public Libraries & State and Local Networks
9. Commercial Network Information Service Providers
10. IXCs
11. RBOCs
12. Personal Computer Users
13. Computer Manufacturers
14. Disabled Users
15. University Computing
Since the legislation calls for backbone nodes in all 50 states, such a structure would be a reasonable way to coordinate Federal support for the network on a truly national basis - one that, by acknowledging the network as a national resource, would give representation to the full breadth of its constituencies. Governors could use the network to sample and help to articulate the national concerns of their respective constituencies.
If it adopted these goals, Congress could give a CPN a range of powers.
1. The CPN could be a forum for the expression of the interests of all NREN constituencies. In the event the network were to be administered by the NSF, it could serve as a much more accurate sounding board of network user concerns than the FNC or the FNC Advisory Council.
2. The CPN could be authorized to make recommendations to NSF and other agencies about how funds should be distributed. Such recommendations could include truly independent assessments of the technical needs of the network community and the most cost-effective ways of achieving them.
3. The CPN could itself be given responsibility for funding distribution. Such responsibilities would incur an increase in administrative costs and staff. Nevertheless, by creating an opportunity to start a process from scratch, one that would consequently be free of the vested interests of the National Science Foundation in high-end network solutions, Congress would likely get a clearer picture of where and how effectively public monies were being expended. With such responsibility the CPN could also keep extensive pressure on network providers to remain interconnected. When thinking about cost, Congress should also remember that effective oversight of subsidies funneled through NSF would imply the hiring of extra staff within that agency as well.
4. Congress might want to ask a CPN to examine the use of the $200 million in NREN R&D monies. Policy direction dictating the spending of Federal funds is still suffering from the fuzzy boundaries between the network as a tool for leveraging technology competitiveness into commercial networking environments and the network as a tool to facilitate science and education. If Congress decides that the major policy direction of the network should be to develop the network for use as a tool in support of science and education, then it may want monies directed toward DARPA to be focused on improved databases, user interfaces and user tools like knowbots rather than a faster network used by fewer and fewer people. A CPN that was representative of the breadth of the network's user constituencies could provide better guidance than the FCCSET or DARPA for spending Federal subsidies aimed at adding new capabilities to the network.
5. Additional levels of involvement could have the CPN act as a national quasi-board of networking public utilities. It could be given an opportunity to promote low cost access plans developed by commercial providers. If it borrowed some of the fund-raising structure of National Public Radio, it should be able to raise very significant funds from grass roots users at the individual and small business level who are made to feel that they have a stake in its operation.
6. If Congress wanted to increase further the role given the CPN, it could decide that, with network commercialization and technology transfer goals completed, the majority of the NREN funds go to the CPN, which could then put out a bid for a CPN backbone. In effect, Congress could dictate that the backbone announced by the NSF for implementation in 1993 be implemented and run as a joint project between the NSF and a CPN.
All entities should be considered eligible to join and use the CPN in support of research and education. Commercial companies that wanted to use the CPN to interact with the academic community should pay a commercial rate to do so. With the availability of a parallel commercial network, commercial restrictions on the CPN could be very much loosened to include anything in support of research and education. The CPN would study and report to Congress on how gateways between commercial TCP/IP networks and the CPN network could be maintained.
7. Some suggest that the Congress go even further. These people emphasize that a replacement for the R&D aspects of the Internet in the context of commercialization and privatization is uncertain. Bell Labs and Bellcore remain the research arms of the Public Switched Telephone Network; however, neither of them has ever developed major strengths in wide area data networking, nor do they appear likely to do so in the near future. Despite this situation, the major private investment made in the Gigabit Testbeds indicates that the American telecommunications industry feels a need to invest in continued research. This is something that the current commercial players are too small to do. Furthermore, it is something that the larger players, driven by pressure to report quarterly profits, may find difficult to do.
Congress could decide that Federal investment in the technology should emphasize less pump-priming to increase the pace of what most see as inevitable commercialization, and more the continued building of new networking technology, both for technology transfer and for support of the technology as an enabling tool. In this case Congress could direct the CPN to plan, deploy and manage a state of the art public information infrastructure. With goals for constituencies and levels of service defined, the CPN could produce for Congress multiple scenarios for developing and maintaining two networks.
The first would be an experimental network where the very newest technologies could be explored. It could be very similar to the current gigabit testbeds, but this time with all five projects linked together. The second would be a state-of-the-art operational network that could provide widespread field trials of technology developed on the experimental network. As the technology matured on the operational network, it would become available for open transfer to commercial service. It should be remembered that such a continuous, widespread network R&D environment would provide training experience for graduate students that would otherwise be unavailable.
Initial seed money would come from public funds. However, the bulk of support could come from a percentage of profits (as cash or in-kind contributions) that participating companies would be required to contribute to the CPN as the price of admission for developing and benefiting from new technology. Care should be taken to structure contributions in a way that would not lock out small start-up firms. To ensure this, Congress could mandate that the CPN commissioners (perhaps with appropriate oversight from the National Academy of Sciences, the IEEE, or the ACM) develop a plan to ensure that the cost of entry to such a testbed not exceed the capitalization of the current small commercial players. It could also require the development of proposals to handle the issues of interconnection billing, billing for actual use versus size of connection, and interoperability among network providers.
A different financing model could be explored if the CPN were instructed to report on the feasibility of selling commercial carriers shares in a national networking testbed and R&E network, where carriers could, over the long term, develop and mature new networking technologies before transferring them to the commercial marketplace.
8. In its November 1, 1991 recommendations to the National Science Foundation, FARnet suggested that the NSF consider issuing several separate solicitations for the development of software tools for end-user applications and network management and operations. To emphasize its point it added: "we believe that the lack of useful tools for information retrieval and display is one of the biggest impediments to the productive use of the network and has impaired the credibility of the NREN in the eyes of the target user populations." FARnet admonished the NSF to emphasize open architectures and standards in its solicitations, adding that "where standards are not adequately understood or developed, the NSF should support programs to test, evaluate and improve them."
FARnet concluded by recommending "that the NSF, working with the user community and the providers, define and implement clear criteria for the award of additional funding to mid-level and campus networks. . . The new criteria should be designed to further . . . goals such as the extension of network services to new or underserved communities (for ubiquity); the improvement of network operations, procedures and tools (for reliability); the enhancement of existing services through development activities, upgrading of existing connections to 'have not' institutions; leveraging of state, local, and private funds (to maximize the impact of Federal investment), and training and support for end-users (in cooperation with national and local programs)." If a CPN is created, it should be directly involved with working toward these important goals. If implementation of the network is left to the National Science Foundation, Congress should emphasize the importance of the NSF's meeting these goals.
9. Finally, a strong and broad-based CPN might be able to make recommendations to Congress on the identification and resolution of problems of telecommunications policy engendered by the continued growth of this network technology. It could perhaps play an educational role in advising state Public Utilities Commissions on the long term implications of their decisions.
Policy makers must soon decide whether the National Research and Education Network is a public or a private good. Although privatization appears to be proceeding apace, since the network backbone will be rebid there should be time for some careful planning for the development and evolution of what can, within 10 to 20 years, become an extraordinarily powerful system: one as ubiquitous as the current telephone network, providing all Americans with access to information in much the same way as public libraries were created to do a century ago.
Congress must understand that the NREN is not just a new technology (indeed much of it is old technology), but has the potential to become the most powerful means of access to information ever created. Within this context it must decide whom the NREN shall serve. It must decide who shall have access to the NREN.
Once it has done this, further options fall into four major areas.
First: Congress must decide the degree of oversight that it is necessary to extend to the network. Such oversight could range from legislating that the FCC regulate the network, to strict reviews of the NSF's actions, to vesting oversight powers in a Corporation for Public Networking.
Second: It must decide whether the appropriate place to subsidize technology transfer is within a privatized operational NREN or within the experimental gigabit testbeds. Without a better understanding both of how the technologies are evolving in the commercial marketplace and of the evolution of both the testbeds and the NREN, it will be difficult to make a wise decision. In addition, we must expect that the nature of its choice will be further influenced by its decision on whom the network is to serve.
Fourth: It must decide whether to subsidize additional connectivity, or broader use within connected institutions, or both. In other words, should more institutions be connected to the network, or should the network be made easier to use by the members of those institutions already connected?
To the extent that Congress chooses to pursue options three and four, it will want to explore the scenario for the Corporation for Public Networking discussed above.
Access to information is access to power. The creation of a National Research and Education Network based on the NSFnet and the remainder of the American Internet will mean the creation of a national information access system of unprecedented power. In its ability to affect the lives and well-being of Americans, the NREN, if properly designed, will be just as significant as the national Interstate highway system and the national electric power grid. The national highway system, the national power grid, or the national telephone system could serve as models for implementation. The Federal Government provides a public but otherwise unregulated Interstate highway system with universal access available to all Americans. Private industry provides our electric power; however, it was allowed to do so only in return for submitting to Federal and state regulation designed to ensure affordable national access by all citizens. The national telephone system has been established under a similar "social contract." If the nation is not to be dangerously split into information rich and information poor classes, policy makers have about five years in which to choose between a Federally provided national network and a privately provided but nationally regulated network.
During the development and maturation of the national network, policy makers should also be very attentive to its impact on the public switched telephone network (PSTN). The technology involved, and the speed with which it is changing, will only increase the potentially serious impact of allowing unregulated components of the telecommunications industry to pursue market solutions that will keep regulated companies from becoming viable players. We must realize that we are about to enter a power struggle for control of the information resources of the 21st century that promises to be every bit as harsh and bruising as the power struggle for natural resources at the end of the last century.
While the intentions of most appear to be good, as this study has shown, the playing field is terribly confused. Gigabit technology (if properly understood) is desirable. Still we should take great care that its cost does not raise the price of low bandwidth or "low end" entry into the network.
Lack of a specific definition of communities to be served, lack of an agreed upon plan for how they shall be served, and lack of funds to serve everyone have combined to create the present chaotic situation in which many of the players have been motivated primarily by a desire to increase their institutional role in order to get larger Federal allocations of funds.
In the absence of both a well-thought-out plan agreed to by all parties and adequate monetary support, the grand push to accelerate both the speed and scope of the technology could have the ironic role of weakening the entire foundation of the network. Until the Congress provides more direction, the squabbling that has developed is likely to continue. In the absence of such direction, at best large sums of public funds may be ineffectively spent, and at worst a picture of empire building could emerge that would make any Federal support for research or educational networking unlikely. Such an outcome should be avoided because the potential of a well designed and developed network to do great good in both policy arenas is very significant. Unfortunately, with the NSF under mounting criticism, ANS on the defensive and rumored to be financially weakened, and Congressional hearings scheduled for mid-March, the potential for a destructive free-for-all is very great.
On February 28, 1992, the author completed an 18 month-long appointment at the United States Congress, Office of Technology Assessment, where he served as Project Director for an assessment of the National Research and Education Network. The opinions expressed in this essay are those of the author alone and do not necessarily reflect those of the Office of Technology Assessment.
This glossary identifies some of the people and explains some of the acronyms and other technical terms mentioned throughout this paper.
Adams, Rick Rick Adams is the President and CEO of Uunet Technologies.
Aggregate Transmission Aggregate Transmission refers to multiplexing or mixing together of the applications of thousands of users across a backbone. Such aggregate traffic can reach gigabit speeds with present technology and with acceptable dollar cost.
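The arithmetic behind this can be sketched in a few lines of Python. All figures below are purely illustrative assumptions, not numbers from this report: many modest per-user streams, multiplexed onto one backbone, sum to gigabit rates.

```python
# Hypothetical back-of-the-envelope aggregation: thousands of modest
# per-user streams multiplexed onto a single backbone link.
users = 20_000          # simultaneous active users (assumed figure)
per_user_bps = 56_000   # average per-user rate in bit/s (assumed figure)

aggregate_bps = users * per_user_bps
print(f"{aggregate_bps / 1e9:.2f} Gbit/s")  # prints "1.12 Gbit/s"
```

The point of the sketch is simply that no individual user needs a fast stream for the backbone's aggregate load to reach gigabit territory.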
ANS Advanced Network & Services is the 501(c)(3), non-profit, IBM and MCI spin-off corporation launched on September 17, 1990. Launched with a $5 million contribution from each of its corporate parents, it found itself in a position to inherit control of a privatized National Research and Education Network. ANS is receiving $10 million per year from the National Science Foundation for providing the T-3 backbone, to which it is also free to sell commercial access. The ANS/NSFnet backbone connects 32 mid-level networks which in turn connect over 1,000 institutions.
Bit Pipe A bit pipe is the name given to a telephone circuit used for transmission of packets in a data network. A "dumb" bit pipe is a telephone circuit that provides only physical data layer transmission and no higher level applications.
CERFnet The California Education and Research Federation Network was started with a National Science Foundation grant in 1988. Run by General Atomics Corp. from the San Diego Supercomputer Center, CERFnet joined with PSI and Uunet in the spring of 1991 to form the Commercial Internet Exchange (CIX).
Clear Channel Transmission Clear Channel Transmission refers to the full bandwidth of a circuit being occupied by a single user's network application. Clear channel gigabit transmission over a wide area network is not yet a proven technology, let alone economically viable.
CIX Commercial Internet Exchange is the agreement between PSI, Uunet, CERFnet, BARRnet, US Sprint (Sprintlink), and Unipalm Cambridge U.K. (PIPEX) that lets the traffic of any member of one network flow without restriction over the networks of the other members. Any TCP/IP service provider may join the CIX for a cost of $10,000 and connect to and send traffic over the CIX backbone from either an East Coast or West Coast connection point.
Connection Oriented The telephone network is connection oriented. This means that for the duration of a telephone call, a small segment of the network is solely dedicated to the traffic of that one call. In other words, no other calls can use that portion of the network.
Connectionless Most computer data networks are connectionless. Data is encapsulated in "envelopes" called packets. The packets from a user's session may be sent by network routers along different routes to their destination as traffic conditions on the network change from moment to moment.
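The behavior described above can be sketched in a few lines of modern code. This is purely an illustration, not part of any actual network software: the node names, routes, and the congestion test are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int       # sequence number, so the receiver can reorder arrivals
    dest: str      # destination address carried inside every packet
    payload: str

# Two hypothetical routes from node A to node D; a real router chooses
# per packet as traffic conditions change from moment to moment.
ROUTES = [["A", "B", "D"], ["A", "C", "D"]]

def forward(packet, congestion):
    """Pick a route for this one packet based on current conditions."""
    return ROUTES[1] if congestion else ROUTES[0]

packets = [Packet(i, "D", f"chunk-{i}") for i in range(3)]
for p in packets:
    # Simulate congestion appearing mid-session: packet 1 is rerouted.
    path = forward(p, congestion=(p.seq == 1))
    print(p.seq, "->", " -> ".join(path))
```

Because each packet carries its own destination address, packets of the same session can arrive out of order over different paths; the sequence number lets the receiving end reassemble them.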
DECnet DECnet is Digital Equipment's proprietary networking protocol. It faces the same problem as SNA: customers want to use TCP/IP for networking all their equipment rather than having to run dedicated networks for proprietary protocols.
EDI Electronic Data Interchange is a set of standards that allows corporations to order from and send invoices to other corporations, all electronically by means of the kinds of networks focused on in this study. EDI would be a major use of a commercial NREN.
Farber, Dave Dave Farber, of the University of Pennsylvania, is a co-founder of CSnet, a Principal Investigator in the Aurora Gigabit Testbed and a Board Member of the Electronic Frontier Foundation. On December 10, 1991 Kapor and Farber published a joint letter accusing the National Science Foundation of tilting the playing field of the developing NREN unfairly on behalf of ANS.
FARnet The Federation of American Research Networks is an association in which the mid-level networks of the NSFnet, two commercial providers (ANS and PSI), and some of the telephone companies meet, usually four times a year, to discuss common interests.
FCCSET The FNC reports to the Federal Coordinating Committee on Science, Engineering and Technology. FCCSET in turn reports to OSTP. FCCSET is required by the HPCC legislation to provide a report to the Congress by December 1992 on the planned implementation of the NREN.
FNC The Federal Networking Council is the NREN implementation coordinating body. It is chaired by Charles Brownstein of the NSF and run primarily by representatives from DARPA, DOE, NAS, and NSF. Several other Federal agencies are represented, but without significant power because the NREN legislation authorizes relatively little money for them. The Department of Education does not have membership in the FNC.
Frame Relay Frame Relay is a fast packet switching technique that extends the X.25 protocol for use over fiber by eliminating error checking. It provides bandwidth on demand up to 1.5 megabits per second and by 1992 was either introduced or being introduced by most of the telecommunications industry. It is often thought of as a bridge to SMDS.
Gateway A gateway is an intersection between two networks running different protocols. A gateway router strips incoming packets of the protocol of the incoming network and encapsulates them in "envelopes" of the protocol of the outgoing network.
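The strip-and-re-encapsulate step a gateway performs can be sketched as follows. This is a hypothetical illustration only: the "envelope" format, the protocol names, and both helper functions are invented for the example and bear no resemblance to real frame layouts.

```python
def decapsulate(frame, protocol):
    """Remove the incoming network's protocol 'envelope' from a frame."""
    header, trailer = f"<{protocol}>", f"</{protocol}>"
    assert frame.startswith(header) and frame.endswith(trailer)
    return frame[len(header):-len(trailer)]

def encapsulate(payload, protocol):
    """Wrap a payload in the outgoing network's protocol 'envelope'."""
    return f"<{protocol}>{payload}</{protocol}>"

# A frame arrives from a DECnet network and leaves on a TCP/IP network:
incoming = encapsulate("user data", "DECnet")
outgoing = encapsulate(decapsulate(incoming, "DECnet"), "TCPIP")
print(outgoing)   # <TCPIP>user data</TCPIP>
```

The payload passes through unchanged; only the protocol envelope around it is swapped, which is exactly the gateway's job.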
IETF The Internet Engineering Task Force is the standards promulgating body of the Internet. It has a quite successful record of developing standards, such as the Simple Network Management Protocol, that are quickly adopted by major segments of the network industry.
Internet The Internet is a worldwide network of TCP/IP networks. At the least, the American portion thereof is composed of the government-sponsored American TCP/IP networks, including NSFnet, ESnet and NSInet. The PDIs or Private Data Internets (ANS, Uunet and PSI) are also considered to be part of the American Internet, which is a subset of the worldwide Internet. In January 1992 the NSF estimated that the worldwide Internet had 727,000 nodes, approximately double the number of a year earlier. Its electronic mail component reached 106 countries and territories as of February 10, 1992.
IXC The Inter Exchange Carrier is the post-divestiture generic name for the nation's long distance phone companies. AT&T is the largest, controlling more than 60% of the market, and is the only regulated IXC. MCI and Sprint are the other two IXCs that are not only national but international in scope. Many more small IXCs exist.
Kapor, Mitch Mitch Kapor, the co-founder of Lotus Development Corp., is President of the Electronic Frontier Foundation. Kapor has become a tireless advocate of the development of the NREN as a National Public Network.
LAN A Local Area Network is a means of connecting computers and peripheral devices such as printers, disk drives and file servers. Two or three miles is usually the effective upper limit of the distance between the farthest points on a LAN. Most LANs are 10 megabit per second Ethernet protocol LANs. FDDI is a new protocol which enables a LAN to increase its carrying capacity to 100 megabits per second.
LATA The Local Access Transport Area, an artifact of the 1984 divestiture, defines the geographic area over which the Local Exchange Carrier may provide toll calls. The area is usually but not always smaller than that covered by a long distance area code. Even though ten or twenty LATAs are normally to be found within the territory of a Local Exchange Carrier, the LEC may not provide calls that cross LATA boundaries. Such inter-LATA traffic is the exclusive domain of the Inter Exchange Carrier (IXC).
LEC The Local Exchange Carrier is the local telephone company for a given geographic area. In return for being given a monopoly over residential connections to the network, the LEC, which is most likely one of the more than 20 former Bell operating companies, is subject to strict regulation of the services it offers and rates it may charge for those services.
MAN A Metropolitan Area Network extends over the distance of a city. It is smaller than a WAN and larger than a LAN. SMDS is thought to be the technology that will allow the LECs to offer MANs. FDDI is another technology being deployed by Metropolitan Fiber Systems, a private bypass carrier building MANs in many large US cities.
MERIT Michigan Education & Research Information Triad is the holder of the 1987 cooperative agreement with the National Science Foundation for the provision of the T-1 and now the T-3 NSFnet backbone. MERIT maintains a subcontracting relationship with IBM and MCI as joint study partners and, since September of 1990, with Advanced Network and Services.
MFJ Modified Final Judgment is the name given Judge Greene's decision outlining the rules of the 1984 divestiture of AT&T. Under the MFJ the RBOCs have been banned from manufacturing. Although recently allowed to provide information services, the RBOCs are still banned by the MFJ from delivery of inter-LATA telephone or data service.
NSFnet The National Science Foundation Network is expected to become the core network of the NREN. The NSFnet is composed of the backbone of 16 sites or nodes and 32 mid-level or regional networks connecting more than 1,000 institutions to the backbone.
OSTP The Office of Science and Technology Policy functions as the Science Policy coordinating entity of the Executive Branch of the Federal Government.
Protocol A protocol is the language that a network or network application "speaks." It is to networking what a programming language is to programming.
PSI Performance Systems International is a commercial TCP/IP service provider with its own national T-1 backbone that connects to the NSFnet. PSI is the service provider for NYSERnet, the New York State mid-level network. PSI connects several dozen corporations and other organizations.
PSTN The Public Switched Telephone Network refers to the combined infrastructure of the regulated IXC (AT&T) and the RBOCs and their respective Local Exchange Carriers. Universal telephone service embodied as the goal of the 1934 Communications Act is provided by the PSTN.
RBOC The 1984 divestiture left local telephone service under the control of seven Regional Bell Operating Companies, also sometimes referred to as RHCs (Regional Holding Companies). Each RBOC in turn is composed of several Local Exchange Carriers (LECs). The RBOCs and LECs operate under the same regulatory structures.
Router A Router is the device that serves as a "traffic cop" in a connectionless network. Routers are specialized computers that take incoming packets and compare their destination addresses to internal routing tables and, depending on network conditions, send the packets out to the appropriate receiving router. This process may be repeated many times until the packets reach their intended destination. The market for multi-protocol routers that include TCP/IP is one of the fastest growing within the computer industry.
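The table lookup at the heart of this process can be sketched in a few lines. The sketch below is illustrative only: the addresses, prefixes, and next-hop names are invented, and a real router uses far faster data structures, but the principle of matching a destination against the most specific table entry is the same.

```python
# A toy routing table: address prefix -> next hop. More specific
# (longer) prefixes take precedence over shorter ones.
routing_table = {
    "192.168":   "router-A",   # general route for 192.168.x.x
    "192.168.7": "router-B",   # more specific route for 192.168.7.x
    "10":        "router-C",
}

def next_hop(dest_addr, table):
    """Return the next hop for the longest prefix matching dest_addr."""
    best = None
    for prefix, hop in table.items():
        if dest_addr.startswith(prefix + "."):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, hop)
    return best[1] if best else None

print(next_hop("192.168.7.44", routing_table))   # router-B
print(next_hop("10.0.0.5", routing_table))       # router-C
```

Each router along the way repeats this lookup independently, which is why packets can be rerouted mid-journey as conditions change.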
Schrader, Bill Bill Schrader is the President and CEO of Performance Systems International. PSI sponsors the Com-Priv mailing list where most of the online discussions of the political and technical aspects of NREN have taken place.
SMDS Switched Multi-megabit Digital Service is a fast packet switching service in trial in most of the LECs. It is not likely to achieve widespread commercial availability before 1993. It can carry the TCP/IP protocol and would provide a means of deploying NREN services as part of the Public Switched Telephone Network. The LECs, however, are at a disadvantage in that they will require a partnership with an IXC to carry SMDS data across LATA boundaries. Some IXCs are preparing their own SMDS offerings, which they will be able to bring to businesses by means of private networks that bypass the LECs entirely. SMDS provides packet switched bandwidth on demand in increments up to 45 megabits and very likely, by 1996, up to 155 megabits per second.
SNA Systems Network Architecture is IBM's proprietary networking protocol, used to enable its mainframes to communicate with each other. It requires front-end processors costing $80,000 to $300,000 to link the mainframes. During the past two years many large companies have begun to abandon these processors in favor of multi-protocol routers costing about $15,000 each and capable of encapsulating SNA traffic in TCP/IP packets.
SONET Synchronous Optical Network is a Bellcore-developed CCITT international standard for high speed level 2 communication over fiber-optic networks. SONET functions as a carrier for ATM fixed-length packets (53 bytes). TCP/IP can ride on top of SONET and ATM. All major IXCs are planning to install SONET and ATM on their backbones at speeds ranging up to OC48 (2.4 gigabits per second) during the next two to three years.
TCP/IP Transmission Control Protocol/Internet Protocol (TCP/IP) has become, in a very short period, a worldwide public domain standard for connecting computers of all vendors over wide area networks. It operates at levels 3 and 4 of the 7-level protocol stack. Hence it can be transported by frame relay or SMDS, which function at level 2 of the stack. It will be the protocol of choice in the NREN and is already in use in 90% of the Fortune 500.
Uunet Uunet Technologies started as a Usenet (Unix based) network services provider and has expanded into offering commercial TCP/IP internetworking with a coast to coast backbone. Uunet and PSI are thought to be not too dissimilar in size, although Uunet's backbone contains far fewer sites.
WAN A Wide Area Network refers to a network with backbones that can link computers over hundreds or even thousands of miles. T-1 or 1.5 megabits per-second is a typical WAN backbone capacity. A few WANs such as the NSFnet now have T-3 (45 megabit per-second) backbones.
Weis, Al Al Weis is the former IBM executive who is currently President and CEO of Advanced Network and Services (ANS).
Wolff, Steve Steve Wolff is the Director of the National Science Foundation Network and is likely to inherit a similar title for the NREN.