A Practical Navigator for the Internet Economy

Routing - The Key To Survival. Routing Arbiter And Route Charging, pp. 1-9

Remember the fears that when the "big boys" took over the Internet, they would start charging everyone by the megabyte? For reasons that we have described in the last several issues, this is not likely. What we may see instead is charging of providers for routing announcements that cannot be aggregated and have to be carried by the over-taxed backbone routers of the national service providers. Our lead article describes routing problems faced by the majors and examines the rationale for charging for routing announcements.

With the strain put on the network by continued rapid growth, routing is becoming the key to technical survival. It may also become the key to financial survival before the new year is out. We examine the services of the Routing Arbiter (RA) which, through use of a Routing Arbiter Data Base and Route Servers located at the NAPs, is making it possible for those connected at the NAPs to run a single peering session with the Route Servers rather than multiple peering sessions with each of the other connected service providers. This would take some considerable load off backbone routers and their crowded routing tables.
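The arithmetic behind that load reduction is straightforward. The sketch below is our own illustration, not taken from the RA project itself; it compares the number of BGP peering sessions a provider must maintain at an exchange point under bilateral full-mesh peering versus peering only with a route server:

```python
# Illustrative arithmetic (our own, not from the Routing Arbiter project):
# peering sessions a provider must maintain at an exchange point.

def full_mesh_sessions_per_provider(n_providers: int) -> int:
    """Bilateral peering: one BGP session with every other provider."""
    return n_providers - 1

def full_mesh_sessions_total(n_providers: int) -> int:
    """Total sessions at the exchange under bilateral full-mesh peering."""
    return n_providers * (n_providers - 1) // 2

def route_server_sessions_per_provider(n_servers: int = 1) -> int:
    """With route servers, each provider peers only with the server(s)."""
    return n_servers

if __name__ == "__main__":
    for n in (5, 10, 20):
        print(f"{n} providers: {full_mesh_sessions_per_provider(n)} sessions "
              f"each full-mesh vs {route_server_sessions_per_provider()} via "
              f"a route server ({full_mesh_sessions_total(n)} total full-mesh)")
```

As the provider count at a NAP grows, the full-mesh burden grows with it while the route-server burden stays constant.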

However, the Route Servers are being relied on only by the smaller players. Larger ones, although they may peer with the route server, also peer with each other. Why? Because they like to maintain control of their peering sessions. A complicating factor is that other large national providers do not use the route server at all. Sprint is one of those who do not. Sprint states that it doesn't because the Routing Arbiter Data Base that the servers run is populated with addresses from the old NSFnet Policy Routing Data Base (PRDB) that are no longer accurate. ANS, which ran the PRDB, is cleaning up the old addresses but will apparently not be done any time soon. Some tell us the dispute is as much about prescriptive versus descriptive philosophies applied to routing, and some NSPs' antipathies to the old MERIT ANS way of running the NSFnet backbone service, as it is about route accuracy.

In any case, what we found is that a laudable effort in which the NSF is investing $4 million a year is *so far* only marginally useful. What is interesting is that one way in which the Routing Arbiter could become exceedingly useful is if it were used for charging for routing announcements. Yakov Rekhter, who is a co-principal investigator for the ISI side of the RA project, is now with Cisco. We republish a presentation in favor of figuring out how to charge for routing announcements that Yakov gave at the December IETF. Route announcement charging is predicated as a means to alter the behavior of those who use the defaultless core of the Internet and burden the Cisco routers of that core with unnecessary routing announcements. By charging a substantial amount (5 to 10 thousand dollars per route), more than the cost of renumbering existing networks, it is assumed that providers will be motivated to aggregate and renumber rather than pass multiple new routes upstream.
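To see why such a charge would push providers toward aggregation, consider a hypothetical provider announcing four contiguous /24s. The prefixes below are private-space examples of our own, and the $7,500 figure is simply the midpoint of the 5-to-10-thousand-dollar range:

```python
import ipaddress

# Hypothetical illustration: four contiguous /24 announcements collapse
# into a single /22, so a per-route charge would fall fourfold. The
# prefixes are private-space examples; $7,500 is our assumed midpoint
# of the 5-to-10-thousand-dollar range.
routes = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]
aggregated = list(ipaddress.collapse_addresses(routes))

PER_ROUTE_CHARGE = 7_500
print(aggregated)                          # [IPv4Network('10.1.0.0/22')]
print(len(routes) * PER_ROUTE_CHARGE)      # charge before aggregation: 30000
print(len(aggregated) * PER_ROUTE_CHARGE)  # charge after aggregation: 7500
```

If the one-time cost of renumbering is less than the recurring saving, the economics favor renumbering and aggregating, which is precisely the behavior the proposal aims to induce.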

Route charging won't happen overnight. But because it is predicated on alleged technical necessity, it has a much greater chance of happening than settlements based on measured traffic use, which seem to be motivated by economic considerations alone. If routing announcement charging does come, the way in which it is implemented will play a critical role in the death or survival of smaller providers.

Interview With Paul Mockapetris On Future Of IETF, pp. 10-13

Paul Mockapetris, the current chair of the IETF, talks with us about the future of the IETF. Paul points out that he expects more and more IETF standards work to run into areas where others will claim a patent affecting some part of the standard. The catch-22 is that during the past two years, as the Internet gold rush has taken off, many more patents affecting Internet technology will have been filed than earlier would have been the case. Only in 1996 will the effects of patents filed 12 to 36 months ago begin to become apparent. If a patent affecting a proposed standard can't be licensed on a very reasonable basis, the result could be the abandonment of the standard.

The interview also covers the SNMPv2 controversy, liability insurance concerns, relations with ISOC, and the NSF's move to create an intellectual infrastructure fund.

Interview With Russell Pipe On Global Information Infrastructure Commission, pp. 14-18

In an interview with the co-organizer of the GIIC, we seek to learn the nature of the organization. What we see is an effort by large multinational information technology corporations to supersede G-7 governments in setting policy on building a global information infrastructure. Though Mr. Pipe might disagree, it seems to us that the interests of the multinationals sponsoring the commission are paramount and the interests of everyone else secondary. Clearly, when it comes to questions of transborder data flow and personal privacy, corporate concerns come first, and any concept of the public interest is left to be upheld by nation states, which seem increasingly obsolete since they appear unable to do much of anything on behalf of individual citizens.

Sprint Fails To Open The Pennsauken NAP To Small Providers, pp. 19-21

In a deal that has since fallen through, MFS was to have provided bridged access to the Sprint NAP to regional ISPs. We publish a complaint from an ISP and answers from Bob Collet of Sprint and John Hardie of MFS. Collet says that he never imagined that small ISPs would want to go to a NAP. In saying this, he fails to realize how important going to a NAP has become for sizable regional ISPs that want to be certain they can keep control of their own CIDR blocks and routing.

Sprint has been saying since the summer that it would connect new customers at its NAP within 90 days. Unfortunately the window seems to remain eternally fixed at 90 days. The exception is NYSERnet, a large Sprint customer that has just raised hell and has been promised prompt admission to the NAP. For the remaining ISPs, the problems with Pennsauken availability are making it increasingly likely that a NAP will be opened in Manhattan.

Access Indiana: The Origins In 1990 & State Opens Project To Commercial Sector, pp. 18, 21-22, 24

We find that Stan Jones, ardent Ameritech backer, authored the 1990 state law that started Ameritech involvement with the Indiana super highway. Ameritech LDIP for Access Indiana, contrary to the intent of the original program, positions itself to sell to private business. The Director of the Ameritech effort, Mike James, in an interview with the COOK Report, moves to distance himself from the state bureaucrats.


Auto Industry Aims To Impose Certification on ISPs, pp. 1-5

This will likely be the year when "industry" makes its mark on the Internet. In an as-yet little-noticed column in the January 16, 1996 issue of Network Computing, Robert Moskowitz calls on industry to regard the Internet as a "rough hewn gem and play the role of the gem cutter." Moskowitz is a network architect at Chrysler and a member of the Internet Architecture Board. His agenda must be taken very seriously.

For much of the past year, Ford, Chrysler and GM, the Big Three of the auto industry, have been exploring ways to shape the Internet to their growing need to have seamless total communications (CAD/CAM, EDI, email and mail lists, etc.) with every one of their thousands of suppliers. The Big Three's initial thought was to put out an RFP for a virtual private network (VPN) where suppliers could be gathered and quality and security of network performance assured. However, from evidence that we have reviewed, it seems that a planning group at the AIAG (Automotive Industry Action Group) concluded that such a network, named the Automotive Network eXchange (ANX), would have been of unprecedented size and cost and therefore unworkable.

The data networks of the Big Three are a mission critical source of information, the "oxygen" necessary to their very existence. Interfacing their suppliers to the same kind of mission critical network would be the source of maximum pay off for their substantial investment in network technologies since 1990. If they couldn't get an industry-wide, vertically integrated network of mission critical quality, they'd head as far in that direction as their finances and technology would allow. Consequently their thinking turned towards something only slightly less ambitious: an outsourcing entity to run a special network exchange point to interconnect the Internet Service Providers their suppliers used and to certify their quality.

But by the beginning of this year the plan for a special NAP had been dropped, and the key mixture that remained was (1) the establishment of quality-of-service criteria for the Internet Service Provider industry, (2) a requirement for parts suppliers who wanted to do business with the Big Three to get on the Internet as soon as possible, and (3) an interest in creating an independent body to set quality-of-service standards and monitor how those networks providing service were complying with them. The fact that 10,000 companies are involved if parts suppliers are interpreted narrowly, and some 100,000 if broadly, was seen as an "economic hammer" to direct the attention of large Internet Service Providers away from the current business model of best-effort, store-and-forward delivery of packets.

Reports reaching us say that the auto industry folk are leaving nothing to chance and are encouraging the petroleum and banking industries to impose similar rules on the companies they deal with. There is some talk of a meeting of industry users to discuss quality standards as early as this May. They would like parts of the federal government to assist them in moving towards their goals, but so far the federal response does not look promising. Their hope is for the creation of an ISP certification agency that would monitor and report on ISP service quality on an ongoing basis. As Moskowitz points out in his article, if neither the Internet industry nor national governments create what is needed, then industry itself can - in the form of an Internet Quality Association designed to look after industry's interests. The AIAG white paper outlining these plans has a tentative release date of February 1.

We are looking at the emergence of a situation where more and more of industry's business will have to be done on the public Internet simply because network links are becoming so pervasive that the public Internet is the only entity that can accommodate the breadth and depth of anticipated use.

Steve Wolff Predicts Internet and Telephony Convergence, pp. 6-10

We publish in its entirety Steve Wolff's TeleStrategies keynote speech given on January 11. In it Steve cited a litany of Internet business statistics, including estimates of 3 billion dollars transacted on the web in 1996, rising to an annual 17 billion within 10 years. He emphasized the need for ISPs to compete on service rather than price, adding: "I think the implication is that there will be a great shake out among providers. Those who can adapt to and find the capital to exploit this new technology of differentiated Internet service have a chance of survival. Those who cannot or will not - will not survive. I believe that the aphorism about being an Internet service provider five years from now is true. You will either be big or bought or you'll be dead."

On telephony convergence: "I believe that the telephone system as we know it today will be totally dead within ten years. Anyone today who has a big long distance bill and isn't using Internet telephony needs to think very hard what they are doing with their money." He also mentioned a project at Cornell that is designed to enable the university to plug telephones into its computers and do away with its PBX. On Internet business models: "There is a new Internet paradigm. The Internet, as it was conceived, is broken and will not be put back together again in the same form. The Internet was founded on the notion of store and forward, best effort delivery of IP datagrams. This was a necessary outcome of pure statistical multiplexing, which is the foundation technology of the Internet. But this model is just not good enough anymore."

Internet Society Pains, p.10

Educom and Terena agree to give up Charter Member privileges. Bob Kahn says only Board of CNRI can make that decision for CNRI. ISOC President will resign in June if CNRI Board does not agree. He does not expect to have to exercise that option.

@Home Debut Impressive, p. 11

Paul Mockapetris explains @Home architecture. To rely on its own national ATM backbone and regional backbones to connect headends. Seeks to solve bandwidth problems by heavy use of caching. Users promised 128 kilobit return bandwidth.

Access Indiana, pp. 12-13

Role of state's preferred providers may be given statutory underpinning. We are now operating an unmoderated majordomo listserv, ai inexile@pobox.com, in the face of continued refusal on the part of the organizers to allow free discussion.

US Postal Service Internet Kiosk Program: Nearly One Year Later. Still No Progress, p.13

Report by a seminar attendee from the Bureau of Land Management reveals many glossy brochures but little grasp of the Internet. Among the issues to which the USPS still does not know the answers: Audience - who are the kiosks for? Design - should the kiosks have keyboards? Verification - how can a user prove his identity? Content - what data should be available?

Colorado, Part 5, pp. 14-22

Part III of special report: The Distributed Model: Is it National Information Infrastructure for the Rest of US? A description of Dave Hughes' Internet technology and delivery philosophy. Critiques of the foregoing by Nancy Bolt, state librarian, and Ed Lyell, state school board member.

InterNIC IP Distribution Policy, p.23

The InterNIC is now dealing with IP address requests from 50 new ISP start-ups every week. By sending all new address requests to the requesters' upstream providers, the InterNIC has gone from handing out 800 CIDR blocks in the /24-to-/21 range to about 30 per month.
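For readers unused to CIDR notation, here is a quick sketch of the block sizes involved. This is standard CIDR arithmetic of our own, illustrated with Python's ipaddress module and an arbitrary private-space prefix:

```python
import ipaddress

# Standard CIDR arithmetic (our own quick reference, arbitrary example
# prefix): sizes of the address blocks in the /24-to-/21 range.
sizes = {}
for prefix in (24, 23, 22, 21):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    sizes[prefix] = net.num_addresses
    print(f"/{prefix}: {net.num_addresses} addresses")
```

Each one-bit shortening of the prefix doubles the block, so a /21 covers eight times the address space of a /24.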



ATM is a level 2 connection-oriented transport technology composed of 53-byte cells. As such its architecture does not mesh at all with connectionless TCP/IP and its variable-length packets. Nevertheless we find a heavy interest in ATM among the major phone companies, which have invested billions in the technology. One reason appears to be the desire to multiplex switched traffic between backbone nodes of a network before sending it elsewhere. Another is to bring Switched Virtual Circuits (SVCs) to market. These are connection-oriented circuits between users that are set up and torn down by software upon command.
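The mismatch with variable-length IP packets shows up in the segmentation arithmetic. The sketch below is a rough, AAL5-style approximation of our own, not a full implementation: a packet plus an 8-byte trailer is padded out to a whole number of 48-byte cell payloads, each carried in a 53-byte cell:

```python
import math

# Rough AAL5-style segmentation arithmetic (illustrative only, not a
# full AAL5 implementation): an IP packet plus an 8-byte trailer is
# padded to a whole number of 48-byte payloads, each carried inside a
# 53-byte ATM cell.

AAL5_TRAILER = 8   # bytes of trailer appended to the packet
CELL_PAYLOAD = 48  # bytes of payload per cell
CELL_SIZE = 53     # bytes on the wire per cell

def cells_needed(packet_bytes: int) -> int:
    return math.ceil((packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD)

def wire_bytes(packet_bytes: int) -> int:
    return cells_needed(packet_bytes) * CELL_SIZE

if __name__ == "__main__":
    for size in (40, 1500):  # a bare TCP ACK; an Ethernet-MTU packet
        print(size, "->", cells_needed(size), "cells,",
              wire_bytes(size), "bytes on the wire")
```

A 40-byte TCP acknowledgment just fills one cell, while a 1500-byte packet needs 32 cells and 1696 bytes on the wire, so the per-cell overhead varies with the traffic mix.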

Reserved, on demand bandwidth will depend on SVCs - or possibly on an IETF "developing" protocol: RSVP. The National Science Foundation has just announced a new high speed Connections Program to develop the capability to define and reserve differing priorities of data service.

Ironically, Ethernet as the last mile to the desktop is seen as a major barrier to SVCs. ATM and Ethernet are not compatible. However, we report on Cells in Frames, a project at Cornell that is developing an inexpensive ATM "attachment device" that will drive a 10-megabit ATM connection to a workstation from a 10BASE-T Ethernet hub. Converting LANs to this technology will permit merging of PBX phone lines with the ATM network.

Meanwhile, in wide area backbones, ATM suffers from various overheads, referred to by its detractors as a "cell tax," that, according to a Minnesota Supercomputer Center report, would drive the 155-megabit bandwidth of an OC-3 available to IP down to 116 megabits per second.
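A back-of-the-envelope version of that arithmetic follows. It is our own illustration; the report's 116-megabit figure also accounts for AAL5 trailers and per-packet padding losses that this simple calculation omits:

```python
# Back-of-the-envelope cell-tax arithmetic (our own illustration; the
# Minnesota report's 116 Mbps figure also includes AAL5 trailer and
# padding losses that this simple calculation omits).

OC3_LINE_RATE = 155.52   # Mbps, SONET OC-3 line rate
SONET_PAYLOAD = 149.76   # Mbps remaining after SONET framing overhead
CELL_SIZE, CELL_PAYLOAD = 53, 48  # ATM cell bytes: total vs payload

# Only 48 of every 53 bytes carry data, so roughly 9.4% is cell header.
after_cell_tax = SONET_PAYLOAD * CELL_PAYLOAD / CELL_SIZE
print(f"{after_cell_tax:.1f} Mbps left before per-packet AAL5/padding losses")
```

The cell headers alone cut the usable rate to about 135 Mbps; the further drop to the reported 116 Mbps comes from the per-packet overheads noted above.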


Noel Chiappa, in an interview with the COOK Report, discusses two competing philosophies of network design: resource reservation and over-engineering. Chiappa: "Whatever the future is will be greatly influenced by the answer to that question of yours as to 'whether you do or do not need resource reservation?' If you decide that you need it, the kind of solutions that you look at start out by looking very different from the solutions that offer themselves in the absence of a resource reservation effort. The answer to these questions will also drastically impact the kind of switching architecture you adopt."


In interviews with Steve von Rump, MCI Data Services Marketing VP, and Steve Tabaska, MCI Data Services Engineering VP, we discuss in detail MCI's backbone expansion and its transition to ATM.

MCI expects to be able to do SVCs desktop-to-desktop before the end of next year. In the meantime, it will open a switched ATM OC-3 backbone fabric by the end of April 96. It has changed from General Datacom switches to Fore ASX 1000 switches and is starting with 10 of these $100,000 a copy top-of-the-line switches.

Tabaska expressed doubts about technology arriving at the NAPs that would allow them to scale adequately to permit multiple peering sessions at bandwidths of 155 megabits per second and higher. MCI, he said, is beginning to rely in part on private exchanges with one or two other large NSPs in places other than NAPs.

By this time next year MCI anticipates running OC-12 on its backbone. To do this, OC-3 cards in the Fore switches will need to be replaced with OC-12 cards, and Cisco will have to come out with an OC-12 interface for its routers. MCI has looked at the NetStar GigaRouter but is not quite ready to go with it. It plans some months from now to run a dual architecture of PVCs to handle things like telnet and DNS lookups, and SVCs to allow reservation of bandwidth on demand.


We repudiate the Gartner Group's misleading appropriation of parts of our web page glossary for an advertisement in the Business section of the March 24 New York Times. Gartner did not ask permission to use our material, which appeared as an indistinguishable part of a poorly done glossary of their own making.


AGIS has been consolidating NET99 customers into its infrastructure and, we believe, will soon close Net99 down. In the meantime the Net99 customers are extremely unhappy and continue to make their unhappiness well known on agislist@interstice.com. We present a summary of the discussion from early February through mid-March. We are harsh with AGIS because it bought the one provider which came into existence to help the small ISP and has effectively dismantled it. AGIS appears to be unresponsive to its customers, who have also complained about its business practices. Readers may remember that AGIS was the company which, last October, said it would use lawyers to impose order on the Internet. We would not like to see the Internet reshaped in its image, and will continue to confront it editorially with the reality of its own making.


Many Web standards are now being developed outside the IETF. Flynn expresses concern. Rutkowski shares a bit of the concern, but explains why he is generally optimistic.


As Kent England said, "BBN will teach them (AT&T) how to run a commercial Internet service. But it will take a long, long time in Internet Years for them to learn." We wonder why AT&T entered a low-profit and already crowded mass market rather than focus with BBN on the industrial-grade quality-of-service market that the auto industry would like and BBN could help deliver. Dial-up service nationally doesn't scale when compared to the service that good small ISPs can provide locally.



We describe the transition from the flat Internet of a year ago to the stratified and hierarchical one of today. We examine the current complexities of peering and transit at the NAPs and MAEs as part of explaining the multi-step hierarchy separating downstream ISPs from the "big six". Along the way we outline both the very substantial costs of establishing peering and transit at multiple network exchanges and what ISPs must do to be able to get address space directly from the InterNIC. It is beginning to appear that the more the Internet increases in size, the faster power flows upwards into the hands of a few who, since they are both operators and rulemakers for the commercial Internet, might find themselves singled out for accusations of conflict of interest in most other situations.

In the context of these developments, the latest symptom of the upward flow of power is an Internet Draft suggesting that the IETF acknowledge as "Best Current Practice" that customers may be asked to renumber their networks under a fairly broad range of situations. The draft comes from the CIDRD Working Group, which emphasizes that its motivation is purely technical, saying, in effect, that more draconian policies are needed to be certain that the number of routes advertised to the defaultless core of the Internet does not exceed the carrying capacity of its routers.

Under the new policy, addresses would be thought of as either owned or leased. Institutions with huge address blocks own their addresses and can move them from provider to provider without fear of renumbering. Smaller institutions lease their addresses and are put on notice that they could be asked to renumber. No precise border between address ownership and leasing is defined. Service providers are instructed to do the "right thing" to ensure the routability of the Internet. The draft leaves service providers quite a bit of waffle room to decide what the right thing is. It seems, therefore, that the main function of the draft becomes merely one of giving the service providers an IETF-provided stick to use in inflicting unpleasant renumbering consequences.

Debate on the IETF and CIDRD lists was so contentious that some doubt that the IESG will promote the draft. [We have tried to distill the substance from an outpouring of comments on both lists.]

On February 24, in a positive development, Scott Bradner scheduled a meeting at the March IETF to debate technical suggestions aimed at finding alternatives to promoting the draft to BCP. In the meantime, sizable new customers will learn that the best protection they have from having to renumber is to do business only with the largest and most centrally connected providers.


Demand for Internet bandwidth, according to sources who spoke with us on background, is doubling every eight to ten months. The growth is such that Sprint and MCI are having difficulty keeping up. They are also realizing that direct connections to them are almost totally filled by those downstream. Thus they have to build backbone bandwidth in almost a one-to-one ratio to their downstream sales - something that has put a crimp in their pricing models. Some suggest that bandwidth may soon be rationed - either by price or technology - or by a combination of both. ATM and RSVP are suggested as technologies that may make reservation of bandwidth possible.
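For a sense of what such a doubling time implies, here is a small projection sketch, our own arithmetic based only on the quoted eight-to-ten-month figure:

```python
# Simple doubling-time projection (our own arithmetic from the quoted
# eight-to-ten-month doubling figure; no other data assumed).

def capacity_multiplier(months: float, doubling_months: float) -> float:
    """How many times demand grows over `months` at the given doubling time."""
    return 2 ** (months / doubling_months)

if __name__ == "__main__":
    for d in (8, 10):
        print(f"doubling every {d} months -> "
              f"{capacity_multiplier(24, d):.1f}x demand in two years")
```

At an eight-month doubling time demand grows eightfold in two years, and even at ten months it grows more than fivefold, which explains the strain on backbone build-out and pricing models.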


As the first of a series of interviews on the problems and potential of ATM, we talk with Dave Sincoskie about advances in switch design, the SVC hurdle, the ATM NAPs, and Bellcore plans to interconnect them with a backbone of their own. Unfortunately, because of their different technologies, NAPs and MAEs cannot be interconnected without a router in between acting as a substantial bottleneck. In April we shall publish an interview with Steve von Rump, MCI Data Network VP, on MCI's ATM plans.


Paul Mockapetris answers the questions of a COOK Report reader on how @Home will handle bandwidth aggregation from its periphery upwards to its backbone.


In an interview with the COOK Report, Tony Rutkowski discusses the growth of Internet administrative infrastructure and suggests a small Geneva based coordinating body. He finds the domain name charging program with its $15 per domain name contribution to internet infrastructure to be unnecessary and suggests that the NSF cooperative agreement be terminated as soon as possible so that the CIX or a similar organization could be responsible for domain name assignment. In our assessment of Tony's remarks we summarize a different point of view that says internationalization of Internet governance and financial support is the major goal, and that the question seems to be unanswered as to whether NSF sponsored access-by-invitation workshops or an open process in the IETF would be the best way to reach consensus.


We present the first 20% of an article summarizing AGIS's dismantling of NET99 and its difficulty in keeping staff and customers happy. Part 2 in April. While some of the majors have had service problems before, AGIS's combination of service problems and failure to respond to customer concerns seems to have triggered the establishment of the first open customer complaint list against a national backbone provider that we have ever seen.


We identify an IEPG Draft on ISP peering options.



John Curran: Has the Internet Rendered ATM Irrelevant? pp. 1-9, 24

In a long interview, John Curran, the CTO of BBN Planet, describes BBN's major use for ATM as transport between routers within a single point of presence. He doubts the viability of SVCs because work on application programming interfaces is going nowhere. Why? Because there is no demand, since all applications are made first to run over TCP/IP. While ATM was planned in the late 80s and implementation began in the very early 90s, the explosion of the Internet in the mid 90s has made ATM irrelevant. John compares ATM to ISDN - used in some areas, but really nothing more than a niche application. He believes that RSVP, as a TCP/IP bandwidth reservation protocol, will obviate the need for end-user-controlled SVCs.

Most people making use of ATM for high speed IP networks are simply using the ATM for large backbone data transport with PVCs going across the mesh and without taking advantage of all the capabilities that ATM offers. The reason is that IP, by definition, does not offer any interface to such ATM capabilities. There is no way right now to send an IP packet and say this packet is a constant bit rate packet and have it mapped to the ATM fabric. According to Curran: "With some of the technology being tested now (RSVP), we will soon be able to reserve bandwidth across the network layer. But then you are back to the question of asking if RSVP works independently of underlying transport, why am I using the ATM?"

While, in some situations, ATM may have some utility as a means of backbone transport, John points out that there are people working on running IP directly over SONET as a long term solution to the requirements both of higher bandwidth and the desire to avoid ATM's cell tax.

Hans Werner Braun Leaves SDSC for Teledesic, p. 9

The friction that surfaced a year ago with General Atomics' loss of its share of the InterNIC Cooperative Agreement has spilled over into the San Diego Supercomputer Center as a whole, with many of the top people quitting GA and going to work for the University of California, San Diego. Influenced in part by the deteriorating situation, Hans Werner Braun has left SDSC and joined Teledesic as its network architect.

BBN Pushing Commercial Availability for RSVP for Early 97, pp. 10-12, 24

We interview Richard Blatt, BBN's RSVP product manager. BBN, Cisco and Intel have a project designed to make bandwidth-on-demand commercially available as soon as possible via the Internet protocol RSVP. A commercial implementation of RSVP would have the same general uses as end-user controllable, ATM Switched Virtual Circuits. If successful, some believe it would spell the end for the commercial viability of ATM in the Internet marketplace.

Sprint Executives Discuss New Backbone Implementation and ATM, pp. 13-15

According to Dominick DeAngelo, Sprint remains committed to an ATM implementation plan announced in the summer of 1993. However, until ATM standards mature, SprintLink will do little ATM adoption. Instead it has increased its backbone capacity by adding additional routers and circuits to backbone POPs. FDDI rings have been bolstered by the addition of GIGAswitches. The new architecture gives Sprint 90 megabits per second of bandwidth capacity, compared to MCI's OC-3 which, because of the ATM "cell tax," may deliver only about 120 megabits per second.

Future Architecture - Huge Transit Backbones or Fine Mesh & Many Naps? pp. 16 - 17

Complaints about constraints on and inadequate investment in Internet infrastructure continue. Founding the Internet in the United States on five to six default free backbones engineered to differing capacities seems to have produced complaints from some of the top providers that their resource investments are not adequately compensated. At the same time these providers who are at the top of the Internet pyramid continue to be very picky about those providers with whom they will peer at the major public exchange points. Given this pickiness, some doubt the continued viability of public exchange points. However, for those who would like to see the public exchanges become viable, one very good sign in April was Sprint's commitment to start using the Routing Arbiter.

What's Wrong With NII Policies? pp. 18 - 20

In early April we started a conference devoted to a discussion of what we call Local Information Infrastructure (LII). We contrast LII with National Information Infrastructure as far more desirable than federally funded projects, which far too often go to vested interests focused on the stockholder's bottom line rather than on creating an infrastructure that is owned, operated, and controlled by the local community. The interests of a community far too often are defined by whatever group with either a national corporate or a self-serving local institutional agenda can get to the grants-disbursing agency first. Jeff Michka, a Washington state community activist, outlines for us the unholy alliance between self-appointed community groups that specialize in getting grants, public officials, and industry.

Access Indiana Policy Clarified, pp. 20-22

The Director of Indiana's Intelenet Commission describes the commission's role and the legislation placing the AI program under its control. It becomes clear that this state agency for the centralized purchase of telecommunications services assumed that it could purchase Internet service for the state in the same top-down manner. We now think that most of the policy snafus of the past year may be attributed to state bureaucrats who failed to understand the decentralized nature of the Internet. We offer key personnel detailed suggestions for re-orienting their approach into one of working with local communities to help them decide how best to purchase Internet services from the private market.