A Practical Navigator for the Internet Economy

Spiraling Bandwidth Consumption and Flat Rate Pricing on Collision Course? -- Wanted: a Viable Internet Business Model -- Vint Cerf Explains Reasons for MCI Price Increases, pp. 1-6, 9

Vint Cerf, in an interview with the COOK Report, confirms that MCI will soon institute measured usage pricing on T-1 connections to its backbone. MCI is expected to impose tiered measurement and to charge by the average percentage of the T-1 that is used each month. The policy follows the one MCI has applied to T-3 pricing since early 1996. Although MCI has not yet made a formal announcement via a press release, Cerf explained that "we are plainly discussing this with you, Gordon, and your readers." The MCI move is the outcome of what Cerf describes as a crunch between the Internet's flat rate pricing model and usage patterns in which both the amount of use and the disparity between uses by different applications have increased dramatically.
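
As a back-of-envelope illustration of how such a scheme might work, the sketch below (in Python) maps a month's average T-1 utilization to a tier price. The tier boundaries and dollar figures are our own invented placeholders; MCI has published no rate card.

    def monthly_charge(avg_utilization_pct):
        # hypothetical tier table: (utilization ceiling in %, monthly price in $)
        tiers = [(25, 1500), (50, 2250), (75, 3000), (100, 3750)]
        for ceiling, price in tiers:
            if avg_utilization_pct <= ceiling:
                return price
        raise ValueError("average utilization cannot exceed 100%")

    print(monthly_charge(40))   # a T-1 averaging 40% busy -> $2,250/month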

We find that the MCI action is evidence of a possible collision course between spiraling bandwidth consumption and flat rate pricing. The apparent free lunch offered by the ability to take first the basic applications of the web, then such bandwidth hungry applications as audio and video, and finally Internet phone has created at least a 500% increase in Internet traffic during 1996.

Consequently, capital investment cycles in network upgrades have begun to shorten from 24 months to periods approaching six months. The expense of upgrading from DS3 to OC3 increased, as did the expense of moving from OC3 to OC12. The large companies have begun to grumble that income from network growth under the current pricing scheme is now falling toward the amount they must spend on increasing capacity.

Corporations have begun to notice that they can weave their intranets together via the public Internet for a fraction of the price of using their private frame relay or SMDS networks. Such a movement not only places strains on the expanding public architecture but robs the big players of the more profitable income from frame relay and SMDS private nets. Hit with the double whammy of the corporate moves and a static all-you-can-eat-for-a-flat-rate price on the consumer net, the big five now have to find ways of increasing their infrastructure even as the more profitable base of private networks that they could use to fund public Internet infrastructure development begins to erode. Thus the big five have additional reasons to continue their present experimentation with usage based pricing. UUNET already has tiered T-1 charging. BBN Planet has had tiered pricing for T1 and T3 usage, called "Flexible T1 or T3," since 10/1/95; BBN offers flat rate pricing for 56K and educational customers, and new customers get flat rates, but only for the first year. ANS has had tiered T-1 pricing for all "gateway customers" (ISPs) for two years. With MCI at about one third of ISP market share, if Sprint (with roughly another third) moves, the impact on ISPs may be significant. The free lunch is ending.

Right now an ISP can buy a T-3 and resell it ten times over, or even more, with no cost penalty from its upstream provider. MCI's changes are likely to end this and, in so doing, end what has been a source of profit for many local ISPs. With considerable competition at the local level, there will be much pressure on ISPs not to raise prices when their costs increase. Well-capitalized, cost-efficient operations will have a margin with which to hold out that highly debt-leveraged ISPs will not. There will surely be consolidation. What is not so easy to predict is whether more cost-effective second-tier backbone providers might be able to hold out against the cost increases of the big five, and whether they could do so while paying what will surely be increased connectivity or transit costs to them.


How BBN Planet Handles Private Interconnects & Backbone Capacity Planning -- A Glimpse at How the Majors Keep Their Backbones Afloat, pp. 1, 7-9

We interview John Curran, BBN Planet's CTO, in the aftermath of the 'ungodly packet loss' discussion. Curran underscores the importance of notifying upstream providers when network problems appear. He explains the procedures by which another provider wanting to open a private interconnect could approach BBN.

Curran also explains that the industry has only recently switched to private interconnects, and that the ground rules had not been fully worked out. For example: whom do I need to talk with at another company to find the person who can sign off on a T-3 circuit request, regardless of what that company's budget says? How do I identify the person there who is most expert in its traffic statistics, because I am seeing an increase and need to know whether it is temporary or whether I should start planning for a new circuit? These issues have been worked out among the major providers during the previous months. This should now make it possible to begin a rigorous process of managing interconnect capacity.


The Small Telco as Technology Innovator -- Northern Arkansas Telephone Company Offers Cheapest ISDN in U.S., SONET, & Internet Access -- We Examine Issues Facing Small Telcos Under Deregulation, pp. 10-14

Stephen Sanders, owner and President of Northern Arkansas Telephone Company, explains the technology and operational choices facing the 950 small Universal Service Fund eligible telephone companies in the United States. In an interview with the COOK Report we find out how he was able to become one of the first purchasers of a non-blocking switching fabric from Nortel. Headquartered in the town of Flippen (population 1,600), his company offers its 5,000-plus subscribers unlimited ISDN service for $17.40 a month, the cheapest in the US. With a SONET backbone, he also offers dedicated and dial up Internet connectivity. Small telcos that modernize their networks at some point in the 1990s should be well equipped to serve their customers in the deregulated era.


Quality of Service Issues Very Much Unresolved -- While Technology Questions for Packet Tagging, RSVP and ATM Remain Unsolved, Administrative Barriers May Be Most Serious, pp. 15, 24

A look at some of the uncertainties surrounding ATM and RSVP. Even once RSVP is technically viable, it will have many difficult operational and economic issues, including settlements, to overcome.


Appropriate Technology and Public Policy for K-12 Education: Internet As Pork Barrel? -- An Examination of Some Recommendations of Universal Service Board & an Update on MercerNet as an Example of Our Concerns, pp. 16 - 18, 22

We look with great skepticism at some of the rules proposed to the FCC by the Universal Service Board for what will be a $2.25 billion tax on the users of telecommunications services in the United States. K-12 schools and libraries will be able to apply to this fund for payment of up to 80% of the costs of their Internet connections.

We offer an updated look at the TIIAP-funded MercerNet project to show why we believe that this ITV project is likely to lead to a great deal of waste in comparison to the amount of public benefit that can be derived. The policy makers have chosen to provision a complex service. Unfortunately they have done nothing to ensure that those who are to self-certify their eligibility for the service will become knowledgeable about the purchase process.


Wired for Dollars -- Why is Maine Using its Schools and Libraries to Lure the Technology Giant? by Laura Conaway (Nynex's K-12 Maine Boondoggle), pp. 19-22

We reprint Laura Conaway's write-up of the mess that ITV has made of Maine's efforts to install a statewide K-12 net.

 

 

Critical Issues for Internet Business, pp. 1-3

Some see Quality of Service capability as a technology that will give the Internet a new and much more viable business model. As an introduction to an issue focusing on Quality of Service, we survey the key issues facing the Internet in 1997. They include what some see as the likelihood of a fiber shortage caused by the current insatiable demand for bandwidth, for it appears that our fiber inventory is not inexhaustible after all. We include a synthesis of our own research and some emerging press coverage of the fiber problem. National Service Provider interconnection is a second unsettled issue. Perhaps as many as ten companies built coast to coast DS-3 backbones last year in the hope of becoming first tier providers. They did this just as the five largest companies moved the first tier to private interconnects and away from public exchanges. Finally, 1997 will begin to see tools that ISP customers can use to evaluate and compare the services they are being offered.

Cisco Offers Software Capability to Give Different QoS Precedence to Labeled Packets, pp. 4-7

Fred Baker, senior Cisco software engineer and current IETF Chair, describes algorithms that may be used in routers to give some IP packets delivery priority over others -- allowing large ISPs to experiment with the adoption of quality of service offerings. Packets may be prioritized by labels attached either to the IP precedence field or to the tag switching class of service field. When traffic rises above a user-definable level on a given link, a weighted Random Early Detection algorithm lets the user drop packets labeled "routine" with greater frequency than those labeled "priority." If the traffic continues to increase, the rates at which the different proportions of packets are dropped may also be increased.
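
For readers who want to see the shape of the mechanism, here is a minimal Python sketch of the weighted RED idea as we understand it: each label gets its own queue-depth thresholds, so "routine" packets start being dropped earlier and more aggressively than "priority" packets. The threshold and probability values are illustrative only, not Cisco's.

    import random

    # per-label thresholds: queue depth where drops begin (min), depth at
    # which every packet is dropped (max), drop probability just below max
    THRESHOLDS = {
        "routine":  {"min": 20, "max": 40, "prob_at_max": 0.10},
        "priority": {"min": 35, "max": 40, "prob_at_max": 0.02},
    }

    def should_drop(avg_queue_depth, label):
        # real RED tracks avg_queue_depth as a moving average of queue size
        t = THRESHOLDS[label]
        if avg_queue_depth < t["min"]:
            return False                 # queue comfortably short: keep all
        if avg_queue_depth >= t["max"]:
            return True                  # queue saturated: drop everything
        # in between, drop probability climbs linearly toward prob_at_max
        ramp = (avg_queue_depth - t["min"]) / (t["max"] - t["min"])
        return random.random() < ramp * t["prob_at_max"]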

A second algorithm, Weighted Fair Queuing, was released as part of Cisco's IOS 11.0 more than a year ago. It forces different IP flows to interleave with each other and run at speeds that average what they would be without the algorithm. It also looks at the IP precedence bits and applies different weights to IP traffic based on that precedence. As a result, if an application runs precedence seven traffic, that flow will get eight times as much effective bandwidth as one running precedence zero traffic.
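
The arithmetic behind Baker's eight-to-one example is simple: if each flow's weight is proportional to its precedence value plus one, a precedence seven flow outweighs a precedence zero flow eight to one. A sketch:

    def fair_shares(link_bps, precedences):
        # each flow is weighted by its IP precedence value plus one
        weights = [p + 1 for p in precedences]
        return [link_bps * w / sum(weights) for w in weights]

    # two flows sharing a T-1: one at precedence 7, one at precedence 0
    print(fair_shares(1_544_000, [7, 0]))
    # -> roughly [1372444, 171556]: an 8-to-1 split, as Baker describes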

RSVP as a signaling protocol under which these algorithms can run is discussed, as are the settlement issues RSVP raises. Baker explains how virtual channels having different bandwidth allocations or different QoS characteristics may be established between routers. With a hook in the router, it becomes possible to direct one class of packets down a "fast" (CBR) virtual circuit and another down a "slow" (ABR) virtual circuit. In this way a major provider could set up a portion of its backbone to provide best effort delivery and use the remainder for more important quality of service traffic.
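
A toy illustration of such a hook: a classifier that steers packets onto one of two pre-established virtual circuits by precedence. The cutoff value and circuit names are our assumptions, not anything Cisco has specified.

    FAST_VC = "cbr-vc-1"    # constant bit rate circuit: the "fast" path
    SLOW_VC = "abr-vc-1"    # available bit rate circuit: best effort

    def select_vc(ip_precedence):
        # send the top precedence classes down the CBR circuit (cutoff assumed)
        return FAST_VC if ip_precedence >= 5 else SLOW_VC

    print(select_vc(7))   # -> cbr-vc-1
    print(select_vc(0))   # -> abr-vc-1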

VPNs on Single Provider Networks the Best Initial Use for QoS? pp. 8-11

Scott Bradner explains the implications of Integrated Services and RSVP. Because of the political and economic problems of settlement negotiation and accounting across multiple providers, this technology may find its first use, by the summer of 1997, within the network of a single provider. There it could be used to guarantee a corporation bandwidth between offices if that corporation decided to replace its leased lines and run a virtual private network over the public Internet -- something that would not be acceptable without QoS guarantees.

Design Considerations for Quality of Service, pp. 12 - 15

We interview Noel Chiappa, the developer of the multi-protocol backbone and router. Noel outlines some of the design considerations involved in establishing a workable Quality of Service model for the Internet. While using an algorithm like weighted RED would involve little more than setting precedence bits that are already in the IP packet header, additional strategies for establishing guaranteed performance for certain IP applications would require IP packet headers to carry additional information. If the amount of information crammed into the header becomes too great, the overhead imposed by the header on the system will slow down transmission.

One fix is to put this information, or "state," into the routers instead of into the packet headers. Tag switching attempts to do this. However, if the total amount of state -- the combined effect of the information for a single transmission and the cumulative information for hundreds or thousands of such sessions -- grows too large, the resulting hardware and software demands may exceed the router's ability to function in a reasonable way. For this reason widespread usage of RSVP across the networks of several providers may produce more "state" than backbone routers can comfortably handle. This is one more reason why use of RSVP within a single network will be viable long before implementation throughout the Internet.
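
Some rough arithmetic makes the point concrete. Assuming, purely for illustration, 500 bytes of router state per RSVP reservation:

    BYTES_PER_RESERVATION = 500    # assumed figure, purely illustrative

    for flows in (1_000, 100_000, 10_000_000):
        mb = flows * BYTES_PER_RESERVATION / 1_000_000
        print(f"{flows:>10,} reservations -> {mb:>6,.0f} MB of router state")
    # a single corporation's VPN stays tractable; Internet-wide per-flow
    # state quickly reaches memory sizes today's backbone routers lack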

Automotive Network eXchange Set to Begin Certification Process, pp. 16, 24

Bob Moskowitz explains plans to choose an overseer to work with the automotive industry on the certification of ISPs. Comparatively few ISPs are expected to be able to meet Automotive Network eXchange certification standards. Nevertheless ANX plans to exert pressure on the industry to use certified providers.

American Registry for Internet Numbers, pp. 17-18

A move has been made to bring the Americas into line with IP number registries in the rest of the world. The function is to be removed from Network Solutions and centered in a new non-profit body.

Internet Policy and Technology for K-12, pp. 19 - 22

A debate between Jeff Michka and Ferdi Serim. (Part 1. Part 2 in the March COOK Report.)

Internet II, p. 22

A brief discussion with Scott Bradner on the focus and goals of Internet II.

 

Domain Name System Under Stress, IAHC Position Weak, pp. 1-6, 24

With the completion of the International Ad Hoc Committee's final report, calling for a Council of Registrars and the creation of a shared database system for up to 28 registrars of domain names, the 18-month-old dispute kicked off when Network Solutions began to charge for domain names in September 1995 is coming to a head.

We present a detailed history of the solicitation leading to the InterNIC award. We examine the role of NSF from the start of the award in April 1993 to the present and conclude that its actions were reasonable and proper. Using a network of sources we describe the current positions of NSI, the IAHC, and the AlterNIC "glitterati" -- the last of which we conclude are not serious players. While we are not sympathetic to NSI's plans to capitalize on their good fortune with an IPO, we find no reason to condemn their stewardship over the .com, .org, and .edu domains.

While we think the goals set by the IAHC are worthy of support, we believe that they face difficult odds that may scuttle their effort. Although the seven new top level domains they are supporting will increase the range of choices for Internet users, the IAHC faces formidable legal, financial, technical and organizational hurdles in implementing them.

In evaluating the IAHC's chances of success one must ask two things. First, where is it to get the considerable sum of money required to build its infrastructure and make it work in a very short period of time? Second, who will register which domains, and how quickly will data from the new registrars for the seven IAHC-sponsored top level domains start being added to the databases of the root DNS servers? The IAHC side has indicated that it would like NSI to join CORE and add .com to the domains registered by the CORE registrars.

As soon as the IAHC CORE system has an operational shared database, we'd like to see NSI join and do just this. However, when one looks at the stark reality that NSI would have to give up control over its pre-eminent cash cow in order to do so, we think it very unlikely that this will occur in the short term. After all, the shared database system has to be built first, and it is not clear where the sponsors will find the money to do so. If they do succeed in building a working system that gains acceptance, IANA and community pressure could force some changes. The biggest immediate question is how the entire system will weather a legal challenge.

The IAHC will be lucky to escape being sued, and if it is sued, there is one very weak link in the whole process: the authority chain -- in other words, the IANA, Jon Postel. The IANA's endorsement is what gives NSI's registry its value. The operators of the root DNS servers are willing to accept Postel's recommendations as authoritative: they carry the registry database(s) that Postel asks them to carry.

But what if a court were ever effectively to say to NSI: the authority of the IANA is in dispute and/or no longer valid; therefore you may no longer use the IANA as your authority in asking the root servers to carry your database? Or, perhaps more likely, what if a court were to say to the operators of the root servers: you may no longer restrain trade by accepting only data that the IANA deems authoritative?

In a worst case scenario, you might then find multiple groups with databases of questionable quality insisting that they be added to the root servers. If the databases conflict and the system falls apart, too bad. This probably won't happen. But it could. Being fully aware of the dangers may be the best way for all parties to avoid disaster.

Noel Chiappa on the Scaling Problems of Current Routing Technology, pp. 7-15

In a long interview Noel analyzes the problems of the current technology, with its premium on routing gurus. The more routers there are in the defaultless core of the net, the longer a flap takes to stabilize. At the same time, the growing number of routers increases the likelihood that some router somewhere will flap more and more often. The problem presents two curves that, undisturbed, will intersect. Of course they cannot be allowed to do so, because at that point the net would crash. To keep them from intersecting, Noel believes that either aggregation must be more rigorously applied or new routing technology must be developed to replace BGP4. He explains why the new technology approach will be very time consuming and therefore difficult. (As we prepared to publish this discussion, it occurred to us that route dampening might be a third option.)
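
For readers unfamiliar with dampening, the sketch below shows the basic mechanism as generally described: each flap adds a penalty, the penalty decays exponentially with a configured half-life, and the route is suppressed while the penalty remains high. The constants are illustrative, not any vendor's defaults.

    import math

    PENALTY_PER_FLAP = 1000
    SUPPRESS_LIMIT   = 2000    # suppress the route when penalty exceeds this
    REUSE_LIMIT      = 750     # re-advertise once penalty decays below this
    HALF_LIFE_SECS   = 900     # penalty halves every 15 minutes

    def decayed(penalty, elapsed_secs):
        return penalty * math.exp(-math.log(2) * elapsed_secs / HALF_LIFE_SECS)

    penalty = 3 * PENALTY_PER_FLAP      # a route that has flapped three times
    for minutes in (0, 15, 45):         # it starts out suppressed (3000 > 2000)
        p = decayed(penalty, minutes * 60)
        state = "suppressed" if p > REUSE_LIMIT else "advertised again"
        print(f"t+{minutes:2d}m  penalty {p:6.0f}  {state}")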

In any case, for those who have never had to use a router, we believe the discussion is an extremely informative guide to the ways routing technology is applied in today's network.

Curtis Villamizar on the Difficulty of Aggregation, p. 16

In a response to a journalist's query about Cisco withdrawals of routing information, Curtis explains the difficulty involved in large scale aggregation and efforts to deal most effectively with route flaps. The document comes from a Merit web page and is used with his permission.

Tony Bates Explains the CIDR Report, pp. 17-20

After an interruption of several months, Tony Bates restarted his weekly CIDR Report last fall. The report serves as a useful tool for showing which ISPs are implementing CIDR aggregation extensively and which are not. He notes that after four years of "CIDRization," only 10,000 of the 40,000 routes announced are CIDR aggregates. He suggests that with reasonable effort 10,000 routes could be cut from the total announced to the defaultless core. He describes how the report also serves to identify inadvertent leakage of routes. Such leaks are significant: when they occur, they are often in the 500 to 1,500 route range. He suggests that routers will soon be capable of handling up to 250,000 routes. The implication is that if the routes of the defaultless core are to continue to grow significantly, routers in the core will continue to grow in number and dampening will have to be used increasingly to avoid having the net taken down by ever more frequent route flaps.
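
A miniature example of what aggregation buys: four contiguous /24 announcements collapse into one /22, cutting the announcement count by three. The prefixes are invented; the sketch uses Python's ipaddress module.

    import ipaddress

    # four contiguous class-C-sized announcements
    routes = [ipaddress.ip_network(f"204.17.{i}.0/24") for i in range(4, 8)]
    print(list(ipaddress.collapse_addresses(routes)))
    # -> [IPv4Network('204.17.4.0/22')]: one announcement instead of four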

LEC Charging Policy Questioned pp. 21 - 22

Jack Buchanan of the University of Tennessee, Memphis questions local loop pricing policies that charge more for digital circuits even though they are less costly to provision and have less impact on local Central Offices than analog POTS equipment. Jack also offers useful insight into the fallacies of the universal service provisions in the 1996 Telecom Reform Act.

 

 

Wireless Becoming Viable Local Loop: Dave Hughes Reports on NSF Wireless Field Tests -- Progress in Spread Spectrum Technology Outstrips FCC Policy Process -- Availability of $500 T-1 Radio Modem in '98 Could Mean PSTN Bypass for Current Dial up Customers, pp. 1-10

For the past 20 years the voice of Dave Hughes, the "Cursor Cowboy," has been one of the most eloquent and creative in Cyberspace. Now, as Principal Investigator of the National Science Foundation funded K-12 wireless field tests, Hughes is beginning to exert an influence on policy making. In a long interview with the COOK Report, Hughes discusses his findings from the project so far.

He explains why spread spectrum gets high performance at low power levels and is resistant to both interception and interference. He also explains how unduly restrictive FCC provisions have increased the cost and lowered the performance of these radios at the very time the Administration and the FCC declare their support for affordable K-12 educational and local community networking. Because the radios can use TCP/IP, run at speeds of up to ten megabits per second, and can transmit up to 20 miles line-of-sight in the unlicensed ISM bands, they can serve as an alternative local loop (replacing the dial-up modem or dedicated telephone line) for Internet access. The higher speed radios come with their own routers built in.
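
The low-power performance comes from processing gain, conventionally figured as ten times the log of the ratio of spread bandwidth to data rate. A quick calculation with illustrative figures (not the specs of the NSF test radios):

    import math

    def processing_gain_db(spread_bw_hz, data_rate_bps):
        # standard spread spectrum processing gain formula
        return 10 * math.log10(spread_bw_hz / data_rate_bps)

    # e.g. a 1 Mbps data stream spread across a 26 MHz ISM band
    print(f"{processing_gain_db(26e6, 1e6):.1f} dB")   # -> 14.1 dB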

Through trials with public school systems in the rural San Luis Valley of Colorado and with urban schools and libraries in Colorado Springs, Hughes is evaluating the radios' ability to serve as a backbone link from an ISP to a school system, as zero-cost links between the schools of a district and within each school, and finally as high speed but free links from the schools to teachers and students at home. For two of these uses, he explains why point-to-multipoint communication capability is a requirement.

He explains the FCC rule making process, in which Apple asked for an NII band for community networks without stressing the need for power sufficient for point-to-multipoint communications, and then unaccountably failed to submit the technical backup required to support its claim that its proposal would do the job. He details the requirements of the final rulemaking, announced in late January 1997, for what is now called the U-NII band in the 5 gigahertz range. In it, in a direct challenge to George Gilder, FCC Chairman Reed Hundt calls on manufacturers to produce radios with speeds of up to 20 megabits per second that can reach across communities and between schools while minimizing interference. But at the same time the FCC severely limits the power at which they may operate, and thereby cuts their point-to-multipoint range.

In an open letter to FCC Chairman Reed Hundt, Hughes asks why no representatives of the forty spread spectrum manufacturers were invited to the FCC bandwidth forum on the alleged overcrowding of LEC central office switches by Internet users. If the NSF funds another proposal now before it, TAPR, the major digital ham radio organization in the US, will by this time next year begin licensing the production of T-1 radio modems costing $500 or less. ISPs could then very affordably shift a significant portion of their local dial up and leased line users from the PSTN to wireless. In some cases ISPs could also use the ten megabit radios to replace their wireline connections to their upstream provider and completely avoid paying their LEC any local loop charges.

Hughes is contemplating bringing the 40 manufacturers into an effective industry association and getting them into a face-to-face meeting with the FCC before the end of this year, in the hope that the FCC will see the need to enable an American manufacturing base that could, in addition to supporting domestic use, export 100,000,000 US-built radios per year to third world countries. Such radios could bring undreamed of communications capability to large parts of the world while stimulating US trade and bringing domestic radio prices down.

Paul Mockapetris on QoS, @Home, Internet Cable ROI & Cable Modems, pp. 11-14

Paul Mockapetris shares some wide ranging opinions on QoS issues. He notes that @Home buys PVCs from Sprint for its 45 megabit backbone. He comments on efforts of consumers (including Ira Richer at CNRI) to impose some performance criteria on their upstream providers. He explains how @Home is applying ROI analysis to its trials. If the results look good, he assures us that the need for very large amounts of capital for modernizing the plant of the cable industry will be easily met. He summarizes recent progress in cable modem standardization (almost none) but explains why prospects look better in 1997. He concludes with an explanation of features available to users of @Home.

Craig Labovitz Explains Merit's Efforts to Measure Internet Performance, pp. 8-11

In an interview with the COOK Report, Craig talks about the funding of probe platforms at some public exchange points and other interesting spots in the network to better understand how to cut down on packet loss. Work on the Internet Routing Registry continues; part of the thought behind it is to create an acceptable way to avoid hand editing backbone routing tables, where a single mistake can cause massive routing instability. Craig also describes NetNow as a means of enabling participating ISPs to better understand the performance of their own networks and thereby improve their operations.

Big Five Urged to Consider Changes in Peering at Public Exchanges, pp. 18-19

Paul Vixie on NANOG mentions complaints he has received about restrictive peering on the part of some of the big five. He suggests that the industry approach this subject with considerable care lest it wind up regulated. "We really do just need to send each other local-region routes, which keeps local traffic local, does not give away wide area telecom to noncustomers, takes away some causes for lawsuits and new legislation, and moves us back to a level playing field where folks without wide area networks have to buy transit and do so without complaining."

Beta Test of Filtering at MFS FDDI Based Public Exchanges to Begin in Feb., pp. 20-21

We provide our own explanation of how single party peering and failure to implement next-hop-self in one's routing can be used to get "free" peering with most of the ISPs at an FDDI based exchange. We conclude with the NANOG discussion of the new MFS service designed to prevent this "theft of service" at MFS exchanges.
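
The sketch below is our toy model of the exploit: if peer B re-advertises routes learned from C without rewriting the next hop to itself, peer A's traffic for those routes goes straight to C's router across the shared FDDI medium, even though C never agreed to peer with A. All addresses are invented.

    C_ROUTER = "198.32.136.30"   # C's interface on the exchange ring (invented)
    B_ROUTER = "198.32.136.20"   # B's interface on the same ring (invented)

    def next_hop_seen_by_A(advertiser, learned_next_hop, next_hop_self):
        # the next hop peer A installs for a route B advertises to it
        return advertiser if next_hop_self else learned_next_hop

    print(next_hop_seen_by_A(B_ROUTER, C_ROUTER, next_hop_self=False))
    # -> 198.32.136.30: A forwards straight to C, "free" peering with C
    print(next_hop_seen_by_A(B_ROUTER, C_ROUTER, next_hop_self=True))
    # -> 198.32.136.20: traffic goes through B, as the peering intended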

Sean Doran Suggests a Tier 1 vs. Tier N Strategy, p. 22

In a letter to the Editor, Sean Doran suggests an interesting strategy whereby would-be Tier 1 providers might buy connectivity to the big five at multiple places instead of concentrating on expanding their own backbones enough to get free peering with them. If they made five year purchases, they could try to gain sufficient market share during that period to force entry into the top tier at the end of their contracts.

Heker & Sale of GES, pp. 14, 22

An interview with a source at WNA makes it clear that Sergio Heker is completely removed from operation of and customer contact with GES. Our interview was prompted by an ambiguous announcement to customers.

 

 

Clinton Administration Embraces DNS Tar Baby, Magaziner & OMB Responsible -- Action Derails Agreement with Network Solutions & NSF to End Co-operative Agreement on April 1, 1997

Ill-Considered Move Halts Formation of ARIN IP Registry -- Critics Say Action Deprives IANA of Opportunity for Legal Foundation of Authority & Endangers Stability of the Internet by Putting IP Numbers at Risk, pp. 1-5

Thanks to the meddling of what one hopes are well-meaning folk -- Ira Magaziner's Internet task force at the White House and an inter-agency task force centered at OMB -- we are faced with a potentially dangerous situation for the Internet. It is no secret how badly the Domain Name System is about to become fouled up after a year and a half of squabbling among competing bodies. What is not broadly understood is that NSI runs the IP registry for the Western hemisphere and feeds content to the "dot" root servers for the world, which are located at NSI but owned by IANA. These are functions in which there is no longer any legitimate reason for the US government to be involved. But they are also so critical to the operation of the Internet that they must be moved very promptly to a separate and neutral body, independent of NSI and unable to be dragged under by the waves of litigation now threatening everyone involved with the Domain Name System.

After talking with numerous sources familiar with the events of the last two months, we are convinced that policy coming from the White House has inadvertently put a stop to plans that had moved far enough along that this removal of the registry functions from NSI could have happened in a way that would benefit the worldwide Internet community. Fallout from the action has halted plans under way that would have, very shortly, resulted in the establishment of an independent American Registry for Internet Numbers. The establishment of ARIN also means that, for the first time, the operations of the IANA could become institutionalized and gain a sounder international foundation.

Putting a hold on the establishment of ARIN renders the authority of the IANA more liable to court challenge and leaves the payroll, database, and control of the IP number registry process in the hands of a commercial company (NSI) that also runs the original .com and other global top level domain registries for the entire worldwide Internet. As someone closely involved with ARIN told us: "The real danger is that numbers are being subsidized by domain names, and domain names are about to become a disaster."

While NSI has shown no signs that it cannot or should not be trusted, it is improbable that NSF oversight of NSI will extend beyond the current agreement, whereas the need for NSI as a stable registry operation amid an impending sea of change in the Domain Name arena will continue.

Therefore, it is reasonable to assume that NSI will sooner or later be granted full independence from NSF oversight. When this happens, leaving the power inherent in both the DNS and the ARIN functions in the hands of a single corporation would be unwise. Also, while one hopes the chances are small that anything serious will happen to the viability of Network Solutions, its DNS database performance during the last half of March has been horrendous, with major names that had already paid being removed from the root servers for non-payment -- something that has led to disruption of service for many of the entities involved. In the litigious atmosphere that surrounds this whole environment, Network Solutions will surely be a target.

In an exclusive interview on March 27 with Don Telage, President of NSI, we were able to establish, with some degree of precision, that at the end of February the National Science Foundation and Network Solutions had reached an agreement in principle to bring the NSF/NSI cooperative agreement to a conclusion a year early, on April 1, 1997, and to establish and fund during a transition period the American Registry for Internet Numbers (ARIN), which would have been freed from NSI control on April 1, 1997. Unfortunately, the administration's move to find a fix for DNS (discussed later in our full article) caused all forward movement between NSI and NSF to cease on Monday, March 3. Since then the situation has become much more difficult, and the freeing of ARIN, part of a package deal that was acceptable to both sides at the beginning of March, looks far less acceptable to NSI now as a stand alone option. [Editor's note: we have here confirmation of the damage that the administration's ill advised meddling has done. We, and we hope the entire Internet, will be watching closely to see what they do to fix the mess they have created.]

ARIN will temporarily cover Latin America and sub-Saharan Africa. The ARIN organizers are working with both areas to help them set up their own regional registries. Then, under the auspices of IANA, there would be five registries: AfriNIC, ALyCNIC, APNIC, ARIN, and RIPE. Leaving IP registration for the western hemisphere and sub-Saharan Africa indefinitely under the aegis of NSI under the current stressful conditions does not make sense. If anything disastrous happened to the viability of NSI, IP registration and dot operation could be set up elsewhere within 48 to 72 hours -- if the people and hardware were available. But during such a transition there would likely be substantial disruption of Internet service worldwide. Also, during such a move, assignment of new numbers would not take place, and that process would take longer to get back to normal.

In a conversation with a White House source on March 25 we learned that the Administration has decided that the Federal government needs to study the DNS and solve a problem for the Internet community that the community has otherwise been unable to solve for itself. Unfortunately it appears that Magaziner's group has been listening to the positions that Tony Rutkowski and the corporate lawyers of the Internet Law and Policy Forum have been promoting both on the network and off line. The source maintains that the inter-agency task force is unaware that in grabbing the DNS tar baby it has also grabbed, and derailed for the time being, ARIN.

In derailing ARIN the group is undertaking actions that pose some risk to the stable operation of the worldwide Internet. That stability can be ensured only by swift action to resume the establishment of ARIN and, in the face of a likely onslaught of DNS-related lawsuits, to create a Global Council of IP Registries that would internationalize the IANA, with members taken initially from the three regional registries -- the European (RIPE), American (ARIN), and Asian (APNIC) IP registries -- and other regional registries added as they develop.

Background Data Relevant to DNS ARIN Controversy, pp. 6-11

We publish several pieces relevant to the cover article. First is the portion of the Ira Magaziner CNet interview dealing with network governance, along with our commentary on his muddled views. Next is a summary of some of the problems of the NSI databases over the past two weeks, with a report from George Herbert on NSI's answers to his questions about what went wrong. There follows a debate among Tony Rutkowski, Dave Crocker, and Michael Dillon on the IAHC. Next is information about the IO Design lawsuit, and finally our early March editorial on the NSF Inspector General's attempts to make Internet policy for the NSF.

John Curran on FCC and ISPs, pp. 12-15

In an interview John gives the best analysis that we have seen of the problems behind Bell Atlantic's and Pac Bell's requests to have the FCC lift ISPs' exemption from local access charges, an action that could increase more than tenfold what an ISP has to pay each month for dial in lines. John concludes that such action would be likely to kill the dial up IP industry. He suggests that the FCC should instead be paying close attention to the infrastructure that ISPs are using to remove their traffic from the PSTN. He also points out that the RBOC complaints' focus on ISPs ignores the vast use of corporate dial in modem pools, which stress the PSTN during the day rather than in the evening off-peak hours where dial in ISP use is concentrated.

Interviews Assess SprintLink, pp. 16-22, 24

We interview Brad Hokamp, Benham Malcom, and Hank Kilmer to assess the position of SprintLink in the wake of substantial reorganization over the past year. We are encouraged to hear that Sprint has no plans to discontinue flat rate pricing on leased lines, as three of the four other largest providers have done. (MCI is making the change; BBN and UUNET instituted tiered pricing quite some time ago.)

Kilmer explains in considerable detail the new operational management structure underlying SprintLink and Sprint's other IP services, including desirable changes made as recently as this February. He also explains SprintLink's current thinking on Quality of Service technologies and on the use of ATM in the SprintLink backbone, which currently runs packets (POS) directly over SONET. ATM will not be incorporated until some of its performance issues are resolved.

Letter to the Editor

March 31: Editor's Note: We received the following communication this morning from Al Gidari on behalf of the ILPF, in reference to remarks in our May 1997 cover article about the apparent endorsement by the ILPF of what has been going on with government involvement in DNS.

"Tony, like every other lawyer involved in ILPF, has the right to discuss any issue they please. To my knowledge, Tony has never represented to anyone that on this or any other subject he speaks for ILPF. That is because he does not. The ILPF has not taken an position on any issue yet. We can't because we have not completed study on any issue and have not presented anything to the membership at large to consider. We are still finding out what works best and 1997 should prove to be a good year for testing process. On the Domain issue, we will neither refute nor support Tony because we have no opinion, not having been asked by anyone to study testing process. On the Domain issue, we will neither refute nor support Tony because we have no opinion, not having been asked by anyone to study the problem and not having raised it within the membership as one we needed to act on immediately if at all. We do not know whether lawyers who understand the Net and the issue would help or not in the process. Many such as Tony might have an opinion in that regard but the ILPF has not taken a stand on it, and has no obligation to do so, and will not do so simply because you bait it in your article. Now, you can choose to print the truth or not, but you have been given a straight answer. You put the burden on the wrong party Gordon. . . . . Had you called me or e mailed me you would have gotten the above answer. Please feel free to do so in the future and I will be sure you get the straight information. If Tony or Ira or any other party thinks theprocess being developed by ILPF can help in this issue, then the members would decide whether to be involved. No one has asked and ILPF has no opinion in the absence of a request, member approval, a process being invoked that included public review and a final recommendation/position being approved by consensus of the members."

Al Gidari

The Editor thanks Mr. Gidari for setting the record straight. Given time constraints we found there to be a limit to the number of people and issues that we could cross check. We have strong opinions, but those opinions include putting all authoritative information that we receive on the "table" in public view. This statement from Al Gidari certainly seems to qualify and to be worth sharing.