A Practical Navigator for the Internet Economy

Global TeleSystems Emerges as Pan European IP & DWDM Carrier's Carrier

Ebone's IP Know How & Hermes Railtel Fiber Yield Largest Greenfield European Network -- Critical Issues & Future Concerns,

pp. 1-9

We interview Frode Greisen and Sean Doran of Ebone. The interview traces the development of Ebone as the first major builder of IP infrastructure in Europe. It describes the development of Ebone's relationship with Hermes Europe Railtel and the acquisition of both (completed in May 1999) by GTS (Global TeleSystems). GTS is currently the largest carriers' carrier in Europe. It leases access to dark fiber using DWDM technology, letting customers buy a single wavelength of light, and it is the only carrier in Europe offering international DWDM links to customers. It also sells IRUs to fiber strands with optional SONET support. Because it can manage intercity and international fiber links at the lower levels of the protocol stack, it can offer customers a level of coordinated reliability that was difficult to achieve when responsibility for management of international lines terminated at the borders of each national PTT. GTS has been running operational OC48 (2.5 gigabits per second) links between Brussels, Amsterdam, Frankfurt, Geneva, Paris and London since February 1999.

At POPs in each of the cities they serve, they run matched pairs of Cisco 12000s that transmit on the international DWDM links. The Cisco 12000s function as backbone boxes and are fed by a collection of Cisco access boxes, usually 7200s or 7500s. The Ciscos transmit over DWDM between cities - essentially virtual dark fiber. They use SDH framing, which is inexpensive because the only SDH-speaking devices are the Cisco line cards in the backbone routers. These talk to one another over point-to-point dark fiber and the DWDM gear that provides the virtual dark fiber between the cities. The line cards use SDH or SONET framing because that is what line monitoring equipment expects. Monitoring of SONET Add Drop Multiplexers (ADMs) allows network staff to diagnose and repair problems at the DWDM level of their network. A discussion of peering emphasizes the importance of good connections to UUNET. GTS achieves this in part by buying transit from Sprint's ICM network.

The end of the interview offers a discussion of two different approaches to scaling Internet architecture. According to Sean Doran: "Building a network which is to be used for Internet connectivity (broadly defined) while ignoring issues of locality and future projections of traffic - things that can vary really widely over a long period of time - is difficult." Doran strongly disagrees with the telco model for implementing DSL, asking: "If you have to adjust your topology because of traffic issues, you will be in for a difficult time. How do you avoid under or over provisioning for five years from now with this kind of architecture?"

He adds: "What you need to do is deploy an infrastructure locally that can scale into having high speed access to every household (or business). That is, you assume that every household may move several megabits per second of data onto and off of the network and that they all might be moving all their data at the very same time. This is a model that makes traditional network design people nervous. It makes me nervous too because their models simply cannot scale to the necessary size."

Later in the discussion Doran adds: "My preferred model spreads the stuff that interacts with your customers around a CLEC. Rather than having a small Cisco box in a huge stack of small Cisco boxes sitting in your telco facility with individual lines radiating out from your telco facility to individual customers of yours, I prefer the model where you radiate all your access boxes out along your telco fiber. That is to say, you virtualize your telco facility along infrastructure that you are deploying into the ground on a city by city basis. The goal is to bring the edges of your network as close to the users as possible."

Doran talks of two models of very large ISP design. One is the UUNET model. The other is the Sprint model described in the preceding paragraph. With this model used only by Sprint, GTS and Qwest, regulators would be ill advised to let Sprint's IP networks disappear inside of WorldCom. Doran admits that each design philosophy has its strengths. He warns, however, that we should not allow one of the opposing philosophies to become really dominant, lest it break and we lose our ability to continue to rapidly scale Internet architecture.

He continues: "In the contrasting model, the intelligence is in your network. Your network is a big mesh so, when an incoming packet arrives at your edge router, you have a virtual circuit to the appropriate router on the egress path to the packet's final destination. You do not have a hierarchy of routers. Basically you have a flat topology where every router talks to every other router. However, the model where you have a very flat network (UUNET for example) and a maximal number of virtual circuits allows you to deal with hot points. Say because I don't have an adequate amount of capacity over here I am going to restrict the amount of packets that can be blasted by the virtual circuit owned by this customer over there.

With the flat architecture there are several breakage points. One is that you run out of the ability to have some given number of virtual circuits and you have to start constructing a hierarchy anyway. Or that the architecture simply doesn't buy you what you thought it would. The breakage on our hierarchical design side of things is that people behave badly and start blasting. We will then have to start policing them and forcing them to insert traffic into the network in a sensible fashion."
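
[Editor's illustration: a back-of-the-envelope Python sketch, entirely our own and not Doran's arithmetic, of the first breakage point. The core/edge split is hypothetical; the point is that a full mesh of virtual circuits grows roughly as the square of the number of routers, while a hierarchy grows only linearly.]

def full_mesh_circuits(n_routers: int) -> int:
    # every router talks to every other router
    return n_routers * (n_routers - 1) // 2

def hierarchical_circuits(n_core: int, n_edge: int) -> int:
    # core routers fully meshed, each edge router homed to one core router
    return full_mesh_circuits(n_core) + n_edge

for n in (50, 200, 1000):
    flat = full_mesh_circuits(n)
    hier = hierarchical_circuits(n_core=10, n_edge=n - 10)
    print(f"{n:5d} routers: flat mesh {flat:7d} circuits, hierarchy {hier:5d} circuits")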

The Story Behind Third Party Access to Cable Networks in Canada

by Francois Menard pp. 10-18

This article by Francois Menard covers the ongoing efforts in Canada to determine the conditions under which Canadian ISPs shall have access to broadband capable cable network infrastructure. Since the cable TV industry in Canada has for several years enjoyed common carrier status, the matter of network layer interconnection between what Menard refers to as the cable carriers and ISPs is subject to being decided on very different grounds than in the United States. Canadian cable carriers are under legal order by the Canadian CRTC to open their networks to ISP interconnection. Menard's article describes what he sees as efforts on the part of the Canadian cable industry to circumvent CRTC orders to unbundle access by ISPs at the TCP/IP level (layer 3) of the protocol stack.

As he discusses the evolution of the technical positions taken by the two sides, Menard clarifies the business models of cable carriers versus ISPs. He offers a number of useful insights into the difference, insights that lead him to some important and regulatory paradigm shifting conclusions.

These are:

1. That the telco and cable businesses are built from top to bottom as individually owned, vertically integrated and controlled networks;

2. that having dealt with such networks for more than half a century, the regulatory mind set is framed by this vertical architectural structure;

3. that this mind set looks at the Internet vertically rather than horizontally, according to each individual layer of the protocol stack;

4. but that this mind set is now a flawed and economically hostile way of approaching what is becoming the business model of the Internet - namely service providers, each responsible for its own network and practicing non discriminatory interconnection;

5. that ISPs now offer new, extremely cost effective technology for building global stupid networks by means of these horizontal interconnections;

6. and that unless regulators come to think both in horizontal layer 3 terms and in terms of layer 3 unbundling to facilitate layer 3 interconnects between ISPs, using network infrastructure that may or may not be owned by a third party, the Internet revolution may be short circuited by the triumph of the economic interests of the old, vertically integrated legacy networks.

Menard hopes by February 2000 to demonstrate this in clear detail in a cable carrier follow-on study to his "Netheads versus Bellheads" document of mid-1999. His paper in this issue of the COOK Report is a first step in that direction.

Secret Meeting Shows ICANN - IBM Dependence

Two IBM VPs in Presence of Four Internet Dignitaries Set Stage for NSI's September Capitulation to ICANN

ICANN Emerges as IBM - US Gov't Brokered Internet Equivalent of WTO - Lessig Shows Internet as Potential Victim of Public Private Manipulation

pp. 19 - 40

In a very long article we summarize our knowledge of the ICANN debate. The article uncovers the participants and some of the details of the secret meeting of July 30, 1999. This meeting, sponsored and brokered by IBM, shows that ICANN, far from being a consensus organization, is the creature of IBM's need to control the framework of e-commerce in the 21st century.

Those interested in Internet governance should pay careful attention to Larry Lessig's new book, Code and Other Laws of Cyberspace (Basic Books, 1999), which finds that he who controls the code on which cyberspace is founded will control whether freedom can exist in cyberspace. Lessig pounds home this conclusion again and again. We find it fascinating that Lessig ignores ICANN, for we note the reason for ICANN's being in such a hurry: it knows what Lessig knows about ownership and control. It must craft its architectural code on behalf of e-commerce and government before the rest of us awaken.

Lessig writes: "cyberspace [is changing] as it moves from a world of relative freedom to a world of relatively perfect control. . . . The first intuition of our founders was right. Structure builds substance. Guarantee the structural (a space in cyberspace for open code) and (much of) the substance will take care of itself. . . . We are just beginning to see why the architecture of the space matters -- in particular why the ownership of that architecture matters."

Editor's Preface

ICANN is now "fully formed." With Network Solutions signed on as an accredited ICANN registrar and obligated to pay it nearly three million dollars in domain name taxes per year, ICANN need no longer fear bankruptcy. ICANN may now proceed with an Internet-wide system of domain name registration under its control. It has won act one. Whether it will be able to "win" act two and enforce and expand its powers to become, for its masters, a global Internet regulatory agency remains to be seen.

ICANN has gotten to its current position by a complex process of lobbying in Washington and Europe, one that we have spent the past three years and upwards of 300 pages of the COOK Report documenting. In this article we review the entire chain of events in order to paint as accurate a picture as possible of how a tiny clique has managed to put in place a structure that is now positioned to become a global regulatory body for the Internet.

This article also covers a July 30 secret meeting run by IBM at a Washington DC hotel. At this meeting two IBM Vice Presidents met with NSI's CEO and a Science Applications International Corp (SAIC) Vice President in the presence of senior Internet statesmen Dave Farber, Bob Kahn, Brian Reid and Scott Bradner. ICANN and NSI had spent the previous two months on a collision course over whether NSI would have to capitulate to the demands contained in ICANN's registrar accreditation agreements. These demands threatened the viability of nearly all of NSI's income stream. NSI had both the reason and the resources to sue ICANN, the two sides having clashed acrimoniously in front of Congress less than ten days before. It is no exaggeration to say that the fate of both ICANN and NSI was at stake.

As everyone knows, the suit did not happen, and less than two months later the collision course had become a marriage as NSI signed an agreement accepting ICANN's control and assuring ICANN of the money it needed to survive. It is believed that the July 30 meeting began the events that led to the late September marriage. We note that at the most critical moment in the struggle for control of the DNS system and the future of the Internet the opponents were not ICANN and NSI. It was IBM against NSI, with John Patrick, VP of IBM's Internet division and chair of the IBM- and MCI-led Global Internet Project, backed up by Chris Caine, IBM VP of Governmental Programs and head of IBM's 40-person Washington lobbying office.

It certainly looks to us like the crux of what lies behind the "window dressing" is the raw power of IBM. On December 9 we received an email containing the following text:

"Gordon: The July 30 meeting was called by John Patrick, who also ran that meeting. It was attended by John Patrick, Chris Caine, Jim Rutt, Mike Daniels, Brian Reid, Bob Kahn, Dave Farber, Scott Bradner, and an ICANN representative. Cerf was not there. It was held at the Hay-Adams Hotel. My impression of the meeting was that its entire purpose was to bully NSI into signing ICANN's agreement. It was entirely Patrick's meeting. Kahn, Reid, Farber, and Bradner were there as observers. The only negotiations that took place were between John Patrick and Jim Rutt. As far as I can tell the others were invited to this meeting for the same reason that Jimmy Carter is invited to South American elections." [End of 12/9/99 email.]

We contacted some of the people named in this message. When we reached Reid, he confirmed that he was at the meeting. When we read him the paragraph above, he asserted that he did remember seeing all of the aforementioned people at the meeting. He said that "one of the IBM representatives had asked that the meeting and its contents be kept secret," but that he "was fairly jet-lagged" and "didn't remember the details of the secrecy request." He added that "there ended up being no secrecy agreement, at least nothing written." Reid described his memory of the meeting as being "a dialog between John Patrick and Jim Rutt, but [he] couldn't specifically remember any of the things they had said to each other." Jim Rutt also confirmed his attendance at the meeting. He said: "It was from my perspective a benign and positive sharing of points of view by some experienced people around the DNS management issue. I found it quite useful and constructive."

Let's identify the persons listed. Mike Daniels is Chairman of the Board of Network Solutions and an SAIC Sector Vice President. John Patrick is familiar to readers of the COOK Report as the spearhead of IBM's GIP and ICANN building operation. Chris Caine is Vice President, Governmental Programs for IBM, and is based at IBM's K Street Washington DC office. This is Caine's first appearance in the ICANN NSI saga. We find that appearance to be quite interesting since Caine's office, with its 40 employees, is responsible for IBM's lobbying and government relations programs. His appearance at this meeting appears to us to elevate the importance to IBM of ICANN's success. Jim Rutt is NSI's new president. Brian Reid, formerly the Director of Digital Equipment's Palo Alto networking laboratory, is a researcher in networking; as far as we can ascertain Reid has been a neutral observer of the governance wars. We describe Dave Farber throughout this article. Bob Kahn, as co-author of the TCP/IP protocol with Vint Cerf, has ties to DARPA, the telcos and the telecom industry in general. Scott Bradner is an IETF Area Director and an officer of the Internet Society (ISOC).

We are intrigued by the statement from our informant that "The only negotiations that took place were between John Patrick and Jim Rutt. As far as I can tell the others were invited to this meeting for the same reason that Jimmy Carter is invited to South American elections." Inviting men of the stature of Kahn, Farber, Reid, and Bradner as "observers" may be seen as an act of arrogance. But it may also have been an act designed to intimidate Rutt and Daniels, who were relatively new to negotiations among top level Internet power brokers. The very presence of these senior statesmen would serve to further elevate the seriousness of the discussions.

The July 30, 1999 meeting apparently belonged to the two IBM Vice Presidents. The pattern is quite familiar to veteran IBM watchers, who observe that when IBM doesn't know how to cope, it reverts to its classic pattern of control. Control of the meeting, of NSI, of ICANN, and of the Internet, we would add. But the fallout of IBM's behavior goes well beyond this meeting and reaches to the highest levels of the Clinton-Gore administration. The relationships extend back to Al Gore and Mike Nelson, who wrote the High Performance Computing legislation that Gore backed in his Senate days. We became an observer of Nelson's moves with regard to IBM and Gore nearly a decade ago and remind readers of the path that Nelson has traveled from the Senate Commerce Committee to the White House Office of Science and Technology Policy, to the FCC and finally to employment by IBM in its governmental relations programs.

The relationships also extend to the National Economic Council's Tom Kalil who met with Joe Sims and Esther Dyson on June 15, 1999 and promised to assist ICANN's fund-raising efforts. We note that Ira Magaziner explained to us in September 1998 that it was Kalil who (as part of the White House's preparation for the 1996 elections) asked him in 1995 to begin his research on electronic commerce and the Internet. When in March 1997 we were informed that Kalil was involved in Becky Burr's refusal to allow ARIN to be formed, we emailed Kalil and stated that we believed that he had an interest in seeing Al Gore elected President in 2000. We stated that his and Burr's policy on ARIN was in danger of breaking the Internet, told him why, and warned that if it didn't change and the ARIN issue exploded, we'd dog his footsteps with public reminders of what he had allowed to happen. He responded to this email and discussions began that turned the misguided policy around a few weeks later.

The relationships are tied to the administration's habit of promoting a public policy that hands off regulatory enforcement to industry for its own 'self-regulation' with the threat that if industry doesn't self-regulate, the government will step in and do it for them. Magaziner was a long time proponent of this premise. Beckwith Burr, from the consummately political law firm of Wilmer Cutler, was espousing it at the Federal Trade Commission in 1995, two years before she was transferred from the FTC to OMB and then to NTIA to wrest control of DNS and NSI from the National Science Foundation. It is now very clear that ICANN is not the legacy of Jon Postel. ICANN is the illegitimate offspring of IBM and the Clinton-Gore Administration - with the assistance of the Internet Society (ISOC) and Vint Cerf.

The Introduction to this article outlines why we hope that those who have a stake in a free and open Internet will grab the attention of the press and policy makers before an ICANN created and backed by IBM and the Clinton-Gore administration plants itself too firmly in place.

[more than a 25,000 word SNIP]

An Afterword - What, Why, and Wherefore

We have poured many, many hours into this article, which we view as a summation of everything we have learned about ICANN. It would have been far easier to have ignored the latest events. But how can one simply walk away from gathering storm clouds? While we may have offended some readers, we hope that we will have also made them think.

The Internet forces new ways of doing, thinking, looking and acting in many fields of human endeavor. Recall the insights of Clayton Christensen, the author and originator of the insight that some technologies are so disruptive that they will lead to the failure of even established, leading edge companies that cannot cope with them. In these terms the Internet is probably the most disruptive of all technologies. The power and money at stake extend well beyond what we could have imagined only a year ago.

The power on the side of those behind ICANN is overwhelming. It would be far easier and safer to fold the tent, admit defeat and disappear into the night. Yet doing so would be wrong. Is one to do what is safe or what is 'right?' It is easy to be cynical. And likely justified too. Yet it is hard to abandon the hopes and dreams of new, individually empowering and more democratic many-to-many communications. We write with the hope that while our work may be unsettling to some readers, it will cause far more readers to stop, to think and perhaps to re-assess their position.

It may not be too late to stop, to think and perhaps to re-assess one's position if more people begin to support and demand that the early dreams of the net continue to be respected. Hubris and the arrogance of power have brought down would-be rulers before. ICANN displays plenty of both. We need to take a lesson from the example of Brian Reid, who is quoted in Where Wizards Stay Up Late: "When you read RFC 1, you walked away from it with a sense of, oh this is a club that I can play in too. It has rules, but it welcomes other members as long as the members are aware of those rules." The language of the RFC was warm and welcoming. The idea was to promote cooperation, not ego. [Editor: We contend that 30 years later what stands in opposition to cooperation is raw economic, self-justifying monopoly power as evidenced in the case of modern IBM.]

"The fact that [Steve] Crocker kept his ego out of the first RFC set the style and inspired others to follow suit in the hundreds of friendly and cooperative RFCs that followed. 'It is impossible to underestimate the importance of that,'Reid asserted. 'I did not feel excluded by a little core of protocol kings. I felt included by a friendly group of people who recognized that the purpose of networking was to bring everybody in.'. . . . The RFC, a simple mechanism for distributing documentation open to anybody, had what Crocker described as a Œfirst-order effect'on the speed at which ideas were disseminated, and on spreading the networking culture."

Reid has squarely identified the standards of behavior that made the Internet so strong and so special. Behavior that is completely antithetical to the ICANN way of pigeon-holing people in committees to isolate and render them impotent. We urge our readers to sit down with Lessig's Code and Other Laws of Cyberspace, which is both a prophecy and a correct analysis of what may come. The Internet must find a way to route around IBM's and the White House's ICANN.

NANOG Attempts to Facilitate Peering Through Tools Developed by Bill Norton

p. 40

We reprint a portion of Bill Norton's October 17 report to the NANOG list on the peering BOF held at the Montreal NANOG meeting in early October. Norton appears to be developing tools to facilitate ISP peering. Through NANOG, ISPs should follow the effort and assist in its development.

 

Wireless Reaches Internet Critical Mass

Internet Use Jumps to Mobile Platforms as Spread of Digital Infrastructure Enhances Wireless Capabilities

We Survey Issues and Players in Internet Wireless Services

pp.1-6, 10

We interview Ira Brodsky, CEO of Datacomm Research and author of books on wireless communications. We survey the 1999 explosion in digital wireless technologies, summarizing mobile, fixed and wireless LAN technologies with particular attention to their impact on the Internet. Brodsky points out that, because of advances in digital technology, wireless broadband access to the Internet has become a reality. This means that virtually everything we do with wireline Internet connections we will also do with wireless.

The interview explains from a technical point of view what is done to achieve high bandwidth using TDMA and CDMA. It covers Triton Network Systems and LMDS. It shows, through a discussion of Phone.com and the WAP protocol, how cell phones are becoming web browsers. It describes PCS as well as Sprint's leadership in this cellular technology. It explains how Metricom intends to compete with Sprint PCS.

Looking at Europe, it describes TDMA, CDMA and GSM applications there. GSM is an enhanced form of TDMA that is popular primarily in Europe. However, many people doubt that it would stand up well against a well-aimed rollout of CDMA.

Wireless LANs are critical to the hopes of home networks of IP aware appliances. Costs are approaching $100 a node and speeds are approaching Ethernet. The viability of this market, however, is likely dependent on the outcome of the IPv6 deployment discussed in the IETF debate article included in this issue. It may also be impacted by how the network continues to scale as broadband moves into the edges of the network.

With broadband wireless as an end user option, one moves into the reality of access to the Internet being available from anywhere, at any time, under almost any conditions. Given the cost and time necessary for the installation of fiber-based local loop infrastructure, wireless is becoming a more and more viable local loop alternative. On January 5, 2000 Advanced Radio Telecom announced deployment of a 100 megabit per second IP network to connect high speed business LANs to backbones across the US. The ART broadband MANs will be deployed by year's end in ten cities across the US. ART will use Cisco-supplied Ethernet routing and switching products and configure its MANs in a self-healing ring architecture "capable of providing 200 Mbps of total bandwidth on its bi-directional paths."

According to George Gilder, the growth of these systems will be sustained by the introduction, 12 to 24 months from now, of chipsets based on Qualcomm's 2.4 megabit per second HDR data transmission technology, which he describes as a flavor of TDMA running in unused CDMA channels. According to Brodsky: "HDR is CDMA; there may be some time-sharing going on, but it would be misleading to call it 'TDMA.' It would be more accurate to say HDR runs on separate channels that can be either in the cellular/PCS spectrum or outside that spectrum. Saying 'unused channels' suggests it borrows channels from the voice system." Cellular coverage in the US has evolved to the state where basically any customer can roam nationwide by using an analogue phone that is also either CDMA or TDMA compatible. Finally, Brodsky is optimistic about the capability of a company like Cisco to sell wireless spread spectrum equipment to ISPs, who could use it to bypass the LECs' local loop stranglehold.

The Disruptive Internet: Triumph or Chaos?

Accurate Assessment of State of Internet 2000 Depends on Mix of Technology, Governance Efforts, & Network Engineering Issues

pp. 7 - 10

In our annual State of the Internet essay we examine the continued triumph of the Internet in the areas of wireless and broadband technology, in electronic commerce and in newly evolving data storage technologies. However, we caution analysts not to take false comfort from restricting their analyses to just these areas, for two areas not involving technology also impact the Internet's future. The first is the drive to regulate and control by means of ICANN, as evidenced especially by the support for ICANN from those companies still dependent on the success of their legacy based technology. The second area focuses on an argument over architecture. This argument takes form as a commitment to make widespread deployment of IPv6 a reality.

The IPv6 commitment is part of a technical debate over what some perceive as the lost "potential" of end-to-end IP connectivity, as NATs and firewalls have come to shield or otherwise protect hosts on corporate intranets and prevent several important protocols like IPsec from penetrating the NAT and firewall barriers. The way these concerns are handled will affect the structure of the Internet. It will be extremely difficult to make progress on the implementation of IPv6 without a centralized, top down drive designed to get as many as possible to change. But the amount of attention and effort given over to this drive will impact the final area: a slowly growing concern over the continued scalability of Internet architecture and routing as broadband technology is deployed at the edges of the network.

Some find it worrisome that scalability issues, such as the competing backbone architectures examined in the January 2000 COOK Report, are generally not discussed openly. These folk believe that such issues may be more important to the smooth future functionality of the Internet than the outcome of the IPv6 deployment issue. If too much emphasis is placed on new technologies regardless of their impact on network architecture, network performance is likely to seriously degrade. If too much emphasis is placed on the struggle to control through code or architecture -- in addition to law -- in the way that Lessig points out in his Code and Other Laws of Cyberspace, the ability of engineers to handle the challenge of architectural design issues will be severely impacted.

Consequently, the overall success or failure of Internet architecture will be determined by the interaction of these three areas with each other. We contend that most analysts are aware of the technology issues and are making the mistake of focusing on them to the exclusion of the regulatory control, network architecture and protocol design issues. The result hinders the Internet's ability to respond to the demands placed upon it by runaway growth. Successful analysis now demands an ability to synthesize technology, legal and network design issues.

IETF Debates IPv6 Implementation and End-to-End Architectural Transparency

NAT Boxes and Firewalls Seen by Some as Kludges to Be Eliminated and by Others as Symbols of Healthy Diversity

pp. 11 - 24

During the first half of December, on the general IETF list, there was an outstanding discussion of some critical problems of Internet architecture. Most participants were among the most distinguished engineers in the IETF. The focal point was the dilemmas posed by the desire of these people to gain a set of perceived benefits from the deployment of IPv6. Brian Carpenter's December 1999 Internet draft on Internet transparency, http://www.ietf.org/internet-drafts/draft-carpenter-transparency-05.txt , provided the foundation for the discussion.

The crux of the perceived problem is that, in order to make IPv4 addresses scale during the Internet's take off in the mid 1990s, architectural "kludges" such as private IP addresses for intranets hidden behind Network Address Translation (NAT) boxes and firewalls, and Classless Inter Domain Routing (CIDR), were instituted. The result has been that huge investments have been made in equipment and architecture that will not be easily changed. Also, protocols designed to work in an Internet with end-to-end transparency will not work in a world where, to get from the backbone to a receiving device on the edge of the network, they have to travel through NAT boxes and/or firewalls.
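
[Editor's illustration: a minimal Python sketch, entirely our own and not any vendor's implementation, of why NAT breaks end-to-end transparency. The addresses, port pool and table layout are hypothetical; the point is that outbound traffic creates translation state while unsolicited inbound traffic finds none, and that rewriting addresses defeats protocols, such as IPsec AH, whose integrity checks cover those addresses.]

import itertools

class SimpleNat:
    """A toy port-translating NAT."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = itertools.count(40000)   # hypothetical public port pool
        self.table = {}      # public_port -> (private_ip, private_port)
        self.reverse = {}    # (private_ip, private_port) -> public_port

    def outbound(self, private_ip, private_port, dst_ip, dst_port):
        # Rewrite the source of an outbound packet and record translation state.
        # An IPsec AH check computed by the inside host over the original
        # source address would fail after this rewrite.
        key = (private_ip, private_port)
        if key not in self.reverse:
            public_port = next(self.next_port)
            self.reverse[key] = public_port
            self.table[public_port] = key
        return (self.public_ip, self.reverse[key], dst_ip, dst_port)

    def inbound(self, dst_port):
        # Forward an inbound packet only if matching state already exists.
        return self.table.get(dst_port)   # None means the packet is dropped

nat = SimpleNat("192.0.2.1")
print(nat.outbound("10.0.0.5", 1025, "203.0.113.7", 80))  # creates state
print(nat.inbound(40000))    # reply to that flow is translated back
print(nat.inbound(40001))    # unsolicited inbound connection: None, dropped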

The perception is that the kludges are now very cumbersome and costly for corporations to manage. There is also a perception that IPv6, which has many orders of magnitude more addresses than IPv4, will provide the Internet with enough flexibility that the irritating kludges standing in the way of end-to-end transparency can be removed. Alas, this is really true only if IPv6 can be massively deployed throughout the Internet - deployment at such a level that IPv4 virtually disappears. The problem facing the Internet is that, short of an unprecedented regulatory decree commanding massive adoption of IPv6 globally, enough deployment of IPv6 to ever make a difference is unlikely to happen.

Some strong philosophical issues of design and management are at work here. On the one hand, the IPv6 advocates have a top down vision of a uniformly designed and managed Internet. Opposed to their view is a belief that certainly reflects the operational reality of the net - namely that the marketplace is working, with the development of diverse solutions that perform quite satisfactorily.

When Ian King wrote: "NAT IS A HACK. Why is there so much effort going in to somehow either 'legitimizing' it, or demonizing it?" Perry Metzger replied that it is because there is a fight brewing about IPv6 and whether NAT is a sufficient alternative to IPv6. Ed Gerck summarized an opposing point of view: "Further, it seems to me that if NATs are to be blamed for the demise of IPv6, or its ad eternum delay, then maybe this is what the market wants - a multiple-protocol Internet, where tools for IPv4/IPv6 interoperation will be needed and valued. A commercial opportunity, clearly."

Part of the fight is over control. Who gets to set the rules by which Internet architecture will run? It could turn out to be unfortunate, when others believe that there are serious unresolved problems with routing architectures, that the time and talent of the IETF is focused on the IPv6 control issues. We may be certain, however, that the IPv6 controversy is extremely important and will not quickly disappear.

Farber Moves to FCC as Chief Technologist

p. 24

On January 3, 2000 Dave Farber was appointed Chief Technologist at the FCC. Reaction was generally favorable to the agency's having an Internet expert in the position. We wish that we felt as comfortable as the other experts about Farber's mission.

 

Poorly Understood DDoS Attacks Reveal Internet's Vulnerability to Disruption

No Consensus as to Solution -- We Present Hypothesis That Analysis of What Happened May Be Faulty

Paradigmatic Shift in Understanding of Internet Mechanics Outlined by Ed Gerck

pp. 1 -16, 30

During the second week of February the largest and most diverse denial of service attacks in the history of the Internet caught several of the most important commercial web sites off guard and exposed what was previously a largely unsuspected operational vulnerability that affects the entire commercial Internet. Just as Al Haig stepped forward after Reagan was shot to say 'don't worry, we're in charge here,' we contend that Gene Spafford's February 19th summation of the White House meeting provides a soothing but superficial explanation of what is really a far more subtle and difficult structural weakness. This weakness is apparently inherent in the basic structure of the Internet and cannot be "enforced" out of existence. We present in narrative form the NANOG and IETF technical discussions that resulted from the attacks. The discussion demonstrates that Internet backbone engineers are by no means agreed on precisely what happened or on how to deal with it.

On February 9, Lauren Weinstein, partner to Peter G. Neumann of the Risks mail list and co-sponsor with Neumann of People for Internet Responsibility, offered the following observation: "It seems apparent that the rush to move all manner of important or even critical commercial, medical, government, and other applications onto the Internet and Web has far outstripped the underlying reality of the existing Internet infrastructure. Compared with the overall robustness of the U.S. telephone system, the Internet is a second-class citizen when it comes to these kinds of vulnerabilities. Nor will simply throwing money at the Internet necessarily do much good in this regard. More bandwidth, additional servers, and faster routers--they'd still be open to sophisticated (and even not so sophisticated) attacks which could be triggered from one PC anywhere in the world. In the long run, major alterations will be needed in the fundamental structure of the Internet to even begin to get a handle on these sorts of problems, and a practical path to that goal still remains fuzzy at this time."

Part Two: A New Calculus for the Internet

The COOK Report Explores Ed Gerck's Ideas -- The Relationship of a Quantum State Internet to Security and Privacy and of Data that Obeys Physical Laws to Mechanisms for Conveyance of Trust

pp. 17 - 22, 30

Part Two of this issue contains an interview with Ed Gerck as well as two essays by him. He is co-founder of the Meta Certificate Group, http://mcg.org.br , CEO of Safevote, Inc. and Chairman of the IVTA. We suggest that his ideas form the basis for a fresh and compelling analysis of what we may really be dealing with. We conclude that there is a possibility that the fundamental nature of the attacks may have been completely misunderstood. We also contend that Gerck's theories, published here for the first time, may provide an entirely different mathematical basis for understanding the Internet as a quantum information structure possessing significantly different capabilities and potentials than could be extrapolated from our current understanding. Although this is quite a statement to make, his ideas have reached enough people that research is likely to be rapidly undertaken to ascertain whether his own experimental results, dating from 1998, are verifiable and reproducible. Gerck's ideas involve the foundation of an entirely new calculus for the operation of the Internet.

Gerck asserts that the major reason the attacks were so successful is that the packets arrived at the target servers with a high degree of coherency - that is to say, at almost the same instant. He points out that the technical functionality of the Internet militates against the coherent arrival of large numbers of packets at a specific target, and thus a tenfold spike in incoming bandwidth would be very unlikely unless other unusual mechanisms were also at play.
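
[Editor's illustration: a toy Python simulation, our own and not Gerck's model, of why arrival coherency matters. The packet counts, timing and bin size are arbitrary; the point is that the same number of packets from the same number of sources produces a far higher instantaneous peak at the target when the senders are synchronized than when their start times are spread out.]

import random

def peak_rate(num_sources, pkts_per_source, spread_ms, bin_ms=10):
    # Return the busiest bin (packets per bin_ms interval) seen at the target.
    bins = {}
    for _ in range(num_sources):
        start = random.uniform(0, spread_ms)     # how staggered the sources are
        for i in range(pkts_per_source):
            t = start + i * 1.0                  # one packet per millisecond per source
            b = int(t // bin_ms)
            bins[b] = bins.get(b, 0) + 1
    return max(bins.values())

random.seed(1)
print("nearly coherent start:", peak_rate(1000, 100, spread_ms=1))
print("spread over 10 sec   :", peak_rate(1000, 100, spread_ms=10000))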

How then could the observed effects of the arrival of very large numbers of packets have happened? He explains how his work in the quantum mechanics of lasers in the early 1980s gave him a hypothesis that he successfully tested in a university environment in 1998. Namely, he suggests that the number of entities in the Internet has reached a critical mass where a single event, such as a packet sent to a trin00 network, can result in an avalanche of coherent data amplification. The result is similar to the coherent amplification process that sets off the sudden flash of a laser. He posits that when this occurs, it creates conditions in which packets can behave much differently as they reach a target. Gerck suggests that such events trigger a kind of quantum behavior, one which always exists but which then becomes visible at the user observed level and strongly contrasts with the classical behavior that it has replaced.

Gerck's ideas represent a paradigmatic shift in the evaluation of the scope, function and behavior of the Internet. One of the problems of communications involved is that to those stuck in the old paradigm, messages defining the new are often unintelligible. For many people his ideas will be quite jarring.

For example, his ideas reach to the root of what we call data. He suggests that data be thought of in terms of a natural quantity and as something that can be modeled with absorption, spontaneous emission and stimulated emission processes -- the last being a behavior associated with quantum systems. He finds that under certain conditions, stimulated data emission can win out over spontaneous data emission. This will happen when a minimum threshold of affected systems is disturbed by what may be a hacker attack, or the interaction of a virus with multiple systems or even by the unexpected appearance of a bug in operating software that everyone assumes to be stable. His findings lead to the conclusion that such perturbations, resulting in web site and or network congestion, will happen with increasing frequency. Of course if he is right, when they do happen the next time, they may have absolutely nothing to do with hackers.

After compiling the technical discussion from NANOG and IETF, it seems to us that the emphasis on traditional security measures is rather futile. The Internet is too large with too many machines under too many levels of control for traditional security measures of confinement of people and machines to be effective.

Gerck has some very interesting ideas about constructing mechanisms where two parties which are not known to each other may use a third neutral environment in which to securely negotiate conditions of trusted operation. He seems to have an uncanny sense of political power and psychology and how to reflect this in technical situations to build trust between parties that have no common grounds for negotiation.

As recently as a week ago we intended to publish only his two essays. However, when we called him on the 25th of February to ask for answers to questions about the second essay on coherency, we found ourselves in the midst of a far ranging discussion that opened up some of his ideas on the physics of data and the mechanics of trust that we had not heard before. This discussion led to the interview on pages 17 to 23. This interview, which we have further expanded by asking several of our own experts to read it and pose their own questions to Ed, begins to throw some light on the breadth and scope of his ideas.

Gerck's ideas lead to a paradigm change on such fundamental questions as data flow in the Internet and the nature of security and trust in computer networking. Having a world view different from the prevailing gestalt often presents problems for everyone involved. We invite readers to ponder his message. We have known of Ed for perhaps almost two years and known him directly for six months. An unusual quality about him is that he is laid back. He is intuitive and skillful in dealing with people. His ideas may succeed precisely because he doesn't push too hard.

We have been a bit gun-shy about walking out on the end of a limb on behalf of the ideas of someone who is not yet well known and whose views are so iconoclastic. For the last few weeks we have made some serious efforts to get sanity checks from people in better positions than we are to judge what he presents. Three very senior people have returned thumbs up. We introduced a fourth such person, with the strongest technical background of all, to Gerck two weeks ago.

When we asked this person how we might describe Gerck in this newsletter, he replied: "You might describe him as one of those bright people who are so frequently overlooked because he's happier working on hard problems than talking about it all. You might describe him as an Internet Guy who got here 'the hard way' -- he's trained as a physicist. He thinks about the world from a perspective of how do you model the stuff you perceive around you in mathematical terms -- and this leads him to different observations than those made by those of us who 'grew up' in the Internet and distributed computing in general."

One of the problems facing the Internet is that we have, sometimes with chewing gum and baling wire, built it into something on which a very large proportion of our economy is riding. The prevailing opinion in the wake of the DDoS attacks is to call in law enforcement, build the security walls ever higher and hunker down with publicly reassuring words to the effect of don't worry, we are in charge here. A careful reading of the technical discussion on pages 2 through 16 of this issue will show that this position is founded on quicksand. A reading of the Gerck essays and interview will reinforce this conclusion.

We contend that the official views issued in the aftermath of the White House meeting of February may be well-intentioned. Nevertheless they are misguided. Without a correct diagnosis of our current problems, we will be unlikely to find solutions. As a result, the Internet's behavior of early February may become more rather than less commonplace.

In Two Essays Ed Gerck Looks at DNS as the Sole Handle of Internet Control and Explains Why the February DoS Attacks Were Coherent Rather than Distributed,

pp. 23- 27

Thinking

[Editor's Note: We present roughly half of Ed Gerck's Thinking Essay in the belief that readers will begin to understand why we consider it the single best short essay on the topic of information control, DNS Governance and ICANN ever written.]

"...there is nothing to be gained by opposing ICANN, because ICANN is just the overseer of problems to which we need a solution.

My point is that there is something basically wrong with the DNS and which precludes a fair solution - as I intend to show in the following text, the DNS design has a single handle of control which becomes its single point of failure. This needs to be overcome with another design, under a more comprehensive principle, but one which must also be backward-compatible with the DNS. [. . . .]

So, the subject is domain names. The subject could also be Internet voting. But I will leave voting aside for a while. In my opinion, the subject, in a broader sense, is information control. If domain names could not be used for information control (as they can now by default under the DNS - see below), I posit that we would not have any problems with domain names.

But, domain names provide even more than mere information control - they provide for a single handle of control. DNS name registration is indeed the single but effective handle for information control in the Internet. No other handle is possible because: (1) there is no distinction in the Internet between information providers and users (e.g., as the radio spectrum is controlled); (2) there is no easily defined provider liability to control the dissemination of information (e.g., as advertisement and trademarks are controlled); (3) there is no user confinement to control information access (e.g., as state or country borders in the Canadian Homolka case), etc.

But, how did we end up in this situation? After all, the Internet was founded under the idea of denying a single point of control - which can be seen also as a single point of failure. The problem is that certain design choices in the evolution of the DNS, made long ago, have made users fully dependent on the DNS for certain critical Internet services. These design choices further strengthened the position of DNS name registration as the single handle of information control in the Internet. And, in the reverse argument, as its single point of failure. [. . . .]

However, without the DNS there is no email service, search engines do not work, and web page links fail. Since email accounts for perhaps 30% of Internet traffic - an old figure, it may be more nowadays - while search engines and links from other sites allow people to find out about web sites in about 85% of the cases (for each type, see http://www.mmgco.com/welcome/ ) I think it is actually an understatement to call the DNS a "handle." The DNS is the very face, hands and feet of the Internet. It is the primary interface for most users - that which people "see". Its importance is compounded by the "inertia" of such a large system to change. Any proposal to change the DNS, or BIND nameservers, or the DNS resolvers in browsers in any substantial way would be impractical.

[. . . .] One of the other fallacies in email is to ask the same system you do not trust (DNS, with the in-addr.arpa kludge) to check the name you do not trust (the DNS name), when doing an IP-check on a DNS name. There are more problems and they have just become more acute with the need to stop spam. Now administrators have begun to do a reverse DNS check by default. Under such circumstances you MUST have both DNS and IP.

Further, having witnessed the placing of decisions of network address assignment (IP numbers) together with DNS matters under the ruling of one private policy-setting company (ICANN), we see another example of uniting and making everything depend on what is, by design, separate. The needs of network traffic (IP) are independent of the needs of user services (DNS). They also serve different goals, and different customers. One is a pre-defined address space which can be bulk-assigned and even bulk-owned (you may own the right to use one IP, but not the right to a particular IP), the other is a much larger and open-ended name space which cannot be either bulk-assigned or bulk-owned. They do not belong together - they should not be treated together.

But, there are other examples. In fact, my full study conducted with participation of Einar Stefferud and others has so far catalogued more than forty-one essential problems caused by the current design of the DNS. Thus, a solution to current user wants is not to be reached simply by answering "on what" and "by whom" control is to be exerted, as presently done in all such discussions, without exception - for example, those led by ICANN. In this view, ICANN is not even the problem (as usually depicted by many) but simply the overseer of problems. At least, of 41+ main problems - all of which involve information control.

Thus by realizing both what these 41 and other problems are and the underlying issue of information control in the Internet (which issue is not ignored by governments), the study intended to lay the groundwork to provide for a collaborative solution to information flow in the Internet without the hindrance of these 41+ problems. The study also intends that the possibility of information control will be minimized as a design goal. [. . . .]

Regarding "time" - readers may ask what is the schedule to propose new standards based on what I and my group are working on for domain names? As I see it and as I also comment in regard to the work on advancing standards for Internet voting at the IVTA (where IMO the same principles apply), time is not a trigger for the events needed to get us out of our predicament, but understanding is. Cooperation has its own dynamics and we must allow for things to gel, naturally. We can motivate, we can be proactive but we must not be dominating. We seek collaboration, not domination. Both technically as well as market-wise."

Coherent Effects in Internet Security and Traffic

Here is a paragraph from Gerck's second essay.

"This was not only a DDoS - this was a CDoS. A Coherent Denial of Service attack. The difference is that a distributed but incoherent attack would not have done any major harm. In order to explain how such an attack was possible and why it was effective, one needs to understand first that, normally nothing is coherent in the Internet. All packets travel from source to destination in what may seem to be a random fashion; each host has unsynchronized time - oftentimes, even wrong time zones; and even the path traveled by each packet is also non-deterministic. Thus, achieving the coherent arrival of a stream of packets at one location by sending them from a large number of coordinated locations is a feat.

 

Gigabit Ethernet Rides Economy of Scale

As It Erases LAN WAN Boundary Gigabit Ethernet Makes Network Less Complex, Easier to Manage -- 10 Gig Standards Will Demand Choices Affecting ATM SONET WAN Functionality

pp 1-10, 27

We interviewed Dan Dov, Principal Engineer for LAN Physical Layers with Hewlett-Packard's networks division, and Mark Thompson, product marketing manager for HP's ProCurve Networking Business, on December 6. In Smart Letter 30 on 12/9/99 David Isenberg wrote the following very good summary of why Gigabit Ethernet is hot: "Since there are many more LANs than WANs, GigE, due to its Ethernet LAN heritage, has huge economies of scale. (Every flavor of Ethernet that has hit the marketplace has slid down a 30% per year price reduction curve.) GigE's use in both LAN and WAN gives greater scale yet. Plus by erasing the LAN/WAN boundary, GigE decreases the complexity of the network, making it even stupider, easier to manage and easier to innovate upon. So it looks like the Stupid Network will be built of GigE over glass."
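
[Editor's illustration: a quick bit of Python arithmetic, our own and assuming only the 30% per year figure Isenberg quotes, showing what such a price curve compounds to: after five years a product sells for roughly 17% of its starting price.]

start_price = 100.0          # arbitrary starting price, in dollars
for year in range(6):
    # each year the price falls 30%, i.e. is multiplied by 0.7
    print(f"year {year}: ${start_price * 0.7 ** year:7.2f}")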

In the interview Dov takes us through the technology reasons for Ethernet's increase in speed as its importance in LANs has grown and LANs themselves have gotten larger and more bandwidth hungry. Ethernet, in short, is leveraging its ubiquity, low cost and open standards on the back of the growing importance of the Internet and its increased bandwidth. In doing so it is playing a significant role in making new industries like application service provision possible.

Dov concludes that "the reason that Ethernet has succeeded as well as it has is its simplicity. Ethernet is a very simple, yet elegant protocol. But because of its simplicity, it's extremely inexpensive to develop and to manufacture Ethernet-compliant devices." Many people are taking gigabit Ethernet and applying it to wide area networking because of its simplicity, ease of access and simple framing.

In the relationship between volume and pricing, Gigabit Ethernet offers significant value. Gigabit Ethernet initially was being installed in the local area network to provide interconnection between boxes that were connecting 100 megabits and 10 megabits to the desktop. The volume of that kind of traffic quickly becomes very large. For these applications over short distances, gigabit Ethernet is actually cheaper than OC-24, even though it provides more bandwidth. What people started to realize, because of the volume of gigabit Ethernet that was going out and its relative simplicity, was that the cost of gigabit Ethernet undercut that of OC-24 pretty quickly. The result is that the people who are making the decisions as to what will be used to hook LANs to each other and to the Internet started deciding to go with gigabit Ethernet, rather than with OC-24 or OC-48. Gigabit Ethernet's application is at the periphery of the Internet; therefore it is not being looked to for the elimination of SONET add/drop multiplexers.

With ten gigabit Ethernet, some people are proposing to basically take the ten gigabit Ethernet media access controller, the MAC, and packetize the data just as we currently do in Ethernet, at ten times the rate. But they then want to send it into a SONET framer. The SONET framer will take that data, chop it up and put it into the SONET frame. The framer will send it across the network and, when it is received on the other side, it will be effectively deframed. There are also people who are more focused on taking the current, simple Ethernet approach, which is just: take the data, put it onto an optical fiber and ship it across the link. They don't want to get into the complexity of SONET framing and so on.

HP's Thompson offered the following analogy: "It's sort of like the subway system versus the inter city train system. Historically, if you wanted to ride the 'train' from the center of one city to the center of another, you rode the subway system out to the train station, took a train and then the subway back into the city. So what we're talking about now is making Ethernet robust enough and fast enough that your subway car can simply ride from one city to the next and you don't have to change the vehicles that are riding on the tracks, the fiber, in the meantime." In other words, a simple design that works for people in local area networks would also serve people who want to do optical transmission with Ethernet framing cross-country. The interview concludes with a discussion of the issues being faced in the development of 10 gigabit Ethernet standards.

Explosion in Capacity Chased by Explosion in Use -- Fiber to the Home from HP, Oracle and Power Companies for Less than $15 a Month -- Abovenet on the Need to Own Fiber

pp.10, 27

As prices on IRUs for large bandwidth circuits drop, David Rand of AboveNet explains the need to hold fiber. It is an explanation that seems well verified by announcements from Sierra Pacific, HP, and Oracle of their joint project to make fiber to the home available in southern Nevada this summer for $15 a month. Announcements by Delta Airlines and Ford Motor Company of subsidized PC ownership and Internet use for all employees also look to be the opening shots of another huge driver of bandwidth demand.

Role of Diffserv in the Development of QoS Tools

Kathy Nichols Explains How Pursuit of Viable QoS Protocols Is Transitioning from Centralized Model to Horizontally Organized Tool Chest from which ISPs Can Design Cross ISP Compatible Services

pp. 11-19, 27

On November 16, we interviewed Kathy Nichols, who with Brian Carpenter co-chairs the very active Diffserv working group. We asked Kathy to put Diffserv in its historical context. She replied that originally people assumed that quality of service guarantees would be needed to do multimedia over the Internet. Integrated Services (Intserv) and RSVP came out of these assumptions. But RSVP had been designed by Lixia Zhang and others while Lixia was at Xerox PARC. The design was made with the assumption that you could put RSVP state into every router, because you would always keep your application inside the network of a single provider. After several years of experimentation, the emerging view is that RSVP should be seen as a generic signaling protocol, or a way for a host to talk to a network. Other protocols would govern the ways that hosts request things of the network to which they are talking. One should note that the original work with RSVP and Intserv was done before April 1995, when the NSFNet backbone was shut off, and that the topology and traffic of the Internet to which people were then thinking of applying quality of service were radically different from what they are now, almost exactly five years later.

By the beginning of 1997 some ISPs were beginning to talk of QoS in terms of being able to give some of the traffic that they carried better treatment than other traffic - a kind of better best effort. According to Kathy, "the Differentiated Services discussion happened because some service providers were not happy with the Intserv approach. They weren't going to let state cross their boundary. They didn't see how something with that much state could work. And it also didn't seem to do exactly what they wanted, which included being able to tell a customer that they could replace their leased line and give them at least equivalent service. And it would be a good deal, because they should be able to offer it cheaper and reuse their infrastructure."

Traffic for a premium class of service could be relegated to special queues for that traffic alone. Traffic for best effort and better best effort could remain in the same queue. In most network conditions those packets would be treated the same, while in exceptional conditions the mere best-effort packets might find themselves discriminated against. (The idea that the same queue can be used to offer two different levels of service is the idea behind weighted RED.) Some of the very best engineers and protocol designers in the Internet were coming up with schemes for traffic marking and shaping to accomplish these goals. Unfortunately their schemes - call them tools perhaps - were too often incompatible with each other. People were designing complex tools to handle vast amounts of traffic in complex and rapidly changing situations. Diffserv was started as a way to bring order out of a very complex chaos. People wanted a framework within which designers could build their tools, so that tools designed to be compatible with the framework would also be compatible and interoperable with each other. Diffserv may be thought of as a set of guidelines within which various quality of service tools may be implemented.
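For readers unfamiliar with weighted RED, a minimal sketch of the idea follows. The thresholds and drop probabilities are illustrative values chosen for this sketch, not figures from any vendor or from the interview; the point is only that two markings sharing one queue can see different drop behavior as the queue fills.

import random

# (min_threshold, max_threshold, max_drop_probability) per traffic class
RED_PROFILES = {
    "better": (30, 50, 0.02),   # dropped late and rarely
    "best":   (10, 40, 0.10),   # dropped earlier and more often
}

def drop_probability(avg_queue_len: float, traffic_class: str) -> float:
    # Classic RED curve: 0 below min, linear ramp between min and max, 1 above.
    min_th, max_th, max_p = RED_PROFILES[traffic_class]
    if avg_queue_len < min_th:
        return 0.0
    if avg_queue_len >= max_th:
        return 1.0
    return max_p * (avg_queue_len - min_th) / (max_th - min_th)

def admit(avg_queue_len: float, traffic_class: str) -> bool:
    # Probabilistically admit or drop a packet of the given class.
    return random.random() >= drop_probability(avg_queue_len, traffic_class)

# At a moderate average queue depth, the two classes sharing one queue
# see very different drop odds.
for cls in ("better", "best"):
    print(cls, drop_probability(35, cls))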

Kathy states that the only way to scale QoS is to aggregate packets. If we group them inside of a "cloud" or domain, we put them into something called a "behavior aggregate." You create a behavior aggregate by assigning each packet that is to be a member of that aggregate a particular per-hop behavior (PHB). A PHB permits the assignment of the same forwarding treatment to all network traffic that is labeled with that PHB. You may then consider telling customers that they will pay a certain rate for traffic sent in conformance with the PHB aggregate they have purchased, and let them know that your routers will drop traffic labeled as conformant with a given PHB that the router finds in reality to be non-conformant. One goal is to get the maximum amount of traffic classification out of a field that is no more than 6 bits per packet. What Diffserv is really doing for ISPs and for hardware vendors is helping them to work together to establish reasonable guidelines within which many different quality of service provisions can be created. The idea is that each ISP is allowed to establish its own QoS offerings. Diffserv has created behavior aggregates and control planes that can be used to implement the policy goals of a behavior aggregate. Two ISPs may be able to solve cross-ISP policy issues by sitting down with each other and selecting Diffserv-compatible tools that need not be the exact same tool. It is Diffserv's intention to give them tools by which they can achieve common QoS outcomes by means that, inside their respective networks, may be quite different.
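A small sketch of the classification and policing step described above follows. The EF and AF11 codepoints are the standard Diffserv values; the rate limits attached to them here are hypothetical examples of profiles an ISP might sell, not anything specified by the working group.

DSCP_MASK = 0x3F  # the DSCP is the high 6 bits of the old IPv4 TOS byte

PHB_TABLE = {
    0b101110: "EF",            # expedited forwarding ("premium")
    0b001010: "AF11",          # an assured forwarding class
    0b000000: "best-effort",
}

# Hypothetical purchased profiles: committed rate in Mbit/s per PHB.
PROFILE_MBPS = {"EF": 2, "AF11": 10, "best-effort": None}

def classify(tos_byte: int) -> str:
    # Map the 6-bit DSCP to a per-hop behavior; unknown codepoints fall back
    # to best effort.
    dscp = (tos_byte >> 2) & DSCP_MASK
    return PHB_TABLE.get(dscp, "best-effort")

def police(phb: str, measured_mbps: float) -> str:
    # Drop (a real router might instead remark) traffic exceeding its profile.
    limit = PROFILE_MBPS[phb]
    if limit is not None and measured_mbps > limit:
        return "drop"
    return "forward"

# EF-marked traffic running over its purchased 2 Mbit/s profile gets dropped.
print(classify(0b10111000), police("EF", 3.5))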

All Responsibility Disintermediated from DNS Fix

New ICANN DOC Shared Registry System Enables Registrars, Shared Registry and ICANN to Disclaim Responsibility for All Actions That Injure Registrants

pp. 20-26

In mid January Wired http://www.wired.com/news/technology/0,1282,33753,00.html published a delightful summary of the results of Beckwith Burr's, ICANN's, and NSI's redesign of the DNS system. People were buying a domain name and paying for it at the time of purchase, only to see it sold out from underneath them the very next day to someone else. For the little guy, the Internet's domain name system had been put at risk by the Clinton-Gore bureaucrats. No matter: the large, powerful, and rich had the ICANN Uniform Dispute Resolution Policy and the even more draconian cybersquatting legislation. ICANN had done a superb job of freeing the corporate trademark attorneys to do their thing. It had done this by creating a jury-rigged system in which registrars could say that mistakes belonged to the registry, which in turn could say it was playing by ICANN rules, while ICANN disclaimed all responsibility for breakages in the system.

According to Wired, "ICANN said it was not responsible for domain name discrepancies between registrars and their customers."

The COOK Report reminds its readers that to be functional a domain name must be part of the registry database that determines what other names are taken and that is responsible for getting the names into the root servers where downstream DNS servers can find them. The operation of the new system has been rigged by ICANN so that, while the registry gets names to advertise, it gets no information about the owners of the names in whose interest it is doing the advertisement. This information is known to the Registrars, whose agreements with ICANN give them enforceable rights vis-à-vis the Registry. But the customers who pay a registrar to act as the intermediary between them and the registry have no enforceable rights whatsoever to the use of the domain names for which they pay.

We do not know who designed and put in place this truly bizarre system; institutionally, it was ICANN, but the secret process by which it was done inside ICANN has remained opaque to everyone on the outside. As far as we can tell, ICANN rules by having its Jones Day attorneys, Touton and Sims, work with Esther Dyson and Mike Roberts to establish policy that disenfranchises every Internet user (who does not also pay the necessary fees to become a registrar) of any rights to receive the benefits of the products for which they have paid. The registrar is free to do anything it chooses with the domain name that it sells to the registrant. The system is also dependent for its operation on a shared registry protocol that has been implemented (according to the testimony of some outside experts who advised NSI on its design) in such a way as to make any accountability to the registrants, and even to the registrars, unlikely. NSI has sought what non-experts will take as endorsement from the IETF by asking for publication of the protocol as an informational RFC. One of the experts who advised NSI on the design has protested loudly against the move and asked NSI to free him from his non-disclosure agreement so that he may publish his criticism and allow independent observers to make their own judgements. NSI has refused.

By the end of the month it was clear that the entire shared registry system was a design failure. As early as late December, complaints of breakdowns were becoming evident. On December 23, on the Domain Policy list at NSI, list member "A" complained: "Most whois clients query the public NSI Registry database first, which only updates *once per day*, so it's quite possible for someone to do a domain query and be shown the old whois information of the old registrar. Nothing is wrong."

To which list member "B" replied: "No, nothing is wrong as far as the design goes. But of course that [just looking at the design] is not far enough, is it? Therefore leaving the ability for registrars to 'steal' domain names and/or create a domain name conflict from the get go. Doesn't say much for stability, does it?" Our article summarizes debate from the IETF and Domain Policy lists that makes quite clear the absurdity that the White House and its Vice President are visiting upon the Internet.

Froomkin and Auerbach Offer Eloquent Testimony to ICANN's Most Recent Failures

pp. 27, 29, 30

Two who have tried to work with ICANN cry foul in no uncertain and bitter terms at a move by the DNSO to censor the DNSO GA mail list. ICANN makes it clear it will tolerate no criticism.

 

Driven by Need for Risk Management, Bandwidth Commodity Market Coming

Efforts Underway to Create Tools Where Rapid Market Changes in Demand and Supply Can Quickly Match Buyers and Sellers

Stan Hanks Explains How These Developments Will Reshape Internet Industry

pp. 1-8, 12, 15

Over the next 12 to 24 months experts predict that bandwidth will become a commodity tradable in real time on commodities exchanges around the world. We interview Stan Hanks who was formerly VP of Research and Technology for Enron Communications. Currently he is very much involved in making the commoditization happen. (We have also interviewed Lin Franks of Anderson Consulting and intend to publish that interview in our June issue.)

As Hanks points out: "If you get to the point where you have an oligopoly of suppliers - which we pretty much do - and an increase in availability combined with a historic decline in price, as well as a fair amount of price elasticity associated with the thing in question, you start seeing the development of commoditization."

Hanks outlines in detail how a cost of about $2.75 per channel per mile for OC192-capable lit fiber across a wide area network is derived. He then points out that because national networks average 25,000 route miles of 144-fiber cable, the initial cost of such a network will run to multiple billions of dollars. This is a very hefty investment for something whose wholesale price has been declining at a rate of about 40% a year for the past five years or so. The problem is that when planning an investment like this, it is not possible to derive reasonably accurate projections of the income that might be expected from the investment. Financial exposure is now vast, with no adequate way within the industry to manage risk. Commoditization of bandwidth will provide the tools by which risk can be managed.
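The scale of the exposure can be illustrated with some back-of-envelope arithmetic. Only the 25,000 route miles and the roughly 40% annual price decline below come from Hanks; the construction cost per route mile is a hypothetical placeholder, and the sketch does not attempt to reproduce his $2.75 per channel per mile derivation.

ROUTE_MILES = 25_000            # national network size cited by Hanks
COST_PER_ROUTE_MILE = 100_000   # hypothetical $/route-mile for a 144-fiber build

build_cost = ROUTE_MILES * COST_PER_ROUTE_MILE
print("initial build: $%.1f billion" % (build_cost / 1e9))   # -> $2.5 billion

# A ~40% annual price decline compounds quickly: after five years the
# wholesale price of a circuit is only about 8% of its starting level,
# which is what makes the revenue side of the investment so hard to project.
price = 1.0
for year in range(5):
    price *= 1 - 0.40
print("price after 5 years: %.0f%% of the starting price" % (price * 100))  # -> 8%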

The first step the industry can take in this direction is to establish a benchmark price and uniform contracts. Efforts to do this are already well underway, and success is anticipated well before the end of the year. Such a benchmark might be the price of a DS3 from New York to LA, or it might even be the cost of a wavelength of light on a DWDM system over a distance of 500 miles.

A real commodities market will assure users that they will always be able to get a supply of bandwidth, even at very short notice. One may then expect the Internet business model to shift from the question of whether there will be adequate supply to the question of what to do with the bandwidth. Having an assured supply at a predictable price will make it possible to do many things with bandwidth that currently are not economic.

Currently ISPs tend not to give the capacity planning problem adequate attention. Their ability to turn up new bandwidth is hampered by the fact that they don't have the financial management and projection tools that would enable them to go to their finance people and say: if you give me "x" dollars for new capacity, I can give you "y" income within "z" amount of time. Before long, financial analysts are going to be asking senior carrier management what it is doing about the huge amounts of unmanaged risk it carries on its books. Suggestions are being made that the way to manage this responsibly is to join in an industry effort to commoditize bandwidth and eventually automate trading.

The terms for purchase of fiber today tend to be negotiated from scratch with each contract and built around very long durations - 10 to 30 years. Part of what is needed is a re-education of the industry to the point where it can grasp why purchases of bandwidth lasting from a few hours to a few weeks will serve everyone's interests better than purchases lasting ten to twenty-five years. Sycamore is getting a leg up on the rest of the industry by focusing not just on optical transmission services but also on building software that can be useful in the provisioning of new bandwidth services.

Ultimately, we may expect to see the vertical hierarchy of the big carrier backbones devolve into a mesh. Currently these big networks don't just connect to each other at a handful of places; they interconnect in all kinds of interesting ways. But they connect only to each other and then to their customers. This interconnection topology is going to start to evolve in very interesting ways. Customers of one vertical network, given the opportunity to do so, would like to be able to buy bandwidth to connect themselves directly with customers attached to a different vertical backbone. The result would be horizontal linkages overprinted onto the vertical ones. You could then move through the matrix either vertically or horizontally, in accordance with what your real-time switching and bandwidth equipment would allow.

According to Hanks, the only reason this hasn't happened to date is twofold. First, there isn't enough money in it in terms of applications. Second, there is no way to manage the risk associated with doing it. This horizontalization comes when A and B wind up being able to connect directly to each other on an "as needed" basis. Akamai and the other CDNs (CDN = content distribution network) are doing things to facilitate this; Enron is also doing this. Akamai's content distribution model sets up horizontal routing for web sites in such a way that, should traditional routes become congested, its routing can switch from a vertical organization to horizontal paths across provider boundaries. Inktomi, and Digital Island after its recent merger with Sandpiper, may also be regarded as content distribution networks determined to build their own models of horizontal connectivity across provider backbones. There are more of these out there. "Coming soon," as the saying goes. Hanks was at a venture capital conference recently and found "CDN" to be one of the new hot buzzwords.

Swedish Ruling Party Endorses Building National Broadband Infrastructure: Goal Vital to Sweden's Security

Interview With Swedish Commission Member Explores Development of Infrastructure Policy Goals of Five Megabits per Second IP to Every Swedish Home and Apartment

pp. 9-12

We interview Anne-Marie Eklund Löwinder, a senior project leader of the Swedish Government's Commission on Information Technology and Communications (CITC). Anne-Marie explains the rationale behind the national fiber strategy presented to the Swedish parliament this week. The government is proposing a fiber build-out that will connect together all municipalities in Sweden. The fiber is to be owned by the municipalities and sold on equal access terms to ISPs that meet the program's criteria. A second and equally important part of the program is designed to lead to a local build-out that will result in an Ethernet jack delivering TCP/IP at five megabits per second to every home and apartment in Sweden. The interview also discusses Stockholm's experience with Stokab, which has fibered almost the entire metropolitan region over the past five years.

Napster - MP3 File Sharing Application - A Hugely Popular Bandwidth Sink Defies Control Efforts of Network Administrators

pp. 13-15

Napster is an application written by a 19-year-old computer science student last summer. Downloadable from the web, it lets users temporarily turn their computers into servers for the purpose of swapping MP3 files. Grown hugely popular in the last several months, it now accounts for a significant percentage of Internet traffic. According to university network administrators, it is clogging campus connections to the Internet. We publish an edited discussion, from the CAIDA and NANOG mail lists, of what can be done about the problem. Port blocking has been tried without great success, as students in many cases find other ports to use. A new and far more powerful program called Gnutella is under development as well. Some people are saying that Napster's impact on Internet traffic may approach that of the web.
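The port-blocking approach, and why it fails, can be shown in a few lines. The port numbers below are commonly cited defaults and should be treated as assumptions for this sketch; the point is that a filter keyed to fixed ports is trivially evaded once users move their transfers elsewhere.

BLOCKED_TCP_PORTS = {6699, 8888}   # assumed default ports; easily changed by users

def should_drop(src_port, dst_port):
    # Drop any TCP flow in which either endpoint uses a blocked port.
    return src_port in BLOCKED_TCP_PORTS or dst_port in BLOCKED_TCP_PORTS

print(should_drop(51234, 6699))   # True  - a transfer on a default port is blocked
print(should_drop(51234, 7000))   # False - the same transfer moved to another port slips through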

Cracking the Code: an Analysis of US Internet Governance, E-Commerce and DNS Policy

Why US Dominance of E-commerce Indeed is Dead if ICANN Fails & Why the US Has Most to Lose from Continuing a Policy Founded on Indefinite Control of the Root

pp. 16-23

Various court decisions are making ever clearer the advantage that possession of the root gives the US in maintaining its commanding lead in global e-commerce. This is leading to resentment abroad. Given the course on which we are all headed, ICANN is likely to be at best a temporary band-aid on a festering sore until the decisions of foreign courts or governments fracture the US-controlled authoritative root. We discuss both some of the ways in which this fracture might take place and what impact it would likely have on the Internet's operation.

While a fractured root would certainly not destroy e-commerce, the very fact that it happened would be likely to pop the speculative bubble supporting the stratospheric prices of Internet stocks. It would demonstrate that a globally unified forward march of the global economy running on Internet "rails" is only a pipe dream. Many investors and VCs would be forced to rethink the price-value equations on which their actions have been based. Should contention over the root get serious enough to throw the prices of Internet stocks into a nose dive, the United States would lose far more than any other nation in the world. This is very likely what John Patrick, Vint Cerf, and Esther Dyson had in mind when they asked the venture capital community to contribute to ICANN last summer, cryptically warning that if ICANN failed, e-commerce would also fail. Certainly the ongoing uncertainty about how much of a global market for business-to-business e-commerce would remain easily reachable in the event of trouble for the authoritative root would take the buzz off most e-commerce business plans.

We arrived at the above conclusions after pondering Ed Gerck's essay "Thinking" (April COOK Report, pp. 23-25). We find Gerck's article to be a useful point of view for analyzing some unresolved issues relating to ICANN and the Department of Commerce on the one hand, and the DNS and the alleged need for a single authoritative root on the other. Gerck sees DNS as the major centralized component of an otherwise decentralized Internet. In his essay he says that some of the choices made long ago in the design of the DNS not only make it depend on a single root but also mean that "without the DNS there is no email service, search engines do not work, and webpage links fail." DNS is "the single handle of information control in the Internet. And, in the reverse argument," it is "its single point of failure."

With something as powerful as the Internet, everyone wants more and more to seize control, if only to keep others from controlling it. It certainly can be argued that the struggle for control of DNS has become, over the last four years, the focal point of a diverse coalition of actors (trademark interests, IBM, AT&T, and others) that have gathered together to form ICANN. Now it is generally assumed under US law that the organization which controls an entity bears legal responsibility (liability) for the use of its power. Gerck suggests that under the conditions of a single handle of control over the Internet, the controlling organization's liability is potentially total. Thus, given the nature of ICANN's use of DNS as a means of grabbing control over the Internet, the liability facing ICANN, and anyone else who would emulate it, is essentially unlimited. As a result, in structuring ICANN it has been necessary to insulate all players from the consequences of their otherwise unlimited liability.

We have taken Gerck's essay and used it as a template to which we have applied our own knowledge of ICANN. This process has helped to bring a number of issues into focus for the first time. In their eagerness for control, those who have promoted ICANN have taken all the critical administrative infrastructure of the Internet - DNS, IP numbers, and protocols - and dumped it into the single ICANN basket.

But having all our eggs in one basket, and having in the DNS a single point of failure, creates the kind of prize that, as long as we still have national economies competing against each other, the US government and its major corporate allies will do whatever is necessary to protect from foreign capture, or even from foreign influence. Since ICANN is the basket holding all the eggs, it must in the meantime be protected from its unlimited liability by being made virtually unsuable.

In order to make ICANN unsuable, its backers have had to create for it an arbitrary structure that renders it immune from the inputs of the communities it is supposed to serve. This arbitrary structure has in turn prevented ICANN from inheriting the political legitimacy within the Internet community that Jon Postel's exercise of these functions once enjoyed. ICANN follows a carefully scripted routine that supports its role as guardian of all the Internet's administrative eggs placed in its single "basket." This scripting greatly angers those who, having mistaken the ICANN process for one of actual openness, have invested their time in the hope of influencing the outcome. However, the play acting also serves ICANN's interests in that it can be spun by ICANN's public relations firm in such a way that the casual press, lacking the time and ability to do its own research, may be fooled. ICANN has thereby bought the administration some short-term time to regroup and maneuver.

What we have done in this article is demonstrate (1) why ICANN can be nothing more than a temporary fix, (2) how ICANN is likely to fail, (3) why the consequences of this failure will hurt the United States more than other nations, and (4) why there needs to be a switch away from ICANN's efforts to shore up, at all costs, what is really an untenable attempt to maintain long-term central control over Internet addressing, and toward efforts aimed at placing in the hands of each user the means by which he or she can address and find Internet objects.

ICANN was created as a diversion on the part of Ira Magaziner, who conveniently left the administration and returned to private consulting as soon as it was established. It is a smokescreen cleverly designed to give the rest of the world the illusion that the US is transferring control of administrative functions over the net to a world body, with the Europeans and Asians led into thinking they could play a significant role in policy making.

And indeed, just so long as they don't try to grab the root, American policy is to play along with the Europeans and Asians and, acting through ICANN, to do such things as grant them direct control of their own country codes and the power to give their corporations preferential treatment over domain names on the excuse that such names can be treated as trademarks. Many other powerful groups have been given an opportunity to play in the great ICANN charade.

As long as ICANN is there, it gives the impression that others besides the US government will be allowed a role in root server policy making and control. In reality, the continued heavy-handed behavior of Roberts and Dyson has made it possible to drag out the ICANN foundation process for another year, getting it conveniently past the touchy upcoming US presidential elections. As a result, the Clinton Administration has been able to extend the dual relationship of the ICANN-DoC cooperative agreement.

The extension makes it possible to preserve ICANN as a maneuver designed to deflect attention from the stark fact that without ICANN, the US administration would seize the root servers by force rather than lose control. This is the secret of why ICANN cannot be allowed to fail. ICANN's central purpose is to divert attention from the fact that the Clinton administration has decided to treat the root servers as a strategic telecommunications resource that it is perhaps even prepared to use the police power of the state to keep from falling into the wrong hands.

It would be encouraging to see some interest in Washington in incubating the understanding necessary for the Internet and e-commerce to work their way out of the win-lose control situation in which they find themselves. The route of control has been tried. As we have shown in this discussion, not only has it not worked, it also looks to be untenable on a long-term global basis. It is to be hoped that if our policy makers understand that we are likely to lose more than anyone else in a struggle to maintain our control, they may also come to understand that they have the most to gain by removing all possible levers of control from everyone's grasp. If it becomes clear that no single entity can hope to control the Internet, many strains in the present system could be quickly dissipated. We are a "can do" nation. If the administration were to understand that everyone would have more to gain from such an outcome, we believe that there is adequate talent available to ensure success.