A Practical Navigator for the Internet Economy

Packet Design Has Unique Research Role

Seeks Improved Routing to Offset Problems Inherent in MPLS

Judy Estrin, Kathie Nichols, Van Jacobson Want Cost Benefits of Convergence By Leveraging Strengths of TCP/IP Architecture

pp. 1-7

Packet Design is a venture headed by Judy Estrin, Kathie Nichols, and Van Jacobson. It is the fourth company founded by Estrin and her husband Bill Carrico. Buoyed by their first three successes, the founders can run Packet Design on the business model of a "perpetual startup." As Judy and Kathie explained in an interview, while Packet Design does anticipate making money by licensing technology and spinning off startups of its own, its purpose is more distinctive: it was formed to do research into improvements in routing. Judy explains that the frantic growth of the commercial Internet from 1995 onward meant that short-term quick fixes were applied to the evolution and scaling of routing and backbones. To her dismay, now that the path to the convergence of voice and data networks, the PSTN and the Internet, is clearly defined and the momentum of convergence is increasing, she sees a possibility that we could wind up with the worst of both worlds in the final architecture.

In her own words: "The convergence of data, voice and video we are seeing today is driven by the dramatic increase in data traffic now being pushed across the PSTN infrastructure. The traffic patterns associated with new data applications are very different from those of phone conversations. Yes, the Internet needs to maintain the manageability of the telephony world -but not at the expense of scalability. MPLS, a string-oriented technology, was developed to solve a point problem: integrating local IP and ATM environments. It works well for that use, but its proponents have positioned it as a panacea for all sorts of other problems. At first glance, MPLS seems like the perfect answer to a converged Internet. But it's really just a quick fix. Because its architecture is based on strings rather than clouds, it has all the disadvantages of strings and, in the long run, it creates more problems than it solves."

Judy answered affirmatively our questions as to whether a part of her message was that she believed that an Internet where the circuit-switched orientation of MPLS (strings) played the major architectural role would be more costly to operate than an Internet that was faithful to its founding connectionless philosophy (clouds).

"The reason that we feel so strongly about this is that we believe that it is the properties of IP which can take us directly to the type of Internet that yields the best scaling characteristics at the most favorable cost from a manageability perspective. Now what the telephony world does very well is to offer us some best practices in the areas of manageability and billing and accountability. We need to map these onto the Internet world. But you don't achieve this by making the Internet infrastructure look like the telephony infrastructure. You need to figure out how to do all those things within the confines of clouds."

While Judy, Kathie and Van believe that they have definite ideas that, when implemented, will produce answers to MPLS and improvements in routing, as Judy phrased it, "we are not ready to disclose them in this conversation." She did, however, point us to a very significant paper given at NANOG 20 by Van Jacobson and two colleagues.

The NANOG paper is titled "Toward Millisecond IGP Convergence." It is by Cengiz Alaettinoglu, Van Jacobson, and Haobo Yu. It suggests that sub-second re-route times would give increased network reliability, support for multi-service traffic (e.g., VoIP), and lower cost and complexity when compared to layer-two protection schemes like SONET. Since current IP re-route times are typically in the tens of seconds, the industry should want to do better. There are two choices: replace IP routing with something else like MPLS fast failure recovery, or figure out what's wrong with IP routing and fix it.

They prefer to fix the problems of routing. What they propose in this paper is a way to decrease the time it takes changes in routing announcements to propagate within a network from tens of seconds to a few thousandths of a second. If this can be done, the operational performance of IP networks will increase enormously. The following paragraph describes some of the paper's conclusions. Because we are summarizing, the arguments that follow are not fully stated. Readers are encouraged to read the entire slide set on the Packet Design web site.

The Dijkstra SPF algorithm used to compute changes to SPF trees (route forwarding tables) is almost 40 years old. More recent algorithms can compute changes to SPF trees in time proportional to log n rather than n log n. This allows a net to scale up to virtually any size while bringing the calculation time down from seconds to microseconds. "Consequently stable, robust IP re-routing that works at the network's propagation rate (the theoretical maximum for any re-routing scheme) is both possible and achievable. To get there we have to (in rough priority order): (1) switch to a modern algorithm for SPF calculation, (2) make the granularity of the hello timer milliseconds rather than seconds, and (3) allow different detection filter constants for link up and down events."
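The scaling argument can be made concrete with a sketch. The heap-based Dijkstra below is the textbook starting point that the modern, incremental algorithms the paper points to improve upon; the toy topology, link costs, and function names are our own illustration, not code from the NANOG paper.

```python
import heapq

def spf(graph, source):
    """Classic Dijkstra shortest-path-first with a binary heap.

    graph: dict mapping node -> list of (neighbor, link_cost) tuples.
    Returns a dict of node -> cost of the shortest path from source.
    Runs in O(E log V); incremental SPF algorithms recompute only the
    part of the tree that a topology change actually touches.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already improved
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# A toy four-router topology: the direct A-D link costs more than
# the A-B-C-D path, so SPF routes around it.
topology = {
    "A": [("B", 1), ("D", 10)],
    "B": [("C", 1)],
    "C": [("D", 1)],
    "D": [],
}
print(spf(topology, "A"))
```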

Scaling the Internet via Exchange Points

Many Players Jumping into Rapidly Expanding Global Market

Technology and Business Model Issues as Seen By Equinix

pp. 8 - 15

We examine the world of exchange points, which are undergoing a dramatic increase in numbers. The growth is propelled both by the need of ISPs to peer in as many places as possible to reduce transit costs, and by the business opportunities opened by the neutral exchange business model, where fiber providers, carriers, ISPs, web hosting companies, ASPs, storage networks, SMTP specialists and others can gather under one roof to do business with each other in an extremely cost-effective manner. Well known players in the neutral exchange point market are PAIX and AboveNet, both owned by Metromedia Fiber Network (MFN), and Equinix. MFN's AboveNet has six facilities - three in the US and three in Europe. PAIX expects to have six exchanges in the US open by year's end. Only Equinix, which calls its exchanges Internet Business Exchanges (IBX), caters to the full range of clientele listed above.

An article in the November issue of Telecommunications points out that "companies such as COLO.COM, CoreLocation, Equinix, Eureka, MFN's PAIX.net, and Switch & Data are expanding into major telecom hubs. Unlike the traditional telecom hotel model where carriers lease physical space from a building owner, neutral COs provide air conditioning, backup DC power, HVAC, dust control and high-level security in addition to real estate." In this respect an interesting new player is the NAP of the Americas. Located initially in a Miami, Florida Switch and Data facility and managed by Telcordia, the NAP aggregates traffic for a group of carriers that serve Latin America.

The Telecommunications article concludes that "while demand will not end soon, there is concern that the colocation industry is heading toward a pseudo space glut with plenty of space but nowhere to connect to the backbones. It is estimated there are 42 national CO providers, with more than 25 million square feet coming on-line next year, a 50 percent increase over current availability."

Of the half dozen or so major players, Equinix remains the most interesting. It seems to have concluded that it can achieve economies of scale and enough profit to complete a very ambitious global build-out of 30 very large facilities by building new state-of-the-art centers that are on the order of ten to twenty times larger than those run by its competitors. Equinix is wisely taking great care not to fall into the trap of insufficient backbone connections by building only in locations where fiber from at least five different providers is available. It has raised over $600 million in capital. It has negotiated a build-out schedule with Bechtel that will depend on its ability to finish new exchanges on schedule, populate them quickly with customers, and gain adequate cash flow to finance the next buildings on its schedule.

Given the scope of its ambitions, it is fortunate for Equinix that it has the greatest technical depth of all the players, having been started by the founders of PAIX and having acquired the services of respected infrastructure architects Bill Norton and Sean Donelan. We interview Equinix CTO Jay Adelson and his colleague Lane Patterson. The result is an in-depth look at issues such as peering, bypass of ILECs in urban areas, the business models of metro-area fiber providers, and fabrics for interconnection. Reviewing the myriad details involved in Equinix's understanding of its industry, the attention given to understanding the kinds of assistance useful to its customers is quite impressive. An understanding of the economics of interconnection for players of widely differing size ensures that a range of options, from direct interconnects to access to a variety of switching fabrics, is available.

Given the pace of change there are many options open to Equinix customers. The Equinix staff is there to help ensure that customers derive the greatest synergies possible for their businesses by making the wisest choices. As Adelson says: "If you are shipping a lot of bits you basically have an economic and technological justification for participating at an exchange."

ITU and IETF in Agreement on ENUM Administration

Letter from ITU to ICANN Blocks .tel gTLD Applications As Competition to ENUM

Administration Modeled on Neutral Tier 1 Database Holder of Pointers to Records of Provisioned Services

pp. 16 -17

The IETF and ITU successfully concluded a set of meetings in Berlin at the end of October. The meetings resulted in a set of understandings that will keep the ENUM domain out of the hands of ICANN and that pave the way for deployment of ENUM provisioning under the e164.arpa domain, with each national E.164 numbering administration authorized to choose the distribution entity within that country's boundaries. The likely division of ENUM administration in the US into a single neutral Tier 1 record-keeping entity and multiple Tier 2 provisioning entities is explained. Finally, abstracts and URLs of the internet drafts relevant to administration and the Berlin meetings are listed. Meanwhile Network Solutions, driven by its business interests, ignored its earlier chastisement for the premature start of ENUM trials and announced that it was trialing non-ASCII-character domain names before the IETF completed its standardization work. This prompted a public rebuke from ISOC warning that NSI's action will harm the stability of the Internet Domain Name System. See: http://www.infoworld.com/articles/hn/xml/00/11/08/001108hnmultilingual.xml

IS-IS Bug Causes UUNET Route Flap

pp. 18 -19

NANOG discussion of how UUNET's architecture proved susceptible.

ICANN Having No Authority to Create New gTLDs Lacks Legitimacy in US and Is Increasingly Rejected in Europe

Dixon Explains How .eu Has Been Kept from ICANN Control

pp. 19 - 21

On November 13 ICANN posted to its web site a response to the lawsuit against it by the country-code administrator from Belize, stating that it had no authority to add new names to the root after all. Such authority, it admitted, belonged to the Department of Commerce. See http://www.icann.org/tlds/correspondence/esi-v-icann-13nov00.htm

Thus continued the shell game between the American government and its illegitimate, illegal offspring known as ICANN. We republish a conversation from the Domain Policy list where Jim Dixon shows how Europeans are becoming less willing to accept ICANN authority. Dixon also explains how .eu was created as a country code domain to keep it free of ICANN's control.

DNRC Letter Documents ICANN Past Testimony to Show Duplicity Behind So Called Clean Sheet Study of Public Board Members

pp. 21 - 22

The letter calls on ICANN to drop its "clean slate" Member at Large study, showing that the "study" breaks a two-year-long series of public promises by ICANN leaders.


Three Dimensional Data Web Set To Emerge

New Protocols Enable Manipulation of Quantitative Data by Data Web Browsers

Open Standards Likely to Give Huge Boost to Data Mining Activities

Work Pushed By Terabyte Challenge Consortium Enables Remote Interaction of Data Sets

pp. 1 -12

A new web is emerging. The data web will likely exceed the document web in size and in its impact on Internet infrastructure. We interview Robert Grossman, CEO of Magnify, Inc. and Director of the Laboratory for Advanced Computing at the University of Illinois at Chicago. Grossman has played a pioneering role in the use of high-performance computer networks to assist scientists in their analysis of extremely large data sets. He has built a layered view of how data mining - a process of data analysis and real-time decision making - could be carried out over the Internet.

Many businesses have extensive data sets with information about their customers, including purchasing history. Grossman explains his role in catalyzing the Data Mining Group, a consortium made up of Angoss, IBM, Magnify, Microsoft, Mineit, NCR, Oracle, Salford Systems, SGI, SPSS, and Xchange. The group is made up predominantly of vendors of proprietary data mining software packages. These vendors are now joined in an effort to develop a set of open standards that should lead to much new software and to a vast increase in the amount of data mining. Furthermore, with the spread of XML markup used to display rows and columns of data on the web, it is expected that these developments will lead to the take-off of a public data web. This will mean the growth of sites having publicly accessible data sets, where visitors with client browsers equipped to interact with the site's data servers can retrieve data that can be manipulated as data, rather than examined but not changed as is the case with an HTML page. The result will be the data web, or what Grossman calls Data Space.

As Grossman explains: "From the user's perspective, Data Space works like the document web. You can use a browser to examine remote and distributed data. And you can analyze and mine it with a point and click interface. Web sites can use Data Space services such as personalization and predictive modeling to provide a site with interactions which are created on the fly for each individual visitor.

"From the vendor's perspective, Data Space is also like the document web; it simply uses a richer suite of services, including services for moving data (DSTP) and real-time scoring (PSUP), and specialized XML languages for working with data, including the Predictive Model Markup Language (PMML) and the Data Extraction and Transformation Language (DXML)."

Data Space uses open standards to provide the Internet infrastructure necessary to work with scientific, engineering, business, and health care data. "Unlike HTTP and HTML which are designed for multi-media documents, Data Space is somewhat more complicated because you have higher expectations when you work with data than when you work with documents."

"A document you only have to read. With data you have to analyze, score and make decisions. What everyone interested in tracking and planning for the further growth and development of Internet infrastructure needs to understand is that so far the current internet barely scratches the surface of what you will be able to do with data as Data Space and similar infrastructure begins to be deployed. I¹m sure that the data web will be an important driver of bandwidth over the next few years."

Grossman also explains the Terabyte Challenge, which for the past four years has been used both as a test bed for the basic protocols, languages and tools for Data Space and as a test bed for different ways to scale data-intensive applications, especially remote and distributed ones. The focus has been on developing an open infrastructure for working with large and distributed data sets. Grossman's group has developed a process of striping that allows large data sets to interact with each other in real time at sustained bandwidth of more than 250 megabytes per second.

The data space transfer protocol (DSTP) is the protocol used to move data between nodes in the data web. The data extraction and transformation mark up language (DXML) describes how to clean, transform, and shape data. This is usually one of the most labor intensive tasks when working with data. Statistical models are built using statistical and data mining applications. The predictive model markup language (PMML) describes the output of such systems in an open format. Scoring is the process of using statistical models to make decisions. The Predictive Scoring and Update Protocol (PSUP) is a protocol that can be used for both on line real time scoring and updates as well as scoring in an off line batch environment.
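As a rough illustration of what "scoring" against a PMML description means, the sketch below evaluates a linear regression model expressed in a PMML-style fragment. The fragment, field names, and coefficients are invented for this example; it is not a complete, conformant PMML document.

```python
import xml.etree.ElementTree as ET

# Hypothetical PMML-style fragment: a regression table with an
# intercept and one coefficient per input field.
PMML_FRAGMENT = """
<RegressionTable intercept="10.0">
    <NumericPredictor name="age" coefficient="0.5"/>
    <NumericPredictor name="income" coefficient="0.001"/>
</RegressionTable>
"""

def score(pmml_xml: str, row: dict) -> float:
    """Score one input row: intercept + sum(coefficient * field value)."""
    table = ET.fromstring(pmml_xml)
    result = float(table.get("intercept"))
    for predictor in table.findall("NumericPredictor"):
        result += float(predictor.get("coefficient")) * row[predictor.get("name")]
    return result

print(score(PMML_FRAGMENT, {"age": 40, "income": 50000}))  # → 80.0
```

The point of an open interchange format like PMML is exactly this separation: the model can be built by one vendor's tool and scored by another's, because both sides agree on the markup rather than on a proprietary binary format.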

When PMML was adopted as an open standard by the likes of IBM and other major players earlier this year, the trade press had a flurry of articles. However, our interview with Grossman represents the first article that covers the entire extent of what he is doing.

ISOC Summarizes ICANN Dilemma

p. 12

On December 9 - 10 the ISOC Board of Trustees met in San Diego, California. See http://www.isoc.org/isoc/general/trustees/mtg22-03.shtml Their minutes provide a frank assessment of ICANN's lack of authority.

IPv6 from the Viewpoint of Mobile Wireless

Continued Cell Phone Growth to Cause Deployment of IPv6 Nets Interconnected to Non Disappearing IPv4 Infrastructure

Substantial Work Remains to Bring V6 and Data to Cell Phones

pp. 13 - 19

We interview Charlie Perkins, a Research Fellow in the Wireless Internet Mobility Group at the Nokia Communication Systems Laboratory. Perkins offers a fresh point of view on the issue of IPv6 deployment. He explains that independent nodes running IPv6 already exist and will spread. "IPv4 and IPv6 can co-exist in the same general network because they do not collide with each other. They just have to know how to address each other. For example you can have a router that routes IPv6 packets and IPv4 packets on the same network."

"The whole thing about IPv6 to begin with was to develop a protocol, deploy it, and do what IETF does well which is to get to inter-operability testing going and then to just start to build it. People want to buy solutions to the problems facing them, be they IPv4 problems or IPv6 problems. People will want to buy solutions for their IPv4 problems and for their IPv6 problems. Eventually the solutions for the IPv4 problems may become more expensive than the solutions for the IPv6 problems. This will be true in part because the IPv6 solutions that are already available will become cheaper as IPv6 grows in market share."

NAT will not suddenly disappear. "Having large domains of both IPv4 and IPv6 is merely one way to partition the possibilities of the overall IP address space in general. In such a situation, with the right kind of firewall NAT platform, you can even envision translating IPv6 addresses into IPv4 addresses at the border of the domains, so that IPv4 applications can in effect be tricked into believing that what is going on is only an interaction between two IPv4 applications." Talking about the inordinate expense of converting an IPv4 internet into IPv6 is asking the wrong question, because v6 can be meaningfully deployed in an Internet where v4 continues to function.

However, the arrival of a billion cell phones over the next 18 months will force much more serious deployment of IPv6, which is the only reasonable means of doing both voice and data over a single cell phone.

According to Perkins: "We have answers for most of what we have looked at but, as we look, we see more and more problems. For example there are a lot of problems in security and a lot of problems in quality of service. There are also a lot of problems in header compression. Also the way in which the base stations are coordinated to manage spectrum most effectively is historically not very friendly towards the IP model."

"All of these things add up to a situation where, as I mentioned before, you can employ IPv6 now. But for specific applications like voice IPv6 cannot currently match the performance of analog voice-over the air as a part of the PSTN. Now we're going to change this and believe that we will be able to equal or exceed the current capabilities of analog voice over the air as part of the PSTN.² Adding mobility to the mix of necessary protocol development for IPv6 data phones complicates the technical issues involved. According to Perkins: ³There are a lot of people who want to use v4. But I don't think we will ever get to global deployment of mobile IPv4 for voice-over IP. I think by the time voice-over-IP really comes into play, we will be using largely IPv6."

Of the several interesting protocols being developed, the most interesting comes from an IETF working group called AAA (authentication, authorization and accounting). Radius works only for static objects and has some other difficulties as well. Consequently the AAA working group is building a replacement protocol for Radius. The AAA protocol will come with features such as session measurement and accounting. Tied in with IPsec, AAA will do authorization and accounting for services such as mobile IP.

Among Scaling Issues IPv6 Solves Only IP Number Problem

NATs Depend on Both IP Numbers and Routing Issues

Since IPv4 and v6 Interconnect But Do Not Interoperate, Introducing v6 Means Running Two Networks -- 3G Makes v6 Cellular Viable

pp. 20 - 22

Yet another IETF discussion this time with interesting new information on levels of complexity of NATs and levels of address allocation.

Klensin Internet Drafts Propose Radical DNS Revamp

New Class Means New Root -- Drafts Are Aftermath of Network Solutions Split With IETF on ENUM and Internationalized Domain Names

pp. 23 - 26

DNS issues are less settled than ever before. Uncertainty about the fate of the protocol and ICANN's failure to generate any consensus on issues of Internet governance have led to a situation where Network Solutions' new VeriSign owners are doing something that, before ICANN, would have been unthinkable. Namely, it has instituted an ENUM trial that flies squarely in the face of the IETF-ITU agreement on the ENUM standard. On December 18, Tony Rutkowski, speaking for VeriSign Network Solutions at the NTIA Roundtable on ENUM, dismissed the IETF-ITU model of national ENUM administrators for the e164.arpa namespace and advocated a model of industry control, with NetNumber and the other unsuccessful applicants for ENUM-like gTLDs lined up in opposition to Richard Shockey, Neustar, the IETF and the ITU. Observers of the meeting seem in agreement that there now is no agreement on ENUM and that deployment in the US will be seriously postponed. On December 20 Net Sol announced the opening of its ENUM trials with a statement long on hype and short on substance.

The area of Internationalized Domain Names is even more contentious than ENUM. There Net Sol got another head start with its own proprietary solution and has been registering .com, .net and .org names in Chinese, Japanese and Korean characters since November 10th. The Asian nations in the meantime have begun to register names according to their own systems. In effect we are getting the pollution of the name space that ICANN has warned about, whether ICANN likes it or not. The conflicts between the opposing sides appear to be intractable, and the IETF IDN standards process has bogged down.

In the midst of this, something intriguing happened with the publication by John Klensin, Chair of the IAB, of three internet drafts. The most important of these came on December 13: "Internationalizing the DNS - A New Class" (draft-klensin-i18n-newclass-00.txt). See: http://search.ietf.org/internet-drafts/draft-klensin-i18n-newclass-00.txt

Klensin states: "The [draft] proposal is radical in the sense that it implies a major restructuring of DNS usage and, indeed, of the Internet, to make the DNS seamlessly capable of working with multinational character sets. Such a restructuring is, and should be, quite frightening. It is worth considering only if the long-term risks and problems of other proposals are severe enough to justify a radical approach. It is the working hypothesis of this document that they are." Klensin goes on to call for the creation of a new universal class in the DNS, one that would be designed for the UTF-8 character set. By calling for a universal class Klensin is, in effect, calling for a new root into which the old ASCII-based root would be folded as a subset of the new 'order'. Over time every DNS resolver would be obsoleted and replaced. It would be a bit like rebooting the Internet. But so convoluted have the DNS wars grown that at the very apex of power the thought has suddenly become one of sweeping everything aside and starting afresh. Klensin noted that: "A mailing list has been initiated for discussion of this draft, its successors, and closely-related issues at ietf-i18n-dns-newclass@imc.org. To subscribe to the mailing list, send a message to ietf-i18n-dns-newclass-request@imc.org with the single word "subscribe" (without the quotes) in the body of the message." We predict that this list could become one of the most important lists in a long, long time.


SURFnet Users Pay Cost of Connection

While Gov't Funding Buys Advanced Infrastructure Capabilities That Serve as Enablers of New Applications

As in Canada, an Emphasis on Dark Fiber Pushes the Network Forward While Other European Nations Focus Much More on Applications

pp. 1-8

We interview Kees Neggers, Managing Director of SURFnet, who explains first and foremost the government's policy that the research and education community must pay for its own Internet connectivity, while government subsidies are used to "tender" for a commercially advanced network. "What we did is to ask for things that we think are technically possible, but for whatever reason the commercial market is not yet willing to deliver."

He states that "the nature of the resulting contract is more a partnership for a common development than just a normal supply contract. Research people from both Telfort (BT) and Cisco Systems are actively involved in our activities." Since the late 1980s he has operated on the following dynamic: "With the 'innovation' money we built a network - SURFnet1. With the user money, we made it operational and kept it operational. In parallel, we used the government money to build the next generation network. Now we are building SURFnet5 with government money and the users are paying for SURFnet4, which is essential, because it is the lifeline to the Internet for our customers."

Kees' philosophy is that leading-edge network infrastructure will enable many new and productive applications. How the bandwidth is produced is of primary importance, as government money is used to catalyze efforts by the commercial carriers in bringing new technologies to market.

SURFnet is asking for and receiving the same kind of optical, customer-owned lambda network as Canarie in Canada. "In our next generation networking, we are no longer just talking to carriers to make plans, we are talking to customers to make plans. And the carriers are welcome to deliver what we ask. But you see, we have the strength of the end user community to tell them what we want. As with SURFnet5, we told them what we wanted: 10 gig lambdas, rather than ATM and SDH."

The fiber builds that it undertakes for its backbone are also used to attract and create commercial fiber infrastructure. SURFnet's philosophy has been one of continually pushing the envelope. To do this it needs to be able to move quickly and make its own decisions promptly. In part for these reasons it has chosen not to participate in the next generation pan European GEANT project which is a consortium of European national research and education networks.

Neggers: "SURFnet has found that European networking so far tends to be driven by the speed of the slowest. I don't like that model. It is a recipe for not being able to be state of the art. We wanted to use the opportunity of the setting up of Geant to learn from the past and improve the situation. However the way the Geant network is organized is a continuation of the structure from the past. Dante has no central management. It's a consortium of some twenty-six national research networking entities. All 26 have to agree on everything. I didn't want to be the 27th of that group. So my proposal was, Dante should do the procurement, should operate the network and I want to be a customer of DANTE. And the consortium should only be a consortium in its relationship to the European Commission to define the set-up. But none of the participants in Europe were willing to go that route." While it is the only European nation not to participate in Geant, it is connected to the pan European Ten-155 research backbone and will connect to Geant as well.

SURFnet has been following the development of OBGP. According to Kees: "The reason that Bill St. Arnaud and I are interested in this is that we are not providers. We work for a user community. If it is better for the user community to get many lambdas on their premises, we will deliver them. However, a provider might well want to keep a provider relationship where the provider is the smart network provider and it can force you to work through it."

SURFnet's policies are designed so that the network's expenditures serve as a magnet for other telecom players and communities to lay dark fiber. "Everybody in the Netherlands is allowed to dig fiber in the ground and own it. . . If one provider asks permission to lay a fiber, it will be announced, and all others who are interested in the same route are free to use the same digging; the providers then share the costs of the digging." The national policy of the Netherlands government is to create conditions such that the combined actions of the research and education and commercial sectors will create a fiber-based infrastructure for the entire nation and keep it in the top ten of European nations in information technology infrastructure expenditure.

State of the Internet:

Light, IP and Gigabit Ethernet

A Road Map for Evaluation of Technology Choices Driving the Future Evolution of Telecommunications

pp. 9-10

From the COOK Report Annual Report: There appears to be a choice of two paths to our telecom future. One is to go with the highly innovative pure Internet approach of gigabit Ethernet over condominium fiber. Such a choice empowers the customer, favors decentralization over centralized control, and provides small and innovative businesses with the environment they need in order to flourish. The other path is to try to forestall the innovation by squashing competitors with a massive, vertically integrated company founded on older technology and leveraging access to content over a network monopoly so pervasive that people will find they have no choice but to buy it. What could be in store for us all, if things go in this direction, was summarized by Scott Cleland, CEO of the Precursor Group, on Friday January 19th, 2001: "Precursor believes AOL-TW has budding 'Microsoft-like' potential to grow increasingly dominant by being the leading national company that brings together the various online interfaces (content, Internet access, buddy lists, instant messaging, etc.) to become the de facto consumer online access market standard much like Microsoft Windows brought together the various desktop applications to become the de facto consumer software market standard." See http://cookreport.com/lightipgige.shtml

Bandwidth, User Tools Migrate Toward Network Edge Fueling Idea of Always on Disciplinary Computational Grids

User Control of Bandwidth Raises Interest in Shaping Network Environment to Needs of Subject Matter Communities,

pp. 11-18

In the context of broadband networks, grids are becoming a much discussed subject. We interview Peter MacKinnon, Managing Director of Synergy Technology Management. Grids are seen as a pervasive computing fabric into which users can plug. Says MacKinnon: "Computational grids are viewed as a network of distributed computing resources that can work both cooperatively and independently of each other. They allow applications to operate in a distributed multi-platform environment across various geographic scales defined by the physical networks involved." Computational grids represent one of the frontiers of computing. They raise many fundamental challenges in computer science and communications engineering, much of which has to do with partitioning a problem across multiple machines, latency in the networks, and administration and allocation of the grid's resources.

Then there are access grids. According to MacKinnon: "That's another way to look at the grid, where it provides access to devices, such as, say, radio telescopes or optical telescopes. I make these references in technical terminology or scientific terminology, because this is the locus of this grid frontier. It's not in the commercial world yet. For the time being we're not talking about applications that relate to customer relationships."

A more complex example, 'Neptune: a Fiber Optic Cable to Inner Space,' is at this stage a proposal. It is being led by Woods Hole and JPL in the U.S. and the Department of Fisheries and Oceans in Canada, with other players involved. Basically the intent is to put fiber optics in a grid-like form on the Juan de Fuca plate off the West Coast of North America. This will be an ocean floor-based grid with nodes spaced a hundred kilometers apart, each containing a receptor. That receptor, or junction box, is analogous to a satellite or space station system: you will be able to plug instrument packages into it.

A number of financial systems could be turned into a grid by being connected within a single phone network. With these grids a global trading house, for example, could end up having its neural network system in London connected with that in New York, with that in Tokyo, with that in Sydney. So now you're monitoring on a different level. If you have the latency problem solved, so that you can both do this in real time and do the computations required, it becomes a potential example that could lead to the development of new types of financial instruments or new ways of hedging or new risk reduction capabilities. The financial area would probably be one of the first major commercial uses of grid-like capability. Another example of grid-like capability could well be in utilities. These are organizations that have distributed systems already that want to use the grid, in a simple sense, to do status checks, self-healing, monitoring, or whatever the case may be.

The Grid Forum is basically a place to talk shop for everyone from those who are trying to build grids to those who actually want to use grids. It is a common meeting ground as a place to discuss the entire dimension of grids. They have organized themselves into several working groups. See http://www.gridforum.org/

Many of the technical issues needing to be solved involve integrating current communications and computing advances with the architectural needs of grids. However, we have to find ways of solving certain problems in both areas before grids can deliver on their promise.

If grids deliver the promise demonstrated by the notion of grid space, which we just talked about, the following are likely. Grids will provide powerful, interactive, dynamic and flexible environments, creating opportunities for new discoveries on the level of Grand Challenges. They will also allow for more R&D without increasing other resources, as well as widen access and enhance educational uses.

Actual implementation of robust grids is going to require a great deal more advancement in software systems than is currently available. When you start to think about what it is that you're going to do in a grid-like problem-solving environment, then there are some really fundamental technical issues that need to be addressed.

If you have a computationally intensive problem, there have been many advances in parallel computing approaches in the last several years. These advances allow problems to be partitioned so that multiple parts of the problem can be simultaneously computed on different processors. However, you have to understand in some detail the kind of problem that you're dealing with in order to know how to partition the processing.
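The partitioning step can be sketched in a few lines of Python. This is our own minimal illustration, not anything from the interview: a thread pool stands in for the grid's distributed machines, and the problem (a sum of squares) is chosen only because it splits trivially into independent chunks.

```python
# A minimal sketch (ours, not MacKinnon's) of problem partitioning:
# split a computation into independent chunks, compute each part
# separately, then combine. A thread pool stands in for the grid's
# distributed machines; a real grid would also contend with network
# latency and with administration of the grid's resources.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each "machine" works on its own partition independently.
    return sum(x * x for x in chunk)

def grid_style_sum(data, workers=4):
    # Partition the problem into roughly one chunk per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Combine the independently computed partial results.
        return sum(pool.map(partial_sum, chunks))
```

The hard part, as the paragraph above notes, is that the split only works because this problem decomposes cleanly; knowing how to partition a real scientific computation requires understanding its structure in detail.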


Chicago Civic Network

Fiber to Link More Than 1600 City Institutions

November RFI for Condo Style Build Yields 63 Responses

pp. 19-21, and

Request For Information Chicago CivicNet pp. 22-24

We interview Joel Mambretti, Director of the International Center for Advanced Internet Research at Northwestern University, about Chicago's plans for CivicNet, a public-private partnership that would link all public schools, libraries and city agencies by fiber.

The City is doing what any large organization should naturally do, that is to say, it is planning for a future rise in demands, and it is planning for new services, while understanding that it has a set budget.

The projected CivicNet budget is $250 million over 10 years. The question is: How can we best optimize this expenditure? Part of this process is simply doing what organizations should normally do. Certainly, then, they want to ensure that their requirements are clearly communicated to potential providers of those services, because the providers are very eager to know what customers require so that they can respond. Another part of this process is establishing an ongoing dialogue between the City and potential providers of services to match what is being asked for with what can be provided. This is a process that is healthy for both sides and one that both sides appreciate.

Therefore, the idea isn't to go out and say: Build something. But rather to say to the general world: Here are the requirements, and then ask for a response. That is why the city has issued an RFI, not an RFP.

Thanks to subscriber Frank Coluccio we received a pointer to the CivicNet web site that has been built by the city of Chicago: http://www.chicagocivicnet.com/civicnet/SilverStream/Pages/civiclist.html The site and the RFI document itself - some 130 pages in Adobe PDF format - begin to bring home the seriousness of the project. For a project of this scope, the city appears to be making unprecedented use of the Internet to bring a public focus to its CivicNet acquisition procedures. In addition to the RFI, the web site contains a forum for respondents to question the city, and access to city mapping tools and map sets that in non-electronic form would be hugely expensive to enlist in a venture such as this. Finally, it is perhaps the most significant example of CANARIE's telecom outlook influencing events in the United States. We republish here a shortened form of the RFI text, without its extensive appendices. The complete document is worth examining in order to get a sense of how ambitious this project is.

Congress Gives ICANN Second Look

Auerbach and Froomkin Testify Before a Senate That, Compared to the House, Is in an Early Learning Stage pp. 25-26

When the Senate Commerce Committee announced ICANN hearings on the heels of the House hearings and invited At-Large Board member Karl Auerbach to speak, we were encouraged. Unfortunately the Senators were not well informed. While Auerbach's and Froomkin's testimony was accurate and should have commanded rapt attention, Mike Roberts and Roger Cochetti, two of the people most responsible for the mess, sat there and made apologies for an ICANN that really was not yet mature and had not yet had a chance to do what it was put into place for. Senator Burns, in the opinion of an observer, called the hearings only because he figured that if his House colleagues were concerned he had better find out what this was all about.

The highlight of the morning came when Senator Burns asked Michael Froomkin what he would propose in light of the criticisms of ICANN that were delivered to the Committee. Froomkin: "the most important issue is not setting a precedent by which a Department, like the Department of Commerce, can end-run the Administrative Procedures Act. And that is an issue that frankly is bigger than the Internet. The global concern here is not just in this process. For this is a way in which agencies can bypass ordinary procedures to create a privately organized regulator in all but name. A regulator that uses control over a federally dominated resource to make people sign contracts with it, pay it money and do what it says. And then not be subject to due process. Not be subject to court challenge. Not be subject to ordinary oversight. That is really cutting the Congress and the American people out of the regulatory process. So while in the case [of the seven selected new TLDs] you might have had an outcome which was better than no decision at all - I have nothing against any of the winners here. I have no reason to believe that any are bad or imperfect, and for all I know we would be all better off if they were all put in the root along with lots of others too. It seems to me that there is a good government issue that is pretty serious here. Someone needs to hold Commerce's feet to the fire on this one." Burns pledged a follow-up hearing to look at "redress of due process." Whether he really understood remains to be seen. As we have said before, and as one of the other witnesses pointed out, if ICANN succeeds there will be other ICANNs, all designed by corporate interests to engage in self-dealing and ignore the due process rights of those whom they would regulate.

Hearings on ICANN Governance

Prepared Statement of Karl Auerbach before the Senate Commerce, Science and Transportation Committee

pp. 27-30

Auerbach: There are those who say that ICANN is merely a technical body. I am a technologist. Yet I have a difficult time understanding how any of ICANN's decisions concerned with the Domain Name System have any technical content at all.

One must wonder where the technical component might be in ICANN's Uniform Dispute Resolution Policy - a policy that expands the protection of trademarks to an extent not granted by any national legislature. And one must also wonder where the technical component might be in ICANN's preservation, indeed in ICANN's extension, of the hegemony of Network Solutions over the naming systems of the Internet. We know more about how the College of Cardinals in Rome elects a pope than we do about how ICANN makes its decisions.

There are lessons to be drawn from ICANN: - ICANN has shown us that governmental powers ought not to be delegated to private bodies unless there is an equal obligation for full public participation and public accountability. - ICANN has shown us that a public-benefit and tax exempt corporation may be readily captured by those who think of the public less as something to be benefited than as a body of consumers from whom a profit may be made. - The role of the US Department of Commerce in ICANN has shown us that Internet may be used as a camouflage under which administrative agencies may quietly expand their powers without statutory authorization from Congress or the Executive.

ICANN Governance

Prepared Statement of A. Michael Froomkin, Professor of Law, University of Miami School of Law, P.O. Box 248087, Coral Gables, FL 33124, before the Senate Commerce, Science and Transportation Committee, Communications Subcommittee

pp. 31-35

Froomkin: If in 1985 the Internet itself had been a proposal placed before a committee that behaved as ICANN did in 2000, the Internet would have been rejected as too risky. Risk aversion of this type is antithetical to entrepreneurship and competition.

Worst of all, ICANN applied its criteria arbitrarily, even making them up as it went along. The striking arbitrariness of the ICANN decision-making process is illustrated by the rejection of the ".union" proposal based on unfounded last-minute speculation by an ICANN board member that the international labor organizations proposing the gTLD were somehow undemocratic. (That this same Board member was at the time recused from the process only adds to the strangeness.) The procedures ICANN designed gave the applicants no opportunity to reply to unfounded accusations. ICANN then rejected ".iii" because someone on the Board was concerned that the name was difficult to pronounce, even though the ability to pronounce a proposed gTLD had never before been mentioned as a decision criterion.

Testimony of the Domain Name Rights Coalition and Computer Professionals for Social Responsibility

pp. 35-37

DNRC: The sad fact is that ICANN has been "captured" from the beginning. Special interest groups have dictated the direction of ICANN, and have morphed it into an Internet Governance body with none of the protections afforded by governments.

As currently constituted ICANN has failed on all charges. It has moved slowly; been unrepresentative; acted to limit competition; and failed to offer useful, fair, coherent policies, or even policies which encourage investment in virtual property. ICANN is a policy experiment that has failed.

ICANN is correct in that its formation was an unprecedented experiment in private sector consensus decision-making. Unfortunately, that experiment is in the process of failure. ICANN's claim of "openness and transparency, based on Internet community consensus, bottom-up in its orientation and globally representative" is far from the reality of the situation. ICANN is the classic top-down organizational structure without accountability. When its by-laws are inconvenient, they are changed without discussion.

The Internet is the single most significant communications medium ever created. Its power goes well beyond that of shopping malls and e-commerce, and empowers individuals in a way never before imagined. It is thus a national as well as an international resource. The ability to control important aspects of this technology cannot be underestimated. It is up to all of us to remain vigilant when organizations are given special privilege by a branch of the US Government to control this vast means of expression. Safeguards must be put into place whereby individuals, non-profit entities, churches, tribal governments, and other disenfranchised groups may provide unencumbered input and opinion to an open, transparent and accountable entity. This entity is, unfortunately, not ICANN in its current form.


Canada and the Next Internet Revolution

Canarie Builds Next Generation Optical Internets

Government and R&E Leadership Push IP Cost Advantage

Special Report Shows Prospects for Customer Owned Fiber Networks in Rare Innovative National Environment

pp. 1-2

This special issue of the COOK Report documents and analyzes a profound revolution that is underway in telecommunications in Canada. There, under the leadership of Canarie, the Advanced Internet Development Organization, the Canadians are building a nationally connected community owned infrastructure of dark fiber. They are lighting the infrastructure with IP over gigabit Ethernet over glass.

As one of their presentations says, they offer "A proposed strategy to make Canada the most networked country in the world and the first to have low cost Gigabit Internet infrastructure available to virtually all schools, hospitals, libraries and businesses by 2005."

As this issue shows, however, this is not future hype. Large sections of the public infrastructure are operational now. We argue, meanwhile, that the United States, having invented the Internet, is in the process of giving away all leadership in its implementation. Internet2 is being run as a subsidy program for the connection of universities - one that is devoid of the innovation of Canarie. While 75% of the school districts in Quebec have either completed or are installing their own dark fiber networks as a part of Canada's national public grid, the FCC-imposed e-rate in the US means spending 2.25 billion dollars a year to subsidize the obsolescent copper plant of the local phone company. The program forces schools to buy service year after year and effectively prevents communities from building their own infrastructure.

Meanwhile, market forces under facilities-based telecom deregulation in Canada are pushing for the ad-hoc establishment of a private/public-sector, partly customer owned and operated, IP over gigabit Ethernet fiber network linking schools and municipal governments to province-wide networks which in turn link to the Canarie national trans-Canada optical backbone for research and education traffic. The result is the first large scale national infrastructure that operates (except where it must interconnect) completely independently of the global public switched telephone network.

Canarie's Role in Charting Canada's Telecom Future

Increasing Bandwidth & Lowering Costs for R & E Community

St Arnaud Offers Overview of OBGP, Fiber Infrastructure Growth, Condominium Fiber Business Model and Scaling Issues

pp. 3-8

We interview Bill St Arnaud, the Director of Network Projects at Canarie. Bill explains the development of the first Canarie networks and goes on to explain that he serves the Canadian research and education community by developing ways to make broadband Internet access several orders of magnitude less expensive for his clients. One of these ways has been the development of new methods of building customer owned - or as Canarie likes to call them - "customer empowered" dark fiber networks.

In one of the papers on the Canarie site we find "lower prices for fibre is leading to a shift away from carrier-owned infrastructure and towards more customer- or municipally-owned fibre, as well as to innovative sharing arrangements such as fibre "condominiums". "Condominium" fibre is "un-lit", or "dark" fibre that is installed by a private contractor on behalf of a consortium of customers, with customers owning the individual strands of fibre. Each customer/owner "lights" their fibres using their own technology, thereby deploying a private network to wherever the fibre reaches, perhaps including carrier COs and ISPs. The business arrangement is comparable to a condominium apartment building, where common expenses such as management and maintenance fees are the joint responsibility of all the owners of the individual fibres."
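The economics of the condominium arrangement can be sketched with a toy calculation. This is our own illustration with entirely hypothetical dollar figures, not numbers from the Canarie paper: the dominant cost, the dig itself, divides across the consortium, while each owner pays only for its own strands.

```python
# Hypothetical figures, not from the Canarie paper: how condominium
# fibre splits one dig among several strand owners, versus each party
# trenching its own route alone.
def condo_cost_per_owner(dig_cost, strand_cost, owners):
    # Shared expenses (trenching, conduit, management and maintenance)
    # divide evenly, as in a condominium apartment building; each owner
    # also pays for its own individual strands of fibre.
    return dig_cost / owners + strand_cost

solo = condo_cost_per_owner(1_000_000, 50_000, 1)    # going it alone
shared = condo_cost_per_owner(1_000_000, 50_000, 6)  # six-member consortium
print(f"solo: ${solo:,.0f}  condo share: ${shared:,.0f}")
```

Under these made-up numbers a six-member consortium cuts each owner's outlay from $1,050,000 to roughly $216,667, which is the arithmetic behind the "customer empowered" model.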

The Canadian government has made a decision to use Canarie to build a public sector national infrastructure that can use the thousandfold cost advantage of the new IP over gigabit Ethernet over glass to enable a national broadband infrastructure for education, government and research telecommunications - in short everything that could be considered non-commercial.

In order to leverage most effectively the ability of their public customers to use their networks to communicate with each other, Canarie has been developing (as we reported in an interview with St Arnaud in our November issue) optical extensions to the Border Gateway Protocol (BGP), in this case called OBGP. This will permit each of the new K-12, municipal and university public sector networks to reach out and peer with like-minded counterparts by using OBGP to take a lambda, or wavelength of light, and establish a virtual optical cross connect at a mutually agreed upon exchange point. For the first time these networks can manage and direct their bandwidth without having to buy any carrier services or having anything to do with carrier clouds.

St. Arnaud's strategy is to test his concept and submit an Internet draft for formal IETF standardization before the Spring IETF. If this procedure goes smoothly the work could be finished and OBGP could be found in the capabilities of all Cisco and Juniper routers by year's end. Unfortunately it is impossible to predict whether the standards process will go smoothly. We learned from Bill that Canarie has now raised the stakes by establishing a second OBGP group at Carleton University in Ottawa with the task of writing the code necessary to get the job done and testing that code in an inexpensive optical switch available now from JDS Uniphase or soon from a new start-up named Edgeflow. This switch sits in front of the customer's router and will give each of them OBGP capability regardless of what comes out of the standards process. The result will be that, without dependence on a carrier, many small entities will be able to generate and manage their own gigabit Ethernet bandwidth in a way that a year ago only a Fortune 1000 company could likely have done. "Customer empowered" - indeed. St Arnaud, however, is clearly making a major wager on OBGP. In this issue we summarize his October 30, 2000 proposed architecture design for the establishment and funding of CA*net4. Our summary shows that a major purpose of the new network will be to use and develop OBGP within the customer owned national infrastructure that Canada is building.
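Since OBGP was still a draft at this point, its exact mechanics were in flux. As a loose toy model of the idea described above - peering negotiated BGP-style, but with the resulting action being an optical cross-connect at an exchange point rather than a routing-table entry - consider the following sketch. Every class, name and behavior here is invented for illustration and does not follow the draft:

```python
# Toy model of the OBGP idea (all names and mechanics invented for
# illustration; the actual Internet draft differs): an edge network
# asks a shared exchange point to patch a wavelength (lambda) through
# to a counterpart, creating a virtual optical cross-connect, instead
# of reaching that counterpart over a carrier transit backbone.
class ExchangePoint:
    def __init__(self, name):
        self.name = name
        self.cross_connects = set()

    def patch(self, a, b):
        # Establish a virtual optical cross-connect between two members.
        self.cross_connects.add(frozenset((a, b)))

class EdgeNetwork:
    def __init__(self, name, exchange):
        self.name = name
        self.exchange = exchange
        self.lightpaths = {}

    def peer_over_lambda(self, other):
        # The OBGP-style step: signal the exchange to patch a lambda
        # through, then send traffic to the peer over that lightpath,
        # with no carrier cloud in between.
        self.exchange.patch(self.name, other.name)
        self.lightpaths[other.name] = "lambda via " + self.exchange.name
        other.lightpaths[self.name] = "lambda via " + self.exchange.name

ix = ExchangePoint("Shared-IX")
school_net = EdgeNetwork("K12-Net", ix)
uni_net = EdgeNetwork("University-Net", ix)
school_net.peer_over_lambda(uni_net)
```

The point of the sketch is only the shape of the transaction: two customer owned networks arrange their own dedicated bandwidth at an agreed exchange point, which is what "managing and directing their bandwidth without buying carrier services" means in practice.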

Finally we look at St. Arnaud's new draft "Scaling the Internet," first posted on December 12, 2000 and most recently revised on January 3, 2001. Here he looks at bandwidth demand for large scale Internet transit backbones like UUNET's, which is reported to be doubling yearly. He raises a hypothesis that points out another new and disturbing trend. It begins to look as though the number of open network connections between computers is growing even more rapidly. As peer to peer connections move to personal computers, huge numbers of machines can suddenly maintain perhaps dozens of connections each. Even if one supposed that the growth of new users of the net would slow or even cease, there are numerous reasons to suppose that the number of connections will continue to grow. This process is demanding the acquisition of even more backbone bandwidth, and its demands may be doubling every four months. Mike O'Dell of UUNET has spoken of needing petabit backbone links within two years.
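The arithmetic behind the hypothesis is easy to check. A short sketch of our own, using only the two doubling periods quoted above (the starting values cancel out of the comparison):

```python
# Compare yearly doubling (backbone traffic) with doubling every four
# months (open connections), the two rates in St Arnaud's hypothesis.
# Only the growth factors matter; absolute starting values cancel out.
def doublings(months, period_months):
    # Growth factor after `months` if the quantity doubles every
    # `period_months`.
    return 2 ** (months / period_months)

months = 24  # a two-year horizon
traffic_growth = doublings(months, 12)     # doubles every 12 months -> 4x
connection_growth = doublings(months, 4)   # doubles every 4 months -> 64x
print(traffic_growth, connection_growth)
```

After two years yearly doubling yields a factor of 4 while four-month doubling yields a factor of 64, which is why, if the hypothesis holds, connection growth rather than raw traffic growth would dominate backbone demand.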

Hopefully CAIDA or a similar entity will be able to test this hypothesis within the next few months. If it is verified, it will demand that major changes in Internet topology be made. Ironically, the widespread use of OBGP to move as much bandwidth as possible to the edges of the network and off the largest carrier operated transit backbones may be the only way to continue to scale the growth of the Internet.

A Revolution in the Cost of New Fiber Networks

IMS Executive Explains His Role in the Creation of Fiber Brokering and Condominium Dark Fiber Networks

Plummeting Cost Makes it Possible to Bring Fiber to Most End Users

pp. 9-14

We interview Robert Proulx, vice president telecommunications of IMS Experts-Conseils. Robert explains how during the past four years he developed the art of brokering condominium fiber builds, at first in Quebec and now in seven different Canadian provinces. At first he worked for Hydro Quebec, the province's electric utility, building and marketing a commercial fiber network on its right of way. Bell Canada became nervous, stepped in and bought all the network's extra capacity. It moved too late, because by the time it did, Proulx knew who had fiber, who did not, and where the fiber was. He realized that he had a business model for leveraging the knowledge he had developed. His customers still wanted fiber. They'd just have to do it on their own.

He explained what happened in the following way. "I'm the engineer hired by the organization to build a network. The first thing I do is to find a partner who wants to be an investor in the project. The partner becomes an investor in the project and I say we have a project that will be owned by five or six companies and we prepare the document and we go out with an RFP to build. And the same contractors that are being used by Bell Canada will build my network. It's an RFP to build. Not to provide services, not to provide dark fiber. It's to build a network that will be owned by a consortium. But we also are responsible for building the consortium itself."

Now Proulx would have a much more difficult time were it not for the regulatory environment in Canada. In the second half of the 1990s the cable TV companies fought for and obtained laws giving them open access to the poles and conduits owned by the incumbent local exchange carriers and provincial electric utilities. Anyone with a non-dominant carrier license, which is easy to get, can demand and obtain access to poles and conduits from which to string new fiber. The payment to the pole owner is set at uniform, inexpensive rates. It was this easy and affordable access to right of way (established uniformly by national policy) that facilitated the planning and then the implementation of Canada's ongoing boom in the construction of customer owned dark fiber networks.

One of the most significant of Proulx's and IMS' achievements has been the network they brokered for RISQ. RISQ is the Reseau d'Informations Scientifiques du Quebec, an organization owned by all the universities in the Province of Quebec. It exists as part of the Canarie program furthering province-wide fiber nets. A carrier-built network without sharing would have cost 100 million dollars for 3,500 kilometers of new fiber and right of way.

As the map of the St Lawrence River Valley of Quebec on page 12 of this issue shows, Proulx knew in exquisite detail where the fiber was. He pursued a successful strategy of giving the owners buy-in to the project through exchanges of strands of fiber. This created a situation where the only construction he had to do was to fill in the gaps between already existing fiber.

Clash of Broadband Private vs Public Sector Business Models Will Impact Canadian Economy

Francois Menard Describes Complexity of Canadian Regulatory, Carrier, and Content Struggles

Suggests That Viable Outcome Likely Only With Community Networks Infrastructure

pp. 15-35

The interview with Francois Menard, at nearly 22,000 words, is the longest we have ever published. There is a good reason for the length. Menard knows the intricacies of the combined content and carrier business model from the inside and talks about them with a detail not found elsewhere. But he does far more than this. He understands, as few others do, the efficiencies and power of the 'pure' Internet model. The conversation shows how the impact of a potentially pernicious business model combining ownership of content and network infrastructure is likely to lead to a market share war between Videotron and its new content-based owner Quebecor, along with other cable carrier partners, on the one hand, and Bell Canada and other incumbent local exchange carriers on the other.

For a period of time, in the purest of dot-com plays, it looks as though Quebecor may use its pension fund fueled acquisition of Videotron to give away Internet access in order to gain market share. Menard sees the commercial players battling it out to maintain vertically controlled, cost-inefficient corporate empires. When the battle dies down most Canadian ISPs will be destroyed, and the few corporate victors may try to recoup their battered bottom lines by raising rates drastically.

Our conversation finally takes the struggles for commercial market share and shows how they will impact developing national public infrastructure in Canada and ultimately globally. What is presented is a synthesis of commercial, policy, economic and technical impacts of the ongoing technology deployment. The deployment is shaped on the one hand by efforts of the vertically integrated content mega-corps that have acquired or been acquired by huge carriers.

These people see the Internet as a way to broaden their control over their 19th century empires. On the other hand, the 21st century optical Internet technology is decentralizing control and ownership, pushing it to the edges and into the hands of community or customer owned - "empowered," as Canarie likes to say - networks. It is unusual for any one individual to be expert in both these areas. It is even more unusual for such a person to be able to describe the likely impact of the ongoing collision between these forces, which are both very much opposed and inextricably linked. Menard does just this. The vast scope and complexity of what he describes is the major reason for the extraordinary length of the interview. What is articulated here has, we believe, never been said before openly and in one place.

He described for us how he acquired his knowledge. "In working with several Internet Service Providers faced with the prospect of seeing the business models which they had pioneered destroyed by the established incumbent carriers turned into ferocious competitors, he became especially cognizant of the potential repercussions that the recent wave of mergers in the telecommunications industry and the even more recent mergers between Internet access, cable, telephony, television and printed press conglomerates could have on the Internet Service Provider industry. Having spent the last few months building an Ethernet metropolitan optical network for a competing cable television carrier, he realized that he could take his vision much further with community networks, and decided to join IMS to pursue his vision."

A few sound bites: "While I want to know what's the true cost of an ad on TV, all I have is an offer from Quebecor. The offer states that the only kind of advertising I sell you is this bundled scheme of newspapers, Internet, and TV."

"What we have in fact been explaining is the difference between the new business models of incumbent telecommunications carriers going wild with media mergers and the traditional ISP's which cannot afford to buy into this. What ISP's are trying to provide is a monthly subscription based service, not cross-funded by advertising or e-commerce taxation or captive portal kinds of services."

Menard is interested in the potential of municipalities to form network buying cooperatives on behalf of their residents. He suggests that it could "preserve the quality of the telecommunications service that we have today in a world where we clearly suspect that if we leave it to the private sector, it will degrade rapidly as the telecommunications services become focused on content exclusivity and advertising-funded. The utilities and municipalities can change this as they are entities clearly known to have no economy of scale from the selling of advertising. IMS also can do its share of the effort to alter this trajectory, but it cannot be out there on all markets at the same time. Then again, it may not be all energy utilities and all municipal administrations which may be receptive to these new opportunities."

"We could imagine certain areas where the public sector would no longer be purchasing any services from the incumbent carriers aside from local PSTN interconnection. The business that would be left for the carriers in these areas would be formed of a mixture of shopping malls, business condos, and residential. It's a lot of business taken away from the incumbent carriers. From that perspective, the landscape of telecom will require a tremendous amount of consolidation once all these guys are no longer buying services from the incumbent carriers. I would suggest that we've proven that most phone companies have no interest in providing services based on the best networks that they can design because it would cannibalize too much their existing businesses. As the public sector in Canada is showing to this date, you have to build your IP network on condominium fiber if you want it to reap all of its potential benefits."

"The task at hand is to structure the resale of the telecommunications facilities of the electric utilities so that they provide telecommunications infrastructure with a level of openness suitable for implementing competition between telecommunications service providers which base their revenues on selling advertising and those who base their revenues on selling services."

"It used to be, because of old technology, that you needed public subsidy to provide services at affordable prices for most people. Only fifteen years ago, mechanical switches for placing a telephone call across the country were still widely in operation. The responsibilities for managing this network were colossal in comparison to today's technologies. Looking at the current state-of-the-art technologies, we must ask whether residents are now well served by far-distant entities which define what services are with the aid of a public subsidy license from national regulators. Or should we eliminate the public subsidy license in favor of what can now be built by the residents of those very communities in a free market?"

"Scaling the Internet" - Excerpts from a Draft

Bill St. Arnaud in New Paper Examines Traffic Issues; His Hypothesis Suggests That Scaling Problems May Force Traffic Off Backbones

pp. 36 - 37

Excerpts from http://www.canet3.net/library/papers/scaling.html

Optical Community Networks - A Canarie Presentation - September, 2000

pp. 38-44

Excerpts from http://www.canet3.net/library/presentations/OpticalCommunity-Sept2000.ppt . The cost information on these slides for building various community and municipal networks is extremely useful.

CA*net4 Design Document - A Canarie Presentation - October 30, 2000

pp. 45 - 49

Excerpts from http://www.canet3.net/library/presentations/CAnet4DesignDocument-Sept00.ppt

The design philosophy is fascinating. "Research and Education networks must be at forefront of new network architecture and technologies. But should not be duplicating leading edge developments occurring in private sector. Should be undertaking network technology development that is well ahead of any commercial interest. But any network architecture can only be validated by connecting real users with real applications and must solve real world problems. Test networks per se are not sufficient. There is a growing trend for many schools, universities and businesses to control and manage their own dark fiber. Can we extend this concept so that they can also own and manage their own wavelengths? Will "empowering" customers to control and manage their own networks result in new applications and services similar to how the PC empowered users to develop new computing applications?"


LayerOne Gear in Telco Hotels Provides Cost Effective Optical Interconnect for Carriers

Ciena Core Director Provides Service that Grooms Circuit Interconnection Between Fiber of Many Carriers,

pp. 1-14

We interview Alexander Muse, President and CEO of LayerOne, which is offering optical interconnects for carriers at the physical layer of the OSI reference model. LayerOne, located inside carrier hotels, uses the Ciena Core Director switch as a service that enables many different carriers to connect their fiber strands and provision circuits for each other.

LayerOne takes between 2,000 and 15,000 square feet of floor space inside a carrier hotel. They then bring in between 4,000 and 30,000 strands of fiber into what they call a Nexus Bandwidth Exchange. The fiber is connected to a complex array of cabinets. To enable this they have a framework of ADC FEC boxes and the ADC FL2000 SC Connectors ELF bay. For electrical connections at the DS-1 and DS-3 level, they use a customized ADC Entraprise Frame. They make optical-to-optical connections, at levels ranging from dark fiber to OC-192, via ADC's Next Generation Fiber Frame. Ethernet connections are handled from a router/switch.

From this equipment fiber is tied into the Core Director. Instead of having customers bring in bays full of expensive SONET equipment, LayerOne asks for delivery of fiber strands in 96- or 192-strand cables. Initially they will connect two of them to the Ciena Core Director, each of which can handle up to 256 OC-48s and each of which uses transponders and tunable lasers to do its work. Within the Core Director those are then lit to a level of OC-48 or OC-192, depending on how the customers want to interface. From there LayerOne grooms fat customer pipes into electrical STS-1s. It then maps those to the other providers to whom the customers want to connect. LayerOne charges them a flat monthly fee for connecting to its Core Director. The size of the fee depends on the bandwidth of the connection, which can range from DS-1 to OC-192.

But what is this grooming all about? As Light Reading explains, the Core Director "can set up any size of pipe across a network by combining any number of STS-1 (51.84 Mbit/s) connections, ranging in size from 1 to 192." Grooming "slashes the number of boxes that service providers need to buy and maintain; it also helps them provision services faster and use bandwidth efficiently."
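The arithmetic behind grooming is simple: every SONET pipe is a multiple of the 51.84 Mbit/s STS-1 base rate, so an OC-n line is just n STS-1 units that a groomer can parcel out. A minimal sketch of that bookkeeping (the function names and structure are ours, purely illustrative, not anything from Ciena or LayerOne):

```python
# SONET pipes are built from multiples of the STS-1 base rate.
STS1_MBPS = 51.84

# An OC-n line carries n STS-1 payloads.
OC_LEVELS = {"OC-3": 3, "OC-12": 12, "OC-48": 48, "OC-192": 192}

def groomed_capacity_mbps(sts1_count: int) -> float:
    """Bandwidth of a groomed pipe built from 1 to 192 STS-1 units."""
    if not 1 <= sts1_count <= 192:
        raise ValueError("a groomed pipe combines between 1 and 192 STS-1s")
    return sts1_count * STS1_MBPS

def sts1_units_free(oc_level: str, allocated: int) -> int:
    """STS-1 units still unassigned on an OC-n line after grooming some away."""
    return OC_LEVELS[oc_level] - allocated

# A customer pipe of 48 STS-1s is an OC-48's worth, roughly 2.5 Gbit/s...
print(groomed_capacity_mbps(48))      # ~2488.32 Mbit/s
# ...leaving 144 STS-1 units on an OC-192 to map to other carriers.
print(sts1_units_free("OC-192", 48))
```

This is why grooming lets one box replace bays of SONET gear: any pipe size between 1 and 192 STS-1s can be carved from the same line.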

LayerOne is open in five locations and plans 15 or more by year's end. In the Bryant Street Dallas colo it is currently interconnecting 30 carriers. These range from Qwest and Level3, to SBC and Verizon, to Yipes and Telseon, to MFN and lesser-knowns like Global Metro.

Finally we conclude with a three-page commentary on LayerOne by New York City consultant Frank Coluccio, who paints a contextual picture of the LayerOne developments and who, emphasizing a point made by Muse, notes that "the capabilities that LayerOne demonstrated were possible were not capabilities that their vendor, Ciena, would have initially guaranteed, much less expected LayerOne to attempt to accomplish."

Changing Bandwidth Provisioning Models in Metro Area Fiber Markets

Net Access Chooses Acquisition of Dark Fiber and Self Provisioned Circuits Over Purchase of Shared Gig Ethernet

Prices of Telseon and Yipes! Viewed as Too High and Cogent's Business Model Seen to Be Unsustainable,

pp. 15 - 25

We interview Avi Freedman, who relates his experience during the past year as he has sought to upgrade Net Access's metro infrastructure from carrier-provided to self-provisioned using dark fiber. The network that he founded in Philadelphia in 1992 is now a 10-million-dollar-a-year business supplying customer circuits from Boston to New York City to Philadelphia to Baltimore to Washington, DC.

Freedman's problem was to determine whether it made sense for him to buy Gigabit Ethernet from Telseon or Yipes!. Freedman says: "Gigabit Ethernet is something that we've actually found people don't want, primarily because it is difficult to be certain one is not getting shortchanged. The fact is, if I give you an OC12, you know you are getting an OC12 and there's nothing I can do to shortchange you on the matter of how many bits you can pump through it. You know that you have your full circuit even if it is being time-sliced and sent over my WDM infrastructure. But if I give you a Gig E, you have no clue whether that's a VLAN on a big trunk or whether it is a real Gig E dedicated all to you, because the technology allows you to sell 10 people a Gig E, put a 10 meg Ethernet in the middle, and then sell all of them Gig Es at the other end."

Freedman has found that it is more cost effective for him to lease dark fiber from MFN than to buy Gigabit Ethernet service from Telseon or Yipes!, because they won't let him put his own measurement tools on the circuits. Under such circumstances the cost of their bandwidth is close to what it would cost him to lease dark fiber to supply an equivalent amount. To be competitive, he figures, their price should be about one half of what he could provide it for himself. He also looks at Cogent, which he believes to be selling to ISPs at rates below its cost. He questions how long it can stay in business.

He gives a great deal of useful pricing information and talks about the scaling issues involved in the cost of providing the largest circuits (OC-192s). Provisioning OC-48s is now more cost effective. As he puts it: "If you can do 32 OC48s and get another fiber pair, do 32 more OC48s, it can be cheaper (especially if you're trying to make your capital last) to just do this." He also explains that his customers have told him they want plain OC circuits without IP on them, because what the carriers used to provision in seven to 30 days five years ago now takes six months or longer.
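The capacity arithmetic behind the "32 more OC48s" remark can be sketched as follows. The line rates are standard SONET rates; the function and the comparison are our illustration, and no actual equipment prices are implied:

```python
# Standard SONET line rates in Gbit/s.
OC48_GBPS = 2.488
OC192_GBPS = 9.953

def fiber_pair_capacity_gbps(waves: int, rate_gbps: float) -> float:
    """Aggregate capacity of one fiber pair carrying `waves` wavelengths,
    each running at the given line rate."""
    return waves * rate_gbps

# One pair filled with 32 OC-48 waves, per Freedman's example:
per_pair = fiber_pair_capacity_gbps(32, OC48_GBPS)   # ~79.6 Gbit/s
# Lighting a second pair with 32 more OC-48s doubles capacity using the
# same (cheaper, mature) OC-48 optics instead of stepping up to OC-192 gear.
print(2 * per_pair)  # ~159.2 Gbit/s across two pairs
```

The trade-off Freedman describes is capital timing: two pairs of OC-48 optics can out-compete one pair of OC-192 optics when fiber is cheap and the premium for the fastest lasers is high.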

The article concludes with an exchange between Telseon Vice President Bob Klessig and Avi Freedman. A highlight from Klessig: "It is safe to assume that the traffic between the routers is highly concentrated and that the routers will be in place for a long time. As Mr. Freedman says, this is a great scenario for a dedicated link such as dark fiber if it is available. But Telseon is offering a switched service. It provides for new data connections in minutes to hours between any two points on a Telseon metropolitan network. Bandwidth choices are highly granular and quickly changed."

Freedman: "There are no applications or software demanding highly granular and changeable bandwidth today that I know of from Net Access or Akamai. The base cost of Telseon's and Yipes!'s 1mb-over-100mb/1000mb services is as much as customers will shortly be able to pay Net Access, and probably Enron and others, for a 100mb provably dedicated pipe. Analyses and promises are no substitute for proof of infrastructure where the possibility for aggregation follows."

ICANN and Verisign in Alliance to Reinstate De-facto NSI Dot Com Monopoly in Return For Financial Support of ICANN

ICANN's Pattern of Fraudulent and Deceitful Action Continues With Board's Capitulation to Staff and Vint Cerf in Melbourne,

pp. 26-35

Those who established ICANN got much of their early support by pledging to rein in what they painted as a very nasty Network Solutions monopoly in dot com and the other gTLDs. Now, operating under the guise that the Internet industry can be trusted to regulate itself, ICANN and Network Solutions' successor Verisign began secret negotiations last summer which on March 1, 2001 effectively reimposed the monopoly. While we offer our own analysis, we also recommend that of Brock Meeks: "Dot Com Hocus Pocus - The Remaking of a Monopoly," http://www.msnbc.com/news/540693.asp.

The Australian Board meeting ended with the Board granting ICANN staff a blank check to make decisions for which the Board itself should take responsibility. As Michael Froomkin wrote: "Well, it's even worse than it seemed: we're stupid, and we've been snookered again. With no warning to anyone, the ICANN staff pulled a bunch of resolutions out of their pockets at the last minute. There was no public notice. No advance publication. As a result, the entire public comment period the day before the Board meeting was little more than a pointless farce, since no one except the staff (and maybe the Board?) knew what was on the agenda, and almost no one had time to wade through the pile of documents."

Meanwhile Vint Cerf, as Board chair, acted to involve himself in Verisign's interests. Verisign CEO Sclavos wrote to Cerf on Feb. 28: "We also appreciate your commitment to seek formal Board approval for an appropriate extension of the time under the existing agreement should compliance with Section 23 be necessary. But we are hopeful that by working with you, and the Internet community, including members of the ICANN Board, we will all see these new agreements approved and successfully implemented."

A BWG attorney commented: "Section 23 is the one that says that the com/net/org registry agreement expires in November 2003 unless Verisign divests the registrar by May 2001. Read the third sentence of the above paragraph. Sclavos is saying that Cerf has made a personal commitment to him that, if the ICANN Board does *not* approve the proposed agreements, Cerf will go to the Board to get the May 2001 divestiture deadline extended."

We also include a review of the completely revised ICANN Watch website. It is now by far the best site on the web for tracking what this renegade Internet regulator is doing.

Optical Border Gateway Protocol Now Internet Draft,

pp. 36 - 37

OBGP is now an Internet draft for consideration at the spring IETF in Minneapolis. We briefly interview one of the draft's authors, Marc Blanchet, about what to expect in coming months.

Letter to the Editor: DANTE Objects to Description by Kees Neggers,

pp. 37 - 38

Dai Davies, General Manager of DANTE, suggests the SURFnet criticism was unfounded. Kees Neggers informed us he saw no need to respond. On March 6 SURFnet established the fastest external connectivity of any research network in the world. SURFnet press releases were posted at http://www.gigaport.nl/en/en_main_act.html