A Practical Navigator for the Internet Economy

ALOHA Networks Applies Spread Spectrum To Increase Satellite Bandwidth Efficiency

-- Will Offer Inexpensive Satellite Local Loop Bypass Internet Connections to Small Business in the Americas & Soon Globally, pp. 1- 7

We interview the CTO of ALOHA Networks, Norman Abramson. ALOHA intends to start by tripling the efficiency of its use of satellite bandwidth via the application of spread spectrum. Abramson describes his technology as SAMA (Spread Aloha Multiple Access) and explains that, with refinement, they expect to go far beyond their immediate tripling of efficiency. He says: "Spread Aloha is CDMA without the CD. That is to say, it is Code Division Multiple Access, but instead of code division with multiple codes, we have only a single code for all users. In other words there is no code division per se, and this fact is the key to why this technology is so simple to implement." The basic Spread Aloha technology deals with the fundamental issue of how to share a common channel among an extremely large number of bursty users.
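The channel-sharing problem Abramson refers to is the classic ALOHA setting, and a small simulation makes the stakes concrete. The sketch below models textbook slotted ALOHA, not ALOHA Networks' proprietary SAMA scheme (whose details are not given here): a slot carries data only when exactly one of many bursty users transmits, which caps throughput near 1/e of raw channel capacity and shows why a tripling of efficiency matters.

```python
import random

def slotted_aloha_throughput(n_users, p_tx, n_slots=100_000, seed=1):
    """Fraction of slots carrying exactly one transmission (a success)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        # Each bursty user independently transmits in this slot with prob p_tx.
        transmitters = sum(1 for _ in range(n_users) if rng.random() < p_tx)
        if transmitters == 1:   # exactly one sender means no collision
            successes += 1
    return successes / n_slots

# Offered load G = n_users * p_tx = 1 maximizes throughput, near 1/e (~0.37).
print(round(slotted_aloha_throughput(50, 1 / 50), 3))
```

Running this at an offered load of 1 yields roughly 0.37, the well-known slotted-ALOHA ceiling; pushing the load higher only increases collisions.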

They will begin operations with a hub using nine Field Programmable Gate Arrays. The hub will be located at their operations center and eventually they will have multiple hubs at multiple teleports. The hub can service 3,000 customers. Each customer has a dish and receiving equipment costing $2500. Bandwidth from the customer to the satellite and back to the customer is two-way, burstable up to two megabits per second, never touches a telco local loop, and will cost a small business having 15 to 20 computers on a LAN about $200 a month.

ALOHA Networks' first test will serve Latin American customers generally. It will use the SatMex 5 satellite (launched 12/4/98) but, since its system is geographically insensitive (provided it has satellite coverage), its first test users could be anywhere in the satellite footprint -- which is North and South America.

It is going for the Latin American market first because it is "crystal clear that there are no viable, economical alternatives and because the demand is growing so rapidly." It will come back to the US market at a later time -- or in response to distributors who want to use SkyDSL [the name of ALOHA Networks' commercial service]. It will begin testing in 2nd quarter 99 and be in operation at mid year. It will start by installing and operating its own hubs and will have a network operations center in the San Francisco Bay Area. It will connect to the commercial Internet via the California Exchanges.

As we went to press, we learned that ALOHA Networks has just laid off some engineers. When we reached Norman Abramson to inquire, we learned that ALOHA had decided, rather than building the production hardware necessary to begin operations, to seek a joint venture partner to do the necessary commercial production. With this change in direction they are redirecting capital allocated to the engineering of in-house production. They hope to announce a joint venture partner shortly.

Cisco's David Oran Discusses Forces Propelling Development of Voice over IP

-- Surveys Device Control Protocol Issues from SGCP to IPDC, pp. 8 - 12

We interview David Oran who, at Cisco, serves as systems architect for voice over IP protocol design. Oran explains what he calls "the four enablers" of Voice over IP. First is progress in the development of Digital Signal Processors (DSPs). Second: recognition of TCP/IP as the ubiquitous protocol over which packetized voice would run. Third: the standards committees understood the possibilities early enough to get enough cooperation among the possible players to create a real market. Fourth: tremendous changes in the regulatory environment, with the 1996 telecom act in the US and telecom deregulation in Europe, created major opportunities for voice bypass solutions in enterprise networks.

The motivation for infrastructure investment now appears to come from what looks like an essentially boundless demand for data capacity. Within this context, a major recent activity of Oran's has been the development of SGCP (Simple Gateway Control Protocol). Oran says: "Its view of life is that much of what people want to do now - at least with Internet telephony - is to replace the infrastructure of a circuit switched network with a more cost effective and forward looking IP network. However, they may not be prepared to turn upside down the complete call control, billing, service and feature infrastructure that has been in place through the intelligent network all at the same time. So what SGCP attempts to do is to emulate the call control system of the existing network, only enabling you to do the emulation on top of IP."

Asked to compare SIP to SGCP he replied: "SIP places all the call control intelligence in the network at the endpoints on the network. SGCP centralizes the call control intelligence in servers and the end points are simply slave devices. I expect that both SGCP and SIP will co-exist in the network. If you have a smart end point like a PC, SIP is the appropriate thing to do because it has state and understands things. It can go and talk to directory servers for example. If you have a dumb end point like a simple gateway or a cable modem, or some appliance type thing like an IP telephone, SGCP would be the more appropriate protocol."

He is optimistic about MGCP, which is the result of progress in converging Level 3's IPDC with SGCP. Asked whether he believed that we have a series of ongoing events that will make it possible for the next generation telcos in effect to surround and begin to swallow the PSTN, he replied that he does believe this, but he also pointed out that, before it happens, issues of telco reliability must be faced. "Skeptics say what happens when the TCP/IP network breaks and that is what the telephones are also running on? . . . To ensure the Internet's continued growth, it must be engineered for higher reliability, which means more backup trunking paths, more backup equipment, and better and more seamless ways of switching to backup equipment than we have deployed today."

Evaluating SONET in a High Speed, Mission Critical, Private Data Network

-- NANOG & Conscientious List-User Yield Unusually Good Technical Summary: Criteria, Strategy, and Choice of Vendors and Equipment, pp. 13 -16

We present a long discussion of the use of SONET in a mission-critical TCP/IP VPN in the New York metropolitan area. The discussion is reprinted from NANOG with the permission of its compiler, Peter Polasek. The information presented ranges from how to negotiate with the LECs, to designing network topology for maximum reliability, to the issues involved in selecting network equipment.

ICANN Constitutes Itself in Order to Apply "Adult Supervision" to Internet -- Esther Dyson Acts as Mouthpiece for Shadow Cabinet; Mike Roberts Serves as Transmission Belt for ISOC & IBM Wishes

-- Backing Interests Threatened by Ubiquitous TCP/IP, ICANN Prepares to Overturn the Values that Made Possible Current American Dominance of Telecommunications Technology, pp. 17 - 22

Since late October ICANN has constituted itself in order to apply "adult supervision" to the Internet. Esther Dyson acts as mouthpiece for a Shadow Cabinet composed of the Mighty Five, IBM's Roger Cochetti, and Jones Day's Joe Sims. As interim President, Mike Roberts serves as a transmission belt for ISOC and IBM wishes. Backing interests threatened by the emergence of ubiquitous TCP/IP, ICANN prepares to overturn the values that made possible current American dominance of telecommunications technology. Before readers consider the foregoing to be an excessively dire evaluation, they should ponder carefully the following statements by three of the Mighty Five.

Vint Cerf said in an interview that it's time for the IETF to let the Internet grow up. "The fact that a constellation is being built to do what one or two people did is astonishing," said Cerf, senior vice president of MCI WorldCom. "But then again, when the Internet started, it wasn't of commercial value. Now it is and it has attracted all parties, including the lawyers. That means it's valuable and that's good." "Oversight of the Internet has become a legalistic business," Roberts said. "We have to deal with lawyers and bankers and unfortunately we have to do it in a way that makes some of you uncomfortable." Source: Sandra Gittlen, Network World, 12/11/98 (article on ICANN's rough reception at the IETF). Finally, Dave Farber to the President's Information Technology Advisory Committee: "The DNS Problem . . . will decide whether 'adult' supervision of the internet is needed."

Editor's Translation: Cerf's "grow up" means let the big corporations make the rules so the little guys get screwed. Roberts' "deal with the lawyers and bankers" means that unless you have the bucks for the lawyers, don't even bother to play. Farber's "adult supervision" is exactly what the "five" have crafted ICANN to deliver. We thank Dave, Vint and Mike for making our points in the lead paragraph above, although we are especially saddened to see the father of the Internet make such a statement.

Part One of our article is a narrative version of the presentation we made to the Third Annual CANARIE Advanced Networking Workshop in Ottawa on December 15. Our major point was that the way ICANN has been put together is a fundamental betrayal of Internet values and culture. The Internet would best be served by ICANN's failure. Absent Jon Postel, ICANN simply cannot be trusted. Part Two contains the highlights of mail list debate since late October.


Rob Frieden, "Without Public Peer: The Potential Regulatory and Universal Service Consequences of Internet Balkanization" and Xipeng Xiao "Internet QoS: the Big Picture".



New Infrastructure Costs Are Plunging

Sycamore Networks to Enable SONET Elimination; Terabit Methodology May Enable Inexpensive, Very Reliable, & Open Architecture Router Fabrics

New Companies' Technology Forces Attention to Cost and Scaling Impacts of DWDM Applications 18 Months From Now


Sycamore Networks, (Part 1) pp. 1- 7

The transition of optical Internets from test beds into production networks will open up opportunities for innovative technology deployment that have not been seen since layer two switching technologies were applied by the first generation of commercial backbones five years ago.

We interview "Desh" Deshpande who, with Daniel Smith, is co-founder of Sycamore Networks. Deshpande and Smith also co-founded Cascade Communications in 1990. With Sycamore they will follow the same business model as with Cascade, which was first to market with a family of ATM switches used by the early commercial ISP backbones to get more bang for the buck.

Sycamore will have a nine-month product development cycle. Not having to worry about backward compatibility nor about cannibalizing its own installed base, Sycamore can use optical components culled from industry leaders such as Lucent and Nortel. With these parts, it will make chassis and filters for DWDM-based IP-over-fiber network transport equipment. The Sycamore products will enable users of the new dark fiber to light new strands in the most cost-effective manner by the purchase of optical filters for placement in the appropriate Sycamore provided chassis.

Sycamore's approach will enable those who are provisioning networks by lighting dark fiber to eliminate very expensive SONET multiplexors at the higher levels of the network architecture. It allows IP backbone builders to wait until they have a purchaser of a specified amount of bandwidth from point "a" to point "b" before buying and installing the filters needed to handle it. Consequently Sycamore will introduce a form of just-in-time bandwidth provisioning that will cut the time necessary to fill circuit orders from months to days.

Current network build-outs demand total SONET provisioning of the full OC-192 capacity of the fiber regardless of whether any of that capacity has been sold. In one example discussed, Deshpande estimates that with Sycamore's approach the owner of the fiber would pay only 25 to 30% as much to provision the entire OC-192 capacity as with SONET. He adds that the owner of such fiber would be able to start receiving income after spending only 7 or 8 percent of the amount currently required by fully provisioned SONET. Deshpande speaks of wanting to move network build-outs from the "mainframe mentality" of 10 to 20 year depreciation to the PC mindset of three to five years.
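The economics Deshpande describes reduce to simple arithmetic. In the sketch below, the $10 million cost of a fully SONET-provisioned OC-192 route is an assumed, purely illustrative figure; only the percentages come from the interview.

```python
# Hypothetical full-SONET cost for one OC-192 route (assumed figure).
sonet_full = 10_000_000

# Upper ends of Deshpande's estimates, as integer percentages for exactness.
sycamore_full = sonet_full * 30 // 100   # 25-30% of SONET for full capacity
first_light = sonet_full * 8 // 100      # 7-8% spent before revenue can start

print(sycamore_full, first_light)
# The gap between first_light and sonet_full is the "just-in-time" advantage:
# capacity is bought filter by filter, as bandwidth is actually sold.
```

On these assumed numbers, the fiber owner starts billing customers after an $800K outlay instead of a $10M one, and never spends more than $3M to light the whole route.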

The Sycamore equipment is referred to as providing intelligent optical networking because bandwidth segments can be 'peeled' off at network pops without having to undergo optical to electrical interface conversions and back again as is the case with SONET based fiber nets. As Sycamore adds more software intelligence to its products, it anticipates that lambdas may be used much as virtual circuits are in the ATM world to provide connections between sites of a single customer and for Quality of Service considerations. Sycamore's first product releases at mid year are expected to focus on equipment with ranges of less than 500 kilometers. Follow up releases will focus on 1,000 kilometer and 10,000 kilometer ranges.

Alan Huang and Terabit, (Part 2) pp. 7 - 22

Alan Huang who in the early 80's created the Batcher Banyan switch architecture at Bell Labs, received a patent on November 24, 1998 for the design of what he calls a Meta-Router. We are the first to publish information about his design methodology which uses group theory to create a switching fabric that could serve as the fabric of a terabit router.

Huang's plans hinge on the fate of his new patent: "Scalable Switching Networks," US Patent 5,841,775. The patent outlines a way to build a very efficient, scalable router. Huang points out that: "It can be built out of off-the-shelf routers, such as the Cisco 12000. It can be made fault tolerant via a fractional increase in hardware. It works in a synergistic manner with wavelength division multiplexing. It is also very well matched to the demands of voice and video."

Huang claims that the connection methodology contained in the patent will allow him to switch the capacity of a given Batcher-Banyan, Butterfly or Omega topology network with almost two thirds fewer ordinary Cisco 12000s than would be needed if these routers were arranged as a large router farm. He then shows how adding routers to increase his minimum configuration by 25% gives the resulting fabric three-fold more fault tolerance, with fewer routers than the more accepted and popular connection methods require.

In one example he shows how a network composed of 36 routers at nine locations (4 routers per location) could lose all four routers at a single location without breaking the ability of the remaining portions of the fabric to switch its traffic. He suggests that this reliability will be especially meaningful to telcos who will see it as a less expensive way to get the extra '9's of reliability that can otherwise only be gotten by the very expensive and space consuming proposition of having to double their equipment across their networks.
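Huang's reliability claim is, at bottom, a graph connectivity property, and the kind of check involved can be sketched as follows. The patent's actual Batcher-Banyan / Omega interconnect is not reproduced here; as an assumed stand-in topology we simply mesh the nine locations together and verify that knocking out all four routers at any one location leaves the rest of the fabric in one piece.

```python
from collections import deque

def connected(nodes, edges):
    """True if the undirected graph (nodes, edges) is in one piece."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:                         # breadth-first search from start
        for nxt in adj[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen == nodes

locations = range(9)                     # nine locations, 4 routers each
mesh = [(a, b) for a in locations for b in locations if a < b]  # assumed mesh

# Does the fabric survive losing all four routers at any single location?
survives = all(
    connected((n for n in locations if n != down),
              [e for e in mesh if down not in e])
    for down in locations
)
print(survives)
```

For this assumed full-mesh interconnect the answer is trivially yes; the substance of Huang's result is achieving the same survivability on the much sparser Batcher-Banyan style fabrics the patent covers, with only a 25% hardware increase.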

Huang's design can also function as a methodology and a consulting service, offering backbone architects both the tool set and the contextual knowledge he has developed in creating and testing the process. He says: "On one level, I am showing you how to re-organize your router farm to increase your efficiency and fault tolerance. On a second level, I am showing you how to build a scalable, fault tolerant router. And on a third level, I am showing you how to build a scalable, fault tolerant, communications infrastructure for data, voice, and video."

Huang claims that he has managed to turn scalability into a boring subject. Building a network becomes a checkbook decision. You want bandwidth? Call up Lucent and buy as many channels of WDM as you can afford. You want routing bandwidth? Call up Cisco or Lucent and stitch together the number of routers needed to build a meta-router that can handle the bandwidth.

We have appended critiques of Huang's ideas by Mike Trest and Sean Doran. Both of them, and others with whom we have discussed what Huang is doing, point out that his designs do not fit well with the current topology of Internet backbones, which aggregate traffic rather differently.

Furthermore, what Huang has to offer will not save users huge sums on routers within the next 12 months. As Huang states: "My message is for the mid term - two to five years. I am talking about the grand convergence of data, voice, and video. They might not care about voice and video now but it is clear that the telcos and cable companies are thinking about data. They need to begin to do some tests with my design as soon as possible to reassure themselves that it works as advertised. While they are testing, they will see that I am offering them a whole new framework from which to do their planning for the mid and long term."

The availability of multiple lambdas serving as channels brings to mind again the virtual circuit world of ATM. Huang sees a flatter and less aggregated network: "The backbone was useful in the old days when you only had one link to Denver. With 40 WDM channels, you can afford to bypass Denver and directly connect San Francisco to New York with one of these channels. Minimizing the distance traveled by a packet is no longer the proper metric but rather the number of router hops."

To take a different example: "with 40 channels of WDM on each link, you can afford to directly connect Seattle to Los Angeles via Chicago." Huang's commodity-priced, hot-swappable, open-architecture routers are perhaps more suitable to such a world than the relatively small numbers of very expensive and gigantic terabit routers being constructed today for an assumed continuation of our highly aggregated backbones tomorrow.

He is not worried by issues affecting scalability. "Don't be scared by 80 channels of WDM on each link or 80 Cisco routers at each location. You can build up to that incrementally. You don't have to buy them all in one fell swoop. Every 5 years the cost to performance ratio should drop by a factor of 10. It will get cheaper and cheaper as you build it."

We find it intriguing to see both Sycamore and Terabit talking about similar approaches in the new TCP/IP over glass world. In both cases we have new technologies making possible the consideration of a new planning methodology of inexpensive just-in-time bandwidth provisioning and router fabric building. Again, the pendulum seems to be swinging toward a DWDM powered world where quality of service issues, when needed, can be handled by a new lambda at the transport level rather than a diffserv-like protocol at the IP level.

COOK Report on Internet: State of the Internet in 1998, pp. 23 - 26

We publish our annual State of the Internet Survey. It will serve as the introduction to our annual anthology to appear by Jan 21. The title is IP Insurgency: Internet Infrastructure and the Transformation of Telecomm. As we have done in previous years, we will publish the complete text of this article on our web site when we publish the anthology in about ten days.

New Business Model For All IP Network

Enron's IP Over Fiber Network: Alternative Way To Deliver High Bandwidth Web Content -- Overlay Of Public Internet Has Significant QoS Implications, pp. 1- 10

We interview Stan Hanks who is Vice President, Research and Technology for Enron Communications. Enron is using power company rights-of-way to build its own fiber links from Portland, Oregon to Salt Lake City, to Las Vegas, to Los Angeles and from Salt Lake City to Denver to Houston, Texas. Enron is using access to this fiber to do swaps with Frontier that will give Enron a national fiber network. Unlike Qwest and Level 3 and Williams, Enron is building a pure IP network: namely IP over DWDM without SONET and without ATM. This helps them to keep both capital and labor costs low by eliminating the purchase of ATM and SONET equipment as well as the need to employ engineers for the maintenance of SONET and ATM.

Enron's business model is very different from that of the other next-gen telco networks (Qwest, Level 3 etc.). It is offering an overlay of the public Internet that can be used by large scale content aggregators such as Real Networks to deliver their content more cost effectively than is possible through the purchase of multiple upstream connections and transit through congested NAPs.

In the case of Real Networks, Enron would have high-speed connections into their major distribution centers. The content that they originate and Enron distributes gets sent instantaneously to all servers at all of Enron's network pops. High-speed connections to ISPs, CLECs and cable modem providers run from these pops.

The Enron business model is predicated on the assumption that it can function as a content aggregator, getting content from producers to local ISPs, and hence to the audiences the content producers want to reach, more efficiently and cost effectively through its network than through the public Internet.

In addition, Enron offers a managed infrastructure where it handles all the network connections and routing and all the computing aspects of delivering the content. For example, Enron places a router and a server under its control and management into the pops of its local distribution partners. The router comes, at Enron's expense, with a high-speed leased line into the partner's pop. Enron then provides all necessary high-speed interfaces from its infrastructure to that of its partners. Enron sees itself as having a fix for many of the things that are broken about the current Internet traffic scenario. The fix is to move the more demanding content across Enron's transmission infrastructure. But note that Enron is not a general backbone network: ISPs cannot buy access to the Internet via Enron. Enron is, in effect, a private overlay of the public Internet.

In the Real Networks model, Enron is paid when the Real Networks servers are able to deliver requested flows via Enron's network rather than via the public Internet. Last November Enron bought Modulus Technologies in order to gain access to its InterAgent real-time control software. Enron's content aggregator customers place this software on their web servers. When InterAgent receives an http request, it does a lookup of the requestor. If that person can be reached on the Enron network, it ships the packets that way and writes the transaction to software that over time determines the total data delivered by Enron on behalf of its client.
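The lookup-and-account loop described above can be sketched as follows. This is an illustrative reconstruction, not Modulus's actual software; the overlay prefix, function name, and accounting scheme are all assumptions for the sake of the example.

```python
import ipaddress

# Prefixes reachable via the overlay network (assumed example value).
OVERLAY_PREFIXES = [ipaddress.ip_network("10.20.0.0/16")]

bytes_delivered = 0  # running total used for settlement with the content client

def route(client_ip, payload):
    """Decide which path serves a request, accounting for overlay bytes."""
    global bytes_delivered
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in OVERLAY_PREFIXES):
        bytes_delivered += len(payload)   # log the transaction for billing
        return "overlay"
    return "public-internet"              # fall back to the ordinary Internet

print(route("10.20.1.5", b"x" * 1000))    # requester on the overlay
print(route("192.0.2.7", b"x" * 1000))    # requester elsewhere
```

The essential point is that the per-request decision and the billing record are the same lookup: the content client pays only for the traffic the overlay actually carried.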

Enron's overlay network is an interesting answer to the quality of service issue. Looking at the traditional IP class of service parameters, you find that you have a very small number of bits to play with. Network operators have to deal with the dynamic tension between the requirements of operating a core network and the requirements of operating the intelligent network at the edge. At the edge, if they want to, they may offer very many flavors or gradations of service. But at the same time they will find that, as they take bits across their backbones, the limits of technology require them to cut those flavors and gradations down to a very small number.
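The "very small number of bits" is concrete: the diffserv work standardized as this issue went to press (RFC 2474) redefines the top six bits of the IPv4 TOS byte as the Differentiated Services Code Point (DSCP), so however many service flavors the edge offers, a backbone can mark at most 2^6 = 64 classes per packet. A minimal sketch:

```python
def dscp(tos_byte):
    """Extract the 6-bit DSCP from an IPv4 TOS byte (RFC 2474 layout)."""
    return (tos_byte >> 2) & 0x3F

# Expedited Forwarding, the diffserv code point for premium traffic.
EF = 0b101110                  # decimal 46
tos = EF << 2                  # place the code point in the TOS byte
print(dscp(tos), 2 ** 6)       # the code point back, and the class ceiling
```

Sixty-four backbone classes against a potentially unbounded menu of edge services is exactly the tension described above, and it is what an overlay like Enron's sidesteps by carrying premium traffic on separate infrastructure.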

While Enron has relatively few customers, it can certainly deal with QoS issues by applying a bandwidth solution to them. But when Enron grows from dozens to hundreds, to ultimately perhaps thousands of customers, the ability to throw bandwidth at problems becomes much more elusive and it begins to need network engineering solutions.

Whither Telephony In IP Dominated World -- IPtel Engineers Debate Merits of Innovation Versus Predictability in IP Telephony Protocol Design, pp. 11 - 16

In late January on the IETF IPtel Working Group mail list, a question by a 3Com employee about the SIP protocol kicked off a discussion that migrated from the technical aspects at hand to a very informative discussion of the general role of the IETF process in Voice over IP protocol development. The debate pitted those who want the implementation of an entire suite of standards that would yield a total voice over IP solution against those who are happy to see more rapidly evolved modular solutions that may be packaged to serve varying needs and environments even if they are all less than 100% interoperable.

The debate is also about accommodation versus innovation. Should we be striving to move the public switched telephone network to the Internet? Or should we use IP and computer technology to build an entirely new phone system and set of services? The legacy phone companies and their mentor the ITU not surprisingly want a cautious, conservative transition to globally interoperable standards at all levels. The engineers of the IETF favor innovation over predictability as they push for a process that can offer a mixture of computing and telephony services independent of POTS.

In a "net head" versus "bell head" display of experiment versus conservatism one of the discussants concluded: Give me choices in how to locate users. Give me choices in how to select IP/PSTN gateways. Let me choose which QoS choice I want (by giving ITSPs choices in products to deploy.) I want a choice in what happens when I pick up the phone to call grandma. I *do* want it to connect. I do want some base level functionality - just as HTML/SMTP/... all do their basic job on the 'net. But let extensibility roam free. If in the process products don't work with each other, fix 'em. Everything does *not* need to be perfect day-one. This has plenty of precedent in the IP world.

Business Model of a Third World ISP -- Dileep Agrawal Explains the Dynamics of Building World Link, Nepal's Largest ISP, pp. 16 - 19

We interview Dileep Agrawal, an entrepreneur who has grown Kathmandu-based World Link from his bedroom to a 50-person company in less than 3 years. Dileep shows the creative role played by Teleglobe in offering reasonably priced satellite bandwidth, downlinked to Copenhagen and sent over Teleglobe infrastructure across the Atlantic to US-based connections. (One of Teleglobe's policies was a three-month moratorium on his monthly $4,000 half-link bill in order to help him build up a cash flow.) He explains how the Internet is beginning to play a role in the Nepali economy beyond that of mere tourism. He shows how he must work diplomatically with the Nepali PTT in a country where a percentage of telecom revenue traditionally goes to support social programs, something that makes cutting prices quite difficult.

ICANN AUTOCRACY pp. 19, 21- 22, 24

ICANN continues its assault, hiring a PR firm rather than opening its board meetings and getting ready to impose certification procedures for registrars so restrictive that Einar Stefferud commented on Feb 25: "ICANN threatens to destabilize all businesses that depend on stable DNS name arrangements."

While ICANN appears ready to steamroller its agenda through at the Singapore meeting next week, it is also so broke that Vint Cerf publicly appealed to ISOC supporters on its behalf. Meanwhile WITSA, the international technology arm of the Information Technology Association of America, sent out an astroturf brochure replete with factual inaccuracies asking its members to support the DNSO draft and the rights of trademark owners against the rights of ordinary domain name holders. Finally we note an October 1997 Reuters article on the leaders of the EC and the GIP calling for the creation of the kind of international regulatory body for the Internet that ICANN seems intent on becoming.


A discussion of the contrast between the ITU's and the IETF's handling of standards documentation


Extending The Reach Of Internet Telephony

Francois Menard Explains Players' Strategies -- Describes Wide Range Of Market Approaches & Protocol Development Designed To Leverage Differing Infrastructures, pp. 1 - 5

Francois Menard of Mediatrix discusses a range of issues involved in the current state of IP telephony. In his discussion he makes a useful distinction between session initiation protocols and device control protocols (for example Level 3's IPDC protocol which has since metamorphosed into MGCP).

He points out that these protocols are useful for "replicating the exact behavior of the PSTN on the Internet. In other words, if all you want to do is clone the behavior of the PSTN and try to sell something that's equivalent to that by having it ride on top of a network using TCP/IP for transport, then all you need is a device control protocol. That's really what I define as IP telephony." "By having something in software that looks like a telephone switch, you can make a remote-controlled IP telephony end-point in the network benefit from the same type of telephone number routing that currently exists on the PSTN."

Menard contrasts IP telephony with what he considers true Internet telephony which is just one more service on the Internet, saying: "so why should it suddenly be sold and billed to the customer as if it were conventional telephony? It's a fundamental belief of mine that replicating the behavior of the PSTN on the Internet has a business model that is fundamentally incompatible with Next Generation Internet Services. On the Internet you bill for quality of service and services behind application servers. On the Internet, you cannot bill by the minute for doing nothing more than routing a telephone call to someone."

Menard foresees three different solutions for three different potential deployments: H.323 for legacy H.32x networks, device control protocols like MGCP for those environments where users are happy with a third party remote controlling their telephones, and SIP for Next Generation Internets.

Some legacy telcos may have enough invested in H.323 to justify (in their minds at least) building and operating the third network that it requires. Menard also expects that "device control from a centralized call control entity will find its own acceptance in the marketplace." As an owner of LAN telephony devices, you may want to let your carrier control these devices; this can be referred to as "outsourced call control". He expects that SIP will be the protocol of choice for those who want to do true Internet telephony over next generation Internets. In addition to letting third parties control your telephony systems, any time that an Internet telephony or IP telephony application has to deal with the PSTN, you will wind up having to deal with MGCP, which can become a migration path to true Internet telephony. With IP telephony the network operator is in control; with Internet telephony the end user is.

Menard goes on to talk about Videotron's modernization of its cable network, which will allow it, using MGCP, to offer IP telephony as part of its cable services beginning late this year. Cisco is pioneering a new business model by building the network for Videotron in return for a share of the profits. The exact numbers are kept confidential, but a Montreal newspaper called Les Affaires disclosed that the deal gives Cisco a cut of the service revenue over a five-year, renewable period.

In Videotron's case, its IP telephony will be sold like conventional telephony services. When Videotron has competition from traditional ISPs who will eventually gain access to Videotron's infrastructure [the CRTC, Canada's FCC, has a proceeding for third-party residential access to cable], Videotron will likely be forced into selling IP telephony as just another Internet service.

Technology Choices For Running IP Over Glass

Nortel VP Describes Business Model Conditions Dictating Selection Of Four Possible Combinations: IP & Glass; IP, ATM & Glass; IP, ATM, SONET & Glass; IP, SONET & Glass, pp. 6-10

We interview Derek Oppen, Vice President for Carrier Router Products at Nortel. Oppen finds that because the capacity of fiber to carry bits has grown even faster than that of chips, the industry is looking at massively parallel architectures for backbones that allow one to scale routers up to terabit speeds. Over the past 24 months the carrying capacity of fiber has increased twentyfold, from two lambdas at 10 gigabits per second each to forty.
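The capacity jump Oppen cites works out as follows. A minimal sketch, using only the figures in the interview (two lambdas versus forty, each at 10 Gbps):

```python
# Per-fiber capacity growth from wave division multiplexing,
# using the figures cited in the interview.
GBPS_PER_LAMBDA = 10  # each wavelength carries 10 Gbps

before = 2 * GBPS_PER_LAMBDA    # 20 Gbps per fiber, 24 months ago
after = 40 * GBPS_PER_LAMBDA    # 400 Gbps per fiber today

growth_factor = after / before
print(f"Per-fiber capacity: {before} -> {after} Gbps ({growth_factor:.0f}x)")
```

The twentyfold figure comes entirely from adding wavelengths; the per-lambda rate is unchanged.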

Carriers with legacy and multi-service as well as IP traffic on their networks will stick with running IP over ATM over SONET over Wave Division Multiplexing. This architecture pays a penalty of roughly 25% of throughput bandwidth because of ATM and SONET overhead. It will also be expensive to operate because of its equipment and staffing demands.
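A rough sense of where a penalty on this order comes from can be sketched from standard framing numbers. The figures below (ATM's 5-byte header per 53-byte cell, SONET STS-1 frame overhead, and an assumed ~8% average AAL5 trailer-and-padding loss) are textbook values plus our own assumption, not numbers from the interview:

```python
# Back-of-the-envelope estimate of the throughput lost to ATM and
# SONET framing when IP rides over ATM over SONET.

# ATM "cell tax": every 53-byte cell carries a 5-byte header.
atm_cell_tax = 5 / 53                  # about 9.4%

# AAL5 adds an 8-byte trailer and pads each packet to a whole number
# of 48-byte cell payloads; ~8% average loss is an assumption for a
# mixed packet-size distribution (it varies with traffic).
aal5_overhead = 0.08

# SONET STS-1: 810-byte frame, 774 bytes of payload after section,
# line and path overhead.
sonet_overhead = 1 - 774 / 810         # about 4.4%

usable = (1 - atm_cell_tax) * (1 - aal5_overhead) * (1 - sonet_overhead)
print(f"Usable fraction of raw bandwidth: {usable:.1%}")
```

Multiplying the layers out leaves roughly 80% of the raw bandwidth usable; with less favorable packet-size mixes the loss approaches the 25% figure cited.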

A second operational choice involves removing SONET and letting IP run over ATM and glass. ATM handles all the network's multiplexing up to OC-48 or OC-192. IP over ATM over WDM is particularly well suited to an environment where you have not just IP traffic but also traditional voice or frame relay traffic that you want to consolidate in ATM.

The third option is IP over SONET over WDM, a three-box solution that gets rid of the ATM box. This is really IP over PPP. This is the option you'd use if you have lots of IP traffic and what's coming out of the IP side is less than OC-48 or OC-192.

The fourth option is IP over glass: layer 3 and the bottom half of layer 1. Note that even with the inherent simplicity of this scheme, traffic engineering is necessary. Still required are a thin layer two for traffic management and a thin layer one for protection and survivability. MPLS is the likely choice for the thin layer two, and SONET or Gigabit Ethernet framing for the thin layer one. The most likely candidates for this architecture are, of course, the next-gen telcos.

In looking at the cost of bandwidth as a result of the IP over fiber revolution, Oppen believes that while the cost of bandwidth will go down, it is not absolutely clear what this means. The big unknowns are how much and how quickly it might go down. The other element to consider is the traditional carriers who have already laid their fiber; it is a sunk cost as far as they are concerned. The people who are laying fiber now have to figure out how to generate enough revenue to cover the incredible capital cost of laying that fiber. A final variable is the effect of multiple-lambda use increasing the bandwidth of already laid fiber.

Nortel: A Survey Of Changing Strategy

The Journey From Smart To Stupid Networks Where Bay, Avici And Other Technologies Fit In Next pp. 11 - 16

Ken Smith, Director of Broadband Networks Technology, offers a tour of Nortel's current marketplace position.

While we agreed that Nortel's investments in switching technology, protocols and equipment aren't going to disappear overnight, we suggested that the viability of this technology will come into question within a time frame of 5, 10 or 15 years, because IP over light can provide more efficient and less costly transport. If this is the case, aren't a lot of people going to say that they have no choice except to become a major IP player?

Smith agreed that this was true while cautioning us that it would be many decades before there was no more legacy telco demand left in the world. However, he also pointed out that Nortel is a world leader in OC-192 WDM and added that Nortel's acquisition of Bay Networks would place it in a league with Cisco in routers while enabling it to use its huge carrier sales force to compete in ways that Cisco could not, adding that it would give Lucent a challenge that Lucent could only match by acquiring Ascend.

The interview also examines where Avici Systems fits in Nortel's product mix, looks at alternative local loop technologies, SONET's future and various cost and price issues. In an interesting discussion of cost versus price, he points out that the cost per bit per mile has been coming down at about 25% a year, and has been for about 10 years.
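The compounding implied by that figure is worth spelling out: a 25% annual decline sustained for ten years leaves only a few percent of the original cost per bit per mile. A minimal sketch:

```python
# Compound effect of the cost decline Smith describes:
# cost per bit per mile falling ~25% a year for ~10 years.
annual_decline = 0.25
years = 10

remaining = (1 - annual_decline) ** years
print(f"After {years} years, cost is {remaining:.1%} of the original")
# 0.75 ** 10 is roughly 0.056, i.e. cost falls by a factor of about 18
```

A steady percentage decline looks modest year to year but is dramatic over a decade, which is why the cost-versus-price gap he discusses keeps widening.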

Paris DNSO Draft Gathers Wide Support

ORSC Protest Stops Sole Source ICANN Solicitation; PR Firm Hired In Face Of Continued Board Secrecy pp. 17 - 22, 24

We explain how NTIA was caught trying to sole-source IANA to ICANN. An ORSC protest to GAO stopped this move in mid January. We trace ICANN's financial deals with ISI to secretly lease the IANA employees. We present highlights of the emergence of a broad-based, consensus-driven, openly crafted Paris DNSO draft. This is in contrast to the closed-door preparation of the DNSO.org draft backed by the trademark lobby, chamber of commerce and other large business interests. Finally we close with an example of the way in which ICANN has been attempting, with little chance of success, to establish itself as a central office regulatory body for the Internet.




For a little more than two years, AboveNet has been offering a business model that invites content providers and ISPs to plug into the Internet and into each other at Data Centers in San Jose and Washington DC. In order to keep as much traffic as possible local, customers may establish their own peering at these Data Centers (AboveNet calls them Internet Service Exchanges, or ISXs) without being charged for their cross-network traffic, since it need not use AboveNet infrastructure.

While one AboveNet customer may peer with another AboveNet customer connected at a different Data Center, they would not do it via AboveNet infrastructure, since each customer is charged for delivery of bits across the pipes connecting the AboveNet Data Centers. As AboveNet puts it: "We don't offer a 'virtual presence/peering' solution if people aren't in the same ISX." Some of the same insights that have shaped the AboveNet business model have influenced, in differing ways, Frontier Global Center, GTE Internetworking, Exodus, and Savvis. Our article shows that AboveNet has an advantage over the first two by never having been a telco and over the second pair by having greater depth and balance.

AboveNet obligates itself to deliver its customers' traffic to the rest of the Internet as well as to its other customers. AboveNet routes traffic as directly as possible through its own network, either across the high speed links between its ISXs or via other parts of its infrastructure, to get traffic from one customer to another, and from customers to the Internet, without delay and packet loss. This is the core of what AboveNet calls its "one-hop network": one where customers can connect globally.

With the opening of a London Data Center connected to the East Coast by a transatlantic OC3 purchased from Global Crossing as a 25-year IRU, London customers may eventually find themselves one "network hop" from Europe to the US to Asia. High speed interconnections turn the Data Centers into something approaching a global exchange, one that by year's end will likely extend to Asia with the opening of a new center there.

As far as routing and peering go, then, one advantage of being an AboveNet customer is the ability to connect both at a data center exchange that is not congested and at one where peering with other locally connected AboveNet customers can be carried out at no extra cost. Another advantage is the assurance that customer traffic will get from San Jose to Virginia to New York to London without packet loss and with minimal latency.

The parts of AboveNet's backbone that do not directly connect AboveNet Data Centers to each other exist to accommodate its private interconnects with other Tier One providers. AboveNet provides this infrastructure to make sure it delivers its customers' bits to other networks with minimal latency and packet loss.

As the largest backbones have let their connections to the MAEs and NAPs stagnate over the past three years, AboveNet has been able to replace that clogged infrastructure by means of a business model that aggregates ISPs and content providers at well-provisioned data centers and then uses the aggregated bandwidth to leverage private interconnects with both large and small backbones and regional networks.

As a result, while analysts assumed that the giant telcos with their alleged economies of scale had locked up the top tier of the Internet market globally, AboveNet has emerged, in the words of Michael Dillon, as a company that "offers a global IP network that is not simply an overlay of their telco business. (They are not a telco). They have a very strong IP engineering staff in control of the company. That is to say, the company's technical destiny is controlled by IP specialists, not ex-telco datacomm engineers. This means that AboveNet can avoid an awful lot of the mistakes that telco dominated backbone providers fall into." It looks as though AboveNet's achievement also means that a business model has been found to support a backbone infrastructure friendly to and cost effective for small to mid sized ISPs. With luck such an infrastructure looks able to provide enough traffic aggregation to counterbalance the accumulation of market share in the hands of fewer and fewer giant players.

Parsing FCC's Reciprocal Comp Ruling

-- Telecom Attorneys and Policy Makers Attempt to Make Sense Out of FCC's Section 251 Ruling, pp. 12 - 16

When the FCC ruled on reciprocal compensation at the end of February, it was looking at the issue of whether, when an ILEC customer placed a dial-up Internet access call that a CLEC delivered to an ISP, the ILEC needed to pay the CLEC a fee. The FCC had already ruled that such compensation could be applied only to local traffic. In this decision the FCC decided that dial-up calls to ISPs were really not local but were in fact interstate. This ruling gave the FCC jurisdiction in the matter. When many people heard this, they concluded that the FCC was approving charges on such calls. The FCC pointed out that this was not true and added that it was just fine for states to have determined (or to determine in the future) that recip comp applies to dial-up calls to ISPs. Until Chairman Kennard stated that the FCC was not approving the imposition of long distance charges for Internet access, and emphasized that he was remaining faithful to political strictures that said do not regulate the Internet, many assumed that charges and regulation would be the outcome of the decision.

In a discussion on the cyber-telecom list, Kevin Werbach offered the following analysis: There's a structural problem here: the FCC is good at analyzing trees, but sometimes the best answer is to step back and consider the forest. In this case, one tree is "don't regulate the Net" (because it's good politics, and because the Telecom Act says so). Another tree is "calls are either interstate or intrastate."

Most people in and around the Commission understand that the Net, competition, and convergence are eroding the traditional foundations of telecom regulation. In such times, one can try to muddle through by considering each case, on its specific facts, under the words of the governing statutes. In the reciprocal comp case, that means making a binary jurisdictional choice under a circuit-switched paradigm, and then doing fancy footwork to (hopefully) avoid the consequences you don't like. The remainder of the article follows the arguments of the attorneys on the cyber-telecom list as to the choices now facing the CLECs and ILECs. A recurrent theme of the discussion was the failure of the continuing effort to find a growth pattern around which all could agree.

Country Code Wish List or ICANN Plans? pp. 16, 24

This very short article looks at some ICANN organizational charts hidden on the web site of an ally of CORE, ISOC, WIPO and the treaty organs.

ICANN Implements Regulatory Model

-- IETF and IP Registries Wisely Ignore Roberts as Domain Name Forces Jockey for Position, pp. 17 - 22

We offer an assessment of the Singapore meeting, which authorized the release of the ICANN criteria for registrars. Not surprisingly, ICANN gave itself all the rights and the registrars none. Those wanting to be among the first five chosen to operate as registrars should think long and hard about playing by ICANN's rules. Why? Because if they accept these rules, they will likely find that they lose the legal right to sue later for injury ICANN causes them. The legal doctrine is "estoppel". In other words, when a new registrar accepts the benefits of ICANN accreditation, it may be barred from later challenging restrictions on that accreditation, or even its revocation, on the basis that it willingly entered into the relationship without objecting to the terms and conditions imposed.

Currently the IETF is on the verge of abandoning the Protocol Supporting Organization, while the IP registries have yet to be heard from. ICANN, which continues to ignore the issue of whether or not it has the consent of the governed, is likely to find one or more registries that it has blocked taking legal action against NTIA's authority to vest power in ICANN. We present a legal analysis of how DoC and NTIA could be sued in order to strip them of their ability to empower ICANN. The analysis, which finds that DoC has no statutory authority to do what it is doing, is written by an attorney with considerable experience in the wars of Internet governance.