A Practical Navigator for the Internet Economy

Christian Huitema on Quality of Service pp. 1 - 6

We interview Christian Huitema, Chief Scientist at Bellcore's Internet Research Laboratory. The interview is a wide-ranging discussion of Quality of Service issues. Huitema points out that QoS in the Internet is deteriorating, with average packet loss figures having increased from the 2 to 3 percent range two years ago, to 5 percent a year ago, to an average of 10 percent now.

He points out that RSVP is not likely to be the cure-all that people assumed it would be a year ago, for several reasons. In addition to the settlements problem, RSVP has another weakness that will make service providers shun it - every active RSVP session opens another route in the routing tables of the defaultless core backbones of the global Internet. Furthermore, RSVP posits relatively long, high-bandwidth connections, whereas most of the net's traffic consists of slower-speed, brief-burst web transactions.

What a year ago was spoken of as Integrated Services has now turned into Differentiated Services, where precedence bits are used to generate classes of traffic. The model of traffic mentioned on November 22nd by Van Jacobson in his "Two Bit" internet draft is one that emulates the aviation industry, with the vast majority of traffic going economy class while business and first class make up a disproportionate share of the airlines' revenue. Current QoS efforts are likely to embrace this business model.
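The mechanics behind "precedence bits" are simple enough to sketch. Below is a minimal illustration of ours (not taken from the draft itself): an application marks its packets into a class by setting the IP precedence/TOS byte on its socket; whether routers honor the marking is entirely a matter of provider policy. The class values and addresses are assumptions for illustration.

    import socket

    # The top three bits of the IP TOS byte carry the precedence "class".
    IPTOS_PREC_ROUTINE = 0x00   # "economy class": ordinary best effort
    IPTOS_PREC_PRIORITY = 0x20  # a higher, presumably more expensive, class

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Mark everything this socket sends as higher-precedence traffic
    # (works on platforms that expose IP_TOS, e.g. Linux).
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_PREC_PRIORITY)
    sock.sendto(b"hello", ("192.0.2.1", 9))  # 192.0.2.1 is a documentation address

The network then needs only a handful of forwarding classes rather than per-flow state - precisely what makes this approach more palatable to backbone operators than RSVP.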

Huitema urges looking at Quality of Service in a broader scheme than just that of technology. In such a scheme Technology is the first consideration. The second is customer expectation. The third is how you provision your network. And the fourth is economics. "In the end QoS boils down to your need to get enough income to beef up your network. If you cannot pay for your network, you won't get any quality whatsoever. Now we have looked briefly at the technology that can be used to do differentiation of services. And this is one way to get some people to pay more."

Customer expectations entail the development of tools to measure network performance in ways that can be replicated and that customers will understand. With this goal in mind, Huitema's lab is working on proprietary tools for some of its customers. Some of these tools will also be designed to focus on helping customers to plan and size their network growth more accurately.

Finally Huitema is examining tools that will give users a broad series of options for understanding the costs of network services offered and their relationship to the value derived by both parties from the services. He says, "We don't have them yet but we are working on them as we realize that the economics of the network are as important as the technology."

Overbey on IBM Global Net pp. 7 - 10

We interview Sid Overbey, Vice President of IBM Switched Access and Internet Service, IBM Global Services. Overbey explains how, when the NSFNET cooperative agreement ended in April of 1995, the IBM router engineers went off and helped to build, on the foundation of IBM's V-Net, the geographically largest $19.95-a-month unlimited-usage dial-up ISP in the world. Global Net reaches into more than 50 foreign countries with 500 dial-up POPs abroad and another 600 in the US.

Having been launched as an Internet service for users of IBM's OS/2 Warp operating system, Global Net has become a general dialup ISP that, unlike AT&T's WorldNet, is the system of choice for Internet users who travel internationally, because no matter where they go they are almost always within reach of a Global Net POP. The equipment used to support the POPs is primarily an IBM RS/6000 machine as a server and 3Com (US Robotics) PRI technology.

At the fringes, the net uses Ascend/Cascade frame relay switches multiplexing traffic onto an ATM-based OC3 core. Globally the network currently has some 200 different peers. While Global Net is fully peered with the big five, and has one private interconnect open now, it expresses extreme doubt that, by the end of the summer of 1998, the big five will have raised the peering bar to OC12.

Boardwatch and Keynote Encore pp. 11 - 14

In late November an interesting discussion of the Boardwatch - Keynote tests of webserver responsiveness as a measure of backbone responsiveness occurred on Inet-access. Whatever one thinks of Rickard's methodology, it has certainly attracted the attention of the industry. The tests generalize backbone performance from a small set of servers doing a large set of measurements. Unfortunately there are so many variables involved in the complex and ever-changing way that the Internet is put together that such generalization is fraught with risk.

Sean Doran characterized the test as an obvious outgrowth of what he sees as a "human happiness" test suite: "Whether or not it is a good indicator of 'network OKness' is largely unrelated to the test itself, and rather more in the conclusion space of the particular testers."

"Unfortunately the argument is insoluble at this point because we simply do not have the means to prove what an average Internet user really does online, much less when her or his patience threshold is exceeded."

Sean goes on to provide very interesting data about the variability he has observed in the performance of links of varying bandwidth. While the internet certainly does not have a 40 kilo-character speed limit, the bandwidth obtainable by a single user with a large pipe is likely to be only a fraction of that for which the pipe is rated. Finally, on November 28 Rahul Desai published his own well-reasoned critique of what had to be done to reframe the methodology before it would be worth further testing.

Brian Kahin Submits to our FOIA, p. 15

After a second brush-off from the Office of Science and Technology Policy, we sent the letter in this article to OSTP Director Jack Gibbons. We maintain that Brian Kahin has been stonewalling us since our October 7th request. We have sought documents referring to Kahin's behind-the-scenes meetings with AT&T, IBM and Oracle to discuss the creation of a database for CORE. OSTP informed us that it shipped a box of 252 responsive documents on December 2 and will ship another box by December 5. We were told that the paper trail on these otherwise unpublished discussions was a foot high. Unfortunately, because of Agency stalling, we have nothing to include in this issue.

Network Address Translation Devices Improve, pp. 16 - 19

On NANOG in early November during the same time period as the IPv6 discussion, Sean Doran argued that "NAT" boxes could become the fundamental scaling technology of the Internet by making actual IP addresses announced by a network behind its own border routers irrelevant to the rest of the Internet.

As is so often the case, reality turned out to be a little different, for others were quick to point out that some protocols would break when and if they were forced to travel through a Network Address Translation device. Less clear was the answer to the question of how protocols like DNS SEC would be handled. The consensus seemed to be that a workaround for the problem was quite doable. Less clear was what would happen if NAT were used to try to segment backbone routing. The one place in which NAT is currently most securely established is in firewalls for major corporate intranets.
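For readers who have not looked inside one of these boxes, here is a deliberately minimal sketch of the NAT idea (our illustration, not any vendor's code). The box rewrites (private address, port) pairs to (public address, port) pairs at the border; protocols that carry an IP address inside their payload break because only the headers are rewritten. The addresses and ports below are hypothetical.

    PUBLIC_ADDR = "203.0.113.1"   # the one public address the world sees
    table = {}                    # (private_ip, private_port) -> public_port
    next_port = 40000

    def translate_outbound(private_ip, private_port):
        """Assign (or reuse) the public port for an inside host's session."""
        global next_port
        key = (private_ip, private_port)
        if key not in table:
            table[key] = next_port
            next_port += 1
        return PUBLIC_ADDR, table[key]

    def translate_inbound(public_port):
        """Map a reply arriving at the public address back to the inside host."""
        for (ip, port), pub in table.items():
            if pub == public_port:
                return ip, port
        return None  # no mapping: unsolicited inbound traffic is dropped

    print(translate_outbound("10.0.0.5", 1025))  # -> ('203.0.113.1', 40000)
    print(translate_inbound(40000))              # -> ('10.0.0.5', 1025)

Note that a signed DNS SEC record asserting the inside address, or an FTP-style protocol quoting it in the data stream, would still carry the 10.0.0.5 address that the rest of the Internet cannot reach - the root of the protocol-breakage problem discussed above.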

Management of .us Namespace, pp. 20 - 22

We conclude the article on the .us namespace that we began in November's issue. The contrast between NSI's management of .com and IANA's management of .us could hardly be more stark.

Two Bit Differentiated Services Architecture, p. 24

Van Jacobson announces an important draft at ftp://ftp.ee.lbl.gov/papers/draft-nichols-diff-svc-arch-00.txt


INTERNET TELEPHONY COMES OF AGE pp. 1 - 9

Kerry Hawkins, Vice President of Sales and Marketing at Vienna Systems, a Newbridge subsidiary and maker of Internet telephony gateways, takes us on a survey of the current state of the art of Internet telephony, which has advanced enormously from its PC-to-server days of two years ago.

VocalTec, Micom, and Vienna Systems are the leading makers of IP telephony "gateways." These devices, costing between $700 and $2000 a port depending on the features they come with, attach to a corporation's local area network. They communicate with the PBX, do the necessary compression and decompression of the voice, and provide the signaling that the public switched telephone network (PSTN) expects. Their purpose is to compress and packetize the company's voice traffic, removing it from the PSTN and its high measured-usage tariffs to the corporation's intranet where, as long as every corporate office around the world has high speed Internet connectivity, phone calls between those offices travel the Internet for "free".

Such a market worldwide is worth several hundreds of billions of dollars, and while the inter-office corporate calls do require a significant portion of the bandwidth of the corporate network and thus are not really free, the savings generated by the use of the technology are so substantial that the economic impetus to use it will become increasingly great. While some phone companies may resist the introduction of the gateways, others are introducing them to their internet connected clients as a way to do an end-run attack on their competitors' markets.
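The "not really free" point is easy to quantify with a back-of-the-envelope calculation (ours, with assumed but typical codec figures):

    PCM_KBPS = 64.0     # a standard PSTN voice channel
    CODEC_KBPS = 8.0    # a G.729-class low-bit-rate codec (assumed)
    FRAME_MS = 20.0     # milliseconds of voice carried per packet (assumed)
    HEADER_BYTES = 40   # IP (20) + UDP (8) + RTP (12) headers per packet

    packets_per_sec = 1000.0 / FRAME_MS
    overhead_kbps = packets_per_sec * HEADER_BYTES * 8 / 1000.0
    total_kbps = CODEC_KBPS + overhead_kbps
    print(f"{total_kbps:.0f} kbps per call vs {PCM_KBPS:.0f} kbps on the PSTN")

Roughly 24 kbps per call against 64 kbps for a PSTN channel: a real load on the corporate network, but one that rides on bandwidth the corporation has already bought.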

In addition a new market is springing up for Internet telephone service providers. Here ISPs may invest in gateways that accept incoming calls from the PSTN and carry them via the Internet to the POP of another service provider, where a receiving gateway accepts the call and dials out into the local PSTN to deliver it.

Growth rates in the gateway market are looking to be 400% per year. A few months ago Deutsche Telekom, the German PTT, paid $48 million for a 21% share in VocalTec. As use expands, it is thought that corporations will have to turn to a more expensive form of differentiated internet service to get their voice packets where they need to go in a timely fashion.

Savings are so great, and the operational advantages of a system where the user's phone number can travel with him are so strong, that we suspect corporate gateway-based IP telephony will be one of the hottest internet technologies of 1998.

Hawkins finds that the traditional phone companies face a stark dilemma. For if they drop their price to remain competitive with IP telephony, they eat up their margin. But if they don't, then their competitor eats up their margin. In either case they lose. We add that we have heard only one stark remedy proposed: that is for the phone companies to set up competitive LECs and use them to cannibalize their incumbent LECs out of existence.

ARNAUD ON GIGAPOPS - IP AS ONLY TELECOM TECHNOLOGY? pp. 1, 10 - 15

Bill St. Arnaud, Director of Network Projects at CANARIE, Inc. in Ottawa, Canada, describes some recent organizational and technology developments facing the commercial Internet. One of the most important is the GigaPOP. This functions as a regional exchange point that aggregates commercial customer traffic and then multi-homes it directly to major internet backbones rather than sending it through the six major and generally overcrowded public exchanges.

Savvis is the first major commercial player to take advantage of this model, operating seven GigaPOPs around the country. These seven are linked by an ATM DS3 backbone that also connects six smaller cities to the GigaPOPs - each of which is linked by a DS3 connection to Sprint, MCI and UUNET. The operator of the GigaPOP takes on a new role of cooperative buyer of upstream bandwidth for those linked to the GigaPOP.

If peering is sharply curtailed by the majors this year, from the current levels of 40 to 50 peers down to ten or fewer, the GigaPOP concept may well provide a viable alternative business model for some of the depeered national backbones.

Arnaud finds the Internet in Europe lagging well behind developments in the US. He points out that NSF, by decommissioning NSFnet, created the business opportunity that is allowing American commercial ISPs to now dominate the Internet business around the world. In Europe the problem is harder. The European PTTs were so committed to OSI and ATM, and have always looked down on the Internet, that they never took up the challenge, while in North America, when NSFnet was dissolved, MCI and Sprint and others stood ready to take the Internet commercial. Consequently in Europe commercial Internet service comes from spin-offs of the research networks or from the American commercial networks. 1998 will mark the first serious entry of the native PTTs into the European market.

Arnaud applauds the idea of a Global Association of IP Registries taking on policy responsibility for the allocation of IP numbers. He warns only against the association taking on Board members from European governments. Such folk, he cautions, would be pro-ITU in their orientation.

Finally he talks about what some are calling Internet 3: new networks designed totally for IP from the TCP layer downward. He discusses Qwest's alliance with Cisco in these terms, as well as Project Oxygen. He maintains that IP-only networks will use bandwidth much more efficiently than the connection-oriented networks of the phone companies.

STATE OF THE INTERNET 97, pp. 16 - 18

We reflect on developments in 1997 in the context of David Isenberg's paradigm shift to the triumph of the stupid network. General issues covered: peering; QoS; IP telephony; IP everywhere; governance.

INTERNET GOVERNANCE (MANAGEMENT), pp. 19 - 22

We survey the demise of CORE plans to get its gTLDs into the root servers. OSTP finally answered our FOIA directed at Brian Kahin. The answer was an absurdity. It stated that the meetings we wanted documentation on never took place, and then identified nearly 900 responsive documents - half of which it sent and the other half of which it parcelled out to 13 other federal agencies to decide whether or not to send us. We explain why, despite OSTP denials, we believe the meetings did indeed occur. We also show why none of the 900 documents were responsive to our request. In a fresh FOIA we sharpen our original request, seeking to find what the Kahin-Burr group did to bring DEC, AT&T and IBM into an apparent alliance with CORE. The ITU's Shaw, in response to our query, admitted that this group was helping CORE complete its database.

We show how after Kahin and Burr's fumbling, Ira Magaziner has been doing a credible job of building broad consensus for a resolution. The first public posting of Magaziner's draft is expected on the White House Web pages on January 23. (This was as of last week. The schedule to meet the 23rd deadline has slipped perhaps 48 hours since then.)

Here is what we believe will happen. A Global Internet Policy Council (our name for it) would be set up. For IP policy there will be one member from each IP registry: ARIN, RIPE and APNIC. There will be one member from the IAB and one from the IETF for protocol, RFC, and port assignment policy, and there will likely be two members from the DNS community who will be chosen to represent DNS policy interests. The government will not do the choosing for any of the positions. It will be up to the DNS warring factions to come together into a Confederation and choose their two representatives to the Global Internet Policy Council. The seven members of this council will be responsible to the constituencies beneath them for the articulation and implementation of policy that was formerly the sole province of IANA.

We are hopeful that a way can be found to insulate the Council from lawsuits and to transfer to the Policy Council the $40 million of the Infrastructure Fund that has not yet been grabbed by the US Congress. If this is not done, the remainder of the money will also be taken by Congress and we will have the disaster of the Fund becoming the first tax on the Internet. On the other hand, if the money is used properly, it will be a very important enabler for the Policy Council to accomplish critical tasks that are now going undone.

According to reports we are getting, Jon Postel has been floating the idea of a nine-member council containing only two IP number registry representatives, two from DNS, two from the protocol community and three from an "Industry and User Committee." Corporations would be invited to join that committee at $5000 per corporation, per year. Only by joining the committee could they have a chance to place their people onto the new nine-member IANA. One wonders what there is to be gained by this move, other than the creation of a new venue into which the largest corporations - those that would like to exert control beyond what they could attain by work in the IETF - can throw money.

Perhaps it is not too much to ask that the charter of yet another new industry-sponsored "users" group, to go alongside ones like IOPS, ISOC and ILPF, be examined very carefully and in public - not in negotiations between Jon and Ira. First, the idea that three people could be found who can adequately represent 50 to 100 million Internet users seems strange. Second, the idea that the average end user has something to offer the body that will hash out the TECHNICAL policy for the Internet also strikes us as impractical.

It seems likely that legal and political rather than technical constraints will prevent any new gTLDs from going into the root servers before October 1, 1998. At that point they will likely be added slowly and one at a time as the new system shakes itself out.

We have been observing the process of the formation of a new DNS Confederation and have considerable confidence in the outcome. (If it is successful, it could also move our October 1 estimate forward in time.) Some involved with the new confederation believe that new gTLDs could begin to go into the root servers one at a time soon after April 1. It seems likely that the CORE domains will not have the highest priority.

MISCELLANEOUS: pp. 9, 15, 24

Views of the US toward the Bangemann group on Internet commerce and DNS. Book reviews of several new O'Reilly Java-oriented volumes.


EMPHASIS ON ZERO-SUM WIN/LOSE POLITICS PUSHES INTERNET MANAGEMENT DEBATE TOWARDS GRIDLOCK pp. 1 - 8

We examine the landscape in the aftermath of the Green Paper and find a proliferation of power blocs jockeying to impose win/lose scenarios on the outcome of the restructuring of DNS and IANA. We offer a short history of the conflicts over the past two years in an effort to show the origin and development of the win/lose downward spiral.

Too many of the players, including POC and CORE, see the DNS situation as one where they win by attacking NSI. "Dot" gov has been turned over to the United States General Services Administration, and Educom has proposed to take over "dot" edu. "Dot" com, of course, is what everyone sees as the prize. The Green Paper proposes that NSI must open a front office (registrar) operation and separate it from its back office (registry) function.

The Green Paper also proposes that NSI must "give the U.S. government a copy and documentation of all the data, software, and appropriate licenses to other intellectual property generated under the cooperative agreement, for use by the new corporation for the benefit of the Internet." Material covered under this phrasing certainly includes the database contents for .com which was described for us by one observer as "the mother lode of all Internet strategic databases."

"It contains for two million registrants the contact info (name, phone address and email) on up to three people for each name. (Technical, admin and billing). That alone would give possessor the name of almost every sysadmin in the US (or maybe the world if you combined it with other DNS databases from CORE for example) as well as the "owner" of every Internet associated business. Doing a "whois" search by name would raise considerable privacy issues. That's why the POC/CORE idea of aggregating it with all, other TLD data in one central database is so frightening. Not to mention that the revenue value of being able to e-spam or telephone solicit that community is enormous." The Green Paper speaks, without, we think, having thought through the consequences of using this material "for the benefit of the Internet."

While the Green Paper leans toward approving profit-making registries, it also mentions the existence of a strong strain of opinion favoring non-profit registries. We have found out a good deal about those on this side of the equation. Among them is, not surprisingly, ISOC. Don Heath confirmed in email to us on 3/6/98: "We advocate a public trust - non profit model for registries." Educom is of the same general persuasion, having submitted a response to the Green Paper, the salient points of which Mike Roberts outlines for us in a sidebar. The Educom and (presumably) ISOC proposals would bar for-profit registries by banning private ownership of domain names, which are regarded as belonging to "a public space which requires public interest stewardship on behalf of all users of the Internet."

Registrants would pay for a license to use the name, just as we now pay for a license to use spectrum, which the ITU holds "in the public trust." Tony Rutkowski points out that any language calling DNS a public resource, as does the ISOC-signed MoU, "assures ITU involvement forever, since their treaty charter assigns them a permanent role where such resources are involved." See for example Article 12, paragraph 3 of the ITU constitution: "In the exercise of their Board duties, the members of the Radio Regulations Board shall serve, not as representing their respective Member States nor a region, but as custodians of an international public trust." <http://www.wia.org/pub/itu-constitution.html> See also <http://www.wia.org/dns-law/pub/ITU_Telecom_Regs.htm> Rutkowski adds: "What keeps the Internet outside the grips of these provisions is Art. 9 dealing with 'specialized' networks and systems that are not in the same category as those 'generally available to the public.' The same critical boundary is also found in other major instruments such as the WTO GATS Telecom provisions and the national laws in many countries and regions like Europe." Given that ITU Secretary General Tarjanne has openly declared the ITU's interest in becoming the governing body of the Internet, we can only wonder about the impact of the ISOC and Educom phrasing.

Furthermore, the concept of the Registry owning the names and only renting their use to registrants is exactly how the international telephone number system works to give the PTTs control of phone system customers: one cannot get any kind of dialtone without first renting a number "From The Phone Company," which owns your number and can change it as dictated by the "Needs Of The Phone System." Unlike spectrum, to which it is sometimes compared, the DNS name space is essentially infinite.

We continue to find references to IBM and AT&T, first in database construction for CORE and more recently in running a non-profit .com database. If doing so would give these giants access to NSI's .com database, we can understand why they would be eager to step up and perform such a "service" for the Internet community. As we have documented in previous issues, under the stewardship of Brian Kahin and Becky Burr, these two companies seem to have become the leading lights of the InterAgency Working Group's industrial policy.

Educom's call for non-profit and education representation on the IANA Board reminds us of Ira Magaziner's statements in his interview with us that the non-profit, education and trademark sectors pushed heavily for what became, at the last moment, seven user group representatives on the policy board. Unfortunately, this totally changes the nature of the IANA Board from a source of technical policy for Internet names and numbers to a general Internet governing council, and puts a top-down imprint where it most certainly does not belong.

We have here a series of circus rings with "elephants" jostling each other in win/lose struggles. In the central DNS ring is NSI, backed by SAIC and defended in a current lawsuit by no less than Lloyd Cutler, the biggest legal pistol in DC. Standing somewhere in the shadows against NSI are, or have been, IBM and AT&T, their entrance facilitated by Kahin and Burr. Also trying to gain a foothold is the Open Root Server Confederation. See www.open-rsc.org. ORSC is playing a non-zero-sum game in the hopes of attracting enough participants from the Internet community to turn the tide from zero-sum to non-zero-sum politics.

IAHC sought a win/lose solution and began our current sad downward spiral. Rather than lose, IAHC's opponents called in the US government, which has handed them a victory, making IAHC/POC/CORE a current loser. Rather than accept their loss, POC/CORE calls on the EC and EU to fight the US. The result is that, in another ring, the Europeans, led by Bangemann, are trumpeting accusations against the Americans. These are not the only "rings." In a third we have the US Congress, in a fourth Asia, and in the fifth the US Judiciary.

The IANA Transition Advisory Group (ITAG) is in a sixth ring. This group (Randy Bush, Geoff Huston, Brian Carpenter, John Klensin, Steve Wolff and Dave Farber) is composed primarily of long-time close associates of Jon Postel. ITAG appears to be set up to perform the detailed design of the new IANA corporation. Drafting the Articles of Incorporation and the By-Laws is something that has to be well underway right now for there to be a chance for Magaziner's timeline to work. Unfortunately, the pattern being followed is very similar to Jon's appointment of IAHC. ITAG is a closed, top-down, appointed group working to revamp the most critical aspects of the Internet. We have seen no sign that, apart from getting initial clearance through Magaziner, ITAG will do other than present its redesign to the world as a fait accompli. IAHC was the previous such win/lose solution concocted by Jon as IANA.

It is becoming clear that the new IANA will have broader powers than the old. ITAG may be the most important "working group" in the history of the net. It is a shame to see that it has not adopted the IETF tradition of openness and is working instead behind closed doors. What will be done to ascertain whether the result has any consensus behind it? When ITAG is finished, will it present its draft anywhere before sending it to Magaziner? Now that rule making is underway, contacts with government must be done in the open. ITAG, as a private sector group, falls outside these constraints. Any corporation wishing to affect the outcome at this moment has to be thinking about whether an approach to ITAG could be fruitful. This is hardly fair to the groups that are working in the open to develop broad consensus for a solution to the DNS problem. One such is the Open Root Server Confederation. We recommend a look at their work in progress at <www.open-rsc.org>.

Ira Magaziner - as ringmaster - is trying to be an honest broker of what is best for the Internet and is taking an internationalist stance that, ironically, the Europeans dismiss as US-centric. The Internet community - no longer cohesive - is heavily fragmented. Fighting amongst itself, it risks surrendering the field to the ITU and the old line phone companies - players which would remake the Internet top down while deftly handing out higher prices, less innovation and less freedom for end users.

We owe our entire focus on the unfortunate aspects of the zero-sum (my win must be your loss) behavior chronicled in our survey of the current mess to discussions with Einar Stefferud, who, with this problem in mind, has been helping to nurture the Open Root Server Confederation. While Vint Cerf disagrees with some of what we have written, he joins Stef in commenting for the record: "the zero sum mentality is a useful analogy to explain some of the extremal behaviour of various parties. 'For me to win, you must lose' - a poor match to the burgeoning value of the Internet and its potential for growing opportunities for many parties."

Our Feb 23 Interview with Ira Magaziner pp. 1, 8 - 15

The interview explores in depth the schedule for and steps to be taken in setting up the new IANA corporation. It puts on record the thinking behind the user members and Magaziner's reaction to the problems that they may cause. It shows the current state of his thinking in a detail not elsewhere available. He describes with candor the methodology of his approach. Those who don't take the time to look at and try to understand what he reveals here about his decision making process and the direction in which he intends to push events should not complain if, having opted out of the process, they are later unhappy with the outcome.

Analysis of Magaziner's Position, pp. 16 - 18

Ira Magaziner understands very well that the crux of the current problems is not just DNS but the entire range of IANA authority. We are worried that what Ira is doing faces several contradictions. He is working against an arbitrary but significant deadline of September 30 because, at that time, two things happen. First, the final six month ramp-down of the NSF NSI Cooperative Agreement ends, and with it the US government's authority over NSI's operation. Second, federal funds to pay for the IANA function end and, with that ending, the government's claim of authority over the IANA, if not ended, is sharply diminished.

In this context Ira wants to make sure that NSI shares registrations into the .com database. He also wants to have a new IANA authority in place, with international buy-in to a privatized policy board that will establish policies over the issuance of new top level domains and establish a new internationally agreed upon means of operating a single set of beefed-up root servers. A tall order in eight months under any circumstances - one made even more difficult by two years of zero-sum struggles.

All of this operates in a context where, in the year since the U.S. government's involvement began, the importance of the Internet to the entire range of telecommunications has been growing and where many large corporations - motivated by a newfound awareness of the impact of the Internet on their future viability - are now quite eager to meddle in the process to protect their own self-interest. As a result, Magaziner is on a tightrope: he has little time to get a group of powerful forces with conflicting interests to act on behalf of a mutually agreed upon common good. His need to find consensus among such a diverse range of interests could mean that he either runs out of time or agrees to a structure that will be unworkable.

QoS & Tag Switching, pp. 19 - 22

Paul Ferguson describes Quality of Service as neither network uptime nor the application of RSVP on a VPN but rather a complex series of engineering tasks which range from traffic management to capacity planning and may include everything from differentiated services to tag switching. In a second interview Yakov Rekhter describes tag switching, which is expected to provide traffic engineering capabilities comparable to what ATM provides today, but without requiring ATM. It will be released by year's end.
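The core idea of tag switching can be sketched in a few lines (our simplification, not Rekhter's specification): each switch forwards on a short fixed-length tag, swapping the incoming tag for an outgoing one from a precomputed table instead of performing a longest-prefix route lookup per packet. Because the tables are set up in advance, traffic can be pinned to explicit paths, which is where the ATM-like traffic engineering comes from. The tags and interface names below are invented examples.

    # incoming tag -> (outgoing interface, outgoing tag)
    tag_table = {
        17: ("atm0", 42),
        18: ("pos1", 99),
    }

    def forward(tag, payload):
        out_if, out_tag = tag_table[tag]  # one exact-match lookup, no prefix walk
        return out_if, out_tag, payload   # tag swapped, payload untouched

    print(forward(17, b"ip-packet-bytes"))  # -> ('atm0', 42, b'ip-packet-bytes')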

Internet in Japan pp. 23 - 25

In a Tokyo interview Hiroshi Fujiwara, CEO of the Internet Institute, describes how his Institute is set up to provide technology transfer to emerging providers of new internet infrastructure in Japan. He also describes NTT's approach to the Internet marketplace and the interest of the Japanese consumer electronics industry in IPv6.

Rutkowski on Internet Meta Developments, pp. 26 - 27

Tony describes how technology issues lie along various layers of the protocol stack while issues of content slice vertically through them. He sees IP as merely "glue" holding the layers together, while "the important thing is what is happening above and below it." He likes the Association for Interactive Media, which he sees as a fascinating development representing the interests of a broad array of new Internet constituents. "They represent new small entrepreneurs who are affected by these developments in Washington but have no voice in the process."

Technical Issues from Inet-access, pp. 27 - 30, 32

Avi Freedman and Sean Doran discuss stat-muxing as a means of more efficient use of the LEC network. A second discussion looks at whether network AS numbers may become directly used in such a way as to be certain that the AS number is tied to the correct set of prefixes. A third discussion between Sean and Lex Luthor focuses on the possibilities opened by tag switching and other layer 3 switching solutions.


INTERNET FAX TO CHALLENGE PSTN FAX

JOINT ITU - IETF STANDARD PUTS $25 BILLION NORTH AMERICAN ANNUAL PSTN FAX INCOME AT RISK

MACHINES USING NEW STANDARDS AVAILABLE SOON - ADVOCATE SUGGESTS IFAX IS TROJAN HORSE FOR ADOPTION OF IP TELEPHONY pp. 1 - 10

Richard Shockey takes us on a tour of the likely fallout from the completion of joint ITU - IETF Internet fax standards. He points out that Internet fax should deploy even faster than Internet telephony, since it is simpler to deploy and far more forgiving of network congestion and delay. Internet fax is coming in two parts. The first is a store-and-forward model essentially based on the MIME attachment of TIFF files to standard e-mail messages delivered by SMTP. The standards for this model are found in the IETF - ITU agreements of January 1998.
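The store-and-forward model is, in other words, ordinary Internet mail. A minimal sketch of ours (host names and addresses are placeholders) shows the whole of it: a TIFF page attached to a message and handed to SMTP.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "fax-gateway@example.com"
    msg["To"] = "recipient@example.net"
    msg["Subject"] = "Fax transmission"
    msg.set_content("Fax attached.")

    # The fax image travels as a MIME image/tiff attachment.
    with open("page1.tif", "rb") as f:
        msg.add_attachment(f.read(), maintype="image", subtype="tiff",
                           filename="page1.tif")

    with smtplib.SMTP("mail.example.com") as s:
        s.send_message(msg)

That the entire mechanism fits in a dozen lines of glue is exactly why Shockey expects Internet fax to deploy faster than Internet telephony.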

The second part is an Internet draft that extends SMTP itself. The draft turns a fax machine into a virtual SMTP server so that transmission of the fax from point to point happens in real time. The protocol would extend SMTP beyond its function as a simple mail transport protocol to the point where, when a transport session is established, the user can exchange capabilities between devices - something that cannot be done with store-and-forward mail.

Implementing these will be a series of hybrid "stupid-smart" devices that bridge faxes between the PSTN and the Internet. The Panasonic FO-770I, which is already on the market, is one such device with almost all the capabilities of the new standard. Load your fax, toggle "send" in one direction to transmit via the PSTN, toggle "send" in the other direction to go via the Internet. Shockey and others are working on the introduction of inexpensive "black boxes" to connect standard G3 faxes in small-office, home-office (SOHO) environments directly to one's PC and from there to the Internet.

Where is this headed? Ultimately the intelligence will be in the keysets on everyone's desk and not in some centralized gateway device. If telephones, faxes, printers, copiers and other standard office devices become more intelligent, as predicted by Moore's law, no intermediation between the Internet and the PSTN will be necessary. The gateways will become superfluous and the Internet will have completed its cannibalization of the PSTN.

Surveys show that Fortune 500 companies spend an aggregate of around $15 million a year each on fax. 40% of all trans-Atlantic and trans-Pacific telephone calls are fax related. The early adopters of Internet fax have realized that they no longer need PSTN fax. 5% of the $25 billion annual North American fax bill - some $1.25 billion - is likely to move rather quickly to the Internet. When it does, the screams from the PSTN side will be enormous. Fortune 1000 CIOs have just spent a fortune with Cisco or Bay Networks or Newbridge. They are soon going to realize that they can leverage their investment by getting rid of their PSTN-connected fax machines as they become aware that they can start running voice and fax traffic over their IP networks for a very, very small incremental cost. Some folk like Robert Metcalfe and Charles Ferguson are coming to believe that the telcos are well aware of what is happening in the marketplace and are using every means necessary to preserve and defend their business models and monopolies. And that, rather than actually compete in the marketplace, they have chosen the courts and political process to defend their positions.

Charles Ferguson's piece on the economics of the LEC environment is found at http://www-eecs.mit.edu:80/people/ferguson/telecom/ We urge readers to read it from cover to cover. Its analysis of the drag placed by the LECs on US economic growth is extremely powerful.

MAGAZINER'S GREEN PAPER A GOOD START

SHOWS COMPLEXITY OF BUILDING A GOVERNING STRUCTURE FOR INTERNET - MANY IMPORTANT DETAILS MISSING - TIME LINE TIGHT

PLAN FOR USER BOARD MEMBERS SHOWS OUTSIDE INTERESTS' DESIRE FOR CONTROL pp. 1, 11 - 13, 24

Ira Magaziner released his Green Paper in the midst of what Jon Postel called a test of the ability of root servers to point elsewhere than NSI. Critics called Postel's "test" a hijacking.

The paper makes a good start, but it shows the complexity of trying to put an IANA Policy Council in place. Seven technical and seven user members will not be productive. Einar Stefferud offers us a critique of the proposed structure, saying: "my basic preferred governance model is that of 'customer or producer cooperatives' wherein the customers or the producers play the role of owners and elect the Board of Directors and the Executive."

"I also advocate subjecting the Board and the Executive to a requirement to make available information to answer questions from the customers or producers who should have a right to ask any questions about any aspect of the cooperative's business operations, finances, policies, and possible conflicts of interest, etc. In short, the customers or producers should have all the rights of owners, and the whole thing must operate under 'sunshine law styled rules' of openness."

A different commentator said of the user board members: "The real issue is whether the Internet is going to be allowed to grow into the enormous intellectual, social and economic force for facilitating individual freedom that it has the potential to become, or if its growth will be limited by those who are sophisticated enough to anticipate the decline of their own interests if that growth is maximized." [Editor: Namely the carriers and LECs.]

The processes now on going will reshape the workings of the Internet in very significant ways. It is important that they be carried out with maximum care and deliberation rather than rushed to meet an arbitrary deadline. It is also important that Ira Magaziner signal his understanding of this as soon as possible.

CARRIER CONSORTIA ALLEGED CARTEL POLICIES KEEP INTERNET BANDWIDTH PRICING HIGH

WIDELY KNOWN BUT LITTLE TALKED ABOUT TACTICS RATION BANDWIDTH AVAILABILITY OF TRANS-OCEANIC CABLES

PRACTICE KEEPS GILDER'S PREDICTIONS OF PLENTIFUL BANDWIDTH AT BAY, pp. 14 - 16

We publish an anonymous interview with an authority who agreed to talk about the marketing and pricing practices of the trans-oceanic carrier consortia, which allegedly dribble just enough capacity onto the market to keep prices high and act to set annual price ranges for the availability of new leases. This is an area we first became aware of after our October 1996 interview with Teleglobe. We have since found a small number of people who would talk about the carrier consortia practices in private. Up to now, we had never found anyone who would talk in front of a tape recorder.

If these practices continue outside public view, the era of virtually unlimited cheap telecommunications bandwidth envisioned by George Gilder may never arrive. New blood represented by Qwest, Level 3 and Project Oxygen is coming into the market. But no matter what happens in the US, without serious changes in trans-oceanic cable pricing there will be Atlantic and Pacific choke points. A little-recognized factor in the continued high cost of DS3 circuits and the long lead time for their delivery is the need for carriers to calculate the load these circuits will place on trans-Atlantic and trans-Pacific choke points and ensure that the bandwidth provisioned through their respective cable consortia is adequate.

We asked two sources whom we consider authoritative to review this article. One, a bandwidth purchaser, responded that it is right on the money. The second, a bandwidth seller, claimed that prices are dropping. The first countered that the declines are tiny. Both were surprised that we had gotten anyone to comment - even off the record.

DAVE HUGHES HITS UNIVERSAL SERVICE FUND REFUSAL TO PAY FOR WIRELESS K-12 CONNECTIVITY

SLC ACTION SHOWS BANKRUPTCY OF HIDDEN $2.25 BILLION TAX ON PHONE USERS IMPLEMENTED FOR BENEFIT OF LECS, pp. 17 - 19

The Schools and Libraries Corporation has shown its colors as a plaything of the LECs in the imposition of a $2.25 billion annual tax on local phone service to connect our K-12 schools and public libraries to the Internet, with a decision to forbid payment for spread-spectrum, Part 15 wireless access devices. We offer a summary of Hughes' comment and strategy aimed at overturning the decision.

MINI BOOK REVIEWS, p. 19

Mini book reviews of Quality of Service by Paul Ferguson and Geoff Huston, and The Electronic Privacy Papers by Bruce Schneier and David Banisar.

DISCUSSION OF BOGUS ROUTE INJECTION AND OTHER ISSUES FROM NANOG

IMPORTANCE BUT DIFFICULTY OF CROSS PROVIDER COOPERATION EMPHASIZED pp. 20 - 22

Comment from NANOG on problems caused by those who inject routes that they don't own. Further comment on peering issues.


WorldCom/MCI Merger: Internet at Risk? -- Merger Would Lessen MCI/WorldCom Incentive to Upgrade Its Interconnections to Remaining Backbones - John Curran's March 13 CWA Speech -- Replies by Vint Cerf and Curran Follow - Merger would Yield Single Dominant Player pp. 1 - 11

On March 13, 1998 at the Mayflower Hotel in Washington DC, the Consumer Project on Technology and the Communications Workers of America presented a half day symposium on the proposed WorldCom MCI merger. John Curran, the CTO of GTE Internetworking, closed the symposium with an hour long address and question session. We publish a transcript of his talk and of the questions from the audience followed by a rebuttal written for the COOK Report by Vint Cerf and a final response by John Curran.

The most commonly expressed concern has been that the newly merged company would control so much of the backbone market that it could raise prices to downstream ISPs with impunity. However, in his talk John made a much more subtle and compelling case by looking at the changed relationship that would occur among the five to seven backbones that are either totally or partially privately interconnected. These backbones account for 90% of the Internet's traffic. However, because none of the backbones accounts for as much as 30% of the traffic (not even UUNET plus ANS), satisfactory connectivity to the overwhelming majority of the Internet depends on these top five to seven networks cooperating with each other to install upgrades to their private connectivity as rapidly as possible, in order to be sure that downstream customers have satisfactory capacity to all parts of the Internet. Everyone loses by not cooperating in continuing to build the network, and everyone wins when all increase their interconnect infrastructure as rapidly as possible.

Curran pointed out that if, suddenly by way of merger, one network were to aggregate a substantial majority (greater than 60%) of the network's traffic, the balance that has kept everyone locked in a win-win relationship changes in such a way that slowness to upgrade links becomes much less of an irritant to the larger partner and much worse for the smaller, which all of a sudden finds its connectivity to the majority of the Internet degraded.

The post-merger 'Goliath' network, by slowing down the rate of its private interconnect upgrades, can create a situation where dissatisfaction among its competitors' customers increases faster than among its own - with the result that those customers may begin to disconnect from the smaller competing backbone and migrate to the Goliath. Given the frantic pace of growth, the shortage of capital, and the myriad things competing for management's attention in a post-merger Internet, the larger network would not have to conspire overtly against the smaller. Because the balance of win-win would have been altered to win-lose, just by "inadvertent" slowness to respond to the interconnect needs of its smaller competitors the Goliath would find itself in a situation where forward momentum would favor its continued growth.
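A toy model (ours, with invented market shares) makes the asymmetry concrete: when interconnects degrade, each side's customers suffer in proportion to the share of the Internet that sits on the far side of the degraded links.

    def fraction_degraded(own_share):
        """Fraction of destinations reached over the degraded interconnects."""
        return 1.0 - own_share

    for goliath in (0.30, 0.60):
        print(f"Goliath at {goliath:.0%}: its customers see "
              f"{fraction_degraded(goliath):.0%} of the net degraded; "
              f"a 10% rival's customers see {fraction_degraded(0.10):.0%}.")

At a 30% share the pain is roughly mutual (70% versus 90%); at 60% it is lopsided (40% versus 90%), and the incentive to upgrade promptly evaporates.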

The question period brought out another factor in the equation. The Internet can function well without regulation because of its current balance of large backbones having an interest in interconnection. Change this by creating a WorldCom/MCI Goliath and you may inherit a situation where the only way to counterbalance the overwhelming dominance of WorldCom/MCI is to impose regulation on the industry. Such a move would slow the pace of innovation but would also likely suit just fine the LECs and dominant IXCs that are threatened by the technology paradigm shift of the Internet.

We offered MCI a chance to rebut Curran. It accepted. Vint Cerf wrote us a 2500-word response in which he maintained that the overall size of MCI plus WorldCom was closer to 20% of the net and stated that the size issue was really impossible to deal with because no one had adequate measurements. Vint dealt only obliquely with John's analysis of the motivation to increase interconnect infrastructure. When we pressed him on the issue, he replied that he simply didn't find the psychology described by John to be credible.

Finally John responded to Vint's comments. Among the points he made: "Based on the traffic flow statistics I've seen to date, each major backbone currently handles a minority of the Internet's inter-provider traffic, with specific percentages ranging between 10 to 30 percent for the top handful of backbone providers. (Representatives of several networks have told me that they agree with these generalizations, but I would welcome input from anyone with significantly different numbers.) . . . ."

"The current cooperative environment encourages providers to resolve traffic exchange problems before customers are forced to pursue such extreme steps as obtaining secondary Internet connections to access poorly interconnected backbones. But after a WorldCom-MCI merger, customers facing performance problems caused by their provider's degraded interconnection with the dominant backbone would have to trade in their old provider for a direct connection to the merged firm's network. This switch would be a customer's only opportunity to bypass the degraded interconnection between the MCI/WorldCom backbone and the customer's current provider, and to achieve unfettered, quality access to the majority of Internet destinations under the Goliath's control. "

"As customers migrate, the dominant provider would have ever-larger direct connectivity to Internet destinations and ever less dependence on remaining backbone providers. The natural end-state is obvious -- one dominant network that would be the Internet."

The COOK Report concludes: this merger is not about the scaling of the Internet by creating new infrastructure. It is about the creation of new debt and a $170 million retention bonus pool to be paid by WorldCom to top executives of MCI - money that, if WorldCom had been trying to scale the Internet rather than build an empire, could have been put into fiber and other infrastructure.

Scharf on DNS and BIND, pp. 12 - 16, 24

Jerry Scharf explains the stresses that Internet growth and the dominance of .com have placed on DNS and BIND. He talks about the possible future of an internet directory service. He also describes the need for DNS SEC and explains how it will work when it is released by year's end. Finally he describes some of the scaling issues currently facing BIND. This interview, done in London on January 27, 1998, gives an overview (not readily available elsewhere) of the issues facing DNS as a protocol and BIND as its implementing software.

pgMedia Makes NTIA Filing, pp. 17-18

pgMedia has new Washington DC lawyers for its anti-trust suit against NSF and NSI. In a filing with NTIA it describes the technology that would support its DNS registration system. We asked DNS guru Paul Vixie to critique it. Paul finds it not viable. We interpolate his comments within the text of the filing.

Legal Issues in Internet Governance, pp. 19 - 22

When William Bode won, in early February, an injunction against the disbursement of the NSF-sponsored Intellectual Infrastructure Fund, high-level administration officials promised to get better legal representation from the Justice Department for NSF. Bode was suing to overturn not only the IIF money but all registration monies taken in by NSI.

When the promised help had not arrived as the March 17th court date neared, we faxed a protest to the Attorney General emphasizing that poor legal representation of NSF by DOJ could endanger the stability of Magaziner's reform efforts. We suggested that Justice treat the court hearing on March 17 seriously, stating that failure to do so could have serious consequences and promising to go public if Justice did nothing. On March 17th Justice did just that (nothing), sending back the same attorney to defend NSF who had appeared, with negative success, earlier.

Dan Steinberg is a Canadian lawyer who has written an interesting legal summary of the Bode and pgMedia cases, and of a lengthy filing by Karl Auerbach. His point is that unexpected developments may cause these cases to interact with each other, with unexpected legal outcomes. The US government, he adds, should have competent legal talent reviewing the further development of these issues with regard to their impact on the ability of the US to make policy. Unfortunately, there is no such person in place. Meanwhile Ira Magaziner has sold his Washington DC home. He responded to our query asking whether he might be leaving government service soon with an injunction not to worry: he would be staying on the job.