This article presents our opinion of how we have gotten into our present, sad state regarding networking in this country. First, the National Information Infrastructure is defined. Next, the importance of networking to education is discussed. Then, the history of NSFNET leads into an analysis of how we have arrived at our present state in networking, together with an analysis of the fundamental problems. Then, some measures for self-protection regarding networking are presented. An emerging networking activity, which may benefit education, follows, along with some specific recommendations for the activity. A glossary of acronyms is provided at the end of the article.
The NII is the National Information Infrastructure, which the government tends to term the "Information Superhighway." To the everyday user of networking, the NII is simply the Internet. Indeed, Steve Wolff, ex-director of NSF's Division of Networking and Communications Research and Infrastructure, has stated publicly and definitively, "The Internet is the NII."
The important aspect here is that, currently, the Internet is all we have for our national information infrastructure. Our ability to intercommunicate nationally relies solely on the Internet, and our future relies on it working, and working well.
This article is presented from the viewpoint of higher education. It is axiomatic that networking has been of tremendous benefit to higher education. Today, excellent Internet connectivity is expected by faculty, staff and students. Often, the quality of Internet access is cited when recruiting faculty and students. The Internet is an enabling technology, providing access to: 1) vastly greater quantities of information, 2) much higher quality information (often more suitable for education, as it can be obtained directly from experts), 3) more timely information, and 4) multimedia information (video and audio offer new dimensions in learning, augmenting printed media). Hypertext - traversal of an information "tree" constructed by the author(s) - provides many ways of presenting information, permitting tremendous latitude in the way one learns. Indeed, this permits what some term "hyperlearning." Multimedia electronic textbooks, which provide motion as well as interaction, are beginning to emerge, available for free (e.g. http://csep1.phy.ornl.gov/csep.html). Forward thinkers in higher education perceive that we have completed only half of the transformation into the modern information age - the remainder of the transition will involve multimedia.
The next great need is for multimedia applications, including personal desktop videoconferencing. Recently, traffic statistics were taken on the FDDI ring at Moffett Field, CA (a national exchange point), over a ten minute period. At that time 16 of about 400,000 total IP sessions were CU-SeeMe sessions, and each was observed to require about 200-300 times the network capacity of the average IP session. This illustrates the incredibly high capacity required for multimedia. The infrastructure of and "behind" the Internet (i.e. WAN and LAN) will require tremendous upgrade before we can fully use the Internet for multimedia. However, the impact upon learning is likely to be so far-reaching that it is incumbent upon us in education to pursue this next step aggressively.
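The scale of that demand can be checked with a back-of-envelope calculation using the figures above (the 250x multiplier below is simply an assumed midpoint of the observed 200-300x range):

```python
# Back-of-envelope estimate of the capacity share of the CU-SeeMe sessions
# observed at Moffett Field. The 250x multiplier is an assumed midpoint
# of the 200-300x range reported above.
total_sessions = 400_000
video_sessions = 16
video_multiplier = 250  # capacity of one CU-SeeMe session vs. an average session

# Express load in units of "one average IP session"
video_load = video_sessions * video_multiplier
total_load = (total_sessions - video_sessions) + video_load

session_share = video_sessions / total_sessions
capacity_share = video_load / total_load
print(f"{session_share:.3%} of sessions consumed {capacity_share:.1%} of capacity")
```

Under these assumptions, 0.004% of the sessions account for about 1% of the capacity - a 250-fold overrepresentation, which is why even a modest number of videoconferencing users can saturate an exchange point.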
The thesis of this article is that networking for higher education is in terrible shape right now. The remainder of the article explores whether our present information infrastructure will facilitate this next step or serve as an impediment to it.
A bit of history will establish perspective for where we are today regarding networking. Recall that in 1986 NSF issued a solicitation for regionals to form consortia to connect to NSFNET, which at that time ran at 56 kbps. Even though very few nodes then existed on NSFNET, NSF had to delay the implementation of the program due to the lack of adequate capacity in the 56 kbps backbone. We remember with some dismay those early days of the Internet - when attempting to telnet into a Cray 2 at NASA Ames, after about a minute, one would receive a response with both the login prompt and the timeout message, apparently returned in the same packet!
In 1987, NSFNET was upgraded to T-1, a 24-fold increase in capacity. A picture, showing the original thirteen nodes, is provided as Figure 1. The upgrade was accomplished under the auspices of Merit, and involved MCI and IBM in the joint venture. Many, myself included, suffered qualms about the partnership - after all, IBM was known at that time for SNA and not TCP/IP. However, the transition went extremely smoothly, and the regional networks became connected, for the most part, in the middle of 1987. During this time, users and regional networks received extremely good service from Merit, and exceptionally good relations developed among all participants. The motto at that time seemed to be, "Just get the job done."
The T-1 backbone worked extremely well for about four years, during which time the traffic grew steadily until saturation was imminent on several of its links. In 1991, ANS (Advanced Network Services, a separate not-for-profit corporation formed by Merit, IBM and MCI), undertook to upgrade the backbone to T-3 - a 30-fold increase in capacity. The transition was not smooth, but the problems were solved over a period of several quarters. Initially, there were nineteen nodes, as shown in Figure 2. Merit retained overall responsibility for the effort, and things worked smoothly for about another four years.
Many believed that, to catalyze the spread of the Internet into the private sector, commercial entities had to be involved in the activity. From the early days of the T-1 NSFNET, commercial organizations were allowed to connect to the backbone, provided they were sponsored by a research or educational institution - the commercial organization had to be doing collaborative work which required Internet access. This policy required NSF to approve every single commercial connection - an unmanageable approach. Then, under the new contract for the T-3 backbone, ANS began offering commercial access directly, because it had contracted with the NSF to provide only a portion of its network infrastructure - there was "spare" capacity available, which ANS offered to commercial entities via an activity termed CO+RE (Commercial plus Research and Education).
However, other network providers were emerging, who argued to the NSF that the ANS CO+RE activity represented unfair competition with the private sector, since the cost to ANS of providing such service was only an incremental cost - the bulk of the costs were being paid by the NSF under the contract with Merit. Rumors abounded that lawyers were retained to pressure the NSF into disallowing commercial connectivity to NSFNET via ANS. NSF's lawyers advised the NSF that there were indeed significant liabilities, and that NSF should cease and desist the activity. One alternative was for the NSF to disallow commercial traffic on the Internet. However, it was impossible to delineate definitively the boundary between commercial and non-commercial traffic. Besides, dismissing all the commercial sites, especially those which were participating in NSF-sponsored Centers of Excellence, was unpalatable to the NSF. Indeed, it was always NSF's stated intention to promote the Internet until it developed to the point of self-sufficiency, and then withdraw support. A solution which seemed to satisfy all constraints was for the NSF to privatize the Internet.
A new solicitation to this effect was issued by the NSF, which mandated three "priority" Network Access Points (NAP's), to which consortia would connect directly or via an ISP. A Routing Arbiter (RA) was funded to maintain a database of routes, which could be used by network providers. More detail on the architecture is available at http://www.merit.edu/. To receive funding for connectivity, regional networks had to propose to the NSF to connect to a single ISP (Internet Service Provider). NSF funded most regional networks at a "flat" rate of about the cost of a single T-3 connection. In Westnet, this left the region in a quandary, as four fractional T-3 connections were actually needed from our six states, but this exceeded our available resources. Instead, Westnet decided to implement thirteen T-1 connections (more expensive than a single T-3, but much less expensive than four fractional T-3's and associated interconnects) to Sprint, based upon a competitive bid. The transition was scheduled for August 1994, but was delayed eight to nine months because two of the three priority NAPs initially did not work. It was very fortunate that two Federal Internet eXchanges (FIXes) and a Metropolitan Area Exchange (MAE) existed to augment the NAPs.
Merit decommissioned NSFNET April 30, 1995 (see http://www.merit.edu/nsfnet/nsfnet.retired). The ten-year project of NSFNET was tremendously successful, beyond anyone's wildest dreams in 1986. The major beneficiaries of the new architecture initially were the ISP's of national scope - ANS, Sprint, MCI, and UUNet - which in late 1994 and early 1995 had partial T-3 and T-1 networks.
However, many problems, in addition to the delay caused by the NAP's, existed in the new model. Immediately, capacity problems existed in the T-1 portion of the ISP's networks. It took the ISP's about six months to get their circuits upgraded to the point where their networks were usable - much longer than it should have taken. During this time, network service was extremely poor. Also, the ISP's were using Cisco routers as their backbone routers, and capacity problems appeared in the backbone routers during peak traffic periods. It took Cisco and the ISP's about a year to resolve these problems. Additionally, over the first year, the NAP's became very congested. Due to these and other problems, there were multiple occasions when selected national networks were down for minutes (sometimes tens of minutes) - a serious problem for regional networks, which initially were mandated by the NSF to select a single ISP. Furthermore, the very good relations and high quality of service we enjoyed during NSFNET were nowhere to be found.
In retrospect, it should have been easy to foresee and avoid some of these problems. The fundamental problem is that, every four years, we need a 20-30X growth in capacity. During the transition, at the end of a four-year period, just when the country needed a 20-30X growth in capacity, we obtained a growth of about 3X. Also, no single entity now has responsibility for the national network; in fact, there are now about five large national networks. Thus, coordination and management under a single entity is no longer possible. Another weakness is that only a Routing Arbiter (not a Routing Authority) exists, which some of the large ISP's have chosen not to use. Not surprisingly, we have experienced many bad routes since the transition. Finally, Merit used to provide traffic statistics for individual sites. Now, these are not being provided by anyone. We have lost our ability to manage and diagnose the network - we no longer can tell where our traffic goes, so we can not engineer appropriate solutions. The intangible "glue," previously provided by Merit to hold the Internet together, has seemingly vanished.
Moreover, the cost of Internet connectivity has gone up considerably. Vendors are now charging based on usage. At the highest level of usage for a T-3 Internet connection, MCI charges about $780,000/year! At close to $1 million/year for a T-3 connection, we in higher education may have just been priced out of the modern information age. Many universities (by IBM's estimate, over 300) now need much more network capacity, but we can not afford it at current market prices.
Some of the reasons for our present sad state of networking are obvious from the previous section. Most notably, when we should have been increasing the national network capacity by a factor of 20-30, we instead concentrated all of our efforts on privatization, which resulted in an increase in national network capacity of only a factor of 3 to 5. Some reasons which may not be so obvious are covered in the remainder of this section.
Lack of Responsibility - There is no single entity responsible for the national network, as Merit was before. All too frequently, we see problems between national networks, mostly at the exchange points. Typically, when attempting to debug these problems we see circular "finger pointing," for example, provider A blames the problem on provider B, which in turn denies it is their problem, and blames it on provider A. Current information and historical reports about problems are not available, as the vendors are very reluctant to admit having problems at all (ergo the finger pointing), perhaps due to a possible negative impact on sales. Previously, accurate information was available from NSFNET, which would allow us to engineer appropriate remedial action. Now, the problems are kept internal to the ISP's; to those of us on the "outside," the network now appears as a series of loosely interacting "black boxes."
Lack of Focus - The Internet vendors serve thousands of customers, from 56 kbps up to T-3, with far too little support. Previously, the structure of NSFNET was hierarchical - the managers of NSFNET had to deal with only 19 regional networks, which in turn dealt with their users. This made support manageable for the backbone network engineers. Good working relations developed among a small community of very dedicated, knowledgeable personnel. Today, the ISP's backbone engineers deal with thousands of customers, some of whom have very little technical expertise. This problem is exacerbated because there are more problems today, and on average each is more difficult to isolate and resolve, due to the increased complexity.
Profit - The vendors can not afford to upgrade their networks by a factor of 30X to accommodate future growth, only to have their utilization be very low for the first year or so. Instead, we have seen some vendors delay upgrades far past the time when needed, and then upgrade capacity by only a factor of 2-3X.
Inability to React Quickly - It is not in the telcos' culture to plan for such massive growth as exists on the Internet. One ISP has reported 7% growth per week (a factor of 31X, annually)! Indeed, it may be beyond any company's ability to plan for and react to this incredibly high growth rate.
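That weekly figure can be converted to an annual factor by simple compounding; a quick check (assuming 52 weeks of steady 7% growth) lands in the low thirties, consistent with the order of magnitude quoted:

```python
# Convert a weekly growth rate to an annual growth factor by compounding.
weekly_rate = 0.07   # 7% growth per week, as reported by one ISP
weeks_per_year = 52

annual_factor = (1 + weekly_rate) ** weeks_per_year
print(f"7% weekly growth compounds to roughly {annual_factor:.0f}x per year")
```

No carrier's capacity-planning process was built around sustaining growth of this magnitude year after year.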
Lack of Vision - Personnel at the ISP's can not see why much higher network speeds are required, and expect us to manage our growth internally. In fact, we in education have been accused in a public meeting, by an employee of a major ISP, of frivolous use of the Internet - the case in point was an accusation that some users in education transmit real-time "video clips of trees growing." Philosophically, the point is that until the end user pays directly for Internet access (just as we do for telephone service), we can expect to have frivolous usage. We can see no way to implement such "charge back" without counting every packet - impossible due to the huge volume of traffic among millions of users. Also, restricting usage during the early phases of a new technology is just what we as educators should not do at this time, as it would impair learning before we can assess the technology's true, long-term potential.
One must question how successful the transition to the private sector has been. It is my belief that it has been extremely successful for the ISP's (those who originally pressured the NSF for privatization). However, this article is written from the viewpoint of higher education. The quality of our networking has severely degraded, and costs have risen dramatically. Stated unequivocally:
The transition to the private sector has not been successful in maintaining a robust, high-quality network, at prices that the majority of higher education can afford.
Four significant problems with networking which we now face are: 1) problems in the ISP's backbones and at the exchange points, 2) understanding the network, its problems, utilization, and growth, 3) insufficient capacity in the tail circuits "behind" national networks, and 4) pricing. Six months ago, the major problems were numbers 1 and 2, above. Significant upgrades of the NAP's and the ISP's national networks have occurred since then, so that problem 1 is no longer the most significant; indeed, additional exchange points are emerging among the vendors. Thus, the greatest problems now are numbers 2 through 4. Most of the problems we now see arise because multiple vendors are involved (the "finger-pointing" problem). Also, we continue to observe congestion on many links to individual sites, and to aggregates of sites. These problems arise for several reasons: 1) the ISP's have been very slow to process orders, in some cases because they have had insufficient capacity in their backbones, and in other cases for other reasons, and 2) the cost of connecting to the Internet has risen dramatically, and at T-3 speed is usage-based (as mentioned previously, one vendor now charges $780,000/year for the highest traffic level at T-3 speed). In some cases, individual sites do not know what their traffic level is, as we now have no monitoring of traffic, and so can not plan and budget quickly enough to avoid severe congestion.
The first obvious strategy is to aggregate traffic behind a single connection, to minimize cost (without overloading the connection). A second is to minimize the traffic which must be exchanged among vendors; this is accomplished by purchasing IRC (Inter-Regional Connectivity) from the vendor with the most customers. A third, related concept is to purchase services from multiple vendors, so that a network outage on a single ISP's backbone will not be debilitating. In this case, the two different sites should be connected to different vendors' backbones, and interconnected at the same speed as the connections to the backbones. Traffic to each vendor should flow directly to that vendor (possibly across the high speed interconnect), so as not to traverse an exchange point. Of course, to maintain good service during an outage, each IRC connection should be capable of carrying the full traffic load, meaning that the cost essentially doubles under this arrangement. Finally, all sites which are "local" should be well connected, perhaps through a local NAP, so that local traffic need not traverse a national network. This provides protection from a backbone outage and from problems at the NAPs, and reduces IRC cost by reducing the traffic to the ISP.
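The cost consequence of the dual-vendor strategy can be sketched with a toy model; the dollar figures below are purely hypothetical assumptions, not vendor quotes:

```python
# Toy cost model for the dual-vendor redundancy strategy described above.
# All dollar figures are hypothetical, for illustration only.
single_link = 400_000   # assumed annual cost of one full-capacity IRC link
interconnect = 50_000   # assumed annual cost of the high-speed interconnect

# Each of the two links must be able to carry the full load alone,
# so neither can be downsized; the IRC cost roughly doubles.
redundant_total = 2 * single_link + interconnect
premium = redundant_total / single_link
print(f"Redundant design costs {premium:.2f}x a single connection")
```

Under any such assumptions the premium comes out slightly above 2x, which is the price of surviving a single-backbone outage without degraded service.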
Note that some of our networking problems can be solved simply with more money. This represents a significant problem for higher education, as financial resources are not plentiful in higher education today. The more difficult problems, such as the interactions among vendors, are cultural and may not be resolved by market forces. Indeed, supply and demand are supposed to result in a "fair market value." Unfortunately, the demand for networking services continues vastly to outstrip the supply offered by the vendors. Not surprisingly, prices have risen dramatically (a factor of 2-3 in the last year), with no slackening due to decreased demand. We in higher education have been told that we are networking's "problem children," as we complain about quality of service far more than other sites and are unwilling to pay as much (i.e. unwilling to pay fair market value).
Two very important questions are, "When will supply outstrip demand, resulting in price reductions?" and "Will higher education be able to afford networking at the very high capacities required to catalyze the next phase of the modern information age?" The answer to the first question, in my opinion, is, "Not for at least two to three years," and the answer to the second question is now unequivocally, "No, we can not even afford the prices we are being charged now." We would be delinquent in our responsibilities to our institutions were we not to consider what the real costs are for installing, operating and maintaining a national network. It may be far cheaper (and better) were we to do the job ourselves.
To assess real costs for networking is very difficult. Winston Churchill's remark about Russia is appropriate here: "It is a riddle, wrapped in a mystery, inside an enigma." The ISP's claim that they are losing money at current pricing. Critics from higher education state that this may be due simply to internal accounting. To understand this requires some knowledge of corporate structure. First of all, most ISP's are owned by an IXC (Inter-eXchange Carrier, i.e. long distance telephone company or telco) parent company. Many of the IXC parent companies have created separate sub-corporations for networking (ISP's). Each ISP purchases circuits from circuit sub-corporations within the parent company, at rates set by the circuit sub-corporation. If the circuits are highly overpriced, then it is possible for the networking sub-corporation to be losing money, and the parent company to be making money. The problem is that we don't have any idea how much it costs to provide high-speed, digital long-distance circuits. We have seen digital circuits provided by CAPs (Competitive Access Providers) at 1/3 the cost of the circuits available from the IXC's, and circuit costs represent the majority of cost for a national, high-speed network.
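The transfer-pricing possibility described above can be made concrete with a toy example (every number here is a hypothetical assumption, not an actual carrier cost):

```python
# Hypothetical transfer-pricing illustration: an ISP sub-corporation can
# book a loss while its IXC parent profits, if internal circuit prices
# are inflated. All figures are invented for illustration.
true_circuit_cost = 10_000_000  # assumed real cost of providing the circuits
internal_markup = 3             # assumed markup charged by the circuit sub-corp
isp_revenue = 25_000_000        # assumed revenue the ISP collects from customers

circuit_charge = true_circuit_cost * internal_markup
isp_result = isp_revenue - circuit_charge                    # the ISP's books
parent_result = isp_result + (circuit_charge - true_circuit_cost)

print(f"ISP result:    {isp_result:+,}")     # -5,000,000: the ISP "loses money"
print(f"Parent result: {parent_result:+,}")  # +15,000,000: the parent profits
```

The point is not that any carrier's books look exactly like this, but that an ISP's reported loss tells us nothing about real circuit costs when the circuits are bought from a sibling sub-corporation.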
By undertaking a design exercise for a national network, higher education could determine real costs, provided "real" circuit costs can be obtained (still an open question, though at least fair market value for circuits could be determined). However, higher education has rarely, if ever, executed projects of this scope by itself. In 1986, it took the NSF, by issuing solicitations for the backbone and for regional networks, to act as a catalyst for this to happen. In my opinion, the federal government should once again undertake this cause, but the legal issues mentioned above may be insurmountable. Fortunately, on April 19, 1996, IBM stepped forward and offered to undertake this exercise, jointly with NTTF and FARNET.
Immediately, the national attitude changed from "gloom and doom" to wonder and skepticism. The thrust of the IBM Proposal is to build the next generation R&E network, initially for higher education only, with "no holds barred" on voice and video. Multiple vendors are to be involved, with a commitment to interoperability. Funding is expected to derive primarily from universities/federal research labs, with start-up funding from the NSF. Technology partnerships with the private sector are expected. Initially, the network is to be constructed at OC-3 speed (155 Mbps), and involve a voice replacement strategy with a radical video infrastructure.
IBM is ideally suited to provide industry leadership for the partnership, because of their existing relationships with the telecommunications industry, their relationships with other industries/vendors, and their advanced video and voice applications. A major emphasis initially will be management/diagnostics. Early application and management/diagnostic pilots are anticipated, together with a next generation network pilot project.
The projected stages of participation are that about 35 universities will initially buy into the package, although about 350 need immediate improvement in networking. It is anticipated that 1,000 universities will participate by the year 2000. The project eventually may be extended to other not-for-profit entities, such as K-12, research, government and communities, although this point remains open.
To date, an organizational steering group has been formed under the auspices of the NTTF and IBM. NSF has been an active participant. A meeting called by NTTF/FARNET is scheduled for August 7-9, in Colorado Springs, CO to involve the broader higher education community.
This section is devoted to comments on the proposal by IBM. First, we are exceptionally pleased that IBM has stepped forward with this proposal. IBM accomplished much of the network engineering for NSFNET, and performed superbly in that activity. We have thought deeply about whether there is another company which we would trust to do this very important job, and we have come up empty.
We in higher education in Westnet unequivocally and enthusiastically endorse the IBM proposal - some of us regard this as our last chance to achieve high quality, high speed networking at an affordable cost. However, we do have some minor suggestions, which may be beneficial to the effort:
We have argued that the existing National Information Infrastructure (NII) will be either of insufficient quality or too costly to support the next generation of multimedia information, which is so critical to the future of higher education. Since the NII has been privatized, we have found that we can influence its evolution very little, if at all. Therefore, we have no choice but to explore a separate infrastructure, an Educational Information Infrastructure (EII). The proposal by IBM is very auspicious in this regard. In fact, we could well have titled this article, "On Track to the EII."
We gratefully acknowledge the efforts of Mr. David C. M. Wood and Mr. Chris Garner of Westnet, and Mr. Jim Williams of FARNET, who reviewed this article. This is but one of the small ways in which they and dedicated others like them have made networking in this country flourish.
ANS Advanced Network Services
AUP Acceptable Use Policy
CAP Competitive Access Provider
CO+RE COmmercial plus Research and Education
FARNET Federation of American Research NETworks
FDDI Fiber Distributed Data Interface
FIX Federal Internet eXchange point (sometimes just exchange point)
IP Internet Protocol
IXC Inter-eXchange Carrier
ISP Internet Service Provider
IRC Inter-Regional Connectivity
kbps thousands of bits per second
MAE Metropolitan Area Exchange (NAP for a metropolitan area)
Mbps millions of bits per second
Merit Michigan Education, Research and Information Triad
NAP Network Access Point (sometimes termed just an exchange point)
NII National Information Infrastructure
NSF National Science Foundation
NTTF National Telecommunications Task Force (an Educom activity)
OC-3 Optical Carrier-3 (digital circuit running at 155 Mbps)
OC-12 Optical Carrier-12 (digital circuit running at 622 Mbps)
PBX Private Branch eXchange (i.e. a phone switch)
RA Routing Arbiter
RADB Routing Arbiter Data Base
R&E Research and Education
SNA Systems Network Architecture
T-1, T-3 digital circuits running at 1.544 Mbps and 45 Mbps, respectively
TCP/IP Transmission Control Protocol/Internet Protocol
VBNS NSF's Very high speed Backbone Network Service