Ethernet At 40: From A Napkin Sketch To Multi

September 30th, 1980 was the day when Ethernet was first commercially introduced, making it exactly forty years ago. Ethernet was first defined in a patent filed by Xerox in 1975 as a 10 Mb/s networking protocol, introduced to the market in 1980, and subsequently standardized by the IEEE in 1983 as IEEE 802.3. Over the next thirty-seven years, this standard would see numerous updates and revisions.

The present Ethernet standard covers not just the different speed grades, from the original 10 Mb/s up to today's maximum of 400 Gb/s, but also the countless changes to the core protocol that enable these ever-higher data rates, not to mention new applications of Ethernet such as power delivery and backplane routing. Ethernet's reliability and cost-effectiveness, together with the 1990 10BASE-T standard (802.3i-1990), meant that it gradually found its way onto desktop PCs.

With Ethernet these days being as omnipresent as the luminiferous aether it was named after was once presumed to be, this seems like a good point to look at what made Ethernet so different from other solutions, and what changes it had to undergo to keep up with the demands of an ever-more interconnected world.

These days, most computers and computerized gadgets are little more than expensive paperweights whenever they find themselves disconnected from the global Internet. Back in the 1980s, people were just beginning to catch on to the things one could do with a so-called ‘local area network’, or LAN. Unlike the mainframe-and-terminal systems of the 1960s and 1970s, a LAN entailed connecting microcomputers (IBM PCs, workstations, etc.) in, for example, an office or laboratory.

During this transition from sneakernet to Ethernet, office networks would soon grow to thousands of nodes, ushering in the world of the centrally managed office network. With any document available via the network, the world seemed ready for the paperless office. Although that never happened, the ability to communicate and share files via networks (LAN and WAN) has since become a staple of everyday life.

What did change was the landscape of commodity network technology. Ethernet's early competition was a loose collection of competing network protocols, including IBM's Token Ring. Although many myths formed about the presumed weaknesses of Ethernet during the 1980s, summarized in this document (PDF) from the 1988 SIGCOMM Symposium, Ethernet ultimately turned out to be more than sufficient.

Token Ring's primary claim to superiority was its determinism, as opposed to Ethernet's carrier-sense multiple access with collision detection (CSMA/CD) approach. This led to the most persistent myth: that Ethernet couldn't sustain utilization beyond 37% of its bandwidth.

For cost reasons, the early years of Ethernet were dominated by dumb hubs rather than smarter switches. This meant that the Ethernet adapters themselves had to sort out collisions. And as anyone who has used Ethernet hubs probably knows, the surest sign of a busy Ethernet network was to glance over at the ‘collision’ LED on the hub(s). As Ethernet switches became more affordable, hubs quickly vanished. Because a switch forwards each frame only towards its destination node, giving every link its own collision domain instead of relying on CSMA/CD to sort things out, the whole collision issue that made hubs (and Ethernet along with it) the target of many jokes simply disappeared, and the myth was busted.
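To illustrate why switches sidestep the collision problem: a switch learns which MAC address sits behind which port and forwards frames only there, so ports never have to fight over a shared wire. The sketch below is a hypothetical, heavily simplified rendering of that learn-and-forward logic; the class and method names are made up for illustration and are not any real switch firmware.

```python
# Minimal sketch of a learning switch's forwarding logic (hypothetical,
# heavily simplified): frames go only to the port where the destination
# address was last seen, so ports never contend for a shared medium.

class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn: remember which port the source address lives behind.
        self.mac_table[src_mac] = in_port

        if dst_mac in self.mac_table:
            # Known destination: forward out of exactly one port.
            return [self.mac_table[dst_mac]]
        # Unknown destination (or broadcast): flood to all other ports.
        return [p for p in range(self.num_ports) if p != in_port]


sw = LearningSwitch(num_ports=4)
print(sw.handle_frame(0, "aa:aa", "bb:bb"))  # unknown: flood -> [1, 2, 3]
print(sw.handle_frame(1, "bb:bb", "aa:aa"))  # learned: forward -> [0]
```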

Once Ethernet began to allow for the use of cheaper Cat. 3 unshielded twisted pair (UTP) cabling for 10BASE-T, and Cat. 5(e) UTP for 100BASE-TX (and related) standards, Ethernet emerged as the dominant networking technology for everything from homes and offices to industrial and automotive applications.

While the list of standards under IEEE 802.3 may seem rather intimidating, a more abbreviated list for the average person can be found on Wikipedia as well. Of these, the ones most people are likely to have encountered at some point are the twisted-pair variants:

- 10BASE-T: 10 Mb/s over Cat. 3 or better
- 100BASE-TX: 100 Mb/s over Cat. 5
- 1000BASE-T: 1 Gb/s over Cat. 5e
- 2.5GBASE-T and 5GBASE-T: 2.5 and 5 Gb/s over Cat. 5e and Cat. 6
- 10GBASE-T: 10 Gb/s over Cat. 6a
- 25GBASE-T and 40GBASE-T: 25 and 40 Gb/s over Cat. 8

While the 5GBASE-T and 10GBASE-T standards have also been in use for a few years now, the 25 Gb/s and 40 Gb/s versions are definitely reserved for data centers at this point, as they require Cat. 8 cables and only allow for runs of up to 30 meters. The remaining standards in the list are primarily aimed at automotive and industrial applications, some of which are fine with 100 Mb/s connections.

Still, the time is slowly arriving when a whole gigabit is no longer enough, as some parts of the world now have Internet connections that match or exceed this rate. Who knew that one day a gigabit LAN could become the bottleneck for one's Internet connection?

Back in 1972, a handful of engineers at Xerox's Palo Alto Research Center (PARC), including Robert "Bob" Metcalfe and David Boggs, were assigned the task of creating a LAN technology that would let the Xerox Alto workstation connect to the laser printer, which had also been developed at Xerox.

This new network technology would have to allow for hundreds of individual computers to connect simultaneously and feed data to the printer quickly enough. During the design process, Metcalfe used his experience with ALOHAnet, a wireless packet data network developed at the University of Hawaii.

The primary concept behind ALOHAnet was the use of a shared medium for client transmissions. To accomplish this, a protocol was implemented that can be summed up as ‘listen before send’, which would become known as ‘carrier sense multiple access’ (CSMA). This would go on to inspire not only Ethernet, but also WiFi and many other technologies. In the case of Ethernet, the aforementioned CSMA/CD formed an integral part of the early standards.
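In rough terms, ‘listen before send’ plus collision detection boils down to something like the sketch below: wait for the wire to go idle, transmit, and if a collision is detected, back off for a random number of slot times, doubling the range after each attempt (truncated binary exponential backoff). This is a hypothetical, simplified rendering of the classic CSMA/CD transmit loop, not code from any real driver; the `medium` object and its methods are invented purely for illustration.

```python
import random

# Hypothetical sketch of the classic CSMA/CD transmit loop. The `medium`
# object is a stand-in for the shared wire; its methods are illustrative.

MAX_ATTEMPTS = 16  # classic Ethernet gives up after 16 collisions


def csma_cd_send(medium, frame, slot_time=51.2e-6):  # 51.2 us slot at 10 Mb/s
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while medium.carrier_sensed():       # 1. listen before send:
            pass                              #    wait for the wire to go idle

        medium.start_transmit(frame)          # 2. start sending
        if not medium.collision_detected():   # 3. no collision: frame is out
            return True

        medium.send_jam_signal()              # 4. collision: jam, then back off
        max_slots = 2 ** min(attempt, 10)     #    range doubles, capped at 2^10
        wait_slots = random.randrange(max_slots)
        medium.wait(wait_slots * slot_time)

    return False  # too many collisions: give up and report an error
```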

Coaxial cabling was used for the common medium, which required the use of the cherished terminators at the end of every cable segment. Adding additional nodes required the use of taps on the shared cable: either a vampire tap with a drop cable running to the Ethernet Network Interface Card, or later a BNC T-piece attached straight to the card. This first version of Ethernet is also called ‘thicknet’ (10BASE5) due to the rather unwieldy 9.5 mm thick coax cables used. A second version (10BASE2) used much thinner coax cables (RG-58A/U) and was therefore affectionately called ‘thinnet’.

In the end, it was the move to unshielded twisted-pair cabling that made Ethernet more attractive than Token Ring. Along with cheaper interface cards, this made Ethernet a no-brainer for anyone who wanted a LAN at home or in the office.

As anyone who has ever installed or managed a 10BASE5 or 10BASE2 network probably knows, interference on the bus or issues with a tap or an AWOL terminator can really ruin a day. Not that figuring out where the token dropped off the Token Ring network is a happy occasion, mind you. The shared-medium, ‘aether’ part of Ethernet has long since been replaced by networks of switches, and I’m sure many IT professionals are much happier with the star topology.

Thus it is that we come from the sunny islands of Hawaii to the technology that powers our home LANs and data centers. Maybe something else would have come along to do what Ethernet does today, but personally I’m quite happy with how things worked out. I remember the first LAN that got put in place at my house in the late 90s when I was a kid, first to allow my younger brother and me to share files (read: LAN gaming), then later to share the cable internet connection. It allowed me to get up to speed with the world of IPX/SPX, TCP/IP and much more network-related stuff, in addition to the joys of LAN parties and being the system administrator for the entire family.

Happy birthday, Ethernet. Here is to another forty innovative, revolutionary years.