
Cutting the cord: how the world’s engineers built Wi-Fi

In the 1980s, even before connectivity to the Internet became commonplace, people realized that connecting a group of computers together in a local area network (LAN) made those computers much more useful. Any user could then print to shared printers, store files on file servers, send electronic mail, and more. A decade later, the Internet revolution swept the world and LANs became the on-ramp to the information superhighway. The LAN technology of choice was almost universally Ethernet, which is terrific apart from one big downside: those pesky wires.

In the late 1990s, the Institute of Electrical and Electronics Engineers (IEEE) solved that problem with its 802.11 standard, which specified a protocol for creating wireless LANs. If ever the expression "easier said than done" applied, it was here. Huge challenges had to be overcome in the past 15 years to get us to the point where reasonably reliable, fast, and secure wireless LAN equipment can today be deployed by anyone, and where every laptop comes with built-in Wi-Fi. But overcome they were, and here's how.

Early Aloha

The journey started back in the early 1970s. The University of Hawaii had facilities scattered around different islands, but the computers were located at the main campus in Honolulu. Back then, computers weren't all that portable, but it was still possible to connect to those computers from remote locations by way of a terminal and a telephone connection at the blazing speed of 300 to 1200 bits per second. But the telephone connection was both slow and unreliable.

A small group of networking pioneers led by Norman Abramson felt that they could design a better system to connect their remote terminals to the university's central computing facilities. The basic idea, later developed into "AlohaNET," was to use radio communications to transmit the data from the terminals on the remote islands to the central computers and back again. In those days, the well-established approach to sharing radio resources among several stations was to divide the channel either into time slots or into frequency bands, then assign a slot or band to each of the stations. (These two approaches are called time division multiple access [TDMA] and frequency division multiple access [FDMA], respectively.)
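
To make the fixed-assignment idea concrete, here is a minimal sketch of FDMA- and TDMA-style channel division. The 9600 bit/s channel rate and the eight stations are hypothetical numbers chosen for illustration, not AlohaNET's actual radio parameters.

    # Illustrative only: fixed channel assignment under FDMA and TDMA.
    # The 9600 bit/s figure and the station count are hypothetical.

    CHANNEL_RATE_BPS = 9600
    STATIONS = ["station-%d" % i for i in range(8)]

    # FDMA: each station gets a permanent slice of the frequency band.
    fdma_rate_per_station = CHANNEL_RATE_BPS / len(STATIONS)   # 1200 bit/s each

    # TDMA: each station gets every Nth time slot, so the average rate is the same.
    def tdma_owner(slot_number):
        """Return which station may transmit in a given time slot."""
        return STATIONS[slot_number % len(STATIONS)]

    print("FDMA: %.0f bit/s per station" % fdma_rate_per_station)
    print("TDMA: slot 13 belongs to", tdma_owner(13))

Either way, every station is locked into a fixed share of the channel, whether it has anything to send or not.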

Obviously, dividing the channel into smaller, fixed-size slots or sub-channels results in several lower-speed channels, so the AlohaNET creators came up with a different system to share the radio bandwidth. AlohaNET was designed with only two high-speed UHF channels: one downlink (from Honolulu) and one uplink (to Honolulu). The uplink channel was shared by all the remote locations transmitting to Honolulu. Rather than slicing and dicing the uplink into smaller slots or channels, the designers made the full channel capacity available to everyone. But this created the possibility that two remote stations would transmit at the same time, making both transmissions impossible to decode in Honolulu. Transmissions might fail, just like any surfer might fall off her board while riding a wave. But hey, nothing prevents her from trying again. This was the fundamental, ground-breaking advance of AlohaNET, reused in all members of the family of protocols collectively known as "random access protocols."
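
A toy simulation gives a feel for how often such collisions happen when two stations transmit at arbitrary moments. The frame airtime and the one-second window are invented for illustration; the real collision rate depends on frame length and traffic load.

    # Toy pure-ALOHA collision check: stations transmit whenever they like,
    # and two frames collide if their airtime overlaps. Numbers are illustrative.
    import random

    FRAME_TIME = 0.03          # seconds of airtime per frame (hypothetical)

    def collides(start_a, start_b, frame_time=FRAME_TIME):
        """Two frames collide if one starts before the other finishes."""
        return abs(start_a - start_b) < frame_time

    random.seed(1)
    trials = 100_000
    collisions = 0
    for _ in range(trials):
        # Two stations each pick a random transmit instant within one second.
        a = random.uniform(0, 1.0)
        b = random.uniform(0, 1.0)
        if collides(a, b):
            collisions += 1

    print("Fraction of attempts that collided: %.3f" % (collisions / trials))

With only two lightly loaded stations, collisions are rare; they become more frequent as stations and traffic are added, which is why the retry mechanism matters.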

The random access approach implemented in AlohaNET represented a paradigm shift from a voice-network approach to a data-network one. The traditional channel sharing techniques (FDMA and TDMA) implied reserving a low-speed channel for every user. That low speed was enough for voice, and the fact that the channel was reserved was certainly convenient; it prevented the voice call from being abruptly interrupted.

But terminal traffic to the central computers presented very different requirements. For one thing, terminal traffic is bursty. The user issues a command, waits for the computer to process the command, and looks at the data received while pondering a further command. This pattern includes both long silent periods and peak bursts of data.

The burstiness of computer traffic called for a more efficient use of communication resources than either TDMA or FDMA could provide. If each station were assigned a reserved low-speed channel, the transmission of a burst would take a long time. Furthermore, channel resources would be wasted during the long silent periods. The solution, implemented by AlohaNET's random access protocol, was a concept that is central to data networks: statistical multiplexing. A single high-speed channel is shared among all the users, but each user only uses it some of the time. While Alice is carefully examining the output of her program over a cup of tea, Bob could be uploading his data to the central computer for later processing. Later, the roles might be reversed as Alice uploads her new program while Bob is out surfing.
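
A back-of-the-envelope comparison shows why this matters for bursty traffic. All the numbers below are made up for illustration: a hypothetical 9600 bit/s channel, eight stations, and a 4800-bit burst.

    # Illustrative comparison of a reserved low-speed slice (TDMA/FDMA style)
    # versus one shared high-speed channel (statistical multiplexing).
    # All numbers are hypothetical.

    CHANNEL_RATE_BPS = 9600      # total channel capacity
    STATIONS = 8                 # stations sharing the channel
    BURST_BITS = 4800            # one terminal's burst, e.g. a screenful of output

    # Fixed assignment: each station is stuck with 1/8 of the capacity,
    # even while the other seven are idle.
    reserved_rate = CHANNEL_RATE_BPS / STATIONS
    time_reserved = BURST_BITS / reserved_rate          # 4.0 seconds

    # Statistical multiplexing: whoever has data gets the whole channel,
    # so an isolated burst goes out at full speed.
    time_shared = BURST_BITS / CHANNEL_RATE_BPS         # 0.5 seconds

    print("Reserved slice: %.1f s per burst" % time_reserved)
    print("Shared channel: %.1f s per burst (when no one else is sending)" % time_shared)

The catch, of course, is that the shared channel is only this fast when the other stations are quiet, which is exactly what bursty terminal traffic makes likely most of the time.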

To make this multiplexing work, the team needed a mechanism that would allow the remote stations to learn about the failure of their initial transmission attempt (so that they could try again). This was achieved in an indirect way. Honolulu would immediately transmit through the downlink channel whatever it correctly received from the uplink channel. So if the remote station saw its own transmission echoed back by Honolulu, it knew that everything went well and Honolulu had received the transmission successfully. Otherwise, there must have been a problem, making it a good idea to retransmit the data.
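
In rough terms, the sender's logic might look like the sketch below. The radio interface, the echo timeout, and the random backoff range are hypothetical stand-ins; the article does not give AlohaNET's actual values.

    # Sketch of AlohaNET-style echo-based acknowledgement. The `radio` object,
    # the timeout, and the backoff range are hypothetical, for illustration only.
    import random
    import time

    ECHO_TIMEOUT = 0.2      # seconds to wait for Honolulu to echo the frame (made up)
    MAX_BACKOFF = 1.0       # upper bound on the random retransmission delay (made up)

    def send_with_echo_ack(radio, frame):
        """Transmit on the uplink and retry until the frame comes back on the downlink."""
        while True:
            radio.transmit_uplink(frame)
            echoed = radio.listen_downlink(timeout=ECHO_TIMEOUT)
            if echoed == frame:
                return                      # Honolulu echoed it back: success.
            # No echo (or a garbled one): assume the transmission failed and
            # retransmit after a random delay, so two colliding stations
            # don't simply collide again on their next attempt.
            time.sleep(random.uniform(0, MAX_BACKOFF))

The random delay before retrying is the key design choice: if every station that lost a collision retransmitted immediately, the same frames would keep colliding forever.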

Standards wars

"The wonderful thing about standards is that there are so many of them to choose from." Grace Hopper, as per the UNIX-HATERS Handbook (PDF) page 9 or 49.

By the end of the last century, two standards were competing head to head in the wireless local area network arena. The American proposal, developed by the IEEE, relied on simpler, more straightforward approaches. The European proposal, put forward by the European Telecommunications Standards Institute (ETSI), was more sophisticated, featuring higher data rates and traffic prioritization for service differentiation. Vendors favored the easier-to-implement IEEE alternative, ignoring all of its optional features.

Obviously, a simpler approach had the advantage of a shorter time-to-market, which was critical for obtaining substantial market share and which paved the way for the ultimate success of the IEEE specification over the one standardized by ETSI. The IEEE 802.11 standard in question belongs to the 802 standard family that also includes IEEE 802.3 (Ethernet). As time passed, the IEEE 802.11 standard was refined to incorporate some features that were present in the early ETSI proposal, such as higher data rates and service differentiation.
