
The rise of connected PCs

The arrival of the networked PC changed everything about the future of technology. In this second part of a series on the history of network architecture, learn how these new devices transformed corporate computing.
Image: Computer network cable (by Markus Spiske from Pixabay)

This is the second of a four-part series about the emergence of the modern data center. The previous installment described how mainframes evolved from standalone computers to the connected machines that were the foundation of early data processing centers.

This installment looks at the emergence of the networked personal computer in corporate IT.

PCs put the "personal" in computing

Mainframe computers are big machines that were meant to do big things in a big way. They sent humanity to the Moon and back again. They kept track of hundreds of millions of bank accounts. They predicted election outcomes. They made sure that people paid their taxes. And they were not cheap. For example, the IBM System/360 Model 75 used by the NASA Manned Spacecraft Center for the Apollo 11 Moon landing in 1969 cost between $50,000 and $80,000 a month to lease and $2.2 million to $3.3 million to buy outright. That monthly lease in 2020 dollars would run between $345,200 and $552,320.

Figure 1: The IBM System/360 Model 75 was the computer NASA used during the Apollo 11 Moon landing

The notion that anybody would be able to have their own computer was akin to the idea that anybody could have their own spacecraft. And, if you told someone that one day all the computers on the planet would be connected and sharing data, at best, you'd be considered an aspiring visionary who had no understanding of the breadth of financial resources required to accomplish such an outcome. At worst, you'd be accused of being a counterculture dreamer who had watched too many episodes of the original Star Trek, which had just been canceled in 1969 after a 3-year run.

It all changed in 1975 when a small company in Albuquerque, New Mexico, put an advertisement in Popular Electronics magazine for the Altair 8800. The company, Micro Instrumentation and Telemetry Systems (MITS), advertised the Altair 8800 as a "real, full-blown computer." For a price of $959 ($4,497 in 2020 dollars), anybody could have their own computer. The price tag not only covered the cost of a fully assembled Altair but also included a 4,000-word dynamic memory card (one word = one byte).

Figure 2: The Altair 8800 was the first personal computer sold commercially

Prior to the Altair, the closest a person could hope to get to owning a computer was to buy a minicomputer, such as one of Digital Equipment Corporation's popular PDP series, which carried a price tag of roughly $10,000, excluding memory ($46,897 in 2020 dollars). With the introduction of the Altair 8800, the era of personal computing was born.

IBM released its first entry into personal computing in 1981 with the introduction of the IBM PC. By 1990, dozens of companies made personal computers, from the tried-and-true Hewlett-Packard to upstarts Gateway 2000 and Dell. Gateway was started in a barn in Sioux City, Iowa. Dell was created in a dorm room at the University of Texas. Even Zenith, a company that traditionally made televisions, was trying to grab market share.

By the mid-1990s, computer sales in the US were measured in the tens of millions and outpaced auto sales by almost a 3-to-1 margin: in 1995, 22 million computers were sold in the US, compared to 8.6 million cars. Bill Gates's vision of a computer on every desk and in every home was coming to pass. Connecting them all was another matter entirely.

Early networking: A cacophony of connectivity

There's a big difference between connecting devices and networking them. Connecting requires little more than a direct coupling between source and target; for example, plugging an electric guitar into an amplifier or connecting a train locomotive to a passenger car.

Networking requires a good deal more. Essentially, there are four factors to consider. First, the devices joining the network must be compatible with the physical network itself. For example, for a train to join a railroad line, which is essentially a railroad network, the spacing of the train's wheels must match the gauge of the track. If the wheel spacing is too wide or too narrow, the train can't travel over the rails.

Second, once a device can join the network physically, it must be discoverable. Take the US Postal Service as an example. For a letter to be delivered by the USPS, which is essentially a mail delivery network, it must have an address on the envelope that conforms to the format the service supports. Using the address "Bobby's House," for example, won't work. (See Figure 3.)

Figure 3: Discovery is an essential aspect of moving data over a network

The service doesn't know how to process such an address. However, an address such as 1600 Pennsylvania Avenue, Washington, DC 20500 will work because it conforms to the supported format. (See Figure 4.)

Figure 4: Network discovery depends on a universally supported address format

Third, there needs to be a way to transport data over the network. Going back to the postal service example, once a letter is addressed and ready to be sent, there needs to be a way to actually move it through the network. When you drop the letter in a mailbox, a postal worker comes by at regularly scheduled intervals to empty the mailbox and take the letters back to the main post office. The letters are then forwarded to other post offices according to the USPS's delivery process. In some cases, a postal worker stops by a business or residence to pick up outgoing mail and drop off incoming mail. The same is true of computer networks: once information makes its way onto the wire, there needs to be a way to move the binary data across the wire toward the target destination.
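In modern TCP/IP terms (a standard that arrives later in this story), discovery and transport map to name resolution and to moving bytes over a connection. The following minimal Python sketch illustrates both; the hostname example.com and port 80 are placeholder values chosen for illustration, not anything specific to the networks described here.

    import socket

    # Discovery: resolve a well-formed, universally supported address (a DNS
    # hostname) to a network location, much as the USPS resolves a conforming
    # street address to a physical mailbox.
    address = socket.gethostbyname("example.com")
    print(f"example.com resolves to {address}")

    # Transport: once the destination is known, the network moves raw bytes
    # toward it. Here, a TCP connection carries a minimal HTTP request.
    with socket.create_connection((address, 80), timeout=5) as conn:
        conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(conn.recv(1024).decode(errors="replace"))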

The fourth and final factor is ensuring that the data traveling between source and target over the network is consumable in a way that is appropriate to the application using the data. Consider the following example.

Imagine a telecommunication network whose members include fax machines and telephones. One day, the owner of one of the fax machines wants to send a party invitation to a friend on the network. He writes up the invitation, puts it in the fax machine's document feeder, taps the recipient's phone number into the machine's keypad, and sends the fax. The fax machine attempts to make the connection. The network does the work of discovering the recipient's location according to the dialed phone number and requests a connection. The recipient's device accepts the request to connect.

But there's a problem. The recipient's device is a telephone. It has no idea how to conduct a conversation with a fax machine, nor should it. Each device is basically a different appliance (a.k.a. application). A telephone is an appliance for exchanging voice data. A fax machine is an appliance for sending and receiving data for a written document. The internals of each device have the intelligence required to consume the incoming data according to the appliance's purpose.
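The same mismatch is easy to reproduce on a modern TCP/IP network. Here is a minimal Python sketch, assuming an HTTP server is listening at example.com on port 80: discovery and transport succeed, yet the receiving application cannot consume the bytes it is handed.

    import socket

    # The network discovers the host and transports our bytes without complaint,
    # but the application on the other end still has to understand them. Sending
    # non-HTTP bytes to an HTTP server is the modern analog of a fax machine
    # dialing a telephone: the connection succeeds, the conversation fails.
    with socket.create_connection(("example.com", 80), timeout=5) as conn:
        conn.sendall(b"HELLO, IS ANYONE THERE?\r\n\r\n")
        reply = conn.recv(1024)

    # The server cannot consume the bytes as an HTTP request, so it answers with
    # an error (typically "400 Bad Request") or simply closes the connection.
    print(reply.decode(errors="replace"))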

These four requirements (physical access, discoverability, transport, and application consumption) are the basic building blocks of any network architecture. You can't have a network without them. Fortunately, those designing early network technologies understood this and created technologies to meet these basic requirements. The problem is that they each did it in a different way.

IBM created Systems Network Architecture (SNA). Another mainframe manufacturer, Burroughs, had Burroughs Network Architecture (BNA). Minicomputer company Digital Equipment Corporation had DECnet. Even PC manufacturers had different networking technologies. Microsoft adopted NetBIOS. Apple had AppleTalk. And Novell, for a while the biggest player in network operating systems, had its own stable of protocols, including Internetwork Packet Exchange (IPX), NetWare Core Protocol (NCP), and Sequenced Packet Exchange (SPX).

Collected connected devices

Networking was becoming commonplace within companies large and small. But two significant hurdles still stood in the way of universal, ubiquitous networking. First, most networks were private; second, the technologies those networks used varied from installation to installation. The notion of a shared network that operated according to a single, common standard and had both public and private segments was still an idea on the horizon. Fortunately, it was not a distant horizon. All that was needed were compelling reasons to standardize. Turning data centers and colocation facilities into resources that many companies could share provided those reasons and then some. The next chapter in universal, distributed computing was at hand.




Bob Reselman

Bob Reselman is a nationally known software developer, system architect, industry analyst, and technical writer/journalist.


