Vannevar Bush, a professor at MIT, built and demonstrated a differential analyzer in 1930. It was a large machine with many gears, but it was driven by electric motors. It worked, and it could be programmed to perform many different types of calculating work. Bush’s machine was also the first to use vacuum tubes. His machine could store numbers, or quantities of electricity, in one part of the system. This ability led some to name Bush the Father of the Electronic Computer.
The day of the gear-driven computer was almost over. Konrad Zuse, a German engineer, and Howard Aiken, a Harvard math professor, both built hybrid (part mechanical, part electronic) machines in the period between 1930 and 1950. Both used binary arithmetic, and both used electric relays to perform math operations.
Professor Aiken worked in conjunction with IBM and had discovered Babbage’s work. The ideas were so close to his own that he felt he had received a personal message from the past. Aiken built a computer named the Mark I. Instead of punched cards, he used rolls of punched paper tape to tell the machine what to do. Electricity turned the counter wheels, and eight hundred thousand switches, buttons, and other electrical parts filled a room three times as big as an ordinary living room.
In 1942, two men and their associates were at work at the Moore School of the University of Pennsylvania on a machine which, while embodying enormous advances in automatic computing, was less famed than the Mark I, if only because it was not operational until two months after the Japanese surrender and therefore did not get credit for helping to win World War II. The co-inventors of ENIAC (Electronic Numerical Integrator and Calculator), which was actually the world’s first electronic computer, were Dr. J. Presper Eckert, an electrical engineer, and Dr. John Mauchly, a physicist. It would have been possible to build ENIAC twelve to fifteen years earlier, just as it would have been possible to build the Mark I; all of the components and the theory required were in existence, but nobody put up the money or had the incentive to do so. The patron of ENIAC was the United States government, more specifically, the Army.
The most significant feature of ENIAC was that it introduced vacuum tube technology, and no longer were calculations and operations performed by moving mechanical parts. This feature allowed for greatly increased speed of performance.
The next computer, developed by Mauchly, Eckert, and others, was called the Electronic Discrete Variable Automatic Computer (EDVAC). It was smaller and more powerful than its predecessors. It also had two other important features: it used the binary numbering system, and it could internally store instructions in numerical form. Today, all data and programs are stored in binary form. This method of storing instructions inside the computer is far more efficient than the paper tape storage used in earlier devices.
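The idea of holding both data and instructions in binary form can be illustrated with a short sketch (the function and variable names here are purely illustrative, not drawn from any historical machine): any number, whether it represents a value or an instruction code, reduces to the same kind of bit pattern in memory.

```python
# Illustrative sketch of binary storage: a data value and a hypothetical
# "instruction code" are both held as fixed-width patterns of binary digits,
# so a single memory can store programs and data alike.
def to_binary(value, width=8):
    """Return a fixed-width binary string for a non-negative integer."""
    return format(value, f"0{width}b")

data = 42      # a number to be stored
opcode = 5     # a hypothetical instruction code
print(to_binary(data))    # 00101010
print(to_binary(opcode))  # 00000101
```

Because both kinds of information share one representation, the machine needs only one storage mechanism for everything, which is what made internal storage so much more efficient than external paper tape.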
Another member of the first generation of computers was the Electronic Delay Storage Automatic Calculator (EDSAC), built at Cambridge University in England. This computer introduced the concept of stored programs. Before this, computers often had to be rewired to be used for various operations, since their memories were incapable of storing more than one program at a time. EDSAC helped eliminate time-consuming and costly rewiring procedures.
In 1946, Mauchly and Eckert formed a corporation to build computers for commercial use; the UNIVAC (1951) was the first electronic computer used by large business firms. This launched the major growth of computers into the business field.
The first generation of computers, which thrived from 1951 until 1959, was characterized by vacuum tube technology. Although they were amazing devices in their time, these machines were large, took up valuable space, were expensive to operate, and required almost constant maintenance to function properly. The next generation of computers attempted to resolve some of these problems.
The second generation of computers extended from 1959 until 1964 and was characterized by transistor technology. The transistor was developed by John Bardeen and others at the Bell Laboratories in New Jersey. Bardeen studied substances that permitted only a limited amount of electricity to pass through them—semiconductors. Transistors using semiconductor material could perform the work of vacuum tubes and took up less space.
Because transistors were smaller, the distance between operating parts was reduced and speed of performance was increased significantly. Transistors were also much cooler than vacuum tubes, reducing the need for expensive air conditioning in areas where computers were housed.
Transistors did present several problems, though. They were relatively expensive because each transistor and its related parts had to be individually inserted into holes in a plastic board. Also, wires had to be fastened by floating the boards in a pool of molten solder. Even though the distance between individual parts was reduced, it was still great enough to limit the speed of computer operations. The next generation of computers helped to alleviate some of these problems.
The development of integrated circuits in 1963 spawned the third generation of computers, which lasted from 1964 to 1975. Integrated circuits developed from a need to mass-produce transistors in a few simple production steps. The production process begins when tubes of silicon are sliced into wafer-thin disks that are chemically pure and cannot hold an electrical charge. Then a preconceived design is etched onto the surface of the wafer with the use of light rays.
The integrated circuit continued the trend toward miniaturization that has resulted in the popularity of the microcomputer and the personal computer system. Integrated circuit technology spawned a generation of computers that had greater storage capacity and greatly increased speeds of performance. Many accessory devices were developed and marketed, such as magnetic tape drives and disk drives. Popular programming languages were developed and refined, many of which are still in use today.
Third generation computers were not aimed at specific applications such as business or scientific use. Rather, they were designed as general-purpose computers. They represented a giant leap forward in the data processing field. Not only were speed and reliability enhanced, but power consumption was decreased markedly. Computers became smaller and less expensive, putting computer power into the hands of a greater number of users than ever before. Computer technology began to snowball.
Engineers were not satisfied with the degree of miniaturization that resulted from the integrated circuit. Also, the integrated circuits of the third generation were designed primarily with chips having only one function. As engineers learned how to manufacture chips more easily, they conceived the idea of grouping an assortment of functions on a single chip, creating a microelectronic “system” capable of performing the various tasks required for a single job. This technology became known as Large Scale Integration (LSI). Thus, the fourth generation of computers was born in the mid-1970s.
LSI technology has also been responsible for the recent popularity of the microcomputer. These “little giants” fit easily on a desktop and put computer power in the hands of an increased number of people. Declining prices of powerful computer systems have also encouraged development of the electronics field in general. LSI turned computer technology into big business, and this trend will certainly continue in the foreseeable future.
A hint of tomorrow’s computer capability can be found in the IBM 3081, introduced in 1980. This computer is twice as powerful as its immediate predecessor. It was designed with Very Large Scale Integration (VLSI) circuitry, which further increases the speed at which computers are able to function. Multiprocessing, the simultaneous running of several programs by one computer, is likely to develop further in the fifth generation of computers. Computers will continue to get smaller, and prices will continue to fall.
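The idea of multiprocessing can be sketched in a few lines of modern code (the job names and functions below are purely illustrative): several independent "programs" are started at once, each in its own process, and the machine waits for all of them to finish.

```python
# A minimal sketch of multiprocessing: each named job runs in its own
# operating-system process, so all of them execute at the same time.
from multiprocessing import Process, Queue

def run_job(name, results):
    # each "program" does its own work, then reports that it finished
    results.put(name + " done")

def run_simultaneously(names):
    results = Queue()
    jobs = [Process(target=run_job, args=(n, results)) for n in names]
    for j in jobs:
        j.start()   # all processes are now running simultaneously
    for j in jobs:
        j.join()    # wait for every process to finish
    return sorted(results.get() for _ in names)

if __name__ == "__main__":
    print(run_simultaneously(["payroll", "inventory"]))
```

The contrast with earlier generations is the point of the sketch: a one-program-at-a-time machine would have to finish payroll before beginning inventory, while a multiprocessing machine starts both and lets them run side by side.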