The first mechanical counting device, the abacus, was created somewhere around 3000 BC. The abacus is still used today and, amazingly to me, with great speed and accuracy.
In 1642 another mechanical device was created: the Pascaline (named after Blaise Pascal, the famous French mathematician). The Pascaline used gears and wheels (“counting-wheels”) to perform its calculations. The interesting thing to note is that the counting-wheel design was still used in calculators into the 1960s.
The next major breakthrough in computer history revolves around Charles Babbage and his Difference Engine and Analytical Engine.
The machines that Babbage designed in the early 1800s were not electronic computers as we know them now, but they were general-purpose computational devices designed to be driven by steam. Babbage is credited with being the “Father of Computing” because his designs were WAY ahead of his time. He laid the foundation for the modern computer.
Another computer development spurred by the war was the Electronic Numerical Integrator and Computer (ENIAC), produced by a partnership between the U.S. government and the University of Pennsylvania. Consisting of 18,000 vacuum tubes, 70,000 resistors, and 5 million soldered joints, the computer was such a massive piece of machinery that it consumed 160 kilowatts of electrical power, enough energy to dim the lights in an entire section of Philadelphia. Developed by John Presper Eckert (1919-1995) and John W. Mauchly (1907-1980), ENIAC, unlike the Colossus and Mark I, was a general-purpose computer that computed at speeds 1,000 times faster than Mark I.
These first computers were extremely large, slow, and inefficient.
Many things happened between the creation of the ENIAC and now. Among the most interesting and pertinent to us in this course is the development of the microcomputer. The major development of the microcomputer took place in the 1970s, during a time when many of us were alive to witness it firsthand. During this time we have seen the creation of huge and very profitable corporations, including Microsoft, Apple, Dell, and Compaq, and the growth and prosperity of pioneering companies like IBM.
The first generation of computers spanned the mid-1940s to the late 1950s. The computers created during this time used vacuum tubes and wires for their circuitry. If you’ve ever had or been around a vacuum tube television, radio, or amplifier, you know that when they are left on for any length of time they get very hot and, like light bulbs, they burn out. In addition to vacuum tubes, the first-generation computers used magnetic drums for main memory. The use of magnetic drums and vacuum tubes meant that these computers were HUGE, some up to half a football field in size! They also were very expensive to operate, generated a lot of heat, used a lot of electricity, and failed (shut down) often. Programming these computers (having them perform a different task) required disconnecting wires from one place and connecting them to another, or turning one circuit on and another off. All programming during this generation was done in machine language – the language of the machine (1s and 0s)!
The second generation of computers spanned the late 1950s to the mid-1960s. Just as the evolution of televisions, radios, and amplifiers during this time revolved around the shift from vacuum tube to transistor, so did computers. The use of transistors allowed radios, TVs, amplifiers, and computers to become much smaller, faster, and less energy-hungry. Main memory (RAM) shifted from revolving magnetic drums to tiny wire-wrapped magnetic donuts called magnetic core memory. This also allowed computers to become much smaller and more efficient. Programming languages evolved from the 1s and 0s of machine language to something closer to the language of humans. These languages, known as assembly languages and early high-level languages, were easier for humans to use but required more work by the computer. They were still a far cry from English. Companies that purchased computers during this time used them mostly for accounting purposes and interacted with the computer via punched cards for input and reams of printed paper for output.
The third generation of computers began during the mid-1960s and lasted until the mid-1970s. Computers became much smaller, much faster, and much more affordable due to the creation of the integrated circuit. Main memory for computers during this time was made out of silicon. The use of silicon chips and integrated circuits brought about the creation of the minicomputer (a multi-user, desk-sized computer). It was during this time period that humans began to interact with computers directly through the use of terminals with keyboards and monitors.
The fourth generation began in the mid-1970s and, as many would argue, is still going on. The trend continues during this period as well: faster, smaller, and cheaper! Integrated circuits continue to get smaller while the circuits on them grow more and more complex. This integration is known as very large scale integration (VLSI). Microcomputers and supercomputers were both created during this time period, and telecommunications (making computers communicate with each other) has become the biggest challenge.