
Monthly Archives: October 2013

Apple II

Fourth generation computing began with the introduction of the microprocessor: a single silicon chip with thousands of integrated circuits built onto it. The microprocessor enabled computers to become much smaller and run much more efficiently. It also drastically reduced costs and made computers even more available to everyday consumers.

 

The first microprocessor was the Intel 4004, which housed the main components of a computer, from central processing and memory to the I/O buses, on a single chip. It enabled consumer products such as Apple's Apple II and the IBM 5150 Personal Computer. These machines were advanced and upgraded until they became the computers that we see and use today.

Microchips

The third generation of computing consisted of the miniaturization of computer circuits. Circuits were shrunk to the point where multiple computer parts could fit on a single chip, called an integrated circuit. Integrated circuits played a vital role in shaping today's computers: they improved efficiency, drastically lowered power consumption, and drastically lowered the cost of the materials needed to produce computers, which made computers affordable for the average consumer's personal use.

A new way of “printing” the data that computers produced also emerged in the form of the monitor. Keyboards and mice were developed around this time as well.

Operating systems were also introduced around this point as more everyday people began purchasing computers. The operating system acted as a mask and an interface over the often complicated programming that the average consumer knew nothing about. Advances in technology also allowed computers to multitask, which operating systems made even more practical.

For the first time computers became accessible to the general public, and they began to look like the modern machines we have now.

1949 First junction transistor (Shockley et al)

Second generation computing began with the invention of the transistor, which replaced the cumbersome, expensive, and often failure-prone vacuum tube. Transistors form the basis of all digital circuits today. The first transistor was invented in 1947 at Bell Laboratories but did not see widespread use until the late 1950s.

Transistors allowed computers to become smaller, faster, cheaper, and much more energy-efficient. While transistors still generated a good deal of heat, they were far superior to vacuum tubes due to their cost efficiency.

Second generation computing also introduced assembly and higher-level programming languages. These languages did away with the raw 1s and 0s that computers understood and let programmers write something closer to English, which the computer would then convert back into 1s and 0s. The first higher-level languages included FORTRAN, developed at IBM, and the Common Business Oriented Language, also known as COBOL, developed by Grace Hopper.
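That translation step is easy to see in any modern language. As a small illustration (using Python here rather than FORTRAN or COBOL, and a made-up function name), the standard dis module prints the lower-level instructions that one readable formula is turned into before the machine runs it:

```python
import dis

def area(width, height):
    # One readable, FORTRAN-style formula...
    return width * height + 2

# ...and the lower-level instruction sequence the interpreter actually
# executes, analogous to a compiler turning an English-like statement
# back into machine-oriented steps.
dis.dis(area)
```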

The invention of the transistor and the creation of higher-level programming languages led to smaller computers and, eventually, the integrated circuit.


Around 1946 the first modern computer was created for the purpose of calculating ballistic trajectories for the United States Army’s Ballistic Research Laboratory. This computer was called the Electronic Numerical Integrator And Computer, or “ENIAC” for short. The ENIAC was the first computer designed to be Turing complete, meaning that, given enough time and memory, it could in principle carry out any computation. The ENIAC was made up of nearly 18,000 vacuum tubes, 1,500 relays, and 70,000 resistors, and had nearly five million hand-soldered joints. It weighed about 27 tons and consumed about 150 kilowatts of power.

The ENIAC was a modular computer made up of several different parts that each performed a different function. Twenty of the modules were accumulators that would not only add and subtract numbers but would also hold a ten-digit decimal number in memory. To add and subtract, the accumulators were connected to buses that passed numbers between the accumulators until the computation was complete.

The ENIAC used ten-position ring counters to store its digits, with each digit requiring 36 vacuum tubes.
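As a rough sketch of that scheme (a simplified model with made-up names like RingCounter and Accumulator, not ENIAC's actual pulse timing or circuitry), the following Python snippet treats each digit as a ten-position ring counter and adds numbers by sending pulses, with carries rippling into the next decade:

```python
class RingCounter:
    """One decade: a ten-position ring counter holding a single decimal digit.
    A pulse advances the ring one position; wrapping past 9 signals a carry."""

    def __init__(self):
        self.position = 0  # which of the ten positions is currently "on"

    def pulse(self):
        """Advance one position; return True if the ring wrapped (a carry)."""
        self.position = (self.position + 1) % 10
        return self.position == 0


class Accumulator:
    """A ten-digit accumulator built from ten ring counters. A number is added
    by pulsing each decade the right number of times, loosely the way pulse
    trains arrived over the buses."""

    def __init__(self, digits=10):
        self.decades = [RingCounter() for _ in range(digits)]  # index 0 = ones place

    def add(self, value):
        for place, decade in enumerate(self.decades):
            digit = (value // 10 ** place) % 10
            for _ in range(digit):
                if decade.pulse():           # wrapped past 9 ...
                    self._carry(place + 1)   # ... so carry into the next decade

    def _carry(self, place):
        # Ripple the carry upward; it simply falls off the top of a full register.
        if place < len(self.decades) and self.decades[place].pulse():
            self._carry(place + 1)

    def value(self):
        return sum(d.position * 10 ** p for p, d in enumerate(self.decades))


acc = Accumulator()
acc.add(1234567890)
acc.add(605)
print(acc.value())  # 1234568495
```

Each pulse nudges one digit forward, and a full wrap past 9 is what carries a 1 into the next column, which mirrors the add-by-counting behavior described above.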

The ENIAC was prone to failure, however, due to its use of vacuum tubes. When the ENIAC was first built, the vacuum tubes were not very resistant to thermal stress, which occurred especially during power-up and power-down, and several tubes burned out each day. The issue was addressed later in the ENIAC’s life when the original tubes were replaced with specialty high-reliability vacuum tubes, which cut failures to about one tube every two days.

The ENIAC was programmed to perform complex sequences of operations, including loops, branches, and subroutines. Programming the ENIAC was time consuming, and mapping a problem onto the machine could take weeks. Researchers would first work the problem out on paper and then enter it into the ENIAC by manipulating its switches and cables. They would then verify and subsequently debug the program.

The ENIAC was the first step toward modern day computing and pointed the way to the next step in computers: the transistor.