In the early 1950s, after the UNIVAC became available, there were only about 100 computers in operation worldwide. By the year 2000, 54 million households in the United States alone contained one or more computers. Perhaps even more astonishing, a single silicon chip in those machines had more capacity than the entire ENIAC system, itself a crucial step on the way to contemporary computers. A host of technological advances is responsible for this drastic change in capability and for making computers feasible for personal use.

A computer's memory in the early 1960s, for instance, was typically a ferrite core memory, an architecture that relied on the magnetization of small iron ferrite doughnuts (ferrite cores) to store information. Depending on the direction of the energizing current, a core would become magnetized in either a clockwise (0) or counterclockwise (1) direction. To read a core, the direction of the current was used to determine whether the core held a 0 or a 1, the binary information used by the computer.

The development and subsequent improvement of the integrated circuit, however, revolutionized computer design. Where each ferrite core once represented a single binary digit, or bit, and a computer's total memory was measured in thousands of bytes (kilobytes), contemporary semiconductor devices hold millions of bytes (megabytes), and the total memory of a mainframe may be measured in billions of bytes (gigabytes).
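The core read-out described above can be sketched in a few lines of Python. This is a toy model, not historical hardware behavior in full detail: the class name and method names are invented for illustration, and the clockwise = 0 / counterclockwise = 1 convention simply follows the text. One real detail worth modeling is that reading a core was destructive, so the stored bit had to be written back after every read:

```python
class FerriteCore:
    """Toy model of a single ferrite core (illustrative, not a real API).

    The core stores one bit as its direction of magnetization:
    clockwise = 0, counterclockwise = 1, per the convention above.
    """

    def __init__(self):
        self.magnetization = 0  # cores start in the clockwise (0) state

    def write(self, bit):
        # An energizing current magnetizes the core in one direction or
        # the other, storing a 0 or a 1.
        self.magnetization = bit

    def read(self):
        # Reading drives the core toward the 0 state; if the
        # magnetization flips, a pulse appears on the sense wire,
        # indicating that a 1 was stored.
        was_one = (self.magnetization == 1)
        self.magnetization = 0      # the read is destructive
        if was_one:
            self.write(1)           # the memory system restores the bit
        return 1 if was_one else 0


core = FerriteCore()
core.write(1)
print(core.read())  # 1
print(core.read())  # still 1, because the bit was rewritten
```

The write-back step is why core memory cycles were quoted as a combined read/restore time: every read implied a subsequent write.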
Sunday, July 19, 2009