Development of Computer Science
Computer science as an independent discipline dates to only about 1960, although the electronic digital computer that is the object of its study was invented some two decades earlier. The roots of computer science lie primarily in the related fields of electrical engineering and mathematics. Electrical engineering provides the basics of circuit design—namely, the idea that electrical impulses input to a circuit can be combined to produce arbitrary outputs. The invention of the transistor and the miniaturization of circuits, along with the invention of electronic, magnetic, and optical media for the storage of information, resulted from advances in electrical engineering and physics. Mathematics is the source of one of the key concepts in the development of the computer—the idea that all information can be represented as sequences of zeros and ones. In the binary number system, numbers are represented by a sequence of the binary digits 0 and 1 in the same way that numbers in the familiar decimal system are represented using the digits 0 through 9. The relative ease with which two states (e.g., high and low voltage) can be realized in electrical and electronic devices led naturally to the binary digit, or bit, becoming the basic unit of data storage and transmission in a computer system.
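As a minimal illustration of this idea, the short Python sketch below converts a decimal integer to its binary digits by repeated division by 2; the function name and the example value are chosen purely for illustration.

    def to_binary(n: int) -> str:
        """Return the base-2 digits of a non-negative integer."""
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            digits.append(str(n % 2))  # the remainder gives the next bit, low-order first
            n //= 2
        return "".join(reversed(digits))

    print(to_binary(13))   # "1101", i.e., 1*8 + 1*4 + 0*2 + 1*1
    print(int("1101", 2))  # 13, converting back with Python's built-in base-2 parser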
The Boolean algebra developed in the 19th century supplied a formalism for designing a circuit with binary input values of 0s and 1s (false or true, respectively, in the terminology of logic) to yield any desired combination of 0s and 1s as output. Theoretical work on computability, which began in the 1930s, provided the needed extension to the design of whole machines; a milestone was the 1936 specification of the conceptual Turing machine (a theoretical device that manipulates an infinite string of 0s and 1s) by the British mathematician Alan Turing and his proof of the model’s computational power. Another breakthrough was the concept of the stored-program computer, usually credited to the Hungarian-American mathematician John von Neumann. This idea—that instructions as well as data should be stored in the computer’s memory for fast access and execution—was critical to the development of the modern computer. Previous thinking was limited to the calculator approach, in which instructions are entered one at a time.
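A small sketch of the Boolean-circuit idea, assuming Python's bitwise operators stand in for logic gates: a half adder built only from AND, OR, and NOT produces the sum and carry bits for two one-bit inputs. The function and its names are illustrative, not any particular historical design.

    def half_adder(a, b):
        """Add two one-bit inputs, returning (sum bit, carry bit)."""
        sum_bit = (a | b) & ~(a & b) & 1  # XOR expressed with OR, AND, and NOT
        carry = a & b                     # the carry is simply AND
        return sum_bit, carry

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", half_adder(a, b))
    # 0 0 -> (0, 0)
    # 0 1 -> (1, 0)
    # 1 0 -> (1, 0)
    # 1 1 -> (0, 1)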
The needs of users and their applications provided the main driving force in the early days of computer science, as they still do to a great extent today. The difficulty of writing programs in the machine language of 0s and 1s led first to the development of assembly language, which allows programmers to use mnemonics for instructions (e.g., ADD) and symbols for variables (e.g., X). Such programs are then translated by a program known as an assembler into the binary encoding used by the computer. Other system software, known as linking loaders, combines separately assembled pieces of code and loads them into the machine’s main memory unit, where they are then ready for execution. The concept of linking separate pieces of code was important, since it allowed “libraries” of programs to be built up to carry out common tasks—a first step toward the increasingly emphasized notion of software reuse. Assembly language was found to be sufficiently inconvenient that higher-level languages (closer to natural languages) were invented in the 1950s for easier, faster programming; along with them came the need for compilers, programs that translate high-level language programs into machine code. As programming languages became more powerful and abstract, building efficient compilers that create high-quality code in terms of execution speed and storage consumption became an interesting computer science problem in its own right.
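What an assembler does can be sketched with a toy, entirely hypothetical instruction set: each mnemonic and symbolic variable name is looked up in a table and packed into a fixed-width binary word. The opcodes and addresses below are invented for illustration; real instruction encodings are machine-specific.

    OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}  # hypothetical opcodes
    SYMBOLS = {"X": 0b1000, "Y": 0b1001, "Z": 0b1010}           # assumed variable addresses

    def assemble(line):
        """Translate one 'MNEMONIC OPERAND' line into an 8-bit binary word."""
        mnemonic, operand = line.split()
        word = (OPCODES[mnemonic] << 4) | SYMBOLS[operand]  # opcode in the high bits, address in the low bits
        return format(word, "08b")

    for line in ["LOAD X", "ADD Y", "STORE Z"]:
        print(line, "->", assemble(line))
    # LOAD X -> 00011000
    # ADD Y -> 00101001
    # STORE Z -> 00111010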
Increasing use of computers in the early 1960s provided the impetus for the development of operating systems, which consist of system-resident software that automatically handles input and output and the execution of jobs. The historical development of operating systems is summarized below under that topic.

Throughout the history of computers, the machines have been utilized in two major applications: (1) computational support of scientific and engineering disciplines and (2) data processing for business needs. The demand for better computational techniques led to a resurgence of interest in numerical methods and their analysis, an area of mathematics that can be traced to the methods devised several centuries ago by physicists for the hand computations they made to validate their theories. Improved methods of computation had the obvious potential to revolutionize how business is conducted, and in pursuit of these business applications new information systems were developed in the 1950s that consisted of files of records stored on magnetic tape. The invention of magnetic-disk storage, which allows rapid access to an arbitrary record on the disk, led not only to more cleverly designed file systems but also, in the 1960s and ’70s, to the concept of the database and the development of the sophisticated database management systems now commonly in use. Data structures, and the development of optimal algorithms for inserting, deleting, and locating data, have constituted major areas of theoretical computer science since its beginnings because of the heavy use of such structures by virtually all computer software—notably compilers, operating systems, and file systems.

Another goal of computer science is the creation of machines capable of carrying out tasks that are typically thought of as requiring human intelligence. Artificial intelligence, as this goal is known, actually predates the first electronic computers of the 1940s, although the term was not coined until 1956.
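The insertion, deletion, and lookup operations mentioned above can be illustrated with a minimal sketch, here using a sorted list of record keys and binary search via Python's standard bisect module; real file systems and database systems use more elaborate structures (such as B-trees) for the same purpose, and the class and key values below are purely illustrative.

    import bisect

    class SortedIndex:
        """A toy index over record keys, kept in sorted order."""

        def __init__(self):
            self.keys = []

        def insert(self, key):
            bisect.insort(self.keys, key)           # insert while preserving sorted order

        def delete(self, key):
            i = bisect.bisect_left(self.keys, key)
            if i < len(self.keys) and self.keys[i] == key:
                del self.keys[i]

        def locate(self, key):
            i = bisect.bisect_left(self.keys, key)  # binary search: O(log n) comparisons
            return i < len(self.keys) and self.keys[i] == key

    idx = SortedIndex()
    for k in (42, 7, 19):
        idx.insert(k)
    print(idx.locate(19))   # True
    idx.delete(19)
    print(idx.locate(19))   # False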
Computer graphics was introduced in the early 1950s with the display of data or crude images on paper plotters and cathode-ray tube (CRT) screens. Expensive hardware and the limited availability of software kept the field from growing until the early 1980s, when the computer memory required for bit-map graphics became affordable. (A bit map is a binary representation in main memory of the rectangular array of points [pixels, or picture elements] on the screen. Because the first bit-map displays used one binary bit per pixel, they were capable of displaying only one of two colours, commonly black and green or black and amber. Later computers, with more memory, assigned more binary bits per pixel to obtain more colours.) Bit-map technology, together with high-resolution display screens and the development of graphics standards that make software less machine-dependent, has led to the explosive growth of the field.

Software engineering arose as a distinct area of study in the late 1970s as part of an attempt to introduce discipline and structure into the software design and development process.
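The bit-map representation described above can be sketched in a few lines, assuming an invented 8 x 8 display with one bit per pixel packed into a byte array; with a single bit, each pixel can only be on or off (e.g., green or black). The names and sizes are illustrative only.

    WIDTH, HEIGHT = 8, 8
    framebuffer = bytearray(WIDTH * HEIGHT // 8)   # one bit per pixel, packed eight to a byte

    def set_pixel(x, y, on):
        """Turn a single pixel on or off in the packed bit map."""
        byte, bit = divmod(y * WIDTH + x, 8)
        if on:
            framebuffer[byte] |= 1 << bit
        else:
            framebuffer[byte] &= ~(1 << bit) & 0xFF

    def show():
        """Print the bit map, '#' for a lit pixel and '.' for a dark one."""
        for y in range(HEIGHT):
            row = ""
            for x in range(WIDTH):
                byte, bit = divmod(y * WIDTH + x, 8)
                row += "#" if framebuffer[byte] & (1 << bit) else "."
            print(row)

    for i in range(8):
        set_pixel(i, i, True)   # draw a diagonal line
    show()

Assigning more bits per pixel, as later machines with more memory did, simply enlarges the array and lets each pixel select one of several colours rather than just two.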