Figure: A Basic Computer System.
In a remarkably short period of time the computer has changed our world. During the first half of the twentieth century, economic growth in the world's industrial societies was fueled by large-scale manufacturing processes. Back then, most manufacturing was involved in converting natural resources into products that were then sold to the public. During that period, the industrialized countries of the world developed factories with assembly lines that were designed to efficiently build everything from household appliances to automobiles, ships, and locomotives. The countries that were best able to adapt their societies to produce these kinds of products became the "industrialized" societies and they dominated the world's economy.
But now, as we move into a new century, a relatively new invention, the computer, is leading to a shift in the world's power structure. Economic growth is now more likely to be fueled by the processing of information, the storage and delivery of facts and knowledge. We are now in the information age. While industrialized societies still build and sell the products of heavy industry - like automobiles and tractors - the computer has become an indispensable element in their design, manufacture, and distribution. Today, in the industrialized countries, much of the business and economic activity involves the computer. The computer is now involved in design work, the management of money, and the manufacture, marketing, and distribution of products. And, as the world's international markets become ever more competitive, the computer's role will continue to grow steadily (also see General Computer Question #2 below: How do we use computers?).
Almost all of our businesses now use the computer to maintain information about customers and products. Our schools use computers to teach and to maintain student records. Computers are now commonly used in medicine for diagnosis and treatment. In fact, every day it gets harder to find any type of business, educational institution, or government office that does not use computers in some way.
A variety of new types of specialized hardware and software tools have made the computer valuable for everything from the most repetitive tasks, such as scanning items in a supermarket, to incredibly detailed and complex tasks, such as designing spacecraft. Because computers can store accurate information, they are used to help people make better decisions. Because computers can continue to operate day or night, 24 hours a day, they are now used to provide a level of services to humans that was unknown before their invention.
Computers are also used extensively in the world of stocks and investments. Around the world, investors, investment brokers, financial advisors, and the stock exchanges themselves rely on huge databases of information about world financial markets. Through a worldwide network of computers, this information can be quickly updated as financial events occur. This computerized financial network has created a global market for currencies and financial instruments. Today, a change in a stock on the Hong Kong stock market will be known instantly by everyone who has access to the computer network.
All of us have by now experienced how the point-of-sale (POS) product scanning systems in stores have sped up the check-out process and made it more accurate by eliminating the need for checkers to punch in the price for each individual item. These point-of-sale systems not only make it more convenient for shoppers, but they also provide an accurate inventory of product availability for the store's management.
In the motion picture industry, the time required to create animation has been greatly reduced through the use of computers and special graphics software. The movie industry also uses computers routinely for a variety of special effects and specialized computer programs have even made it possible to "colorize" old black-and-white films.
Musicians are also taking advantage of advances in technology by using computerized electronic synthesizers to store, modify, and access a wide variety of sounds. Special word processing software has been created for scoring music and other applications give musicians a way to actually cut and paste stored sounds to create compositions.
In some jobs, for example, where assembly-line workers have been displaced by robots, employees have to be totally retrained. New technology-based manufacturing systems often require an entirely new set of worker skills, and people who are accustomed to doing their work in a particular way often find it difficult to make the changes necessary to fit in. Many of us are afraid of change until we learn more about what it means. Fear of technology is known as technophobia, and there are a fair number of people suffering from it these days. However, many people feel that as new generations grow up with computers and learn to use them in a variety of environments, they will feel more comfortable with the technology and will not suffer the discomfort of this transitional period.
In addition to giving consideration to physical ergonomic issues, the computer industry is also trying to improve the way we interface with the computer by making the computer easier and more intuitive to use. New software designs utilize standard ways to carry out common computer tasks. If the computer-user interface employed in software programs is the same or very similar from one program to the next, the user can generalize from skills they have previously learned. This approach helps to relieve some of the stress related to having to learn a new program.
Because today's input and output devices provide the interface between human users and the computer, ergonomic analysis often focuses on these devices.
Most people understand the benefits derived from electronic databases. For example, they understand that there must be a computerized record if they are to receive their Medicare payment. But some fear that this information could be misused. More and more personal information is now accessible via the internet. There are occasionally reports that agencies sell personal information for use as mailing lists by sales organizations. Would you be concerned if, for example, the motor vehicle department in your state began selling descriptive information gleaned from your driver's license application? In some states, this type of information is already available to businesses that specialize in putting together mailing lists based on personal characteristics and preferences of value to businesses that want to market their goods and services.
Nongovernmental agencies, such as credit bureaus, also maintain databases that contain personal information about us. Recently, some of these agencies have come under fire for selling our personal information to businesses for marketing purposes. Businesses are always looking for mailing lists that target people with particular characteristics, and there is often some company or group willing to sell this type of information. If you subscribe to a particular type of magazine, say a computer magazine, you can almost bet you'll receive a subscription offer for every other computer magazine that comes along. Or, if you enter a contest to win a car, don't be surprised if you receive a phone call telling you about a new condominium development in your area. Although some of these agencies have decided that a person's right to privacy takes precedence over a company's right to make money, many agencies are still selling this kind of information.
Some people are also concerned that by pulling together information from a variety of databases, it is possible for individuals to obtain comprehensive information about us. Many feel that it is one thing for someone to have information about our credit record, but it is another thing altogether if someone is able to collect all of the personal data that is available in all of the various databases and gather it into one computer record.
In response to problems related to privacy issues and computers, a number of laws have already been passed. The Freedom of Information Act, passed in 1966, requires that government agencies allow citizens to know what information is on file about them. The Fair Credit Reporting Act, passed in 1970, requires credit bureaus to allow people to inspect and challenge any information in their credit records. The Privacy Act of 1974 makes it illegal for government agencies to collect information on citizens for illegitimate reasons. The Comprehensive Crime Control Act of 1984 made it a crime to access computers without authorization in order to obtain classified information and protected financial information. The Electronic Communications Privacy Act of 1986 provides privacy protection for computer communications, including electronic mail, and makes it a federal crime to intercept these kinds of computer-based transmissions. Since these original laws were enacted, a number of follow-up acts have been introduced at both the national and state level to expand and clarify them.
It is particularly important to bolt down lightweight microcomputers and peripheral devices. A number of different manufacturers have produced security products that can be used to secure hardware. Although bolting down equipment will not always keep it from being stolen, it does make the equipment less attractive to thieves and may encourage them to look for an easier target.
Computer equipment can be protected from theft to some extent by installing it away from high-traffic areas in windowless rooms behind locked doors. Although this may not be practical for microcomputers, which are generally installed on the desks of individual users, it is possible to secure expensive mainframe computers or minicomputers in this way. Because these large computers are generally controlled and operated by computer professionals, it is possible to limit access to the equipment to those people who are directly responsible for maintenance and operations. Doors can have built-in security systems which require magnetically encoded cards to be used or special codes to be entered before someone can gain access to the room. Closed-circuit television cameras can be used to determine who has gained access.
It is also important to set up some kind of system to identify computer equipment in case it is stolen. A number of methods can be used to permanently label computers and peripheral devices with a unique identification number. Most computer equipment has serial numbers which can be used for this purpose, but often these numbers are on plates that can be removed from the computer. A descriptive list of all equipment, including serial numbers, should be kept for insurance purposes. These numbers can also be used to identify a computer if it is stolen and then recovered by police. The list of equipment showing identification numbers should be stored in a safe place. It is also helpful to have photographs of equipment to show to insurance companies in case of theft and to maintain sales receipts or other types of proof of purchase.
The theft and illegal use of data is most often associated with large computer systems that are shared by many users. This type of crime may entail the access of data by unauthorized users or the illegal use of data by authorized users. Although many organizations work hard to protect their data from illegal access by someone outside their organization, statistics show that most often the person committing a crime related to data is an employee of the organization, an insider. People who access computers illegally from outside of the organization have been nicknamed hackers, but computer hobbyists who like to explore the lesser-known capabilities of computers are also referred to as hackers. It is probably more appropriate to refer to those who access data illegally simply as computer criminals. What happens once the criminal breaks into a system depends on their motivation. For some it may be enough just to know they were able to get past the security measures and gain access to the system. For others, the intent is to make an illegal copy of the data stored in the system, alter it, or even erase it. The computer criminal's purpose may be to sell the data or use the information illegally. There are also ways to profit from gaining access to banking or credit information. In some cases, the criminal may be trying to damage the organization that stored the data by damaging the data itself.
Most organizations protect their important data by requiring each employee to enter a special password each time they use the data system. This password protection not only limits access to the data, but it also identifies each user each time the data system is used. However, someone with a great deal of knowledge about computers might find a way into an organization's data system despite a password protection system. Upon analysis, many organizations have been found to store important information in computers without the use of any data protection system. It is especially common to find unprotected data stored on microcomputers on individual desks. Even when important data is protected on a large computer with a secure password system, legal users of the data may have downloaded the data to a personal computer's storage system, leaving it unprotected.
There are basically two ways to keep computer virus programs out of your computer. The first has to do with an awareness of how the virus gets into your computer. Computer viruses are usually written to "ride along" with another computer program. When a computer user inserts a disk into the computer with an "infected" program on it, the virus is duplicated in the computer and can then be transmitted to other disks. (This can also happen when programs are downloaded from other computer systems.) Some virus programs are written with such sophistication that they can detect whether or not the computer has been previously infected. If the system has not been infected, the virus program is triggered and goes into action. Often, if you are aware of how computer viruses are transmitted, you can avoid them by being careful about which programs you use. Unless a program is a legitimate commercial product from a known supplier, it should be regarded with suspicion. This leads us to the second way to protect your computer system against virus programs. If you suspect that a program is a carrier of a virus, you can analyze the program (or the entire disk) using a special virus detection program to see if it contains any known viruses. Some newer virus detection programs are capable of analyzing disks or programs to look for "suspicious" elements and may even be able to detect the possibility of a new, unknown type of virus being present. Most virus detection programs can be used to eliminate a found virus from a disk. Many computer users have installed these virus detection programs on their computers and use them to analyze every disk that is inserted into the computer.
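The classic virus detection described above is signature-based: the scanner computes a fingerprint of each file and compares it against a database of fingerprints taken from known infected files. The sketch below is a minimal illustration of that idea in Python; the signature set is a made-up assumption for the example, and real scanners use far more sophisticated pattern matching than whole-file hashes.

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of files known to be
# infected. (The single entry here is the digest of an empty file, used
# only as a stand-in for this illustration.)
KNOWN_VIRUS_SIGNATURES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_digest(path):
    """Compute the SHA-256 digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_file(path, signatures=KNOWN_VIRUS_SIGNATURES):
    """Return True if the file's fingerprint matches a known signature."""
    return file_digest(path) in signatures
```

A scanner built this way can only recognize viruses it already has signatures for, which is why the text notes that newer products also look for "suspicious" elements to catch unknown viruses.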
Most importantly, this type of procedural approach establishes the proper way for users to access data. Data-access methods that vary from the set procedures can signal the need for a closer inspection. Because a fair amount of computer crime occurs at night when most of the employees have gone home, it is important to have special controls in place for overtime workers. Remote access can be controlled by having callback devices in place which hang up and return the call to the calling number.
A password is a special set of characters that are assigned to users of a computer system to control access to programs and information. Passwords can be used to prevent unauthorized users from accessing data or programs. They can also be used to control the level of interaction a user may have with system files. For example, some users can only view information in files, while others can be authorized to change or modify that information. Passwords are frequently used to protect the individual work of users in a network system and should also be used to provide user security on bulletin boards and e-mail systems.
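The password-and-access-level scheme described above can be sketched in a few lines. This is a minimal illustration, not a production design; the user names, the two access levels, and the hash iteration count are assumptions made for the example. Passwords are stored as salted hashes, so the stored table never contains the passwords themselves.

```python
import hashlib
import hmac
import os

# Illustrative access levels: "read" users may only view data,
# "write" users may also change or modify it.
ACCESS_LEVELS = {"read": 1, "write": 2}

class UserDirectory:
    def __init__(self):
        self._users = {}  # name -> (salt, password hash, numeric level)

    def add_user(self, name, password, level):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._users[name] = (salt, digest, ACCESS_LEVELS[level])

    def check(self, name, password, wants="read"):
        """True only if the password is correct AND the level permits the action."""
        if name not in self._users:
            return False
        salt, stored, level = self._users[name]
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(attempt, stored) and level >= ACCESS_LEVELS[wants]
```

A user authorized only to view files passes the `read` check but fails the `write` check, matching the view-only versus modify distinction described above.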
To further protect the data and the programs that are stored in a computer system, data encryption methods can be used. Data encryption scrambles files so that even if someone is able to get hold of a password, they will not be able to use programs or make sense of data. A special decoding program is required to unscramble the encrypted data before it can be used. This security method can be used to protect data on floppy disks, fixed disks, and other types of magnetic media.
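As a toy illustration of the scrambling idea, the snippet below XORs data with a keystream derived from a key; running the same function again with the same key recovers the original, while the wrong key produces gibberish. This construction is purely illustrative (a real system would use a vetted cipher such as AES), and the key and message are made up for the example.

```python
import hashlib

def _keystream(key: bytes):
    """Derive an endless stream of key bytes by chained hashing (toy construction)."""
    block = hashlib.sha256(key).digest()
    while True:
        yield from block
        block = hashlib.sha256(block).digest()

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with the next keystream byte to scramble it."""
    return bytes(b ^ k for b, k in zip(data, _keystream(key)))

# XOR is its own inverse, so the same operation decrypts.
toy_decrypt = toy_encrypt
```

This mirrors the description above: the stored bytes are unusable on their own, and a decoding step with the correct key is required before the data makes sense again.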
Copyright laws regarding software are very similar to those related to books and other sources of information. Just as you are not supposed to make a copy of a book and sell it to someone else, you can't sell copies of software. Copyright laws regarding what you do with your own copy of a program for your own use are not as clear. In most cases, licensing agreements for off-the-shelf software packages allow you to make at least one backup copy of the software. If the software is provided on disk, it is always a good idea to put the original disks away in a safe place and use the backup copy. Then, if something happens to the backup copy, another copy can be made from the original disks.
Although there are many restrictions on making multiple copies of commercial programs, in practice making a working copy for your own personal use is generally not restricted. However, making multiple copies of a program for use on different computers in an organization is generally not allowed and can result in legal proceedings being initiated against the person who makes the copies and against the organization that allows it. When a large organization uses the same software on a number of different computers, a separate copy of the program must be purchased for each computer that it is used on. It is up to that organization to maintain records of how many programs were purchased and which computers used which copy.
When software is used on a number of computers linked together on a network, a copy must usually be purchased for each computer attached to the network. However, many software manufacturers now sell a special network version that licenses their program for a set number of computers attached to a network. Alternatively, some software manufacturers sell a site license that licenses their program for use throughout an entire organization. By using the multiple-copy or site-license approach, the software manufacturer can save money because they don't have to provide a large number of separately packaged products. The buyer also saves money by purchasing a number of copies of the software at a discounted price.
To avoid illegal copying of programs, some software manufacturers have devised elaborate copy protection schemes that are designed to keep users from making illegal copies of their products. Some of these copy protection methods require the user to keep the original diskette in a drive at all times. Others allow the user to make one copy only. Because these copy protection schemes can, at times, make computing more difficult, users have resisted them. For that reason, many companies have stopped using copy protection methods and have focused on more positive solutions such as offering inexpensive upgrades and special support systems for registered users. However, many programs now require the user to enter a name and the product's unique ID number whenever the program is installed on a new computer.
Because businesses and other organizations put themselves at legal risk if they allow illegal copying and use of software, they may be somewhat more reluctant to do so than individuals. However, individual users may feel that they can get away with it. Nevertheless, individual users should consider that the money they pay for software goes back into the company, where it can be used for the development of other programs that may be of value to them. If everyone legally purchased their software, the company would profit from increased sales and the price of software might well be reduced.
Another category of inexpensive software, called shareware, may be copyrighted, but generally the developer allows users to make copies without an initial charge. However, if you intend to use the software beyond a brief tryout, the developer requests that you pay for the program. Shareware generally costs quite a bit less than off-the-shelf software and is frequently sold at computer swap meets and conventions, through the mail, and over the internet.
If users (individual users as well as businesses and government agencies) do not use computer data in a responsible way, if they use the power of computers or the knowledge inherent in the possession of computer-stored information without regard for individuals, more and more laws will be passed to restrict the use of computers and computer data. Such restrictions can be very detrimental to the free flow of information and to the development of new computer tools that will enrich our lives. As in all issues of ethics, each individual must decide how to act. It is important for our society to educate people at an early age about ethical issues related to the use of computers in order to protect the free flow of information that we currently enjoy.
A number of universities now offer an introductory computer course that must be taken by all students. These courses are designed to introduce students to basic computing concepts, such as those presented in this text. In addition, these courses often provide hands-on tryouts of the most popular computer applications. By providing the student with experience using basic software applications, such as word processing, database management, graphics, communications, and spreadsheet programs, these courses introduce some of the many advantages of getting work done through the use of a computer.
Another educational resource for computer training is the variety of workshops and seminars offered by companies that specialize in this type of training. These courses often cover one computing topic intensively for one or two days. Sometimes seminars are offered as part of the program at computer conferences that are put together by a professional group. These seminars are usually related to the main focus of the conference. Attendees can pay a fee to attend the conference and then select from the list of training sessions according to their own needs or interests. Companies are frequently willing to pay for employees to attend these workshops and seminars when the training provides the employee with new computing skills.
Alternatively, companies sometimes hire outside consultants to come into the company to conduct computer training with employees. This kind of training is particularly valuable if a company has standardized on a particular application program and wants to bring all of its employees up to speed right away.
There are also now a number of companies producing videotapes and computer-based tutorials that cover everything from general computer literacy to very specialized subjects, such as how to use one particular application program. Since these training programs include visual components as part of the training process, they can be very helpful for supplementing written training material. Some manufacturers of hardware and software products have produced videos and computer-based tutorials for their own products. Others are produced by companies in the business of selling training materials.
As the use of computers has become more common, many businesses now require potential employees to have computer skills before they are hired.
Technical support people help in the installation of hardware and systems software. After installation, they are involved in maintenance of the equipment. They also maintain networking hardware and data communications systems. These employees should be familiar with diagnostic procedures and electronics, and they should be able to read and understand technical manuals. These jobs require at least two years of college, but often a bachelor's degree is preferred.

Customer support technicians are needed by many different types of companies, which hire employees to help customers use technical devices. For example, manufacturers of computer hardware and computer software usually hire technical support personnel to answer users' questions related to the company's products. These people need to know not only about their company's products, but also how the products interface with other systems. Retail stores that sell computer hardware and software may also have positions for technical support people in order to keep their customers satisfied. Technical support personnel usually have a background in computer technology before they are hired; nevertheless, since these positions require knowledge of a great variety of potential hardware and software problems, these employees will usually receive additional specialized training.
Technical writers, those who can write instructional manuals describing how to use computers and related technologies, are always in great demand. All of today's hardware and software products include user manuals, reference guides, and often a variety of other technical documents. Technical writers may also work with computer trainers to produce training materials, and they may be called upon to produce specification sheets, product information sheets, brochures, and newsletters.
The technical writer must be skilled at translating technical jargon into a simplified language that can be readily understood by users of the product. Today, the technical writer is frequently called on to produce camera-ready copy for their employers. This requires special training in the use of desktop publishing and graphics programs, as well as knowledge of page design and a variety of other publishing skills.
For large projects, the technical writer may also become a project manager who works with technical editors and document-production staff during the production of the manual. As part of the production of technical documents, technical specifications must be deciphered, interviews with engineers and programmers must often be conducted, and arrangements must be made with data-entry people, desktop publishers, artists, photographers, and printers. The more of these skills a technical writer has, the more they can offer to potential employers. Sometimes technical writers are hired as outside consultants. Since technical writers must demonstrate knowledge of computer technology and possess excellent writing skills, they often have extensive experience and considerable education. An applicant for a technical writing position must usually show potential employers copies of manuals they have previously written.
With the proliferation of hardware and software products designed to facilitate the creation of high-quality graphics, there is a growing demand for people who have the skills to put them to use; these people are known as computer artists. Ad agencies and design houses are now using microcomputers to create professional marketing documents and other types of advertising. Magazines, newspapers, and book publishers are hiring designers and graphic artists who are able to do their work on computers. Computer artists usually have completed specialized training in art and in the use of computer graphics programs.
Many types of organizations are now hiring trainers to develop and implement computer-based training programs for their employees. These training programs may be entirely or partially delivered by computer. The designer of a computer-based training program, the computer-based training specialist, must have a great deal of knowledge about the topics being taught and about the hardware and software that is used in the training. These specialists must have education and experience not only in computer technology, but they must also have skills as a teacher. They must have training in instructional technology, in instructional methods, and they must possess excellent communication skills, both verbal and written. They may also be responsible for developing the training manuals and instructional materials that often accompany computer-based training programs.
Customer support staff are often employed by manufacturers of computer hardware and software to provide information and advice to customers. If the customer is purchasing a complex computer system, these employees may have to spend a great deal of time at the customer's office during installation. They are there to assure that no problems arise during and after the installation. These employees must know how to work with programmers and engineers. Usually they have experience and training in systems analysis and programming. They may also be involved in training the customer's employees to use the products.
There are a large number of jobs available for sales people in the computer field, people who have the skills to sell computer hardware and software. These people may work for the manufacturers of products or they may work for retail or wholesale houses that sell hardware or software products. The growth of the computer industry has also resulted in technical sales positions with publishers of technical books and magazines and a number of other businesses related to the use of computers. In addition to having skills in sales, these employees must have knowledge of the products they are selling.
Database managers (database administrators) are employees responsible for the development of an organization's database-management system, or for the maintenance of a system already in place. They generally do not have to be hardware specialists, but they must have completed extensive training on using database-management software. In addition, they must have excellent communication skills because they will often be working with users to solve problems related to the organization's data. These positions generally require at least two years of specialized training. An applicant with a college degree will have an advantage when applying for these positions.
Because data is so important to businesses and other organizations, it is important to have a mechanism for checking the accuracy of data input into the system. Data control employees are responsible for double-checking data that is input by other people. They keep records and conduct periodic checks to be sure procedures are being followed. These employees usually have completed at least two years of training at a college or technical school.
In addition, an organization may hire one or more individuals who are responsible for managing and protecting the organization's data storage media. These employees who keep track of active and backup copies of data may also be responsible for the protection of data and programs against theft or damage. Often these employees need at least two years of college or technical school training.
Computer operators keep complex computer systems running. They may be involved in scheduling, data analysis, and the maintenance of program and data files. There are a number of different levels of these positions. The entry-level position usually requires at least a degree from a two-year college or training at a technical school. Experience and on-the-job training can lead to advancement to higher-level positions. A college degree in a technical field is generally required for the highest-level (management) positions in computer operations.
The development of new hardware and software and the installation and
maintenance of computer systems are areas that are handled by professionals
with extensive training in computer science; they are known as computer professionals.
Engineers and programmers (software engineers) are responsible for the development of hardware and systems software. They are generally hired both by manufacturers of commercial computer software and by large organizations that develop software in-house. These employees may work with a systems analyst in the design and implementation of data-management systems. Engineers and programmers are generally classified into trainee, junior, or senior (lead) levels. Trainees may have as little as two years of college, but more often a college degree is required. Often trainees have little or no practical experience with the organization's computer system and must therefore work under the supervision of others. With more experience and specialized education, trainees can move to the junior level. Often additional specialized education, such as a graduate degree in a technical field, along with a great deal of experience is required before a junior employee can become a lead engineer or senior programmer.
Systems analysts are often responsible for developing and implementing new computer-based, data-management systems. They are also responsible for maintaining and implementing changes to existing computer systems. A systems analyst may be an engineer or a programmer and they often have specialized skills related to the overall design of an organization's computer system. They must also have the organizational and communication skills (written and verbal) to serve as a liaison between all the users of the computer system. This person must have education and experience in computer technology and should have knowledge about computer programming and training in the type of organization where employed. A bachelor's or master's degree in computer science with additional training in business administration or a related technical field may be required.
In addition, experienced managers of the departments that are responsible for overseeing an organization's computer operations are always in demand. There are a number of jobs available for people-oriented individuals who want to be involved at the management level. Managers are needed throughout the computer industry as well as in companies and organizations that have installed extensive computer systems. Managers of operations, information systems managers, database managers, managers of systems development, product managers, managers of technical support, and managers of end-user support are all needed in today's computer-using organizations.
Other problems related to the use of computers are less mysterious: eyestrain, headaches, backaches, neck pain, and wrist pain can result from extended use of keyboards, monitors, and mice. If an employee is going to use a computer for long periods of time, it is important that employers provide a good ergonomic design for the work environment. Ergonomics involves the study of how humans use devices such as computers. Ergonomic research has yielded many guidelines for the design of safe and comfortable computer workplaces, and these guidelines should be utilized by all businesses that use computers.
Some ergonomic considerations are summarized in the table below.
Chair: Should be soft and comfortable (but not too soft). Should allow the user to adjust the seat height, arm rests, and the angle and height of the backrest. The backrest should support the curve of the user's spine. Arm rests should allow freedom of movement and should be at a height that keeps the arms at a 90-degree angle while typing.
Wrist support: Should allow the wrists to rest at keyboard height while typing.
Arm position: The user's arms should be at a 90-degree angle with the elbows resting on the chair's arm rests.
Monitor: The top of the screen should be level with the top of the user's head. The distance from the user's eyes to the screen should be between 30 and 48 inches. The light source should not come from behind the computer screen, and it should not reflect in the screen. Glare protection may be required for some screens.
Keyboard: Should be adjustable to differing angles and heights.
Ergonomic Considerations in the Computer Workplace
A Decentralized System
The advantage of a decentralized system is that users have more immediate
access to information and do not have to wait for processing time as they
may have to with a centralized system. Also, it may be easier to initiate
changes in the system if the changes can be made at the local level rather
than being referred to a centralized computer center. However, the decentralized
system can make it more difficult to share information because different
users may be using different types of hardware and software at the local
sites. This can be solved to some degree by establishing an information
center that is in charge of establishing hardware, software and
procedural standards. The decentralized approach may be cost-effective
for individual departments because smaller, lower-cost computers can be
used. However, the overall cost to the entire organization may be greater
because an entire system must be purchased and maintained at each location.
A Distributed System
While the use of networked computers solves some of the problems of decentralized systems, the network's communication systems are themselves complex and may require additional software, hardware, and technical support personnel. Many organizations employ one or more trained network managers to oversee network communications and to develop data-sharing programs and procedures. In addition, most organizations now must develop specialized programs and staff to support their internet-based information systems.
Another computational device, known as Napier's Bones, is similar in design to the abacus. Designed by John Napier in the early 1600s, it consisted of multiplication tables inscribed on ivory rods that looked like bones. It was used for mathematical calculations including multiplication and division and is similar in principle to the modern slide rule.
Another notable device on the path to modern computing was invented in 1642 by Blaise Pascal, a French philosopher and mathematician (the Pascal programming language is also named after him). Pascal's adding machine used a hand-powered mechanical system to add and subtract numbers. The system of dealing with numbers in Pascal's device is similar to the system used in today's computers and it is worth noting that, at the time, the device was seen as a threat to the livelihood of those employed to calculate numbers.
Pascal's device was not improved upon until 40 years later when a German, Gottfried Wilhelm von Leibniz, developed a device that was not only able to add and subtract, but was also capable of carrying out multiplication and divisions (as a series of repeated additions and subtractions).
Another device, the Jacquard loom, may not, on first analysis, seem related to the early computational devices. But the French inventor, Joseph Marie Jacquard, developed a device to automate rug weaving on a loom in 1804. The device used holes punched in cards to determine the settings for the loom, a task that normally required constant attention by the loom operator. By using a set of punched cards, the loom could be "programmed" to weave an entire rug in a complicated pattern. This system of encoding information by punching a series of holes in paper was to provide the basis for the data-handling methods that would eventually be used in the early computers.
Despite the great success of Jacquard's loom, many were disturbed by this "high tech" invention when they learned that it could completely eliminate jobs that had been done by humans for centuries. As a result, in England, a group that called themselves Luddites smashed some of the automated looms as a protest against mechanical innovation and the related threat to their jobs.
A few years later, in England, Charles Babbage proposed the design for a new calculator that was in many ways the forerunner of today's computers. In 1822, Babbage built a working model of the difference engine and received a grant from the British government to develop a full-scale version. Unfortunately, he soon discovered that the parts he needed could not be manufactured to the tolerances he required.
In 1842, Ada Augusta Byron, the daughter of the poet Lord Byron, became interested in Babbage's project. She was a trained mathematician and saw the potential of his device (the Ada programming language that is supported by the U.S. Department of Defense was named after her). She helped provide funds to continue research for the project and she collaborated with Babbage on some of his scientific writings. Today she is credited with coming up with the concept of a programmed loop, a way to carry out the sequence of steps that are part of a mathematical calculation. Based on her published descriptions of the process, many consider her to be the world's first programmer.
Forty years later, Dr. Herman Hollerith, an employee of the U.S. Census Bureau, put Jacquard's punched-card concept together with some of the same kind of ideas that had been proposed by Charles Babbage and Ada Byron to solve a real-world problem. The Census Bureau realized that census calculations were taking so long that one census would not be completed before it was time to undertake the next. Hollerith proposed a solution based on what he termed a census machine that would count data fed in on punched cards. He chose cards about the size of dollar bills to be fed into a hand-cranked machine. Using Hollerith's machine, the census was tabulated in less than half the time it had previously taken.
Based on his Census Bureau success, Hollerith formed the Tabulating Machine Company in 1896 and began designing census tabulation machines. The company eventually evolved into the International Business Machines (IBM) company, the world's largest computer company.
Although computational machines continued to evolve, the invention of modern computers could not come about until the supporting technologies of electrical switching devices were in place. By 1937, electricity was in general use in most of the world's cities and the principles of radio were well understood. Using these new tools, several researchers were working on electrically powered versions of the earlier computing devices. Among them was Howard Aiken of Harvard University. Working with the support of the IBM company, in 1944 he completed the basic development of a machine that was dubbed the Mark 1. The machine, which was also known as the Automatic Sequence Controlled Calculator, is now seen as the first full-sized digital computer (smaller-scale electric calculating devices had been created earlier). The Mark 1 filled an entire room and weighed 5 tons, included 500 miles of wiring, and was controlled by punched paper cards and tapes.
Despite the many advances in computational technology represented by this new machine, it was very limited by today's standards. It was used only for numeric calculations and took three seconds to carry out one multiplication. However, with the world-wide expansion of industrial technologies that accompanied World War II, others were proceeding along the same path established by the Mark 1. For example, John Mauchly and J. Presper Eckert were developing a large-scale computing device known as the Electronic Numerical Integrator and Calculator (ENIAC) at the University of Pennsylvania with the support of the U.S. government. Based on mechanical switches and radio vacuum tubes, this device is now seen as the first electronic computer. The huge machine consumed so much power that it often caused the lights in nearby Philadelphia to dim. But it was far more capable than Aiken's Mark 1 computer: it could perform thousands of calculations per second and was used for a variety of purposes including scientific research and weather prediction.
Computers of the first generation were all very large, room-sized computers that used thousands of vacuum tubes (the same kind of glowing glass tubes that were used in radios of that era). Their design was functional for the time, but their role in business was limited by three factors - their size, the heat they generated, and their reliability problems. And, during this period, new methods of programming evolved along with the hardware developments. The programs for the first large-scale computers were generally changed via a slow, detailed changing of the computer's circuits. Later John von Neumann joined Mauchly and Eckert and his ideas for designing a programmable computer were incorporated into their design (that method of using stored programs is still used in computers today). To transfer data and programs, a number of devices were invented that were based on punched paper tapes or punched cards.
In 1964 IBM changed the way computers were sold by introducing a "family" of computers known as the System 360. The family consisted of six different computers, but programs written for one of them could also be used on the others. IBM planned to sell an entry-level computer to a company and then later sell them an even more powerful computer as their business grew. The company could buy more computing power without rewriting their software. This plan was very successful and was a key to IBM's growth.
As the market for computers grew, so did the variety of computing solutions. The Digital Equipment Corporation developed a smaller, less costly computer, the PDP-8. Whereas the first generations of computers had all been huge, room-sized computers known as mainframes, these new smaller computers became known as minicomputers. The availability of these lower-cost, smaller-scale computers meant smaller businesses could computerize. Eventually, computerization became a key to business success. It also meant that a new group of users began to deal with computers. Prior computer users had been professionals who learned about computer design and computer programming in advanced courses. Now, clerical employees were expected to enter data into the computer. This required a rethinking of how to design the interface between the computer and the end user. That analysis of the human-computer interface is still going on today.
However, the need for smaller and faster computers meant that even the integrated circuits of the third generation of computers had to be made more compact. Fourth generation computers are based on large-scale integration (LSI) of circuits. New chip manufacturing methods meant that tens of thousands and later hundreds of thousands of circuits could be integrated into a single chip (known as VLSI for very large-scale integration).
Nevertheless, at that time, computing was still mostly seen as a time-sharing process. One mainframe or minicomputer could service many users, each with a terminal that was connected to the computer by wire. But during this period, a new concept of "personal" computing was being developed. And, surprisingly, this new type of computer was not being developed by the well-established computer companies. It was the electronics hobbyists and a few fledgling electronics companies that were beginning to create computing devices that used small, limited processors, known as microprocessors. These microprocessors were being built into small computers known as microcomputers that were designed to be used by only one user at a time. For that reason, most businesses did not at first recognize their value. To users who had grown up with expensive room-sized mainframes that served the needs of the entire organization, the idea of a small computer that could serve the needs of only one user at a time seemed more like a toy. Many believed that these new "personal" computers would continue to be only a hobby for "electronics nuts." But this view was soon to change as new microprocessor designs began to deliver considerable computing power in a very small package.
Although several scientists were working with microprocessor technology during this period, the best known team was working for the Intel Corporation. The team of Ted Hoff, Jr., Federico Faggin, and Stan Mazor was in the process of expanding on the sophisticated electronics that were being used in the very small Japanese calculators. They reduced all the processing power needed for basic computing down to a set of four small circuits, or chips, one of which was to become known as the Intel 4004 microprocessor. Several special-purpose microprocessors followed and in 1974 Intel produced the 8080, their first general-purpose microprocessor.
During this period Steven Jobs and Steven Wozniak began putting together kit computers in Jobs' garage. These personal computers sold very well and their endeavor eventually became the Apple Computer Corporation, the most successful of the early microcomputer companies.
But it was the world's largest computer company that legitimized the personal computer (PC). In 1981, the International Business Machines (IBM) Corporation introduced their own microcomputer. Its widespread acceptance by the business community instigated a flood of copycat PCs. During the next few years just about every company in the world that had anything to do with electronics produced a microcomputer, most of them very similar to the IBM PC.
During the 1980s, with the spread of specialized software, personal computers found a role in almost all organizations. As many businesses purchased an IBM PC (or one of its work-alike "clones"), it gradually became something of a standard for PC design. This much-needed standardization of PC design meant that programs that ran on one brand of microcomputer would also run on other similar types of PCs that used the same microprocessor.
Computer programming methods continued to evolve during the fourth generation as new high-level programming languages continued to be developed that were both easier to use and more closely related to specific computer tasks.
Others believe that the emergence of the internet and enhanced communications systems (including wireless) will make the concept of computer generations irrelevant. The overriding trends in computer evolution - smaller, faster, more powerful - continue today. Today's little microcomputers are far faster and more capable than any of the earlier generation computers; today's PCs are even more powerful than most of the huge mainframe computers of the past. But today's mainframe and minicomputers are also more powerful and they now work in close concert with PCs rather than using the dumb terminals that used to be attached to large computers.
Each new generation of computers is faster, includes more memory and storage, and runs constantly improving operating systems. Software development methods are improving just fast enough to keep up with the new computing capabilities and, despite those added capabilities, new user-computer interface designs are making computers easier to use.
Perhaps the most important of today's trends is the fact that computers and the internet are both becoming a part of our daily lives. As computers continue to be used in marketing, retailing, and banking, we will grow ever more accepting of their presence. As computers are incorporated into other machines, we may find ourselves operating a computer when we drive, buy a can of soda, or when we want a tank of gas or a bite to eat. And as the computer's presence grows in our society, it will become far easier to use. As this history of computing has demonstrated, it is the needs of humans that continually drive the development of new computers and new computing technologies.
Businesses that process and store large amounts of data will generally use one or more mainframe computers. For example, banks use mainframes to keep track of checks and transactions at both human and automated tellers (ATMs). Libraries use mainframes to keep track of the books on hand and the ones that have been checked out. Businesses of all sizes use mainframes to maintain inventories, accounts, and payroll.
The first computers were mainframes. Although they were very slow - even when compared to today's low-cost personal computers - the early mainframe computers were very large and very expensive. Nevertheless, they were able to process data faster than anything previously available.
Minicomputers are generally thought of as medium-sized computers: while the mainframe may do the data processing and data storage for the widespread offices of an entire large company, minicomputers are generally limited to data processing and storage in one location (often for one department or for a smaller company).
Like the mainframe computers, minicomputers can serve a number of different users at the same time, but because of their more limited capacity and speed, the computer's response time may be noticeably slower when many users are connected.
Minicomputers sometimes use operating systems designed specifically for them, but many use either the UNIX or Linux operating system (also see Computer Software Question 2: What is systems software?).
Today, micros come in all sizes and shapes. Some have grown too large to fit on desks and now reside under the desk. On the other hand, some of the new microcomputers are so small that you can carry them in your pocket (but they are still referred to as microcomputers). Sometimes more powerful microcomputers are tucked away in the back room where they serve the function of a file server for a group of networked microcomputers.
Regardless of their size and appearance, all microcomputers are, basically, "personal"; that is, they are designed to be used by one person at a time. This was the revolutionary idea that PCs brought to the computer world. Up to that point, no one could have conceived of the idea that individual users might have access to their very own computer. Previously, it was part of the very concept of computers that they were big, expensive, and that they were to be shared by many people. Unlike mainframes and minicomputers, microcomputers generally are not "host" to several users at the same time.
Workstations can be very expensive so they are usually reserved for applications that would overtax the capabilities of a standard PC. Often they are attached to a minicomputer or a mainframe so that data can be downloaded (transferred by wire) from these larger, host computers.
Today's CPUs are incredibly complex devices. To understand them, it
is best to view them in terms of their function. Functionally, the CPU
is composed of two main parts, the control unit and the arithmetic logic
unit (ALU).
The CPU and processing system is illustrated below.
The CPU and Processing System
Both the control unit and the ALU contain registers. They are temporary storage locations for managing instructions and data as they are being processed. For example, the ALU might temporarily store the result of one arithmetic calculation in a register while it performs a second calculation using that result.
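The role of registers can be sketched in a few lines of Python. The register names and the mini instruction set below are invented purely for illustration; real CPUs implement this behavior in hardware, not in software.

```python
# A toy model of CPU registers: tiny named storage cells that hold
# intermediate values while a sequence of instructions executes.
registers = {"R1": 0, "R2": 0}

def execute(program):
    for op, reg, value in program:
        if op == "LOAD":      # copy a value into a register
            registers[reg] = value
        elif op == "ADD":     # ALU adds; the result stays in the register
            registers[reg] += value
        elif op == "MUL":     # ALU multiplies; the result stays in the register
            registers[reg] *= value

# Compute (6 * 7) + 8, holding the intermediate result (42) in R1
# so it never has to be written back to memory between steps.
execute([("LOAD", "R1", 6), ("MUL", "R1", 7), ("ADD", "R1", 8)])
print(registers["R1"])  # 50
```

The key point the sketch illustrates is that the intermediate result of the first calculation stays in the register and is immediately available to the next instruction.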
Data stored on these chips remains in storage until the computer changes it by changing the pattern or until the computer is turned off. Without power, the circuits in the chips change back to their normal off-state and all the data is lost. For that reason, this type of memory is known as volatile and it is contrasted with more permanent types of storage systems that are known as nonvolatile. This type of primary storage is also known as random-access memory (RAM) and the chips are referred to as random-access memory chips. However, the term "random" may not be the best way to refer to this type of memory. While almost all of today's computers use some random-access method of storing data (that is, the computer can retrieve data from wherever it is stored, randomly), the term RAM is reserved for the computer's primary, chip-based memory system.
A computer program will usually be stored in secondary storage. When that program is started, key instructions related to that program's functions are transferred from permanent storage to main memory. The program will usually provide a way for the user to load data from secondary storage to be used while the program is in operation and a way to save data back to secondary storage after processing.
Since both data and processing instructions can be temporarily stored in the chip-based primary memory system, it is not necessary for secondary storage systems to be as fast as main memory. The constant data transfers between the CPU and main memory take place in a few billionths of a second (nanoseconds). Data transfers to and from secondary storage are more likely to be measured in thousandths of a second (milliseconds), a considerably slower rate of transfer.
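The size of that speed gap is easy to underestimate, so a back-of-the-envelope calculation helps. The specific figures below are illustrative assumptions in the ranges the text describes, not measurements of any particular system.

```python
# Rough comparison of access times: main memory is measured in
# nanoseconds, secondary storage in milliseconds.
main_memory_access = 100e-9   # assume ~100 nanoseconds per access
disk_access = 10e-3           # assume ~10 milliseconds per access

ratio = disk_access / main_memory_access
print(f"Secondary storage is roughly {ratio:,.0f} times slower")
```

Even with generous assumptions, the gap is on the order of a hundred thousand to one, which is why programs copy data into main memory before processing it.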
In these systems, a disk drive is used to rotate the disk. Although a disk drive may appear to be a fairly simple device, it is actually a complex system with several devices that must work in concert. Based on a system of precise timing, a read head hovers above the spinning diskette surface to "read" magnetically encoded data from the disk. The data is encoded on the diskette by a separate device, the write head, that also floats just above the surface of the spinning diskette.
Before the computer can write data to a disk, the disk must be formatted. The formatting process organizes the disk's magnetic medium into tracks. Some disks have as few as 40 tracks, but other special disks have as many as 500.
Today's diskettes for personal computers vary considerably in their storage capacities. These diskettes may have 40 tracks, 80 tracks, or more. More tracks mean more storage capacity, but it also means that the data on diskettes with differing numbers of tracks cannot be read by disk drives that do not have the capability to read or write that many tracks. This can cause problems when you are using diskettes to transport data from one computer to another.
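The relationship between tracks and capacity is simple multiplication. As a worked example, the values below describe the common 3.5-inch, 1.44 MB diskette layout (two sides, 80 tracks per side, 18 sectors per track, 512 bytes per sector).

```python
# Capacity of a formatted diskette:
# sides * tracks per side * sectors per track * bytes per sector.
sides = 2
tracks_per_side = 80
sectors_per_track = 18
bytes_per_sector = 512

capacity_bytes = sides * tracks_per_side * sectors_per_track * bytes_per_sector
print(capacity_bytes)          # 1474560 bytes
print(capacity_bytes // 1024)  # 1440 KB (marketed as "1.44 MB")
```

Doubling the number of tracks doubles the capacity, but a drive built for 40 tracks cannot position its heads finely enough to read an 80-track diskette, which is the incompatibility the paragraph above describes.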
While magnetic tape is an effective tool for the storage of data that is not likely to change very often, it is not very useful when data is constantly being altered. The reason is that data must be sent to and retrieved from the tape sequentially as the tape runs through the drive. Finding data on one small section of the tape can be a time-consuming process, especially when using very long tapes. Nevertheless, tape storage is still widely used with large computer systems and it is useful for long-term storage of data that is not often changed.
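The cost of sequential access can be made concrete with a toy model. The function names and record counts below are invented for illustration: reaching record N on a tape means passing over every record before it, while a disk can seek directly to the record's location.

```python
# A toy model of sequential (tape) versus direct (disk) access.
def tape_accesses(record_index):
    """Sequential access: read each record up to and including the target."""
    return record_index + 1

def disk_accesses(record_index):
    """Direct (random) access: one seek and one read, wherever the record is."""
    return 1

print(tape_accesses(9_999))  # 10000 reads to reach the 10,000th record
print(disk_accesses(9_999))  # 1
```

For data read once from start to finish (such as a nightly backup) the sequential penalty never applies, which is why tape remains practical for archival storage.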
Like magnetic disks, optical disks use a spinning platter; but information is stored on the disk by a laser that burns tiny pits into the surface of the disk. Then, to retrieve data from the disk, a laser-based mechanism can detect information coded in the pattern of pits in the disk surface.
This method of storing and retrieving information on a spinning disk is very much like the method used with magnetic disks, but, since the data is encoded by physically burning patterns into the surface of the disk, it can't be accidentally erased. The drawback is that these disks can't be re-recorded. Therefore, they are more appropriate for the storage of large amounts of data that is not likely to change often.
Today, CD-ROM (compact disk read-only memory) and DVD systems are very popular, especially for use with microcomputers. These systems use a disk that is less than 5 inches in diameter and yet can hold very large amounts of data. Although these disks look just like the well-known music CDs, they are used to store the kind of digital information used in computers.
The large color graphic and video image files that are used with multimedia are often stored on optical disks and, in addition, these disks can store two channels of sound just like the popular music CDs. In fact, a computer-controlled optical drive can be used to play the two music channels on a standard music CD, and optical disks are now available with both the music and computerized information about the music.
Some users want the high capacity and reliability of optical disks but also want to periodically re-record data on the disks; as a result, some optical systems provide a way to store new data on the disks. One version, known as write once, read many (WORM), consists of optical disk systems that can be written to, but only once: new data cannot overwrite old data. Another, newer type of optical storage can be written to as often as necessary. This type, known as magneto-optical storage, actually uses a combination of laser and magnetic technologies.
Pointing devices are required with modern computer graphics applications that let you "paint" using a number of painting "tools." These tools are used to draw lines or shapes with differing thicknesses or patterns.
How an Image Scanner Works
The scanner is connected to the computer and special software is used
to control the digitizing process. Once the scanning process is complete,
and the smooth tones of the picture have been converted into a digital
map, the image can be displayed on the computer's display monitor. And,
once it has been digitized and stored as a computer file, the picture can
be modified using a graphics management program.
The CRT's electron beam creates a visible pattern on the display screen by activating (lighting up) the phosphor dots on the screen: these dots are known as picture elements or pixels. Today's display screens do not all use the same number of pixels on the screen to display characters and graphics: the higher the number of pixels used, the better the clarity of the image formed. The display screen's resolution refers to the clarity of the image and it is directly related to the number of pixels used to create the image: the higher the number of pixels used, the higher the resolution.
The original display monitors were monochrome, designed only to produce
individual characters in one color (usually green) on a black background.
But, with the advent of personal computers, more and more manufacturers
began to provide monitors that could display images in color. In these
monitors, three electron beams are used to activate the screen's phosphors
with a combination of three basic colors, red, green, and blue. For that
reason, these monitors are often known as RGB monitors.
The RGB Color Monitor
The early monochrome monitors showed characters on the screen by displaying pre-set patterns of dots in the character's shape. Such monitors are known as character-mapped displays. These alphanumeric monitors were limited to the display of a standard set of letters, numbers, and special characters like the period (.), the equal sign (=), and the dollar sign ($). Most were capable of displaying up to 80 characters on each line with up to 24 lines on the screen at the same time. When this type of monitor is used, the characters that appear on the screen all match a standard format that is built into the computer (usually in a special video read-only memory known as the video ROM). When, in response to software requests, a character is to appear on the screen, the pattern for that character is looked up in a table that is stored in the video ROM. The CRT's electron beam then uses that character's pattern to activate a matching pattern in the phosphor dots (pixels) that appear on the screen.
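The lookup process can be sketched in a few lines of Python: the character code indexes a pattern table standing in for the video ROM, and the pattern determines which pixels are lit. The 5x7 glyph below is invented for illustration; it is not a dump of any real video ROM.

```python
# A stand-in for the video ROM: each character code maps to a fixed
# dot pattern ("#" marks a lit pixel, " " an unlit one).
VIDEO_ROM = {
    "A": [" ### ",
          "#   #",
          "#   #",
          "#####",
          "#   #",
          "#   #",
          "#   #"],
}

def display(char):
    """Look up a character's pattern and 'light' the matching pixels."""
    for row in VIDEO_ROM[char]:
        print(row)

display("A")
```

Because the patterns are fixed in ROM, a character-mapped display can only ever show the shapes built into the hardware, which is exactly the limitation bit-mapped graphics monitors remove.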
Monitors that can display a variety of images, including characters, designs and patterns, are known as graphics monitors. These monitors are known as dot-addressable monitors because all of the pixels on the screen can be addressed by software. That means that any pattern of pixels can be illuminated to produce any type of text, character or picture. Both character-based alphanumeric monitors and bit-mapped graphics monitors can display characters, but bit-mapped monitors can display characters in different shapes and styles, as dictated by the software program that is running. This gives these monitors the capacity to display the same character in a different font (a font represents a type style) and in a different size. These monitors are also known as bit-mapped monitors because a representation, or map, of the image on the screen is maintained in the memory of the computer.
Today's monitor types are known by the names of their image-producing technology. A PC could be configured to use a color graphics adapter (CGA) monitor that displays four colors at a resolution of 320 by 200 pixels or monochrome images at 640 by 200 pixels. Or, it could use an enhanced graphics adapter (EGA) monitor that produces images in up to 16 different colors at a higher resolution, 640 by 350 pixels. A video graphics array (VGA) monitor can produce up to 256 color shades simultaneously at resolutions up to 720 by 400 pixels. Not only do these monitors represent a great variety of different display standards, but new ones continue to emerge. For example, S-VGA monitors (super VGA) can display up to 256 color shades in resolutions up to 800 by 600 and XGA monitors can display up to 256 colors with resolutions up to 1024 by 768. New display monitors are still being designed today with even higher resolution capabilities.
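Multiplying width by height for each standard mentioned above shows how quickly the pixel counts grow from one generation to the next. A quick sketch (the one-byte-per-pixel memory figure assumes a 256-color mode):

```python
# Pixel counts for the display standards discussed above.
standards = {
    "CGA":   (320, 200),
    "EGA":   (640, 350),
    "S-VGA": (800, 600),
    "XGA":   (1024, 768),
}

for name, (width, height) in standards.items():
    pixels = width * height
    # In a 256-color mode, each pixel needs one byte of video memory.
    kib = pixels / 1024
    print(f"{name}: {pixels:,} pixels (~{kib:.0f} KiB at 1 byte/pixel)")
```

The XGA screen has more than twelve times as many pixels as the CGA screen, which is why higher-resolution standards also demanded more video memory.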
Another nonimpact printer that is often used with both desktop and portable PCs is the ink-jet printer. This type is like the dot-matrix printer in that characters are developed one dot at a time (also see Computer Hardware Question #39: What is an inkjet printer?).
A number of other new nonimpact printer technologies are currently being developed. Ink-jet and thermal methods are yielding attractive color output, and printing technologies based on ion deposition, light-emitting diodes (LED), and liquid crystal shutters are showing promise. These evolving technologies may provide the basis for the printers of the future.
Point-of-sale computers are used in other innovative ways. Some companies have begun using point-of-sale computers in place of sales personnel. For example, some fast-food restaurant chains now use computers that take orders directly from the customer and then relay the order to the food preparer. In other businesses the computer can be used to provide the customer with information about products and sales locations.
Electrostatic plotters do not use drawing pens. These
plotters produce graphics by applying an electrostatic charge to rolls
of special paper. Although they are generally more expensive, electrostatic
plotters are faster than plotters that use drawing pens and they produce
high-quality output.
If we analyze the history of computer development, we see that while the basic design of the computer itself has not changed much in the last few decades, the way we interact with computer programs is undergoing rapid changes. Software that emerged after the punched-card era provided ways to more directly access the computer by using a keyboard to input data and a monitor based on a television-type cathode-ray tube (CRT) to get information about what the computer was doing. However, results of data processing were still generally output on a printer. This system of one-character-at-a-time input via a keyboard and output on display monitors and printers continued until the beginning of the personal computer era (also see Computer Software Question #11: What is the human-computer interface?).
Despite their size, the earliest large computers were designed to be used for only one task, by only one user at a time. As a result, the systems programs that were used with these computers were relatively simple and their capabilities were directly related to the needs of that single user. Today's large computers operate in a multiuser environment; that is, the systems software must keep track of many users who are all in contact with the computer at the same time. This is known as time-sharing and it requires more sophisticated systems software.
To avoid the necessity for specialized training and to make it easier to hire skilled computer operators, some large computers are now being designed to use the same standardized operating systems that are used on other computer models. For example, some manufacturers of large computers have adopted a version of the UNIX operating system that is also used on desktop and midrange computers. UNIX was first developed for minicomputers by Bell Laboratories in the early 1970s. Over the years, it has undergone many revisions and today it is available for many different types of computers, large and small.
As described above, on host computers, the system software must manage computer resources for the many users that may be in contact with the computer at any time. It must handle all the user processing and it must keep track of requests from the user terminals, prioritizing them and determining when to allow input from or output to the many users that may be in contact with the computer simultaneously. On personal computers, the system software only has to deal with one user; for that reason, system software is usually provided as a set of specialized utility programs that are used to manage the computer and its storage devices and input and output devices. Collectively, these programs are known as the personal computer's operating system.
It is important to understand that software must be designed specifically for the operating system it is to be used with. For example, many of today's personal computers are designed to be used with a collection of programs that comprise the MS-DOS operating system (developed by the Microsoft Corporation). Computers made by IBM, Compaq, Dell, Gateway, Tandy, and many, many others use the same basic processing components designed to work with the MS-DOS operating system. Thousands of different computer programs have been designed to run "under" this operating system on these types of computers.
On the other hand, some computers, such as the Macintosh line of personal computers made by the Apple Corporation include a built-in operating system that is not shared with most of the other computer types. Software developed for Macintosh computers must be specifically designed for use with the Macintosh operating system.
Today, PC users expect that no matter what their need might be, someone will soon create a program to meet it. PC applications programs were created for farmers and mechanics, for dog breeders and beauty shop owners, for scientists and teachers. Today, there are so many software packages available that it is impossible to calculate exactly how many there are.
Generally PC applications software comes in a package that includes not only the disks with the program files, but a set of user's manuals, known as documentation, that provide instructions on using the program. The package will also usually include registration cards, license agreements, and promotional information on upgrades (improved versions of the program) and other products manufactured by the same company.
A somewhat different system is employed by other hardware manufacturers. For example, although computers made by the Apple Corporation are based on microprocessors made by third-party companies (they use microprocessors made by Motorola and IBM), Apple has designed its own operating system. This graphically oriented operating system is used on Apple computers, such as the Macintosh. Software developers who want to design programs for Apple's Macintosh computers must create their programs to run specifically on the Macintosh. Because the Apple Corporation controls the operating system, it also controls the way programs present information to the user. This results in much greater consistency from one program to the next. The advantage to the user is that when programs look and act more alike, it is much easier to learn each new program. In the past few years, several new graphically oriented operating environments have been created to bring this kind of consistency to the MS-DOS based group of microcomputers.
Although microcomputers and larger computers can now use other operating systems (such as the Linux operating system), the vast majority still use the hardware/software systems described above.
In the figure below, the number 4,082 is represented in the decimal
notation system we are used to seeing. As indicated, the farthest number
to the right is the unit position (the value of 2). The next digit to the
left is the tens position (the value of 80). As we keep moving to the left,
each position represents an increment of ten.
The Binary and Decimal Number Systems
In contrast to the base ten decimal system, the binary number system uses only two symbols, 0 and 1. These two symbols are used to represent all numbers: it is therefore a base two number system. The principle of representing numbers is the same, though, except that each position represents a power of 2 instead of 10. If you look at the diagram, you will see that to represent the number 29, five digit positions are required. Again, each position to the left represents the next higher power of the base, in this case the number 2.
Although the base two number system does not seem natural to us, it works well in a computer system that represents data in one of two states, on or off.
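The positional idea described above, each place being worth the next power of the base, can be checked with a short Python sketch that rebuilds 29 from its binary digits:

```python
def from_binary(digits):
    """Sum each binary digit times its positional power of 2."""
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += int(digit) * (2 ** position)
    return total

# 29 decimal = 11101 binary: 16 + 8 + 4 + 0 + 1,
# which is why five digit positions are required.
print(from_binary("11101"))  # 29
```

The same function works for any binary string, so it also confirms that 1010 in binary is the decimal number 10.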
Using the decimal system, the number 10 is represented using two digits
(10). Using the binary system, the number 10 is represented as 1010, which
requires four digit positions (and therefore four bits, each representing
an on or off state). In the hexadecimal system, the number 10 is represented
using the letter A. In each case, the higher the base number,
the fewer digits that are required to represent large numbers. Although
the binary system is useful to the computer, very large numbers require
far more digits, making the data difficult to read when printed out. The
hexadecimal system makes it easier for computer professionals to read a
printout of data stored in main memory (known as a memory dump) and that is one of the primary uses of this number system.
Comparison of Numbering Systems
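The memory-dump use of hexadecimal described above can be imitated in a few lines of Python: each byte is printed as two hexadecimal digits, which is far more compact than the equivalent run of binary digits. The sample data here is invented for the example.

```python
def hex_dump(data, width=8):
    """Format bytes as hexadecimal, the way a memory dump does."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{byte:02X}" for byte in chunk)
        lines.append(f"{offset:04X}  {hex_part}")
    return "\n".join(lines)

print(hex_dump(b"HELLO, WORLD"))

# The decimal number 10 is a single hex digit (A) but needs
# four binary digits (1010):
print(f"{10:X} {10:b}")
```

Each line pairs a hexadecimal offset with the bytes stored there, which is essentially what a computer professional reads when examining a dump of main memory.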
A word refers to the size of a group of binary digits that can be stored in each of the computer's memory locations. The word size or word length is, therefore, the number of bits of data that can be manipulated by the CPU in one block. A computer with a word length of 32 bits should be able to transfer data between the CPU's memory and its internal registers considerably faster than a computer with a word length of 16 bits. However, in many microcomputers, the circuits that are used to transfer data between CPU components (referred to as the data path or the data bus) limit the transfer of data to 16-bit groups, resulting in a somewhat slower overall rate of data processing.
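The speed difference is easy to see with a small arithmetic sketch: moving the same amount of data in larger words simply takes fewer transfers.

```python
# Moving the same 64 bits of data with different word lengths:
# a 32-bit word length needs half as many transfers as a 16-bit one.
total_bits = 64

for word_length in (16, 32):
    transfers = total_bits // word_length
    print(f"{word_length}-bit words: {transfers} transfers")
```

This is also why a 16-bit data bus can bottleneck a 32-bit CPU: every 32-bit word must be split into two 16-bit transfers on its way through the bus.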
The methods and devices we use to interact with the computer can be collectively thought of as the human-computer interface. The term relates not only to the input and output devices we use, but also to how we think about computer-based information. An important part of the human-computer interface is how information is presented to the computer user. In the past, computers dealt with data that was in the form of discrete units, usually in the form of characters that could be input using a keyboard and output using a printer. The human-computer interface was designed around those functions. For example, if a computer user was required to choose from one of several options, those options would be presented as a numbered list. To choose an option, the computer user would enter its number from the list by pressing the appropriate number key on the keyboard. The input device was the keyboard, an effective tool for entering a number. The display monitor was the most common output device, quite effective at displaying the numbered options using single characters. These types of human-computer interfaces were known as character-based interfaces because all input at the keyboard and all output on the screen was in the form of discrete characters.
As more and more people began to use computers to deal with data that was in the form of images, the human-computer interface began to change. Since computers were being used to present pictures, some computer makers began to redesign the human-computer interface to make it more graphical also. In the 1970s, as part of a research program on how humans use computers, the Xerox Corporation designed a computer with this type of interface. It was a powerful personal computer known as the Star and it used a special type of pointing device that was designed to rest on the desktop next to the computer. It was about the right size to fit nicely under the human hand and when the computer user moved it about on a desktop, a pointer would mimic those movements on the screen. When the user moved the pointing device to the right, the pointer on the screen moved to the right. When the device moved to the left, the pointer moved to the left. When the user moved the device forward, the pointer on the screen moved up toward the top of the screen. Downward movements were translated by the computer as pointer movement toward the bottom of the screen.
This type of pointing device became known as a mouse and it led to other changes in the human-computer interface. For example, in the character-based example described above, the computer user chose an option from a numbered list by entering the number of the desired option. Using the pointing method, the user can move the mouse on the desktop until the mouse pointer on the screen rests over the name of the desired option. Then the user can select that option by pressing a button on the mouse. The computer determines the location of the mouse pointer on the screen and, if it is positioned over a valid option, takes action just as if the user had entered the number of an option in the character-based example. The pointing method has steadily grown more popular because, as shown in this example, the process of interacting with the computer can be much more direct and intuitive: instead of typing in a character to represent an option, the user can simply "point" to the option and click. These types of human-computer interfaces are known as graphical user interfaces (GUIs) because pointing devices are used to interact with output on the screen that is often in the form of pictures. Today, most manufacturers of personal computers and computer programs are adopting these GUI methods.
Now that pointing devices have become a common component of the computer
system, new interfaces are being developed to take advantage of them. In
the past, when the computer user chose one option from a list of options,
the program had to be interrupted to present the options list. With that
method, the entire screen changed to show the list of options (often known
as a menu of choices). This method worked well enough,
but it required a series of special keystrokes just to display the list,
to make a choice, and then to hide the list again. Today, such menus of
choices are more likely to be hidden away with only a keyword displayed
to indicate the existence of a menu. An example from Microsoft's Windows
program is reproduced in the figure below.
Windows File Menu
In the figure above, the keywords shown across the top of the first screen represent available menus. The user of this type of system can use the pointing device to select any one of the keywords which, in turn, "opens" the selected menu. The second example shows the File menu after it has been opened. Once the menu is open, the pointing device can then be used to select one of the listed options.
Machine languages were developed in the early days of computing. For that reason, they are now often referred to as first-generation languages. Many other types of languages are used today, but, since all computer programs must in the end interact with the computer's processing hardware, all programs, no matter what computer language was used to create them, must eventually be converted into machine language.
The figure below shows an example of programming instructions written in machine language.
A Machine Language Example
An Assembly Language Example
Because computer instructions written in an assembly language use a meaningful code word to symbolize a machine instruction, they are somewhat easier to use. Like machine languages, programs written in assembly language are for use on computers that use only one type of processing hardware. As with machine language, the resulting program is not easily transportable to other types of computers. Assembly languages are now referred to as second-generation languages.
A High-Level Language Example
High-level languages are now referred to as third-generation languages.
Since the 1950s, many different high-level programming languages have
been created. They vary in design based on their purpose. Some examples
include APL, BASIC, C, C++, COBOL, FORTH, FORTRAN, LISP, Modula-2,
Pascal, Perl, and PROLOG. And with the advent of the internet, new programming
tools like Java are working hand-in-hand with internet browsers to enhance
the online experience. Some languages are better suited for one task than
another. For example, FORTRAN, one of the first high-level languages, was
designed for scientific uses. COBOL is often used in business and C is
well known for its portability across different kinds of computers. The
table below provides a brief description of some common high-level languages.
|Ada||Developed specifically for the U.S. Department of Defense to replace both FORTRAN and COBOL. The language was named for Ada Byron who many consider to be the first programmer.|
|APL||(A Programming Language) Designed for mathematical applications, APL uses a symbolic notation system that is useful for scientific and engineering programming.|
|BASIC||(Beginner's All-Purpose Symbolic Instruction Code) Designed as a straightforward approach to line-by-line programming. Often used to train beginning programmers. Simple versions of BASIC were commonly provided as the only programming software packaged with the early PCs.|
|C||Originally developed as part of the UNIX operating system. C was designed to provide a structured, machine-independent approach to programming. C includes features that provide programming approaches similar to assembly languages. C is very popular today for application development.|
|C++||Versions of C that include object-oriented methods are often referred to as C++. These new versions of the C language are especially useful for applications development when the application is being designed for modern graphical user interfaces.|
|COBOL||(COmmon Business Oriented Language) Designed as an easier-to-use business oriented language. It includes many English-like statements for automating business tasks.|
|FORTRAN||(FORmula TRANslation) One of the early high-level languages, FORTRAN was designed to solve mathematical problems in science and engineering. It became a programming standard in many different fields and was the most widely used language on the earlier generations of computers.|
|Pascal||Designed to be a powerful, structured approach to applications development. Pascal is still widely used and is one of the most popular languages used in college programming courses. The language was named for Blaise Pascal, the pioneering mathematician and philosopher.|
|Perl||Another structured language, Perl has become very popular since the advent of the internet. It is often used to create communications-related programs on computers that host web pages.|
|PL/1||(Programming Language 1) Designed as a general purpose, easy-to-use language, PL/1 combines many of the features pioneered in earlier languages. It is used in business, science, engineering, and education.|
As with assembly language, programs written in any of the high-level languages must be translated into machine language. There are different ways to do this. Programs may be compiled using a special translator program called a compiler. As with assembly language, a programmer creates a source program by creating a series of instructions using the programming language. Then the compiler program translates the source program into the compiled version that is ready for use (referred to as the object program).
Programming languages that use this compiling system are known as compiled languages. Programming languages such as FORTRAN, C, and Pascal usually use this type of translation system. A few languages, especially those used to teach programming, use a system in which the source code is interpreted; that is, a translator program (known as an interpreter) translates each instruction of the source program to machine language and then the instruction is executed before the next instruction is translated. This can slow down the execution of the program, but this instruction-by-instruction method of execution makes it easy to find errors in the program and makes it easy to fix them immediately. The BASIC programming language is often interpreted, though compiled versions also exist.
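The compile-versus-interpret distinction can be illustrated with a toy interpreter. The sketch below translates and executes one instruction at a time, just as an interpreted BASIC would; a compiler would instead translate the whole source program before any of it runs. The tiny instruction set here is invented for the example.

```python
# A toy interpreter: each instruction is translated and executed
# immediately, one at a time (the instruction set is made up).
def interpret(source):
    memory = {}
    for line in source:
        op, *args = line.split()
        if op == "SET":          # SET x 5   ->  x = 5
            memory[args[0]] = int(args[1])
        elif op == "ADD":        # ADD x y   ->  x = x + y
            memory[args[0]] += memory[args[1]]
        elif op == "PRINT":      # PRINT x
            print(memory[args[0]])
    return memory

program = ["SET a 5", "SET b 7", "ADD a b", "PRINT a"]
interpret(program)  # prints 12
```

Notice that if the fourth line contained an error, the first three would still have executed, which is exactly the instruction-by-instruction behavior that makes interpreted languages convenient for finding and fixing errors.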
Fourth-generation languages are often used to retrieve information from a database - in fact, they are sometimes provided with database-management programs. They are used to organize the data and print out reports based on the stored information. A database query language is an example of this type of programming method. These systems provide report-generation routines that give users a way to ask questions about the stored data. While there is a specified format for these questions, the requests can be phrased as normal human-language statements. An example of an instruction in a database query language is illustrated below.
A Fourth-Generation Language Example
A computer program based on these 4GL methods will usually require far fewer statements. Because they are even easier to use than high-level languages, they are sometimes referred to as very high-level languages. Today, you'll find not only trained computer programmers using these fourth-generation programming languages, but many other types of computer users. These very high-level languages can be used by almost anyone who needs to develop reports based on information stored in the computer.
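Python's built-in sqlite3 module gives a feel for the kind of query language described above: one English-like SQL statement replaces many lines of procedural code. The table and its contents are invented for the example.

```python
import sqlite3

# Build a small in-memory database of hypothetical employee records.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
db.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    ("Ada", "Engineering", 72000),
    ("Grace", "Engineering", 81000),
    ("Blaise", "Accounting", 64000),
])

# One declarative statement does the selecting, filtering, and sorting;
# the user never says *how* to search the table, only *what* is wanted.
rows = db.execute(
    "SELECT name FROM employees WHERE dept = 'Engineering' ORDER BY salary DESC"
).fetchall()
print([name for (name,) in rows])
```

Writing the equivalent search-sort-and-report logic in a third-generation language would take many times as many statements, which is the essence of the 4GL advantage.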
A Fifth-Generation Language Example
The methods used in object-oriented programming can be represented schematically. The figure below illustrates how a few programming instructions can be used to initiate computer activity by sending messages to objects.
An Object-Oriented Programming Example
Although it may be a little difficult to precisely define it, structured programming could be described as a process that results in well-structured programs; that is, programs that are well planned and systematically laid out so they are easy to understand by any skilled programmer. A straightforward design that is clear and understandable is an especially important attribute for modern, complex programs because they may have to be periodically modified by other programmers. The structured programming process consists of a definition stage (programmer defines the problem), a program logic stage (begins the formal design process), a program coding stage (the actual writing of the program code), a testing and debugging stage (finding and fixing a program's errors), and a program documentation stage (programmer develops written information about how each aspect of the program works).
As shown in this example, the English-like statements in pseudocode are very much like the kinds of comments a programmer may include in a program. In fact, with the cut-and-paste capabilities of many of today's programming editors, the pseudocode statements may be pasted into the final program as comments. Many of today's structured programs use indention as a way for the programmer to clarify the structure of the program. Pseudocode statements are often indented for the same reason.
Many programmers prefer the use of pseudocode over a flowchart because the pseudocoding process is more like the actual programming process. To use a flowchart, the programmer must thoroughly understand the meaning of the flowchart's symbols. Pseudocode, on the other hand, conveys program steps via the meaning of the words used and therefore requires less mental translation. However, many believe that the logic of a complex program is more easily understood when it is indicated by the structure of a flowchart.
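The pseudocode-as-comments practice described above can be shown directly. Here a few pseudocode lines are pasted in as comments and then implemented line by line underneath; the averaging task itself is invented for the example.

```python
def average_score(scores):
    # Pseudocode pasted in as comments, then coded beneath each line:
    # SET total TO 0
    total = 0
    # FOR EACH score IN the list of scores
    for score in scores:
        #     ADD score TO total
        total += score
    # DIVIDE total BY the number of scores
    return total / len(scores)

print(average_score([80, 90, 100]))  # 90.0
```

Note how the indentation of the pseudocode matches the indentation of the final code, which is exactly why many structured programmers draft in pseudocode first.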
Some errors may be due to simple mistakes in entering the program. For example, if the programmer misspells a key word in the program, it will usually be indicated by an error message that is displayed during the programming process before the program is executed. These simple syntax errors are usually the first to be discovered.
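Python exhibits this same early detection: the built-in compile() function checks a program's syntax before it is ever executed, so a misspelled keyword is reported immediately. The misspelling below is deliberate.

```python
# The misspelled keyword ("whlie" instead of "while") is caught during
# the translation step, before the program runs at all.
bad_source = "whlie x < 10: x = x + 1"

try:
    compile(bad_source, "<example>", "exec")
    caught = None
except SyntaxError as err:
    caught = err.msg
    print("Syntax error reported:", caught)
```

A logic error, by contrast, would pass this check and only show up when the program runs, which is why syntax errors are usually the first to be discovered.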
The term word processing also refers to the use of a computer program to prepare and print documents. A word processing program can be used to create letters, memos, and a variety of other types of documents. Word processing programs include features that are used to create, edit, format, save, and print documents.
We can define a word-processed document as a text file that was created using a computerized word processing program. Such a file can be revised and reformatted as often as necessary. If the computer has a printer attached, the document can be printed as often as necessary. Today, if you are using a modern, full-featured word processing program, your documents can even include graphics. In addition to text, a modern word-processed document can contain special page-design elements such as lines and boxes and tables designed to make pages easier to read.
The Scroll Bar
You can also move to a new position in the document by dragging the scroll box (also illustrated above) up or down. "Dragging" means to position the mouse pointer over the scroll box, hold down one of the buttons on the mouse, and move the scroll box up or down. When you release the mouse button, a new section of text will be displayed that is associated with the area of the scroll bar where you placed the scroll box. You can also move the area of text displayed by positioning the mouse pointer over the scroll arrows at the top or bottom of the scroll bar (they are also illustrated above) and then clicking the mouse button. Each click on the scroll arrow moves the display one line. It may take some experimenting with the scroll bar, the scroll box, and the scroll arrows to learn how much each action changes the displayed position in the document.
A Ruler Line
Notice that there are small pictures, known as icons, displayed along this type of ruler line. After highlighting areas of text (by holding down the mouse button and dragging the mouse pointer across the text area), you can reset the indents, tabs, and other formatting features by clicking on the ruler's icons with the mouse. For example, you could use the mouse to drag the left-indent marker to the right until it is positioned under the ruler's one-inch mark. As a result, the text that was highlighted would be indented one inch.
The electronic spreadsheet has many advantages over its paper counterpart. Data can be entered in columns and rows just as it can with the paper worksheet, but using the electronic spreadsheet, formulas can be added to the spreadsheet to perform arithmetic calculations on the data. You can deal with almost unlimited amounts of data using an electronic spreadsheet: many of today's spreadsheet programs will let you enter a billion or more separate data entries on one electronic worksheet.
Today, spreadsheet programs are used in just about every type of organization. They are used for financial analysis by business managers, by small business owners to determine monthly profits, and by teachers to calculate students' grades. They are even used by individuals to keep track of the family checking account.
The design of electronic spreadsheets makes it easy to analyze numerical
data. Spreadsheet programs are frequently used in decision-making situations
for studying data that represents a history of events or outcomes. They
are also useful for analyzing "what if?" situations when you want to predict
an outcome based on hypothetical data. An electronic spreadsheet program
lets you enter data into rows and columns. The columns are usually identified
by a letter and the rows by a number. The intersection of a column and
a row is referred to as a cell. The cell
address is the combination of the letter to indicate the column
it is under and a number to indicate the row it is in. As indicated in
the figure below, the cell that is at the intersection of column A and
row 1 is referred to as cell A1. Cell A2 is at the intersection
of column A and row 2. Cell B1 is at the intersection of column
B and row 1. There are as many cells as there are intersections of rows and columns.
Spreadsheet programs provide for three different types of cell entries - labels, values, and formulas. A label is often entered on a spreadsheet to describe the data you are working with. A label can be any combination of letters and numbers. Labels can occupy any cell on the electronic worksheet. Most worksheets include many labels such as "costs" or "total."
A value is a number. You can enter any value, positive or negative, with or without a decimal, in any spreadsheet cell. The cell-based design of the electronic spreadsheet makes it easy to enter data and then to make calculations based on the data.
The real power of electronic spreadsheets is in the use of formulas. Formulas use cell identifiers, numbers, and arithmetic symbols to indicate the calculations that are to be carried out. Some spreadsheet programs require that you designate a formula by starting it with a special symbol such as an at sign (@) or an equal sign (=). A formula is used to carry out a mathematical function. Often formulas are used to carry out calculations on numbers that are stored in different cells. For example, a formula could be used to subtract the value in one cell from the value in another cell. Formulas can be simple or complex. They can be used to add or subtract, to multiply or divide, or you can create more complex formulas to calculate percentages and averages.
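A minimal sketch of the cell-and-formula idea in Python (the cell layout and its contents are invented): values live in cells addressed by column letter and row number, and a string starting with "=" is a formula that refers to other cells.

```python
# Cells addressed like a spreadsheet; a string starting with "=" is
# a formula over other cells (a deliberately tiny sketch that only
# handles addition of two cells).
cells = {
    "A1": 150,          # value: January costs
    "A2": 230,          # value: February costs
    "A3": "=A1+A2",     # formula: total costs
}

def evaluate(address):
    entry = cells[address]
    if isinstance(entry, str) and entry.startswith("="):
        left, right = entry[1:].split("+")
        return evaluate(left) + evaluate(right)
    return entry

print(evaluate("A3"))  # 380
```

Change the value in A1 and re-evaluate A3, and the total updates automatically, which is the behavior that makes spreadsheet formulas so much more powerful than a paper worksheet.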
The ability to display graphics was greatly enhanced when computer screens were redesigned to display information not as pre-formed characters that took up an exact amount of space on the screen, but as a series of dots that could be turned on or off to create characters or patterns. The quality and smoothness of the image that can be created is directly related to the number of dots or picture elements (pixels) available. The more dots, the better the image. It didn't take long before new computer programs were designed to take advantage of this capability to present much more attractive on-screen graphics. There are five basic types of graphics software.
Most commonly, data is stored and retrieved based on one of the following three types of file organization:
Today, computer communications serve a variety of needs, but most often we use communications to access data that is stored on another computer or to send data to another computer. That is what is happening when you access data using the largest network of them all, the internet: you are using your personal computer to communicate with another computer somewhere else on that network.
The figure below illustrates how computers communicate.
Computer Communications Systems
Coded information can be transmitted as either digital signals or analog signals. A system that uses digital signals sends information coded as a set of bits that can have one of two values. For example, a high pulse can carry the value of 1 and a low pulse can carry the value of 0. Before transmission, patterns of these 1 bits and 0 bits are grouped into bytes and encoded using standard computer coding methods. Systems that use analog signals are somewhat different. They send data as a wave pattern that varies continuously.
Computers manage data in digital form. The telephone system, however, uses an analog signal type. Therefore, data that is sent over telephone lines must first be converted from the digital-type signal that is used within the computer system to the analog type that is used in the phone system.
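That digital-to-analog conversion is what a modem performs. One simplified scheme, frequency-shift keying, maps each bit value to one of two tones; the sketch below uses illustrative frequencies, not those of any particular modem standard.

```python
# A simplified sketch of what a modem does: each digital bit is mapped
# to one of two analog tone frequencies (frequency-shift keying).
# The frequency values are illustrative only.
FREQ_FOR_BIT = {"0": 1070, "1": 1270}   # hertz

def modulate(bits):
    """Convert a string of bits to the sequence of tones to transmit."""
    return [FREQ_FOR_BIT[b] for b in bits]

print(modulate("1010"))  # [1270, 1070, 1270, 1070]
```

A second modem at the receiving end performs the reverse mapping, turning the continuously varying tones back into the discrete 1s and 0s the computer expects.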
Since the rate at which information is transmitted over a channel varies, communicating computer devices must be capable of transmitting and receiving data at differing rates. Data rates are measured in bits per second, or bps. Each type of channel has a maximum rate at which data can be transmitted, based on the type of media used in the channel and its design. Generally, channels with data rates less than 300 bps are referred to as narrowband. Rates of 300 to 9,600 bps are known as voiceband or voice-grade. The fastest channels are referred to as wideband or broadband. They are considered to be high-speed channels and can carry data at rates in the hundreds of thousands or even millions of bits per second. These high-speed channels require the use of special coaxial or fiber-optic cables.
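The channel classes just described can be summarized in a short sketch; the 300 and 9,600 bps cut-offs are the boundaries given in the text:

```python
def channel_class(bps):
    """Classify a channel by its maximum data rate, using the
    band boundaries described above (300 and 9,600 bps)."""
    if bps < 300:
        return "narrowband"
    if bps <= 9600:
        return "voiceband"   # also called voice-grade
    return "wideband"        # also called broadband

# A 2,400 bps modem channel, for example, is voice-grade.
example = channel_class(2400)
```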
Data flow can be managed in one of three modes - simplex, half-duplex, or full-duplex. These three modes refer to the direction of the data flow. If you are using a communications system in the simplex mode, data can only travel through the channel in one direction. Since this mode restricts communication to a one-way transmission, either sending or receiving, it is not used very often for communications between two computers. Using the half-duplex mode, on the other hand, data can be sent in both directions, but not at the same time. Using the full-duplex mode, data can be transmitted in both directions at the same time. Systems with special types of wiring may be able to transmit data at higher speeds using the full-duplex mode. However, the half-duplex mode is more commonly used when transmitting data between two computers.
Two methods are used to transmit characters over a channel. The asynchronous transmission method is used to send one character at a time. Because each character is framed by a start bit and a stop bit, data can be sent at any time. The synchronous transmission method is used to send blocks (groups) of characters in a timed sequence. Although this method requires more sophisticated communications equipment, it can be used to send data at higher transmission rates.
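A minimal sketch of asynchronous framing is shown below; it ignores real-world details such as bit order and parity, and simply wraps each character's 8 data bits between a start bit (0) and a stop bit (1):

```python
def frame_async(byte_value):
    """Frame one 8-bit character for asynchronous transmission:
    a start bit (0), the eight data bits, then a stop bit (1)."""
    data_bits = format(byte_value, "08b")
    return "0" + data_bits + "1"

# Framing the letter 'A' (ASCII 65) yields a 10-bit unit.
framed = frame_async(ord("A"))
```

The extra start and stop bits are the overhead that lets the receiver synchronize on each character individually, which is why synchronous block transmission achieves higher effective rates.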
Two computer devices may be connected with no other devices on the line. This configuration is known as a point-to-point connection. This type of connection may use a switched line, as in the phone system: the phone company directs the call and establishes the connection, and when the transmission is completed, the line is disconnected. Alternatively, the transmission may use a dedicated line, which is never disconnected. Such lines can be private (owned by an organization) or they may be leased from another organization such as a phone company. Often, however, several computer devices will share the same channel. This is known as a multipoint or multidrop configuration. These systems are designed to establish communications between a number of devices and involve the use of some type of controller to manage the traffic of transmitted data on the shared line. These connection options are illustrated below.
Today, multiplexers are also playing another role in communications systems. Because data is often transmitted in a variety of different formats (known as protocols), many organizations use multiplexers that include a protocol converter. These provide a way for different types of computers using different types of transmission methods to communicate with each other. Such systems are often used to allow communications between microcomputers and mainframes. Multiplexing can also be done by a concentrator. This device, which may be a computer with special multiplexing capabilities, divides the data channel into separate channels. It allocates channel space as the need arises by providing internal storage of the transmitted data when traffic on the channel is high and then forwarding the data later when the channel is available. A concentrator can also have additional data-management capabilities, making it more flexible than other multiplexers. These devices are illustrated in the figure below.
Long Distance Communications Devices
Networks can be established in a variety of configurations. Three typical configurations are:
The very wide area network that is the internet has been around in some form since the late 1960s. Originally designed by the U.S. military to be a communications tool (its precursor was known as the ARPANET), it was soon adopted by other government organizations and then by many universities. In other words, it was originally a link between a group of local area networks that already existed in universities and in military establishments. Communication rates were limited by the slow modems of the time, and the design of the system limited viewable data to text only. Despite the slow speed, early users used it often to access information about government or university programs.
You may ask: why was the original internet so slow, and why was it limited to text? To answer that question, you have to understand that the world wide web we use today is quite different from the internet we used back then. Today's new version of the internet is known as the world wide web, or simply the web, because its many varied connections resemble a spider's web. It is based on a couple of key design differences that change the way we access data. First, in the old internet, all of the information we read (remember, it was text only) stayed on remote computers. The text was stored on the distant host computer you were in contact with, and your computer monitor was like a window into that computer. As more and more remote users tried to read that text at the same time, the computer had to send the information out to each user before responding to the requests of the next. The more people trying to download the information, the slower the response. But, because of today's internet browser programs, today's internet is different (the two most popular browser programs in use today are Internet Explorer, made by Microsoft, and the browsers developed by Netscape). Today, when you read information (or look at pictures) coming from another computer somewhere out there on the internet, not only is the transfer of data much faster, but you are actually in contact with that remote computer for only a few brief moments. The information you are looking at on your screen has been retrieved from that remote computer and stored on your own computer: it is quickly downloaded from the remote computer to the hard disk of the computer you are using. The text and pictures are still stored in files on the other computer, but under the new internet's system, you will also have a copy of those files on your computer.
Using your internet browser, you can look at the information to your heart's content, leaving the original computer free to respond to the download requests of the next user. The files you download using this system are stored in special cache files somewhere on your hard disk.
The downloaded files are regular text files, but they are coded in a special way to be understood by the internet browser program you are using. The coding, known as HTML (HyperText Markup Language), is used to "mark up" the text file that contains the information. For example, the word bold in this sentence should appear in bold when it is interpreted by your internet browser. That is because the word is preceded by a code that turns on the bold highlight and followed by another code that turns it off. The markup codes are interpreted by your browser as the text is displayed on your computer. HTML codes are also used to indicate pictures that will be displayed (the pictures are also stored as files on the remote computer, and they too are quickly downloaded to your computer).
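As a minimal sketch of how a browser interprets such markup codes, the following snippet uses Python's standard html.parser module and handles only the bold code (the `<b>` and `</b>` tags):

```python
from html.parser import HTMLParser

class BoldFinder(HTMLParser):
    """A toy markup interpreter: collect any text that appears
    between a <b> (bold on) code and a </b> (bold off) code."""
    def __init__(self):
        super().__init__()
        self.in_bold = False
        self.bold_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "b":          # the "turn on bold" code
            self.in_bold = True

    def handle_endtag(self, tag):
        if tag == "b":          # the "turn off bold" code
            self.in_bold = False

    def handle_data(self, data):
        if self.in_bold:
            self.bold_text.append(data)

parser = BoldFinder()
parser.feed("This word is <b>bold</b> on screen.")
```

A real browser recognizes hundreds of such codes for headings, pictures, links, and layout, but the principle is the same: the codes are instructions for display, not part of the visible text.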
A special aspect of the HTML-based files used on the world wide web is that they can be coded to provide links to other files (pages) on the internet. Each file on the web has an address (known as its URL, or Uniform Resource Locator). It consists of an IP (internet protocol) number (or its associated domain name) which designates the specific computer where the file is stored. A URL can also include the name of folders and/or files. Take, for example, the following URL:
If you are connected to the internet and you click on the above internet address, it will take you to a page with technical book reviews. In the above URL, computerseasy.com is a domain name. Domain names have at least two parts: the part on the left, which names the subdomain, the organization's name (in this case, computerseasy), and the part on the right, after the dot, which identifies the domain (in this case, com).
New domains are periodically approved, but the initial domains in the US were com for commercial, edu for an educational institution, org for an organization, gov for a government location, mil for a military installation, and net for a network provider.
The domain can also identify the country of origin (for example, ca for Canada or fr for France).
The last part of the URL (in this case, techbooks.html) is the name of a file stored at that location. The file has been coded using the HTML coding system to tell your internet browser how to format the page as you view it.
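The way a URL breaks into these parts can be sketched with Python's standard urllib.parse module. The full address below is assembled for illustration from the parts named in the text; it is an assumed example, not a live link:

```python
from urllib.parse import urlparse

# Illustrative URL built from the parts discussed above.
url = "http://computerseasy.com/techbooks.html"
parts = urlparse(url)

domain = parts.netloc                   # the domain name
filename = parts.path.lstrip("/")       # the file stored at that location
org, top_level = domain.rsplit(".", 1)  # subdomain and domain
```

Here `domain` is "computerseasy.com", `org` is the organization's name, `top_level` is "com", and `filename` is the HTML file your browser will format for display.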
Computer communications require the use of two computers that are connected to the same network. The internet has expanded that concept to take advantage of the worldwide interconnection that is the backbone of the entire system. Individuals access these large-capacity wide area networks by registering with a company that serves as an internet service provider (ISP). If you are accessing the internet from an office or a university computer lab, the computer you are using may already be connected. But if you are connecting from home, you may have to use a modem to "dial in" to your ISP's computers to establish the connection. Either way, as soon as you are connected, your computer becomes a part of the world's largest computer communications network, the world wide web.
Electronic mail (e-mail) messages are transmitted by computer in the form of text. Instead of sending voice messages, you can type a message that will be stored under the receiver's name for later retrieval.
An electronic mail system is a combination of computer hardware and software especially designed for message storage and retrieval. Each user makes contact with the e-mail system by using their own computer system. If the e-mail system is an "in-house" system (operated by an organization for the exclusive use of the personnel in that organization), each user will be provided with software for use on microcomputers that are interconnected. The software will provide a way to establish contact with the e-mail management system. Today, users are often encouraged to use free e-mail systems that can be accessed using their internet browsers. These e-mail systems are not maintained in-house; instead, users contact the e-mail system via the internet.
Whether you are using an in-house system or an internet-based e-mail system, you must have an identifying "logon" name (often called a user name). This user name is used when you connect to or "log onto" the system. The user name is also referred to whenever someone wants to leave you a message. When a message is sent, it is sent to a user's identifying name (or to a group of user names). When the user with that name logs onto the system, a list of messages that have been received is displayed. The user can then read the messages, save them as a file, or print them out on paper. Some systems can also send and receive files that have been "attached" to the email message.
When a document is fed into a fax machine, it is scanned, digitized, and sent to another fax machine. The receiving fax machine must be attached to a phone line. When the number of that line is dialed and an appropriate audio signal is recognized, a connection is established between the two fax machines. The receiving fax machine receives a digitized version of the patterns that are on the document in the sending fax machine and, as the digitized pattern is received, the receiving machine prints out the copy.
Today, a microcomputer can also be used to send computer files to fax machines. Special software is used to send the dialing tones and the proper signals that will be recognized by a fax machine. Once the connection is established, the user can specify a computer file to send. The file will be received and printed out by a fax machine just as if it were a copy of a document sent from another fax machine. Usually the document that is printed by the fax machine will be of higher quality than a normal fax because the document didn't have to be scanned and digitized before sending.
A computer can also be used to receive a faxed document by mimicking the kinds of signals sent out by a fax machine. The received document is stored by the computer as a file which can be modified and printed out later.
In addition, a number of internet sites now provide fax receiving and forwarding services. Using these services, businesses can manage their fax traffic virtually, that is, without the need to purchase and maintain actual fax machines.
Today, video signals can be digitized and stored as computer files. This means that these video files can be quickly sent to other computer systems using standard phone lines (images can be sent even faster if dedicated fiber-optic lines are installed between sites). Using these methods, teleconferencing can be conducted by recording and sending digitized video images between computers at different meeting sites using the same type of interlinked phone lines that are used for telephone conference calls. As this technology improves, internet-based systems are likely to make teleconferencing even more popular. The ability to see other people participating in the teleconference may become one of the internet's most important capabilities for business users. This capability becomes especially important when participants have charts or physical objects to show to those at other sites.
These methods of storing video images as computer files also make it possible to send and receive video e-mail using a computerized system similar to that described in the electronic mail section above.
Computer Hardware Question 1: What is a computer?
Computer Hardware Question 2: When was the computer developed?
Computer Hardware Question 3: What is a first generation computer?
Computer Hardware Question 4: What is a second generation computer?
Computer Hardware Question 5: What is a third generation computer?
Computer Hardware Question 6: What is a fourth generation computer?
Computer Hardware Question 7: What types of computers will we use in the future?
Computer Hardware Question 8: What is a mainframe?
Computer Hardware Question 9: What is a supercomputer?
Computer Hardware Question 10: What is a minicomputer?
Computer Hardware Question 11: What is a microcomputer (PC, desktop computer)?
Computer Hardware Question 12: What is a workstation?
Computer Hardware Question 13: What is an embedded microprocessor?
Computer Hardware Question 14: What is computer storage?
Computer Hardware Question 15: What is input?
Computer Hardware Question 16: What is an output device?
Computer Hardware Question 17: What is the central processing unit (CPU)?
Computer Hardware Question 18: What is a control unit?
Computer Hardware Question 19: What is an arithmetic/logic unit (ALU)?
Computer Hardware Question 20: What is the instruction cycle (I-Cycle) and the execution cycle (E-Cycle)?
Computer Hardware Question 21: What is the CPU clock?
Computer Hardware Question 22: What is main memory?
Computer Hardware Question 23: What is a secondary storage system?
Computer Hardware Question 24: What is disk storage?
Computer Hardware Question 25: What is a hard disk (fixed disk)?
Computer Hardware Question 26: What is a diskette (floppy disk)?
Computer Hardware Question 27: What is tape storage?
Computer Hardware Question 28: What is an optical disk?
Computer Hardware Question 29: What is read-only memory (ROM)?
Computer Hardware Question 30: What is a pointing device?
Computer Hardware Question 31: What is an image scanner?
Computer Hardware Question 32: How does a display monitor work?
Computer Hardware Question 33: What is an impact printer?
Computer Hardware Question 34: What is a nonimpact printer?
Computer Hardware Question 35: What is a line printer (also known as a chain, band, or drum printer)?
Computer Hardware Question 36: What is a dot-matrix printer?
Computer Hardware Question 37: What is a letter-quality printer?
Computer Hardware Question 38: What is a laser printer?
Computer Hardware Question 39: What is an ink-jet printer?
Computer Hardware Question 40: What is a point-of-sale system?
Computer Hardware Question 41: What is a plotter?
Computer Software Question 1: What is software?
Computer Software Question 2: What is systems software?
Computer Software Question 3: What is applications software?
Computer Software Question 4: What is programming software?
Computer Software Question 5: What is an operating system?
Computer Software Question 6: What is the binary number system and how does it compare to the decimal number system?
Computer Software Question 7: What is hexadecimal representation?
Computer Software Question 8: What is ASCII?
Computer Software Question 9: What is EBCDIC?
Computer Software Question 10: What do the terms bit, byte, and word refer to?
Computer Software Question 11: What is the human-computer interface (also known as the computer-user interface)?
Computer Software Question 12: What is optical character recognition (OCR)?
Computer Software Question 13: What is a programming language?
Computer Software Question 14: What is machine language?
Computer Software Question 15: What is assembly language?
Computer Software Question 16: What is a high-level language?
Computer Software Question 17: What is a fourth-generation language?
Computer Software Question 18: What is a fifth-generation language?
Computer Software Question 19: What is object-oriented programming (OOP)?
Computer Software Question 20: What is object-oriented authoring (OOA)?
Computer Software Question 21: What is structured programming?
Computer Software Question 22: What is an algorithm?
Computer Software Question 23: What is a flowchart?
Computer Software Question 24: What is pseudocode?
Computer Software Question 25: What is debugging?
Computer Software Question 26: What is beta testing?
Computer Software Question 27: What is end-user documentation?
Computer Software Question 28: What is an internet (web) browser?
Computer Software Question 29: What is word processing?
Computer Software Question 30: What is the cursor?
Computer Software Question 31: What is a mouse?
Computer Software Question 32: What is dragging?
Computer Software Question 33: What is the scroll bar and the scroll box?
Computer Software Question 34: What is a ruler line?
Computer Software Question 35: What is background printing?
Computer Software Question 36: What is a macro?
Computer Software Question 37: What is desktop publishing?
Computer Software Question 38: What is a spreadsheet?
Computer Software Question 39: What is a computer graphic?
Computer Software Question 40: What is a database?
Computer Software Question 41: What are the types of databases?
Computer Communications Question 1: What is computer communications?
Computer Communications Question 2: What does upload and download refer to?
Computer Communications Question 3: What are the data communications options?
Computer Communications Question 4: What is a modem?
Computer Communications Question 5: What type of equipment do I need for computer communications?
Computer Communications Question 6: What are the communications network types?
Computer Communications Question 7: What is a bridge? Is it the same as a gateway?
Computer Communications Question 8: What exactly is the internet? Is it the same as the world wide web?
Computer Communications Question 9: What is email?
Computer Communications Question 10: What is a fax?
Computer Communications Question 11: What is Voice Mail?
Computer Communications Question 12: What is a computer bulletin board?
Computer Communications Question 13: What is computer teleconferencing?
Computer Communications Question 14: What is file transfer protocol (FTP)?