My School Project - A/L 2011



Introduction

Globalization and technological change processes that have accelerated in tandem over the past fifteen years have created a new global economy “powered by technology, fueled by information and driven by knowledge.” The emergence of this new global economy has serious implications for the nature and purpose of educational institutions. As the half-life of information continues to shrink and access to information continues to grow exponentially, schools cannot remain mere venues for the transmission of a prescribed set of information from teacher to student over a fixed period of time. Rather, schools must promote “learning to learn,” i.e., the acquisition of knowledge and skills that make possible continuous learning over a lifetime. “The illiterate of the 21st century,” according to futurist Alvin Toffler, “will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.”
Concerns over educational relevance and quality coexist with the imperative of expanding educational opportunities to those made most vulnerable by globalization: developing countries in general, and low-income groups, girls and women, and low-skilled workers in particular. Global changes also put pressure on all groups to constantly acquire and apply new skills. The International Labour Organization defines the requirements for education and training in the new global economy simply as “Basic Education for All”, “Core Work Skills for All” and “Lifelong Learning for All”.
Information and communication technologies (ICTs), which include radio and television as well as newer digital technologies such as computers and the Internet, have been touted as potentially powerful enabling tools for educational change and reform. When used appropriately, different ICTs are said to help expand access to education, strengthen the relevance of education to the increasingly digital workplace, and raise educational quality by, among other things, helping make teaching and learning an engaging, active process connected to real life.
However, the experience of introducing different ICTs in the classroom and other educational settings all over the world over the past several decades suggests that the full realization of the potential educational benefits of ICTs is not automatic. The effective integration of ICTs into the educational system is a complex, multifaceted process that involves not just technology (indeed, given enough initial capital, getting the technology is the easiest part) but also curriculum and pedagogy, institutional readiness, teacher competencies, and long-term financing, among others.
This primer is intended to help policymakers in developing countries define a framework for the appropriate and effective use of ICTs in their educational systems by first providing a brief overview of the potential benefits of ICT use in education and the ways by which different ICTs have been used in education thus far. Second, it addresses the four broad issues in the use of ICTs in education—effectiveness, cost, equity, and sustainability. The primer concludes with a discussion of five key challenges that policymakers in developing countries must reckon with when making decisions about the integration of ICTs in education, namely, educational policy and planning, infrastructure, capacity building, language and content, and financing.


Offering

I dreamed I stood in a studio
And watched two sculptors there,
The clay they used was a young child’s mind
And they fashioned it with care.

One was a teacher:
the tools she used were books and music and art;
One was a parent
With a guiding hand and gentle loving heart.

And when at last their work was done,
They were proud of what they had wrought.
For the things they had worked into the child
Could never be sold or bought!

And each agreed she would have failed
if she had worked alone.
For behind the parent stood the school,
and behind the teacher stood the home!
Thank you Dad, thank you Mum
                                               
And also my teachers............

 This is  my offering to you.........
W.M Lahiru Sandaruwan.


Special thanks to


Mr. Kushan Gunathilaka,
Mr. Chamindu Rashan Neththikumara,
Mr. Shammika Dilshan.


For your fullest cooperation in doing this project.



Contents

Basics of Information and Communication Technology

Computer Networking

Internet and Email

Future Technology

Common Topics in the ICT Field










Information and Communication Technology.





What is Information and Communications

Technology?

Information and communications technology or information and communication technology, usually called ICT, is often used as an extended synonym for information technology (IT), but is usually a more general term that stresses the role of unified communications and the integration of telecommunications (telephone lines and wireless signals), intelligent building management systems and audio-visual systems in modern information technology. ICT consists of all technical means used to handle information and aid communication, including computer and network hardware, communication middleware as well as necessary software. In other words, ICT consists of IT as well as telephony, broadcast media, all types of audio and video processing and transmission, and network-based control and monitoring functions. The expression was first used in 1997 in a report by Dennis Stevenson to the UK government and promoted by the new National Curriculum documents for the UK in 2000.
ICT is often used in the context of "ICT roadmap" to indicate the path that an organization will take with their ICT needs.
The term ICT is now also used to refer to the merging (convergence) of audio-visual and telephone networks with computer networks through a single cabling or link system. There are large economic incentives (huge cost savings due to elimination of the telephone network) to merge the audio-visual, building management and telephone network with the computer network system using a single unified system of cabling, signal distribution and management. See VOIP and Intelligent Infrastructure Management (IIM). This in turn has spurred the growth of organizations with the term ICT in their names to indicate their specialization in the process of merging the different network systems.
                                                            
             



Information and Communication Technologies for

Development

Information and Communication Technologies for Development (ICT4Dev) is a general term referring to the application of Information and Communication Technologies (ICTs) within the fields of socioeconomic development, international development and human rights.
The dominant term used in this field is "ICT4Dev". Alternatives include ICTD and development informatics.
ICTD (Information and Communication Technologies and Development) is the application of technological solutions to the problems of the developing world. In theory, it is differentiated from Information and Communication Technologies for Development (ICT4D). ICT4D focuses on using digital technology to deliver specific development goals (most notably the Millennium Development Goals). ICTD looks much more broadly at use of ICTs in developing countries.
This is a difference that is rarely understood or used in practice.
There is a somewhat loose community of researchers that has grown up around the annual ICTD conferences, the most recent of which took place in Doha, Qatar. The main feature of this community is its integration of both technical and social science researchers working in the field.
The concept of ICT4Dev can be interpreted as dealing with disadvantaged populations anywhere in the world, but is more typically associated with applications in developing countries. It concerns itself with directly applying information technology approaches to poverty reduction. ICTs can be applied either in the direct sense, wherein their use directly benefits the disadvantaged population, or in an indirect sense, wherein the ICTs assist aid organisations or non-governmental organizations or governments or businesses in order to improve general socio-economic conditions.
The field is becoming recognized as an interdisciplinary research area as can be noted by the growing number of conferences, workshops and publications. Such research has been spurred on in part by the need for scientifically validated benchmarks and results, which can be used to measure the efficacy of current projects.

Opportunity

ICT is central to today's most modern economies. Many international development agencies recognize the importance of ICT4Dev - for example, the World Bank's GICT section has a dedicated team of approximately 200 staff members working on ICT issues.
Developing countries lag far behind developed nations in computer use and internet access/usage. For example, on average only 1 in 130 people in Africa has a computer, while in North America and Europe 1 in every 2 people has access to the Internet. 90% of students in Africa have never touched a computer.
However, local networks can provide significant access to software and information even without utilizing an internet connection, for example through use of the Wikipedia CD Selection or the eGranary Digital Library.
The World Bank runs the Information for Development Program (infoDev), whose Rural ICT Toolkit analyses the costs and possible profits involved in such a venture and shows that there is more potential in developing areas than many might assume. The potential for profit arises from two sources: resource sharing across large numbers of users (specifically, the publication talks about line sharing, but the principle is the same for, e.g., telecentres at which computing/Internet access is shared) and remittances (specifically, the publication talks about carriers making money from incoming calls, i.e., from urban to rural areas).
A good example of the impact of ICTs is that of farmers getting better market price information and thus boosting their income. A community e-center in the Philippines developed a website to promote its local products worldwide. Another example is the use of mobile telecommunications and radio broadcasting to fight political corruption in Burundi.

 History

The history of ICT4Dev can, roughly, be divided into three periods:
  •       ICT4Dev 0.0: mid-1950s to late-1990s. During this period (before the creation of the term "ICT4Dev"), the focus was on computing / data processing for back-office applications in large government and private sector organizations in developing countries.

  •       ICT4Dev 1.0: late-1990s to late-2000s. The combined advent of the Millennium Development Goals and mainstream usage of the Internet in industrialised countries led to a rapid rise in investment in ICT infrastructure and ICT programmes/projects in developing countries. The most typical application was the telecentre, used to bring information on development issues such as health, education, and agricultural extension into poor communities. More latterly, telecentres might also deliver online or partly online government services.

  •      ICT4Dev 2.0: late-2000s onwards. There is no clear boundary between phase 1.0 and 2.0 but suggestions of moving to a new phase include the change from the telecentre to the mobile phone as the archetypal application; less concern with e-readiness and more interest in the impact of ICTs on development; and more focus on the poor as producers and innovators with ICTs (as opposed to just consumers of ICT-based information). 

Information and Communication Technologies for

Education (E-learning)

E-learning comprises all forms of electronically supported learning and teaching. The information and communication systems, whether networked or not, serve as specific media to implement the learning process.  The term will still most likely be utilized to reference out-of-classroom and in-classroom educational experiences via technology, even as advances continue in regard to devices and curriculum.
E-learning is essentially the computer and network-enabled transfer of skills and knowledge. E-learning applications and processes include Web-based learning, computer-based learning, virtual classroom opportunities and digital collaboration. Content is delivered via the Internet, intranet/extranet, audio or video tape, satellite TV, and CD-ROM. It can be self-paced or instructor-led and includes media in the form of text, image, animation, streaming video and audio.
Abbreviations like CBT (Computer-Based Training), IBT (Internet-Based Training) or WBT (Web-Based Training) have been used as synonyms to e-learning. Today one can still find these terms being used, along with variations of e-learning such as elearning, Elearning, and eLearning. The terms will be utilized throughout this article to indicate their validity under the broader terminology of E-learning.

Market

The worldwide e-learning industry is estimated to be worth over $48 billion according to conservative estimates.  Developments in internet and multimedia technologies are the basic enabler of e-learning, with consulting, content, technologies, services and support being identified as the five key sectors of the e-learning industry.

Higher education

By 2006, 3.5 million students were participating in on-line learning at institutions of higher education in the United States. According to the Sloan Foundation reports, there has been an increase of around 12–14 percent per year on average in enrollments for fully online learning over the five years 2004–2009 in the US post-secondary system, compared with an average of approximately 2 percent increase per year in enrollments overall. Allen and Seaman (2009) claim that almost a quarter of all students in post-secondary education were taking fully online courses in 2008, and a report by Ambient Insight Research suggests that in 2009, 44 percent of post-secondary students in the USA were taking some or all of their courses online, and projected that this figure would rise to 81 percent by 2014. Thus it can be seen that e-learning is moving rapidly from the margins to being a predominant form of post-secondary education, at least in the USA.
Many for-profit institutions of higher education now offer on-line classes. By contrast, only about half of private, non-profit schools offer them. The Sloan report, based on a poll of academic leaders, indicated that students generally appear to be at least as satisfied with their on-line classes as they are with traditional ones. Private institutions may become more involved with on-line presentations as the cost of instituting such a system decreases. Properly trained staff must also be hired to work with students on-line. These staff members need to understand the content area, and also be highly trained in the use of the computer and Internet. Online education is rapidly increasing, and online doctoral programs have even developed at leading research universities.

K-12 Learning

E-learning is also utilized by public K-12 schools in the United States. Some e-learning environments take place in a traditional classroom; others allow students to attend classes from home or other locations.

 History

In the early 1960s, Stanford University psychology professors Patrick Suppes and Richard C. Atkinson experimented with using computers to teach math and reading to young children in elementary schools in East Palo Alto, California. Stanford's Education Program for Gifted Youth is descended from those early experiments.
Early e-learning systems, based on Computer-Based Learning/Training, often attempted to replicate autocratic teaching styles whereby the role of the e-learning system was assumed to be for transferring knowledge, as opposed to systems developed later based on Computer Supported Collaborative Learning (CSCL), which encouraged the shared development of knowledge.
As early as 1993, William D. Graziadei described an online computer-delivered lecture, tutorial and assessment project using electronic mail. In 1997 he published an article which described developing an overall strategy for technology-based course development and management for an educational system. He said that products had to be easy to use and maintain, portable, replicable, scalable, and immediately affordable, and they had to have a high probability of success with long-term cost-effectiveness.
In 1997 Graziadei, W.D., et al. published an article entitled "Building Asynchronous and Synchronous Teaching-Learning Environments: Exploring a Course/Classroom Management System Solution". They described a process at the State University of New York (SUNY) of evaluating products and developing an overall strategy for technology-based course development and management in teaching-learning. The product(s) had to be easy to use and maintain, portable, replicable, scalable, and immediately affordable, and they had to have a high probability of success with long-term cost-effectiveness. Today many technologies can be, and are, used in e-learning, from blogs to collaborative software, ePortfolios, and virtual classrooms. Most eLearning situations use combinations of these techniques.

E-Learning 2.0

The term E-Learning 2.0 is a neologism for CSCL systems that came about during the emergence of Web 2.0. From an E-Learning 2.0 perspective, conventional e-learning systems were based on instructional packets, which were delivered to students using assignments. Assignments were evaluated by the teacher. In contrast, the new e-learning places increased emphasis on social learning and use of social software such as blogs, wikis, podcasts and virtual worlds such as Second Life. This phenomenon has also been referred to as Long Tail Learning (see also Seely Brown & Adler 2008).
E-Learning 2.0, by contrast to e-learning systems not based on CSCL, assumes that knowledge (as meaning and understanding) is socially constructed. Learning takes place through conversations about content and grounded interaction about problems and actions. Advocates of social learning claim that one of the best ways to learn something is to teach it to others.
However, it should be noted that many early online courses, such as those developed by Murray Turoff and Starr Roxanne Hiltz in the 1970s and 80s at the New Jersey Institute of Technology, courses at the University of Guelph in Canada, the British Open University, and the online distance courses at the University of British Columbia (where WebCT, now incorporated into Blackboard Inc., was first developed), have always made heavy use of online discussion between students. Also, from the start, practitioners such as Harasim (1995) have put heavy emphasis on the use of learning networks for knowledge construction, long before the term e-learning, let alone e-learning 2.0, was even considered.
There is also an increased use of virtual classrooms (online presentations delivered live) as an online learning platform and classroom for a diverse set of education providers such as Minnesota State Colleges and Universities and Sachem School District.
In addition to virtual classroom environments, social networks have become an important part of E-learning 2.0. Social networks have been used to foster online learning communities around subjects as diverse as test preparation and language education. Mobile Assisted Language Learning (MALL) is a term used to describe using handheld computers or cell phones to assist in language learning. Some feel, however, that schools have not caught up with the social networking trends. Few traditional educators promote social networking unless they are communicating with their own colleagues.




                                                  

    Information and Communication Technologies for

    Government (e-Government)


    e-Government (short for electronic government, also known as e-gov, digital government, online government, or connected government) is digital interaction between a government and citizens (G2C), government and businesses/commerce/eCommerce (G2B), between government agencies (G2G), Government-to-Religious Movements/Church (G2R), and Government-to-Households (G2H). This digital interaction consists of governance, information and communication technology (ICT), business process re-engineering (BPR), and e-citizens at all levels of government (city, state/province, national, and international).
    Essentially, the term e-Government, also known as Digital Government, refers to how government utilizes IT, ICT and other telecommunication technologies to enhance efficiency and effectiveness in the public sector.

    Examples of e-Government and e-Governance

    E-Government should enable anyone visiting a city website to communicate and interact with city employees via the Internet with graphical user interfaces (GUI), instant messaging (IM), audio/video presentations, and in any way more sophisticated than a simple email letter to the address provided at the site, and should involve the use of technology to enhance the access to and delivery of government services to benefit citizens, business partners and employees. The focus should be on:
    • The use of Information and communication technologies, and particularly the Internet, as a tool to achieve better government.
    • The use of information and communication technologies in all facets of the operations of a government organization.
    • The continuous optimization of service delivery, constituency participation and governance by transforming internal and external relationships through technology, the Internet and new media.
    Whilst e-Government has traditionally been understood as being centered around the operations of government, e-Governance is understood to extend the scope by including citizen engagement and participation in governance. As such, following in line with the OECD definition of e-Government, e-Governance can be defined as the use of ICTs as a tool to achieve better governance.

    Non-internet e-Government

    While e-government is often thought of as "online government" or "Internet-based government," many non-Internet "electronic government" technologies can be used in this context. Some non-Internet forms include telephone, fax, PDA, SMS text messaging, MMS, wireless networks and services, Bluetooth, CCTV, tracking systems, RFID, biometric identification, road traffic management and regulatory enforcement, identity cards, smart cards and other Near Field Communication applications; polling station technology (where non-online e-voting is being considered), TV and radio-based delivery of government services (e.g., CSMW), email, online community facilities, newsgroups and electronic mailing lists, online chat, and instant messaging technologies.


                          
    Information and Communication Technologies in

    Health Care
    mHealth
    mHealth (also written as m-health or mobile health) is a term used for the practice of medical and public health, supported by mobile devices. The term is most commonly used in reference to using mobile communication devices, such as mobile phones and PDAs, for health services and information. The mHealth field has emerged as a sub-segment of eHealth, the use of information and communication technology (ICT), such as computers, mobile phones, communications satellite, patient monitors, etc., for health services and information. mHealth applications include the use of mobile devices in collecting community and clinical health data, delivery of healthcare information to practitioners, researchers, and patients, real-time monitoring of patient vital signs, and direct provision of care (via mobile telemedicine).
    While mHealth certainly has application for industrialized nations, the field has emerged in recent years as largely an application for developing countries, stemming from the rapid rise of mobile phone penetration in low-income nations. The field, then, largely emerges as a means of providing greater access to larger segments of a population in developing countries, as well as improving the capacity of health systems in such countries to provide quality healthcare.
    Within the mHealth space, projects operate with a variety of objectives, including increased access to healthcare and health-related information (particularly for hard-to-reach populations); improved ability to diagnose and track diseases; timelier, more actionable public health information; and expanded access to ongoing medical education and training for health workers.

              
    Information and Communication Technologies in

    Agriculture

    The application of Information and Communication Technology (ICT) in agriculture is increasingly important.
    E-Agriculture is an emerging field focusing on the enhancement of agricultural and rural development through improved information and communication processes. More specifically, e-Agriculture involves the conceptualization, design, development, evaluation and application of innovative ways to use information and communication technologies (ICT) in the rural domain, with a primary focus on agriculture. E-Agriculture is a relatively new term and we fully expect its scope to change and evolve as our understanding of the area grows.
    E-Agriculture is one of the action lines identified in the declaration and plan of action of the World Summit on the Information Society (WSIS). The "Tunis Agenda for the Information Society," published on 18 November 2005, emphasizes the leading facilitating roles that UN agencies need to play in the implementation of the Geneva Plan of Action. The Food and Agriculture Organization of the United Nations (FAO) has been assigned the responsibility of organizing activities related to the action line under C.7 ICT Applications on E-Agriculture.
    The main phases of the agriculture industry are: Crop cultivation, Water management, Fertilizer Application, Fertigation, Pest management, Harvesting, Post harvest handling, Transporting of food/food products, Packaging, Food preservation, Food processing/value addition, Food quality management, Food safety, Food storage, Food marketing.
    All stakeholders of the agriculture industry need information and knowledge about these phases to manage them efficiently. Any system applied for getting information and knowledge for making decisions in any industry should deliver accurate, complete and concise information in time or on time. The information provided by the system must be in user-friendly form, easy to access, cost-effective and well protected from unauthorized access.
    Information and Communication Technology (ICT) can play a significant role in maintaining the above mentioned properties of information, as it consists of three main technologies: Computer Technology, Communication Technology and Information Management Technology. These technologies are applied for processing, exchanging and managing data, information and knowledge. The tools provided by ICT have the ability to:
    •       Record text, drawings, photographs, audio, video, process descriptions, and other information in digital formats,
    •       Produce exact duplicates of such information at significantly lower cost,
    •       Transfer information and knowledge rapidly over large distances through communications networks.
    •      Apply standardized algorithms to large quantities of information relatively rapidly.
    •      Achieve greater interactivity in communicating, evaluating, producing and sharing useful information and knowledge.



      Information and Communication Technologies for


      Environmental Sustainability

      Information and Communication Technologies for Environmental Sustainability (ICT Ensure) is a general term referring to the application of Information and Communication Technologies (ICTs) within the field of environmental sustainability. Information and Communication Technologies (ICTs) are acting as integrating and enabling technologies for the economy, and they have a profound impact on our society. Recent changes in ICT also affect environmental sustainability, in line with the Millennium Development Goal (MDG) set up to ensure environmental sustainability in this century. With the use of new technologies, the global community can be supported in its collaboration to preserve the environment in the long term. New technologies provide utilities for knowledge acquisition and awareness, early evaluation of new knowledge, reaching agreements and communication of progress in the interest of human welfare. This includes ethical aspects of protecting human life as well as aspects of consumer safety and the preservation of our natural environment.


      Information and Communication Technologies for

       Commerce (E-commerce)


      Electronic commerce, commonly known as e-comm, e-commerce or eCommerce, consists of the buying and selling of products or services over electronic systems such as the Internet and other computer networks. The amount of trade conducted electronically has grown extraordinarily with widespread Internet usage. Commerce conducted in this way spurs and draws on innovations in electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. Modern electronic commerce typically uses the World Wide Web at least at some point in the transaction's lifecycle, although it can encompass a wider range of technologies such as e-mail, mobile devices and telephones as well.
      A large percentage of electronic commerce is conducted entirely electronically for virtual items such as access to premium content on a website, but most electronic commerce involves the transportation of physical items in some way. Online retailers are sometimes known as e-tailers and online retail is sometimes known as e-tail. Almost all big retailers have electronic commerce presence on the World Wide Web.
      Electronic commerce that is conducted between businesses is referred to as business-to-business or B2B. B2B can be open to all interested parties (e.g. commodity exchange) or limited to specific, pre-qualified participants (private electronic market). Electronic commerce that is conducted between businesses and consumers, on the other hand, is referred to as business-to-consumer or B2C. This is the type of electronic commerce conducted by companies such as Amazon.com. Online shopping is a form of electronic commerce where the buyer is directly online to the seller's computer usually via the internet. There is no intermediary service. The sale and purchase transaction is completed electronically and interactively in real-time such as Amazon.com for new books. If an intermediary is present, then the sale and purchase transaction is called electronic commerce such as eBay.com.
      Electronic commerce is generally considered to be the sales aspect of e-business. It also consists of the exchange of data to facilitate the financing and payment aspects of the business transactions.

      Global Trends in E-Retailing and Shopping

      Business models across the world also continue to change drastically with the advent of eCommerce, and this change is not just restricted to the USA. Other countries are also contributing to the growth of eCommerce. For example, the United Kingdom has the biggest e-commerce market in the world when measured by the amount spent per capita, even higher than the USA. The internet economy in the UK is likely to grow by 10% between 2010 and 2015. This has led to changing dynamics for the advertising industry.
      Amongst emerging economies, China's eCommerce presence continues to expand. With 384 million internet users, China's online shopping sales rose to $36.6 billion in 2009, and one of the reasons behind the huge growth has been the improved trust level for shoppers. The Chinese retailers have been able to help consumers feel more comfortable shopping online.



      Computer



      What is a Computer?

      A computer is a programmable machine designed to sequentially read and execute a list of instructions that make it perform arithmetical and logical operations on binary numbers. Conventionally, a computer consists of some form of short- or long-term memory for data storage and a central processing unit, which functions as a control unit and contains the arithmetic logic unit. Peripherals (for example a keyboard, mouse or graphics card) can be connected to allow the computer to receive outside input and display output.
      A computer's processing unit executes a series of instructions that make it read, manipulate and then store data. Test and jump instructions allow it to move within the program space and therefore to execute different instructions as a function of the current state of the machine or its environment.
      The computer can also respond to interrupts that make it execute specific sets of instructions and then return and continue what it was doing before the interruption.
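      The fetch-and-execute behaviour described above can be illustrated with a small sketch in Python. The instruction names used here (LOAD, ADD, JUMP_IF_NOT_ZERO, PRINT) are invented for illustration only and do not correspond to any real processor's instruction set.

def run(program):
    acc = 0   # accumulator: the machine's working value
    pc = 0    # program counter: index of the next instruction to execute
    while pc < len(program):
        op, arg = program[pc]
        pc += 1                          # fetch, then advance to the next instruction by default
        if op == "LOAD":                 # place a constant in the accumulator
            acc = arg
        elif op == "ADD":                # arithmetic operation
            acc += arg
        elif op == "JUMP_IF_NOT_ZERO":   # test-and-jump: branch depending on machine state
            if acc != 0:
                pc = arg
        elif op == "PRINT":              # output, standing in for a peripheral
            print(acc)
    return acc

# A tiny program: count down from 3, printing each value (prints 3, 2, 1).
countdown = [
    ("LOAD", 3),
    ("PRINT", None),
    ("ADD", -1),
    ("JUMP_IF_NOT_ZERO", 1),
]
run(countdown)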
      The first electronic computers were developed in the mid-20th century (1940–1945). Originally, they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).
      Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.  Simple computers are small enough to fit into mobile devices, and can be powered by a small battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". However, the embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are the most numerous.


      Meaning of Data and Information


      The terms information and knowledge are frequently used for overlapping concepts. The main difference is in the level of abstraction being considered. Data is the lowest level of abstraction, information is the next level, and finally, knowledge is the highest level among all three.  Data on its own carries no meaning. For data to become information, it must be interpreted and take on a meaning. For example, the height of Mt. Everest is generally considered as "data", a book on Mt. Everest geological characteristics may be considered as "information", and a report containing practical information on the best way to reach Mt. Everest's peak may be considered as "knowledge".
      Information as a concept bears a diversity of meanings, from everyday usage to technical settings. Generally speaking, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, and representation.
      Beynon-Davies uses the concept of a sign to distinguish between data and information; data are symbols while information occurs when symbols are used to refer to something.
      It is people and computers who collect data and impose patterns on it. These patterns are seen as information which can be used to enhance knowledge. These patterns can be interpreted as truth, and are authorized as aesthetic and ethical criteria. Events that leave behind perceivable physical or virtual remains can be traced back through data. Marks are no longer considered data once the link between the mark and observation is broken.
      Raw data refers to a collection of numbers, characters, images or other outputs from devices to convert physical quantities into symbols, that are unprocessed. Such data is typically further processed by a human or input into a computer, stored and processed there, or transmitted (output) to another human or computer (possibly through a data cable). Raw data is a relative term; data processing commonly occurs by stages, and the "processed data" from one stage may be considered the "raw data" of the next.
      Mechanical computing devices are classified according to the means by which they represent data. An analog computer represents a datum as a voltage, distance, position, or other physical quantity. A digital computer represents a datum as a sequence of symbols drawn from a fixed alphabet. The most common digital computers use a binary alphabet, that is, an alphabet of two characters, typically denoted "0" and "1". More familiar representations, such as numbers or letters, are then constructed from the binary alphabet.
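      As a small worked example of the binary alphabet, the following Python lines (used here simply as a calculator) show that both the number 77 and the letter "M" reduce to sequences of the symbols 0 and 1; the ASCII code of "M" happens to be 77, so the two bit patterns coincide.

print(bin(77))                    # '0b1001101'  -> 64 + 8 + 4 + 1 = 77
print(format(ord('M'), '08b'))    # '01001101'   -> the ASCII code of 'M', also 77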
      Some special forms of data are distinguished. A computer program is a collection of data, which can be interpreted as instructions. Most computer languages make a distinction between programs and the other data on which programs operate, but in some languages, notably Lisp and similar languages, programs are essentially indistinguishable from other data. It is also useful to distinguish metadata, that is, a description of other data. A similar yet earlier term for metadata is "ancillary data." The prototypical example of metadata is the library catalog, which is a description of the contents of books.
      Experimental data refers to data generated within the context of a scientific investigation by observation and recording. Field data refers to raw data collected in an uncontrolled in situ environment.


      Process (computing)

      In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.
      A computer program is a passive collection of instructions, while a process is the actual execution of those instructions. Several processes may be associated with the same program; for example, opening several instances of the same program often means more than one process is being executed.
      Multitasking is a method to allow multiple processes to share processors (CPUs) and other system resources. Each CPU executes a single task at a time. However, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish. Depending on the operating system implementation, switches could be performed when tasks perform input/output operations, when a task indicates that it can be switched, or on hardware interrupts.
      A common form of multitasking is time-sharing. Time-sharing is a method to allow fast response for interactive user applications. In time-sharing systems, context switches are performed rapidly. This makes it seem like multiple processes are being executed simultaneously on the same processor. The execution of multiple processes seemingly simultaneously is called concurrency.
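      A minimal sketch of this interleaving, using only Python's standard library: two tasks share the processor, and the scheduler switches between them so their output typically appears interleaved, as if both were running at once.

import threading
import time

def task(name):
    # Each task does a little work, then sleeps, giving the scheduler
    # an opportunity to switch to the other task.
    for step in range(3):
        print(f"{name}: step {step}")
        time.sleep(0.01)

t1 = threading.Thread(target=task, args=("A",))
t2 = threading.Thread(target=task, args=("B",))
t1.start()
t2.start()
t1.join()
t2.join()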
      For security and reliability reasons, most modern operating systems prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication functionality.
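      The mediated inter-process communication mentioned above can be sketched with Python's multiprocessing module. This is a simplified illustration; real operating systems offer many other mechanisms, such as pipes, sockets and shared memory.

import multiprocessing

def worker(queue):
    # The child process cannot reach into the parent's memory directly;
    # it communicates only through the queue provided by the OS.
    queue.put("hello from the child process")

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    print(q.get())   # receive the message sent by the child process
    p.join()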










                
    History



    History of the Computer
    In The Beginning...

    The history of computers starts out about 2000 years ago, at the birth of the abacus, a wooden rack holding two horizontal wires with beads strung on them. When these beads are moved around, according to programming rules memorized by the user, all regular arithmetic problems can be done. Another important invention around the same time was the Astrolabe, used for navigation. Blaise Pascal is usually credited with building the first digital computer in 1642. It added numbers entered with dials and was made to help his father, a tax collector. In 1671, Gottfried Wilhelm von Leibniz invented a computer that was built in 1694. It could add, and, after changing some things around, multiply. Leibniz invented a special stepped gear mechanism for introducing the addend digits, and this is still being used. The prototypes made by Pascal and Leibniz were not used in many places, and considered weird until a little more than a century later, when Thomas of Colmar (A.K.A. Charles Xavier Thomas) created the first successful mechanical calculator that could add, subtract, multiply, and divide. A lot of improved desktop calculators by many inventors followed, so that by about 1890, the range of improvements included:
    •   Accumulation of partial results
    •   Storage and automatic reentry of past results (a memory function)
    •   Printing of the results

    Each of these required manual installation. These improvements were mainly made for commercial users, and not for the needs of science.

    Babbage



    While Thomas of Colmar was developing the desktop calculator, a series of very interesting developments in computers was started in Cambridge, England, by Charles Babbage (after whom the computer store "Babbages" is named), a mathematics professor. In 1812, Babbage realized that many long calculations, especially those needed to make mathematical tables, were really a series of predictable actions that were constantly repeated. From this he suspected that it should be possible to do these automatically. He began to design an automatic mechanical calculating machine, which he called a difference engine. By 1822, he had a working model to demonstrate with. With financial help from the British government, Babbage started fabrication of a difference engine in 1823. It was intended to be steam powered and fully automatic, including the printing of the resulting tables, and commanded by a fixed instruction program. The difference engine, although having limited adaptability and applicability, was really a great advance. Babbage continued to work on it for the next 10 years, but in 1833 he lost interest because he thought he had a better idea -- the construction of what would now be called a general purpose, fully program-controlled, automatic mechanical digital computer. Babbage called this idea an Analytical Engine. The ideas of this design showed a lot of foresight, although this couldn't be appreciated until a full century later.
    The plans for this engine required an identical decimal computer operating on numbers of 50 decimal digits (or words) and having a storage capacity (memory) of 1,000 such digits. The built-in operations were supposed to include everything that a modern general-purpose computer would need, even the all-important Conditional Control Transfer Capability that would allow commands to be executed in any order, not just the order in which they were programmed. The analytical engine was soon to use punched cards (similar to those used in a Jacquard loom), which would be read into the machine from several different Reading Stations. The machine was supposed to operate automatically, by steam power, and require only one person there.
    Babbage's computers were never finished. Various reasons are given for his failure, the most cited being the lack of precision machining techniques at the time. Another speculation is that Babbage was working on a solution to a problem that few people in 1840 really needed to solve. After Babbage, there was a temporary loss of interest in automatic digital computers. Between 1850 and 1900 great advances were made in mathematical physics, and it came to be known that most observable dynamic phenomena can be identified by differential equations (which meant that most events occurring in nature can be measured or described in one equation or another), so that easy means for their calculation would be helpful. Moreover, from a practical view, the availability of steam power caused manufacturing (boilers), transportation (steam engines and boats), and commerce to prosper and led to a period of a lot of engineering achievements. The designing of railroads, and the making of steamships, textile mills, and bridges required differential calculus to determine such things as:

    •   center of gravity
    •   center of buoyancy
    •   moment of inertia
    •   stress distributions

    Even the assessment of the power output of a steam engine needed mathematical integration. A strong need thus developed for a machine that could rapidly perform many repetitive calculations.




    Use of Punched Cards by Hollerith




    A step towards automated computing was the development of punched cards, which were first successfully used with computers in 1890 by Herman Hollerith and James Powers, who worked for the U.S. Census Bureau. They developed devices that could read the information that had been punched into the cards automatically, without human help. Because of this, reading errors were reduced dramatically, work flow increased, and, most importantly, stacks of punched cards could be used as easily accessible memory of almost unlimited size. Furthermore, different problems could be stored on different stacks of cards and accessed when needed. These advantages were seen by commercial companies and soon led to the development of improved punch-card-using computers created by International Business Machines (IBM), Remington (yes, the same people that make shavers), Burroughs, and other corporations. These computers used electromechanical devices in which electrical power provided mechanical motion -- like turning the wheels of an adding machine. Such systems included features to:


    •  feed in a specified number of cards automatically
    •  add, multiply, and sort
    •  feed out cards with punched results
    As compared to today’s machines, these computers were slow, usually processing 50 - 220 cards per minute, each card holding about 80 decimal numbers (characters). At the time, however, punched cards were a huge step forward. They provided a means of I/O, and memory storage on a huge scale. For more than 50 years after their first use, punched card machines did most of the world’s first business computing, and a considerable amount of the computing work in science.



    Electronic Digital Computers



    The start of World War II produced a large need for computer capacity, especially for the military. New weapons were made for which trajectory tables and other essential data were needed. In 1942, John P. Eckert, John W. Mauchly and their associates at the Moore School of Electrical Engineering of the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC (Electrical Numerical Integrator And Calculator). The size of ENIAC's numerical "word" was 10 decimal digits, and it could multiply two of these numbers at a rate of 300 per second, by finding the value of each product from a multiplication table stored in its memory. ENIAC was therefore about 1,000 times faster than the previous generation of relay computers. ENIAC used 18,000 vacuum tubes, about 1,800 square feet of floor space, and consumed about 180,000 watts of electrical power. It had punched card I/O, 1 multiplier, 1 divider/square rooter, and 20 adders using decimal ring counters, which served as adders and also as quick-access (.0002 seconds) read-write register storage. The executable instructions making up a program were embodied in the separate "units" of ENIAC, which were plugged together to form a "route" for the flow of information. These connections had to be redone after each computation, together with presetting function tables and switches. This "wire your own" technique was inconvenient (for obvious reasons), and only with some latitude could ENIAC be considered programmable. It was, however, efficient in handling the particular programs for which it had been designed. ENIAC is commonly accepted as the first successful high-speed electronic digital computer (EDC) and was used from 1946 to 1955. A controversy developed in 1971, however, over the patentability of ENIAC's basic digital concepts, the claim being made that another physicist, John V. Atanasoff, had already used basically the same ideas in a simpler vacuum-tube device he had built in the 1930's while at Iowa State College. In 1973 the courts found in favor of the company using the Atanasoff claim.

    The Modern Stored Program EDC


    Fascinated by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, an abstract study of computation that showed that a computer should have a very simple, fixed physical structure, and yet be able to execute any kind of computation by means of a proper programmed control without the need for any change in the unit itself. Von Neumann contributed a new awareness of how practical, yet fast computers should be organized and built. These ideas, usually referred to as the stored-program technique, became essential for future generations of high-speed digital computers and were universally adopted. The stored-program technique involves many features of computer design and function besides the one that it is named after. In combination, these features make very high-speed operation attainable. A glimpse may be provided by considering what 1,000 operations per second means. If each instruction in a job program were used once in consecutive order, no human programmer could generate enough instructions to keep the computer busy. Arrangements must be made, therefore, for parts of the job program (called subroutines) to be used repeatedly in a manner that depends on the way the computation goes. Also, it would clearly be helpful if instructions could be changed if needed during a computation to make them behave differently. Von Neumann met these two needs by making a special type of machine instruction, called a conditional control transfer, which allowed the program sequence to be stopped and started again at any point, and by storing all instruction programs together with data in the same memory unit, so that, when needed, instructions could be arithmetically changed in the same way as data. As a result of these techniques, computing and programming became much faster, more flexible, and more efficient.
    Regularly used subroutines did not have to be reprogrammed for each new program, but could be kept in "libraries" and read into memory only when needed. Thus, much of a given program could be assembled from the subroutine library. The all-purpose computer memory became the assembly place in which all parts of a long computation were kept, worked on piece by piece, and put together to form the final results. The computer control survived only as an "errand runner" for the overall process. As soon as the advantage of these techniques became clear, they became a standard practice. The first generation of modern programmed electronic computers to take advantage of these improvements were built in 1947. This group included computers using Random Access Memory (RAM), which is a memory designed to give almost constant access to any particular piece of information. These machines had punched-card or punched-tape I/O devices and RAMs of 1,000-word capacity and access times of 0.5 microseconds (0.5 × 10⁻⁶ seconds). Some of them could perform multiplications in 2 to 4 microseconds. Physically, they were much smaller than ENIAC. Some were about the size of a grand piano and used only 2,500 electron tubes, a lot less than required by the earlier ENIAC. The first-generation stored-program computers needed a lot of maintenance, reached probably about 70 to 80% reliability of operation (ROO) and were used for 8 to 12 years. They were usually programmed in machine language, although by the mid 1950's progress had been made in several aspects of advanced programming. This group of computers included EDVAC and UNIVAC, the first commercially available computers.





    Advances in the 1950’s

    Early in the 50's two important engineering discoveries changed the image of the electronic computer field, from one of fast but unreliable hardware to an image of relatively high reliability and even more capability. These discoveries were the magnetic core memory and the transistor circuit element. These technical discoveries quickly found their way into new models of digital computers. RAM capacities increased from 8,000 to 64,000 words in commercially available machines by the 1960's, with access times of 2 to 3 MS (milliseconds). These machines were very expensive to purchase or even to rent and were particularly expensive to operate because of the cost of expanding programming. Such computers were mostly found in large computer centers operated by industry, government, and private laboratories - staffed with many programmers and support personnel. This situation led to modes of operation enabling the sharing of the high potential available.
    One such mode is batch processing, in which problems are prepared and then held ready for computation on a relatively cheap storage medium. Magnetic drums, magnetic-disk packs, or magnetic tapes were usually used. When the computer finishes with a problem, it "dumps" the whole problem (program and results) on one of these peripheral storage units and starts on a new problem. Another mode for fast, powerful machines is called time-sharing. In time-sharing, the computer processes many jobs in such rapid succession that each job runs as if the other jobs did not exist, thus keeping each "customer" satisfied. Such operating modes need elaborate executable programs to attend to the administration of the various tasks.

    Advances in the 1960’s

    In the 1960's, efforts to design and develop the fastest possible computer with the greatest capacity reached a turning point with the LARC machine, built for the Livermore Radiation Laboratories of the University of California by the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a base memory of 98,000 words and multiplied in 10 microseconds. Stretch was made with several degrees of memory having slower access for the ranks of greater capacity, the fastest access time being less than 1 microsecond and the total capacity in the vicinity of 100,000,000 words.
    During this period, the major computer manufacturers began to offer a range of capabilities and prices, as well as accessories such as

    •  Consoles
    •  Card Feeders
    •  Page Printers
    •  Cathode-ray-tube displays
    •  Graphing devices

    These were widely used in businesses for such things as:

    •  Accounting
    •  Payroll
    •  Inventory control
    •  Ordering Supplies
    •  Billing

    CPU's for these uses did not have to be very fast arithmetically and were usually used to access large amounts of records on file, keeping these up to date. By far the largest number of computer systems were sold for simpler uses, such as in hospitals (keeping track of patient records, medications, and treatments given). They were also used in libraries, such as the National Medical Library retrieval system, and in the Chemical Abstracts System, where computer records on file now cover nearly all known chemical compounds.

    More Recent Advances

    The trend during the 1970's was, to some extent, moving away from very powerful, single-purpose computers and toward a larger range of applications for cheaper computer systems. Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, now used computers of smaller capability for controlling and regulating their jobs. In the 1960's, the problems in programming applications were an obstacle to the independence of medium sized on-site computers, but gains in applications programming language technologies removed these obstacles. Applications languages were now available for controlling a great range of manufacturing processes, for using machine tools with computers, and for many other things. Moreover, a new revolution in computer hardware was under way, involving shrinking of computer logic circuitry and of components by what are called large-scale integration (LSI) techniques. In the 1950s it was realized that "scaling down" the size of electronic digital computer circuits and parts would increase speed and efficiency and thereby improve performance, if only a way to do this could be found. About 1960 photo printing of conductive circuit boards to eliminate wiring became more developed. Then it became possible to build resistors and capacitors into the circuitry by the same process. In the 1970's, vacuum deposition of transistors became the norm, and entire assemblies, with adders, shifting registers, and counters, became available on tiny "chips." In the 1980's, very large scale integration (VLSI), in which hundreds of thousands of transistors were placed on a single chip, became more and more common.
    Many companies, some new to the computer field, introduced programmable minicomputers supplied with software packages in the 1970s. The "shrinking" trend continued with the introduction of personal computers (PC's), which are programmable machines small enough and inexpensive enough to be purchased and used by individuals. Many companies, such as Apple Computer and Radio Shack, introduced very successful PC's in the 1970s, encouraged in part by a fad in computer (video) games. In the 1980s some friction occurred in the crowded PC field, with Apple and IBM keeping strong. In the manufacturing of semiconductor chips, the Intel and Motorola Corporations were very competitive into the 1980s, although Japanese firms were making strong economic advances, especially in the area of memory chips. By the late 1980s, some personal computers were run by microprocessors that, handling 32 bits of data at a time, could process about 4,000,000 instructions per second. Microprocessors equipped with read-only memory (ROM), which stores constantly used, unchanging programs, now performed an increased number of process-control, testing, monitoring, and diagnosing functions, like automobile ignition systems, automobile-engine diagnosis, and production-line inspection duties.
    Cray Research and Control Data Inc. dominated the field of supercomputers, or the most powerful computer systems, through the 1970s and 1980s. In the early 1980s, however, the Japanese government announced a gigantic plan to design and build a new generation of supercomputers. This new generation, the so-called "fifth" generation, is using new technologies in very large scale integration, along with new programming languages, and will be capable of amazing feats in the area of artificial intelligence, such as voice recognition. Progress in the area of software has not matched the great advances in hardware. Software has become the major cost of many systems because programming productivity has not increased very quickly. New programming techniques, such as object-oriented programming, have been developed to help relieve this problem. Despite difficulties with software, however, the cost per calculation of computers is rapidly lessening, and their convenience and efficiency are expected to increase in the near future. The computer field continues to experience huge growth. Computer networking, computer mail, and electronic publishing are just a few of the applications that have grown in recent years. Advances in technologies continue to produce cheaper and more powerful computers, offering the promise that in the near future, computers or terminals will reside in most, if not all, homes, offices, and schools.



    Computer generations

     

    First-generation machines: vacuum tubes

    Even before the ENIAC was finished, Eckert and Mauchly recognized its limitations and started the design of a stored-program computer, EDVAC. John von Neumann was credited with a widely circulated report describing the EDVAC design, in which both the programs and the working data were stored in a single, unified store. This basic design, denoted the von Neumann architecture, would serve as the foundation for the worldwide development of ENIAC's successors. In this generation of equipment, temporary or working storage was provided by acoustic delay lines, which used the propagation time of sound through a medium such as liquid mercury (or through a wire) to briefly store data. A series of acoustic pulses was sent along a tube; when a pulse reached the end of the tube, the circuitry detected whether it represented a 1 or a 0 and caused the oscillator to re-send it. Other machines used Williams tubes, which use the ability of a small cathode-ray tube (CRT) to store and retrieve data as charged areas on the phosphor screen. By 1954, magnetic core memory was rapidly displacing most other forms of temporary storage, and it dominated the field through the mid-1970s.
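    As a rough illustration of how delay-line storage recirculates data (a simplified sketch in Python; the bit pattern, word length and function name below are invented for the example, not taken from any real machine), the line behaves like a fixed-length queue whose oldest pulse is read at one end and immediately re-sent at the other:

        from collections import deque

        # Simplified model of an acoustic delay line used as working storage:
        # pulses circulate, and at each word time the oldest pulse reaches the
        # detector and is re-transmitted into the line, so the pattern persists.

        line = deque([1, 0, 1, 1, 0, 0, 1, 0])   # pulses currently in the line

        def step(write_bit=None):
            """Advance one pulse time; optionally overwrite the recirculated bit."""
            bit = line.popleft()                  # pulse arrives at the detector
            line.append(bit if write_bit is None else write_bit)
            return bit

        word = [step() for _ in range(len(line))]  # one full circulation reads the word
        print(word)                                # [1, 0, 1, 1, 0, 0, 1, 0]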


    EDVAC was the first stored-program computer designed; however, it was not the first to run. Eckert and Mauchly left the project, and its construction floundered. The first working von Neumann machine was the Manchester "Baby" or Small-Scale Experimental Machine, developed by Frederic C. Williams and Tom Kilburn at the University of Manchester in 1948 as a test bed for the Williams tube; it was followed in 1949 by the Manchester Mark 1 computer, a complete system using Williams tube and magnetic drum memory and introducing index registers. The other contender for the title "first digital stored-program computer" had been EDSAC, designed and constructed at the University of Cambridge. Operational less than one year after the Manchester "Baby", it was also capable of tackling real problems. EDSAC was actually inspired by plans for EDVAC (Electronic Discrete Variable Automatic Computer), the successor to ENIAC; these plans were already in place by the time ENIAC was successfully operational. Unlike ENIAC, which used parallel processing, EDVAC used a single processing unit. This simpler design was the first to be implemented in each succeeding wave of miniaturization, and it brought increased reliability. Some view the Manchester Mark 1, EDSAC and EDVAC as the "Eves" from which nearly all current computers derive their architecture. Manchester University's machine became the prototype for the Ferranti Mark 1. The first Ferranti Mark 1 machine was delivered to the University in February 1951, and at least nine others were sold between 1951 and 1957.
    The first universal programmable computer in the Soviet Union was created by a team of scientists under the direction of Sergei Alekseyevich Lebedev from the Kiev Institute of Electrotechnology, Soviet Union (now in Ukraine). The computer MESM (МЭСМ, Small Electronic Calculating Machine) became operational in 1950. It had about 6,000 vacuum tubes and consumed 25 kW of power. It could perform approximately 3,000 operations per second. Another early machine was CSIRAC, an Australian design that ran its first test program in 1949. CSIRAC is the oldest computer still in existence and the first to have been used to play digital music.
    Commercial computers
    The first commercial computer was the Ferranti Mark 1, which was delivered to the University of Manchester in February 1951. It was based on the Manchester Mark 1. The main improvements over the Manchester Mark 1 were in the size of the primary storage (using random access Williams tubes), secondary storage (using a magnetic drum), a faster multiplier, and additional instructions. The basic cycle time was 1.2 milliseconds, and a multiplication could be completed in about 2.16 milliseconds. The multiplier used almost a quarter of the machine's 4,050 vacuum tubes (valves). A second machine was purchased by the University of Toronto, before the design was revised into the Mark 1 Star. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.
    In October 1947, the directors of J. Lyons & Company, a British catering company famous for its teashops but with strong interests in new office management techniques, decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951  and ran the world's first regular routine office computer job. On 17 November 1951, the J. Lyons company began weekly operation of a bakery valuations job on the LEO (Lyons Electronic Office). This was the first business application to go live on a stored program computer.
    In June 1951, the UNIVAC I (Universal Automatic Computer) was delivered to the U.S. Census Bureau. Remington Rand eventually sold 46 machines at more than $1 million each ($8.46 million as of 2011). UNIVAC was the first "mass produced" computer. It used 5,200 vacuum tubes and consumed 125 kW of power. Its primary storage was serial-access mercury delay lines capable of storing 1,000 words of 11 decimal digits plus sign (72-bit words). A key feature of the UNIVAC system was a newly invented type of metal magnetic tape, and a high-speed tape unit, for non-volatile storage. Magnetic media are still used in many computers.  In 1952, IBM publicly announced the IBM 701 Electronic Data Processing Machine, the first in its successful 700/7000 series and its first IBM mainframe computer. The IBM 704, introduced in 1954, used magnetic core memory, which became the standard for large machines. The first implemented high-level general purpose programming language, Fortran, was also being developed at IBM for the 704 during 1955 and 1956 and released in early 1957. (Konrad Zuse's 1945 design of the high-level language Plankalkül was not implemented at that time.) A volunteer user group, which exists to this day, was founded in 1955 to share their software and experiences with the IBM 701.

    IBM 650 front panel

    IBM introduced a smaller, more affordable computer in 1954 that proved very popular. The IBM 650 weighed over 900 kg, the attached power supply weighed around 1350 kg and both were held in separate cabinets of roughly 1.5 meters by 0.9 meters by 1.8 meters. It cost $500,000 ($4.09 million as of 2011) or could be leased for $3,500 a month ($30 thousand as of 2011). Its drum memory was originally 2,000 ten-digit words, later expanded to 4,000 words. Memory limitations such as this were to dominate programming for decades afterward. The program instructions were fetched from the spinning drum as the code ran. Efficient execution using drum memory was provided by a combination of hardware architecture: the instruction format included the address of the next instruction; and software: the Symbolic Optimal Assembly Program, SOAP, assigned instructions to the optimal addresses (to the extent possible by static analysis of the source program). Thus many instructions were, when needed, located in the next row of the drum to be read and additional wait time for drum rotation was not required.
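    To see why careful placement on the drum mattered, here is a minimal sketch of the idea in Python (the drum size, execution time and function name are invented for illustration; the real SOAP assembler was considerably more sophisticated). Each instruction is given the first free address that rotates under the read head just as the previous instruction finishes, so little or no time is wasted waiting for the drum:

        DRUM_WORDS = 50      # one drum track holds 50 instruction words (assumed)
        EXEC_TIME = 3        # word-times needed to execute an instruction (assumed)

        def place_program(num_instructions):
            """Give each instruction the first free address that will be under
            the read head when the previous instruction finishes executing."""
            free = set(range(DRUM_WORDS))
            addresses = [0]                  # put the first instruction at address 0
            free.discard(0)
            for _ in range(num_instructions - 1):
                ideal = (addresses[-1] + EXEC_TIME) % DRUM_WORDS
                while ideal not in free:     # address taken: wait one more word-time
                    ideal = (ideal + 1) % DRUM_WORDS
                free.discard(ideal)
                addresses.append(ideal)
            return addresses

        print(place_program(8))   # [0, 3, 6, 9, 12, 15, 18, 21] -- no rotational waits
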
    In 1955, Maurice Wilkes invented microprogramming,  which allows the base instruction set to be defined or extended by built-in programs (now called firmware or microcode).  It was widely used in the CPUs and floating-point units of mainframe and other computers, such as the Manchester Atlas  and the IBM 360 series.
    IBM introduced its first magnetic disk system, RAMAC (Random Access Method of Accounting and Control) in 1956. Using fifty 24-inch (610 mm) metal disks, with 100 tracks per side, it was able to store 5 megabytes of data at a cost of $10,000 per megabyte ($80 thousand as of 2011).

    Second generation: transistors

    The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Initially the only devices available were germanium point-contact transistors, which, although less reliable than the vacuum tubes they replaced, had the advantage of consuming far less power. The first transistorised computer was built at the University of Manchester and was operational by 1953; a second version was completed there in April 1955. The later machine used 200 transistors and 1,300 solid-state diodes and had a power consumption of 150 watts. However, it still required valves to generate the clock waveforms at 125 kHz and to read and write on the magnetic drum memory, whereas the Harwell CADET operated without any valves by using a lower clock frequency of 58 kHz, when it became operational in February 1955. Problems with the reliability of early batches of point-contact and alloyed-junction transistors meant that the machine's mean time between failures was about 90 minutes, but this improved once the more reliable bipolar junction transistors became available.

     A bipolar junction transistor
    Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. Transistors greatly reduced computers' size, initial cost, and operating cost. Typically, second-generation computers were composed of large numbers of printed circuit boards, such as the IBM Standard Modular System, each carrying one to four logic gates or flip-flops.
    A second generation computer, the IBM 1401, captured about one third of the world market. IBM installed more than ten thousand 1401s between 1960 and 1964.

    This RAMAC DASD is being restored at the Computer History Museum

    Transistorized electronics improved not only the CPU (Central Processing Unit), but also the peripheral devices. The IBM 350 RAMAC was introduced in 1956 and was the world's first disk drive. The second-generation disk data storage units were able to store tens of millions of letters and digits. Next to the fixed disk storage units, connected to the CPU via high-speed data transmission, were removable disk data storage units. A removable disk stack could easily be exchanged with another stack in a few seconds. Even though the removable disks' capacity was smaller than that of fixed disks, their interchangeability guaranteed a nearly unlimited quantity of data close at hand. Magnetic tape provided archival capability for this data, at a lower cost than disk.
    Many second-generation CPUs delegated peripheral device communications to a secondary processor. For example, while the communication processor controlled card reading and punching, the main CPU executed calculations and binary branch instructions. One databus would bear data between the main CPU and core memory at the CPU's fetch-execute cycle rate, and other databusses would typically serve the peripheral devices. On the PDP-1, the core memory's cycle time was 5 microseconds; consequently most arithmetic instructions took 10 microseconds (100,000 operations per second) because most operations took at least two memory cycles; one for the instruction, one for the operand data fetch.
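    The PDP-1 figures quoted above follow directly from the memory cycle time; a small worked example of the arithmetic (plain Python):

        # Instruction rate implied by the PDP-1 timings quoted above.
        cycle_time_us = 5            # core-memory cycle time, in microseconds
        cycles_per_instruction = 2   # one cycle for the instruction, one for the operand
        instruction_time_us = cycle_time_us * cycles_per_instruction   # 10 microseconds
        instructions_per_second = 1_000_000 / instruction_time_us      # 100,000 per second
        print(instruction_time_us, instructions_per_second)
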
    During the second generation remote terminal units (often in the form of teletype machines like a Friden Flexowriter) saw greatly increased use. Telephone connections provided sufficient speed for early remote terminals and allowed hundreds of kilometers separation between remote-terminals and the computing center. Eventually these stand-alone computer networks would be generalized into an interconnected network of networks—the Internet.

    Post-1960: third generation and beyond



    The explosion in the use of computers began with "third-generation" computers, making use of Jack St. Clair Kilby's and Robert Noyce's independent invention of the integrated circuit (or microchip), which in turn led to the invention of the microprocessor by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel. The Intel 8742, for example, is an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O on the same chip.
    During the 1960s there was considerable overlap between second and third generation technologies. IBM implemented its IBM Solid Logic Technology modules in hybrid circuits for the IBM System/360 in 1964. As late as 1975, Sperry Univac continued the manufacture of second-generation machines such as the UNIVAC 494. The Burroughs large systems such as the B5000 were stack machines, which allowed for simpler programming. These pushdown automata were also implemented in minicomputers and microprocessors later, which influenced programming language design. Minicomputers served as low-cost computer centers for industry, business and universities. It became possible to simulate analog circuits with the Simulation Program with Integrated Circuit Emphasis, or SPICE (1971), on minicomputers, one of the programs for electronic design automation (EDA). The microprocessor led to the development of the microcomputer: small, low-cost computers that could be owned by individuals and small businesses. Microcomputers, the first of which appeared in the 1970s, became ubiquitous in the 1980s and beyond.
    In April 1975, at the Hannover Fair, Olivetti presented the P6060, the world's first personal computer with a built-in floppy disk drive. It consisted of a central unit on two boards (code-named PUCE1 and PUCE2) built from TTL components, an 8-inch single or double floppy disk drive, a 32-character plasma display, an 80-column graphical thermal printer, 48 kilobytes of RAM, and the BASIC language, and it weighed 40 kilograms. It competed with a similar IBM product that used an external floppy disk drive.
    Steve Wozniak, co-founder of Apple Computer, is sometimes erroneously credited with developing the first mass-market home computers. However, his first computer, the Apple I, came out some time after the MOS Technology KIM-1 and the Altair 8800, and the first Apple computer with graphics and sound capabilities came out well after the Commodore PET. Microcomputer architectures, with features added from their larger brethren, now dominate most market segments.
    Systems as complicated as computers require very high reliability. ENIAC remained on, in continuous operation from 1947 to 1955, for eight years before being shut down. Although a vacuum tube might fail, it would be replaced without bringing down the system. By the simple strategy of never shutting down ENIAC, the failures were dramatically reduced. The vacuum-tube SAGE air-defense computers became remarkably reliable – installed in pairs, one off-line, tubes likely to fail did so when the computer was intentionally run at reduced power to find them. Hot-pluggable hard disks, like the hot-pluggable vacuum tubes of yesteryear, continue the tradition of repair during continuous operation. Semiconductor memories routinely have no errors when they operate, although operating systems like Unix have employed memory tests on start-up to detect failing hardware. Today, the requirement of reliable performance is made even more stringent when server farms are the delivery platform. Google has managed this by using fault-tolerant software to recover from hardware failures, and is even working on the concept of replacing entire server farms on-the-fly, during a service event.
    In the 21st century, multi-core CPUs became commercially available. Content-addressable memory (CAM) has become inexpensive enough to be used in networking, although no computer system has yet implemented hardware CAMs for use in programming languages. Currently, CAMs (or associative arrays) in software are programming-language-specific. Semiconductor memory cell arrays are very regular structures, and manufacturers prove their processes on them; this allows price reductions on memory products. During the 1980s, CMOS logic gates developed into devices that could be made as fast as other circuit types; computer power consumption could therefore be decreased dramatically. Unlike the continuous current draw of a gate based on other logic types, a CMOS gate only draws significant current during the 'transition' between logic states, except for leakage.
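    In software, the content-keyed lookup that a hardware CAM performs is what an associative array (a dictionary) provides. A minimal sketch in Python, loosely in the style of the forwarding tables used in networking (the addresses and port numbers below are invented for illustration):

        # A software associative array: entries are found by content, not by index,
        # which is the role a hardware CAM plays in, e.g., a switch's forwarding table.
        mac_to_port = {
            "00:1a:2b:3c:4d:5e": 3,
            "00:de:ad:be:ef:00": 7,
        }

        destination = "00:de:ad:be:ef:00"
        port = mac_to_port.get(destination)   # look up by matching content
        print(port)                           # 7
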
    This has allowed computing to become a commodity which is now ubiquitous, embedded in many forms, from greeting cards and telephones to satellites. Computing hardware and its software have even become a metaphor for the operation of the universe. Although DNA-based computing and quantum qubit computing are years or decades in the future, the infrastructure is being laid today, for example, with DNA origami on photolithography  and with quantum antennae for transferring information between ion traps. Fast digital circuits (including those based on Josephson junctions and rapid single flux quantum technology) are becoming more nearly realizable with the discovery of nanoscale superconductors.
    Fiber-optic and photonic devices, which already have been used to transport data over long distances, are now entering the data center, side by side with CPU and semiconductor memory components. This allows the separation of RAM from CPU by optical interconnects.
    An indication of the rapidity of development in this field can be seen in the history of von Neumann's seminal report: by the time anyone had time to write anything down, it was obsolete. After 1945, others read John von Neumann's First Draft of a Report on the EDVAC and immediately started implementing their own systems. To this day, the pace of development has continued, worldwide.






    Classes of Computers

    Computers can be classified, or typed, many ways. Some common classifications are summarized below.

     

    Classes by Size

    Microcomputers (Personal computers)

    Microcomputers are the most common type of computers in existence today, whether in a workplace, at school or on the desk at home. The term “microcomputer” was introduced with the advent of single chip microprocessors. The term “microcomputer” itself is now practically an anachronism.
    These computers include:
    • Desktop computers – A case and a display, put under and on a desk.
    • In-car computers (“carputers”) – Built into a car, for entertainment, navigation, etc.
    A separate class is that of mobile devices:
    • Laptops, notebook computers and Palmtop computers – Portable and all in one case. Varying sizes, but other than smartbooks expected to be “full” computers without limitations.
    • Tablet PC – Like laptops, but with only a touch-screen instead of a physical keyboard.
    • Smartphones, smartbooks and PDAs (personal digital assistants) – Small handheld computers with limited hardware.
    • Programmable calculators – Like small handhelds, but specialised in mathematical work.
    • Game consoles – Fixed computers specialized for entertainment purposes (computer games).
    • Handheld game consoles – The same as game consoles, but small and portable.

    Minicomputers (Midrange computers)

    A minicomputer (colloquially, mini) is a class of multi-user computers that lies in the middle range of the computing spectrum, between the largest multi-user systems (mainframe computers) and the smallest single-user systems (microcomputers or personal computers). The contemporary term for this class of system is midrange computer, such as the higher-end SPARC, POWER and Itanium-based systems from Sun Microsystems, IBM and Hewlett-Packard.

    Mainframe Computers

    The term mainframe computer was created to distinguish the traditional, large, institutional computer intended to service multiple users from the smaller, single user machines. These computers are capable of handling and processing very large amounts of data quickly. Mainframe computers are used in large institutions such as government, banks and large corporations. These institutions were early adopters of computer use, long before personal computers were available to individuals. "Mainframe" often refers to computers compatible with the computer architectures established in the 1960s. Thus, the origin of the architecture also affects the classification, not just processing power.
    Mainframes are measured in millions of instructions per second (MIPS). An example of an integer operation is moving data around in memory or I/O devices. A more useful industrial benchmark is transaction processing, as defined by the Transaction Processing Performance Council. Mainframes are built to be reliable for transaction processing as it is commonly understood in the business world: a commercial exchange of goods, services, or money. A typical transaction, as defined by the Transaction Processing Performance Council, would include updating a database system for such things as inventory control (goods), airline reservations (services), or banking (money). A transaction could refer to a set of operations including disk reads and writes, operating system calls, or some form of data transfer from one subsystem to another.

    Workstation

    A workstation is a high-end microcomputer designed for technical or scientific applications. Intended primarily to be used by one person at a time, workstations are commonly connected to a local area network and run multi-user operating systems. The term workstation has also been used to refer to a mainframe computer terminal or a PC connected to a network. Historically, workstations offered higher performance than personal computers, especially with respect to CPU, graphics, memory capacity and multitasking capability. They are optimized for the visualization and manipulation of different types of complex data such as 3D mechanical design, engineering simulation (e.g. computational fluid dynamics), animation and rendering of images, and mathematical plots. Consoles consist of a high-resolution display, a keyboard and a mouse at a minimum, but may also offer multiple displays, graphics tablets, 3D mice (devices for manipulating and navigating 3D objects and scenes), and so on. Workstations were the first segment of the computer market to present advanced accessories and collaboration tools.
    Presently, the workstation market is highly commoditized and is dominated by large PC vendors, such as Dell and HP, selling Microsoft Windows/Linux running on Intel Xeon/AMD Opteron. Alternative UNIX based platforms are provided by Apple Inc., Sun Microsystems, and SGI.

     

    Supercomputer

    A supercomputer is focused on performing tasks involving intense numerical calculations such as weather forecasting, fluid dynamics, nuclear simulations, theoretical astrophysics, and complex scientific computations. A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. The term supercomputer itself is rather fluid, and the speed of today's supercomputers tends to become typical of tomorrow's ordinary computer. Supercomputer processing speeds are measured in floating-point operations per second, or FLOPS. An example of a floating-point operation is the calculation of mathematical equations in real numbers. In terms of computational capability, memory size and speed, I/O technology, and topological issues such as bandwidth and latency, supercomputers are the most powerful. Supercomputers are very expensive and are not cost-effective just to perform batch or transaction processing; transaction processing is handled by less powerful computers such as server computers or mainframes.


    Classes by technology

    Digital Computer

    A digital computer is, in the broadest sense, a device that processes numerical information; more generally, it is any device that manipulates symbolic information according to specified computational procedures. The term digital computer—or simply, computer—embraces calculators, computer workstations, control computers (controllers) for applications such as domestic appliances and industrial processes, data-processing systems, microcomputers, microcontrollers, multiprocessors, parallel computers, personal computers, network servers, and supercomputers.
    A digital computer is an electronic computing machine that uses the binary digits (bits) 0 and 1 to represent all forms of information internally in digital form. Every computer has a set of instructions that define the basic functions it can perform. Sequences of these instructions constitute machine-language programs that can be stored in the computer and used to tailor it to an essentially unlimited number of specialized applications. Calculators are small computers specialized for mathematical computations. General-purpose computers range from pocket-sized personal digital assistants (notepad computers), to medium-sized desktop computers (personal computers and workstations), to large, powerful computers that are shared by many users via a computer network. The vast majority of digital computers now in use are inexpensive, special-purpose microcontrollers that are embedded, often invisibly, in such devices as toys, consumer electronic equipment, and automobiles.
    The main data-processing elements of a computer reside in a small number of electronic integrated circuits (ICs) that form a microprocessor or central processing unit (CPU). Electronic technology allows a basic instruction such as “add two numbers” to be executed many millions of times per second. Other electronic devices are used for program and data storage (memory circuits) and for communication with external devices and human users (input-output circuits). Nonelectronic (magnetic, optical, and mechanical) devices also appear in computers. They are used to construct input-output devices such as keyboards, monitors (video screens), secondary memories, printers, sensors, and mechanical actuators.
    Information is stored and processed by computers in fixed-sized units called words. Common word sizes are 8, 16, 32, and 64 bits. Four-bit words can be used to encode the first 16 integers. By increasing the word size, the number of different items that can be represented and their precision can be made as large as desired. A common word size in personal computers is 32 bits, which allows 2^32 = 4,294,967,296 distinct numbers to be represented.
    Computer words can represent many different forms of information, not just numbers. For example, 8-bit words called characters or bytes are used to encode text symbols (the 10 decimal digits, the 52 upper- and lowercase letters of the English alphabet, and punctuation marks). A widely used code of this type is ASCII (American Standard Code for Information Interchange). Visual information can be reduced to black and white dots (pixels) corresponding to 0's and 1's. Audio information can be digitized by mapping a small element of sound into a binary word; for example, a compact disk (CD) uses several million 16-bit words to store an audio recording. Logical quantities encountered in reasoning or decision making can be captured by associating 1 with true and 0 with false. Hence, most forms of information are readily reduced to a common, numberlike binary format suitable for processing by computer.
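    A small illustration of the two preceding paragraphs (a sketch using only standard Python): an n-bit word can represent 2^n distinct values, and the same bit patterns can be read as numbers, as ASCII characters, or as truth values:

        # Word size determines how many distinct values can be represented.
        for bits in (4, 8, 16, 32, 64):
            print(bits, "bits can represent", 2 ** bits, "distinct values")
        # 4 bits -> 16 values, ..., 32 bits -> 4,294,967,296 values

        # The same bits can encode text: each character becomes an 8-bit word.
        text = "A/L"
        codes = [ord(ch) for ch in text]            # ASCII code of each character
        print(codes)                                # [65, 47, 76]
        print([format(c, "08b") for c in codes])    # the characters as 8-bit patterns

        # Logical quantities: true is associated with 1 and false with 0.
        print(int(True), int(False))                # 1 0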

    Analog computer

    An analog computer is a form of computer that uses the continuously-changeable aspects of physical phenomena such as electrical,  mechanical, or hydraulic quantities to model the problem being solved. In contrast, digital computers represent varying quantities incrementally, as their numerical values change.
    Mechanical analog computers were very important in gun fire control in World War II and the Korean War; they were made in significant numbers. In particular, development of transistors made electronic analog computers practical, and before digital computers had developed sufficiently, they were commonly used in science and industry.
    Analog computers can have a very wide range of complexity. Slide rules and nomographs are the simplest, while naval gunfire control computers and large hybrid digital/analog computers were among the most complicated.
    Setting up an analog computer required scale factors to be chosen, along with initial conditions—that is, starting values. Another essential was creating the required network of interconnections between computing elements. Sometimes it was necessary to re-think the structure of the problem so that the computer would function satisfactorily. No variables could be allowed to exceed the computer's limits, and differentiation was to be avoided, typically by rearranging the "network" of interconnects, using integrators in a different sense.
    Running an electronic analog computer, assuming a satisfactory setup, started with the computer held with some variables fixed at their initial values. Moving a switch released the holds and permitted the problem to run. In some instances, the computer could, after a certain running time interval, repeatedly return to the initial-conditions state to reset the problem, and run it again.

     

    Hybrid computer

    Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical operations, while the analog component normally serves as a solver of differential equations.
    In general, analog computers are extraordinarily fast, since they can solve most complex equations at the rate at which a signal traverses the circuit, which is generally an appreciable fraction of the speed of light. On the other hand, the precision of analog computers is not good; they are limited to three, or at most, four digits of precision.
    Digital computers can be built to take the solution of equations to almost unlimited precision, but quite slowly compared to analog computers. Generally, complex equations are approximated using iterative numerical methods which take huge numbers of iterations, depending on how good the initial "guess" at the final value is and how much precision is desired. (This initial guess is known as the numerical seed for the iterative process.) For many real-time operations, the speed of such digital calculations is too slow to be of much use (e.g., for very high frequency phased array radars or for weather calculations), but the precision of an analog computer is insufficient.
    Hybrid computers can be used to obtain a very good but relatively imprecise 'seed' value, using an analog computer front-end, which is then fed into a digital computer iterative process to achieve the final desired degree of precision. With a three or four digit, highly accurate numerical seed, the total digital computation time necessary to reach the desired precision is dramatically reduced, since many fewer iterations are required.
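    A minimal sketch of this division of labour in Python (the "analog" front-end is faked here with a hard-coded three-digit value, and Newton's method simply stands in for whatever iterative process the digital stage would run): starting from a good coarse seed, the digital refinement needs only a few iterations, whereas a poor starting guess needs many more.

        from math import sqrt

        def newton_sqrt(target, seed, tolerance=1e-12):
            """Refine an estimate of sqrt(target) by Newton's method,
            returning the result and the number of iterations used."""
            x, steps = seed, 0
            while abs(x * x - target) > tolerance:
                x = 0.5 * (x + target / x)
                steps += 1
            return x, steps

        target = 2.0
        analog_seed = 1.41     # coarse three-digit value, standing in for the analog front-end
        poor_seed = 1000.0     # a naive starting guess with no analog help

        print(newton_sqrt(target, analog_seed))   # converges in a handful of iterations
        print(newton_sqrt(target, poor_seed))     # needs many more iterations
        print(sqrt(target))                       # reference value
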
    Consider that the nervous system in animals is a form of hybrid computer. Signals pass across the synapses from one nerve cell to the next as discrete (digital) packets of chemicals, which are then summed within the nerve cell in an analog fashion by building an electro-chemical potential until its threshold is reached, whereupon it discharges and sends out a series of digital packets to the next nerve cell. The advantages are at least threefold: noise within the system is minimized (and tends not to be additive), no common grounding system is required, and there is minimal degradation of the signal even if there are substantial differences in activity of the cells along a path (only the signal delays tend to vary). The individual nerve cells are analogous to analog computers; the synapses are analogous to digital computers.
    Note that hybrid computers should be distinguished from hybrid systems. The latter may be no more than a digital computer equipped with an analog-to-digital converter at the input and/or a digital-to-analog converter at the output, to convert analog signals for ordinary digital signal processing, and conversely, e.g., for driving physical control systems, such as servomechanisms.

    Classes by function

    Servers

    Server usually refers to a computer that is dedicated to providing a service. For example, a computer dedicated to a database may be called a "database server". "File servers" manage a large collection of computer files. "Web servers" process web pages and web applications. Many smaller servers are actually personal computers that have been dedicated to providing services for other computers.

     Workstations


    Workstations are computers that are intended to serve one user and may contain special hardware enhancements not found on a personal computer.

    Information appliances

    Information appliances are computers specially designed to perform a specific user-friendly function - such as playing music, photography, or editing text. The term is most commonly applied to mobile devices, though there are also portable and desktop devices of this class.

    Embedded computers

    Embedded computers are computers that are a part of a machine or device. Embedded computers generally execute a program that is stored in non-volatile memory and is only intended to operate a specific machine or device. Embedded computers are very common. Embedded computers are typically required to operate continuously without being reset or rebooted, and once employed in their task the software usually cannot be modified. An automobile may contain a number of embedded computers; however, a washing machine or a DVD player would typically contain only one. The central processing units (CPUs) used in embedded computers are often sufficient only for the computational requirements of the specific application and may be slower and cheaper than CPUs found in a personal computer.





    Computer System





    Input device

    An input device is any peripheral (piece of computer hardware equipment) used to provide data and control signals to an information processing system (such as a computer). Input and output devices make up the hardware interface between a computer and devices such as a scanner or a 6DOF controller.
    Many input devices can be classified according to:
    • modality of input (e.g. mechanical motion, audio, visual, etc.)
    • whether the input is discrete (e.g. key presses) or continuous (e.g. a mouse's position, which, though digitized into a discrete quantity, changes quickly enough to be considered continuous)
    • the number of degrees of freedom involved (e.g. two-dimensional traditional mice, or three-dimensional navigators designed for CAD applications)
    Pointing devices, which are input devices used to specify a position in space, can further be classified according to:
    • Whether the input is direct or indirect. With direct input, the input space coincides with the display space, i.e. pointing is done in the space where visual feedback or the cursor appears. Touchscreens and light pens involve direct input. Examples involving indirect input include the mouse and trackball.
    • Whether the positional information is absolute (e.g. on a touch screen) or relative (e.g. with a mouse that can be lifted and repositioned)
    Note that direct input is almost necessarily absolute, but indirect input may be either absolute or relative. For example, digitizing graphics tablets that do not have an embedded screen involve indirect input and sense absolute positions; they are often run in an absolute input mode, but they may also be set up to simulate a relative input mode in which the stylus or puck can be lifted and repositioned.
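    A minimal sketch of the absolute/relative distinction (hypothetical event handlers written in Python, not any real driver interface): an absolute device reports a position that replaces the cursor position outright, while a relative device reports deltas that are accumulated into it.

        # Hypothetical cursor model illustrating absolute vs. relative input.
        cursor = [100, 100]    # current cursor position in screen coordinates

        def on_absolute_event(x, y):
            """e.g. a touch screen or a tablet in absolute mode: use the position as-is."""
            cursor[0], cursor[1] = x, y

        def on_relative_event(dx, dy):
            """e.g. a mouse: add the reported movement to the current position."""
            cursor[0] += dx
            cursor[1] += dy

        on_absolute_event(400, 300)   # jump straight to (400, 300)
        on_relative_event(-10, 5)     # move by a delta, ending at (390, 305)
        print(cursor)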

    Keyboards

    A 'keyboard' is a human interface device which is represented as a layout of buttons. Each button, or key, can be used to either input a linguistic character to a computer, or to call upon a particular function of the computer. Traditional keyboards use spring-based buttons, though newer variations employ virtual keys, or even projected keyboards.
    Examples of types of keyboards include:
    • computer keyboard
    • keyer
    • chorded keyboard
    • LPEK

    Pointing devices

    A pointing device is any human interface device that allows a user to input spatial data to a computer. In the case of mice and touch screens, this is usually achieved by detecting movement across a physical surface. Analog devices, such as 3D mice, joysticks, or pointing sticks, function by reporting their angle of deflection. Movements of the pointing device are echoed on the screen by movements of the cursor, creating a simple, intuitive way to navigate a computer's GUI.

     

    High-degree of freedom input devices


    Some devices allow many continuous degrees of freedom as input. These can be used as pointing devices, but are generally used in ways that don't involve pointing to a location in space, such as the control of a camera angle while in 3D applications. These kinds of devices are typically used in CAVEs, where input that registers 6DOF is required.

    Composite devices

    Input devices, such as buttons and joysticks, can be combined on a single physical device that could be thought of as a composite device. Many gaming devices have controllers like this. Technically mice are composite devices, as they both track movement and provide buttons for clicking, but composite devices are generally considered to have more than two different forms of input.
    • Game controller
    • Gamepad (or joypad)
    • Paddle (game controller)
    • Wii Remote

    Imaging and Video input devices

    Video input devices are used to digitize images or video from the outside world into the computer. The information can be stored in a multitude of formats depending on the user's requirement.
    • digital camera
    • Webcam
    • Image scanner
    • Fingerprint scanner
    • Barcode reader
    • 3D scanner
    • Laser rangefinder
    Medical Imaging
    • Computed tomography
    • Magnetic resonance imaging
    • Positron emission tomography
    • Medical ultrasonography

    Audio input devices

    As with video devices, audio input devices are used to either capture or create sound. In some cases, an audio output device can be used as an input device in order to capture produced sound.
    • Microphone
    • MIDI keyboard or other digital musical instrument


      Output device

      An output device is any piece of computer hardware equipment used to communicate the results of data processing carried out by an information processing system (such as a computer) to the outside world.
      In computing, input/output, or I/O, refers to the communication between an information processing system (such as a computer), and the outside world. Inputs are the signals or data sent to the system, and outputs are the signals or data sent by the system to the outside.
      Examples of output devices:
    • Speakers
    • Headphones
    • Screen (Monitor)
    • Printer

    Computer speaker

    Computer speakers, or multimedia speakers, are speakers external to a computer that disable the lower-fidelity built-in speaker. They often have a low-power internal amplifier. The standard audio connection is a 3.5 mm (approximately 1/8 inch) stereo jack plug, often colour-coded lime green (following the PC 99 standard) for computer sound cards. A few speakers instead use an RCA connector for input: a plug and socket for a two-wire (signal and ground) coaxial cable that is widely used to connect analog audio and video components. Also called a "phono connector", rows of RCA sockets are found on the backs of stereo amplifiers and numerous A/V products; the prong is 1/8 inch thick by 5/16 inch long. There are also USB speakers, which are powered from the 5 volts at 500 milliamps provided by the USB port, allowing about 2.5 watts of output power.
    Computer speakers range widely in quality and in price. The computer speakers typically packaged with computer systems are small, plastic, and have mediocre sound quality. Some computer speakers have equalization features such as bass and treble controls.
    The internal amplifiers require an external power source, usually an AC adapter. More sophisticated computer speakers can have a 'subwoofer' unit, to enhance bass output, and these units usually include the power amplifiers both for the bass speaker, and the small 'satellite' speakers.
    Some computer displays have rather basic speakers built-in. Laptops come with integrated speakers. Restricted space available in laptops means these speakers usually produce low-quality sound.
    For some users, a lead connecting the computer's sound output to an existing stereo system is practical. This normally yields much better results than small low-cost computer speakers. Computer speakers can also serve as an economy amplifier for MP3 players for those who do not wish to use headphones, although some models of computer speakers have headphone jacks of their own.

    Common features

    Features vary by manufacturer, but may include the following:
    • An LED power indicator.
    • A 3.5 mm headphone jack.
    • Controls for volume, and sometimes bass and treble.
    • A remote volume control.

    Cost cutting measures and technical compatibility

    In order to cut the cost of computer speakers (unless designed for premium sound performance), speakers designed for computers often lack an AM/FM tuner and other built-in sources of audio. However, the male 3.5 mm plug can be jury rigged with "female 3.5 mm TRS to female stereo RCA" adapters to work with stereo system components such as CD/DVD-Audio/SACD players (although computers have CD-ROM drives of their own with audio CD support), audio cassette players, turntables, etc.
    Despite being designed for computers, computer speakers are electrically compatible with the aforementioned stereo components. There are even models of computer speakers that have stereo RCA in jacks.


    Headphones

    Headphones are a pair of small loudspeakers, or less commonly a single speaker, with a way of holding them close to a user's ears and a means of connecting them to a signal source such as an audio amplifier, radio, CD player or portable media player. They are also known as stereophones, headsets or, colloquially cans. The in-ear versions are known as earphones or earbuds. In the context of telecommunication, the term headset is used to describe a combination of headphone and microphone used for two-way communication, for example with a telephone.

    Types of headphones

    The particular needs of the listener determine the choice of headphone. The need for portability indicates smaller, lighter headphones but can mean a compromise in fidelity. Headphones used as part of a home hi-fi do not have the same design constraints and can be larger and heavier. Generally, headphone form factors can be divided into four separate categories: circumaural, supra-aural, earbud, and in-ear.

     

    Circumaural




    Circumaural headphones have large pads that surround the outer ear.

    Circumaural headphones (sometimes called full size headphones) have circular or ellipsoid earpads that encompass the ears. Because these headphones completely surround the ear, circumaural headphones can be designed to fully seal against the head to attenuate any intrusive external noise. Because of their size, circumaural headphones can be heavy and there are some sets which weigh over 500 grams (1 lb). Good headband and earpad design is required to reduce discomfort resulting from weight.

     

     

     




    Supra-aural

     A pair of supra-aural headphones.

    Supra-aural headphones have pads that sit on top of the ears, rather than around them. They were commonly bundled with personal stereos during the 1980s. This type of headphone generally tends to be smaller and more lightweight than circumaural headphones, resulting in less attenuation of outside noise.

     

     

     



    In-ear headphones

     

                                                                         Earbuds / earphones
    Earbuds or earphones are headphones of a much smaller size that are placed directly outside of the ear canal, but without fully enveloping it. They are generally inexpensive and are favored for their portability and convenience. Due to their inability to provide any isolation they are often used at higher volumes in order to drown out noise from the user's surroundings, which increases the risk of hearing-loss. During the 1990s and 2000s, earbuds became a common type bundled with personal music devices.

    In-ear monitors


     In-ear monitors extend into the ear canal, providing isolation from outside noise.
    In-ear monitors (also known as IEMs or canalphones) are earphones that are inserted directly into the ear canal. Canalphones offer portability similar to earbuds, and also act as earplugs to block out environmental noise. There are two main types of IEMs: universal and custom. Universal canalphones provide one or more stock sleeve size(s) to fit various ear canals, which are commonly made out of silicone rubber, elastomer, or foam, for noise isolation. Custom canalphones are fitted to the ears of each individual. Castings of the ear canals are made, and the manufacturer uses the castings to create custom-molded silicone rubber or elastomer plugs that provide added comfort and noise isolation. Because of the individualized labor involved, custom IEMs are more expensive than universal IEMs, and their resale value is very low as they are unlikely to fit other people.

    Headset

     A typical example of a headset used for voice chats.

    A headset is a headphone combined with a microphone. Headsets provide the equivalent functionality of a telephone handset with hands-free operation. The most common uses for headsets are in console or PC gaming, Call centres and other telephone-intensive jobs and also for personal use at the computer to facilitate comfortable simultaneous conversation and typing. Headsets are made with either a single-earpiece (mono) or a double-earpiece (mono to both ears or stereo). The microphone arm of headsets is either an external microphone type where the microphone is held in front of the user's mouth, or a voicetube type where the microphone is housed in the earpiece and speech reaches it by means of a hollow tube.

    Telephone headsets

    Telephone headsets connect to a fixed-line telephone system. A telephone headset functions by replacing the handset of a telephone. All telephone headsets come with a standard 4P4C connector, commonly called an RJ-9 connector.
    For older models of telephones, the headset microphone impedance is different from that of the original handset, requiring a telephone amplifier for the telephone headset. A telephone amplifier provides basic pin-alignment similar to a telephone headset adaptor, but it also offers sound amplification for the microphone as well as the loudspeakers. Most models of telephone amplifiers offer volume control for loudspeaker as well as microphone, mute function and headset/handset switching. Telephone amplifiers are powered by batteries or AC adaptors.

    Computer monitor

    A monitor or display (sometimes called a visual display unit) is an electronic visual display for computers. The monitor comprises the display device, circuitry, and an enclosure. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) thin panel, while older monitors use a cathode ray tube about as deep as the screen size. Originally, computer monitors were used for data processing and television receivers for entertainment; increasingly, computers are being used both for data processing and entertainment, and TVs implement some typical computer functionality. Displays exclusively for data use tend to have an aspect ratio of 4:3; those used also (or solely) for entertainment are usually 16:9 widescreen. Sometimes a compromise is used, e.g. 16:10.

    Screen size


      For any rectangular section on a round tube, the diagonal measurement is also the diameter of the tube

    The size of an approximately rectangular display is usually given as the distance between two opposite screen corners, that is, the diagonal of the rectangle. One problem with this method is that it does not take into account the display aspect ratio, so that, for example, a 16:9 21 in (53 cm) widescreen display is much shorter, and has less area, than a 21 in (53 cm) 4:3 screen. The 4:3 screen has dimensions of 16.8 × 12.6 in (43 × 32 cm) and an area of 211 sq in (1,360 cm2), while the widescreen is 18.3 × 10.3 in (46 × 26 cm), 188 sq in (1,210 cm2). For many purposes the height of the display is the main parameter; a 16:9 display needs a diagonal 22% larger than a 4:3 display for the same height.
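    These figures follow from simple right-triangle arithmetic; a short sketch (plain Python, reproducing the numbers quoted above) converts a diagonal and an aspect ratio into width, height and area:

        from math import hypot

        def screen_dimensions(diagonal, aspect_w, aspect_h):
            """Return (width, height, area) for a screen of the given diagonal
            and aspect ratio, in the same units as the diagonal."""
            unit = diagonal / hypot(aspect_w, aspect_h)
            width, height = aspect_w * unit, aspect_h * unit
            return width, height, width * height

        print(screen_dimensions(21, 4, 3))    # about 16.8 x 12.6 in, ~211 sq in
        print(screen_dimensions(21, 16, 9))   # about 18.3 x 10.3 in, ~188 sq in
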
    This method of measurement is inherited from the method used for the first generation of CRT television, when picture tubes with circular faces were in common use. Being circular, only their diameter was needed to describe their size. Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangle was equivalent to the diameter of the tube's face. This method continued even when cathode ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size, and was not confusing when the aspect ratio was universally 4:3.
    A problematic practice was the use of the size of a monitor's imaging element, rather than the size of its viewable image, when describing its size in publicity and advertising materials. On CRT displays a substantial portion of the CRT's screen is concealed behind the case's bezel or shroud in order to hide areas outside the monitor's "safe area" due to overscan. These practices were seen as deceptive, and widespread consumer objection and lawsuits eventually forced most manufacturers to measure and advertise the viewable size instead.

    Performance measurements

    The performance of a monitor is measured by the following parameters:
    • Luminance is measured in candelas per square meter (cd/m², also called a nit).
    • Viewable image size is measured diagonally. For CRTs, the viewable size is typically 1 in (25 mm) smaller than the tube itself.
    • Aspect ratio is the ratio of the horizontal length to the vertical length. 4:3 is the standard aspect ratio, for example, so that a screen with a width of 1024 pixels will have a height of 768 pixels. If a widescreen display has an aspect ratio of 16:9, a display that is 1024 pixels wide will have a height of 576 pixels.
    • Display resolution is the number of distinct pixels in each dimension that can be displayed. Maximum resolution is limited by dot pitch.
    • Dot pitch is the distance between subpixels of the same color in millimeters. In general, the smaller the dot pitch, the sharper the picture will appear.
    • Refresh rate is the number of times in a second that a display is illuminated. Maximum refresh rate is limited by response time.
    • Response time is the time a pixel in a monitor takes to go from active (black) to inactive (white) and back to active (black) again, measured in milliseconds. Lower numbers mean faster transitions and therefore fewer visible image artifacts.
    • Contrast ratio is the ratio of the luminosity of the brightest color (white) to that of the darkest color (black) that the monitor is capable of producing.
    • Power consumption is measured in watts.
    • Viewing angle is the maximum angle at which images on the monitor can be viewed, without excessive degradation to the image. It is measured in degrees horizontally and vertically.

     The area of displays with identical diagonal measurements can vary substantially

    Comparison

    CRT

    Pros:
    • High dynamic range (up to around 15,000:1), excellent color, wide gamut and low black level. The color range of CRTs is unmatched by any display type except OLED.
    • Can display natively in almost any resolution and refresh rate
    • No input lag
    • Sub-millisecond response times
    • Near zero color, saturation, contrast or brightness distortion. Excellent viewing angle.
    • Usually much cheaper than LCD or Plasma screens.
    • Allows the use of light guns/pens
    Cons:
    • Large size and weight, especially for bigger screens (a 20-inch unit weighs about 50 lb (23 kg))
    • High power consumption
    • Generates a considerable amount of heat when running
    • Geometric distortion caused by variable beam travel distances
    • Can suffer screen burn-in
    • Produces noticeable flicker at low refresh rates
    • Normally only produced in 4:3 aspect ratio (though some widescreen ones, notably Sony's FW900, do exist)
    • Hazardous to repair/service
    • Effective vertical resolution limited to 1024 scan lines.
    • Color displays cannot be made in sizes smaller than 7 inches (5 inches for monochrome). Maximum size is around 24 inches (for computer monitors; televisions run up to 40 inches).

    LCD

    Pros:
    • Very compact and light
    • Low power consumption
    • No geometric distortion
    • Little or no flicker depending on backlight technology
    • Not affected by screen burn-in
    • No high voltage or other hazards present during repair/service
    • More reliable than CRTs
    • Can be made in almost any size or shape
    • No theoretical resolution limit
    Cons:
    • Limited viewing angle, causing color, saturation, contrast and brightness to vary, even within the intended viewing angle, by variations in posture.
    • Bleeding and uneven backlighting in some monitors, causing brightness distortion, especially toward the edges.
    • Slow response times, which cause smearing and ghosting artifacts. However, this is mainly a problem with passive-matrix displays. Current generation active-matrix LCDs have response times of 6 ms for TFT panels and 8 ms for S-IPS.
    • Only one native resolution. Displaying other resolutions either requires a video scaler, lowering perceptual quality, or display at 1:1 pixel mapping, in which images will either be physically too large or will not fill the whole screen.
    • Fixed bit depth, many cheaper LCDs are only able to display 262,000 colors. 8-bit S-IPS panels can display 16 million colors and have significantly better black level, but are expensive and have slower response time
    • Input lag
    • Dead pixels may occur either during manufacturing or through use.
    • In a constant on situation, thermalization may occur, which is when only part of the screen has overheated and therefore looks discolored compared to the rest of the screen.
    • Not all LCD displays are designed to allow easy replacement of the backlight
    • Cannot be used with light guns/pens

     Plasma

    Pros:
    • High contrast ratios (10,000:1 or greater), excellent color, and low black level.
    • Virtually no response time
    • Near zero color, saturation, contrast or brightness distortion. Excellent viewing angle.
    • No geometric distortion.
    • Softer and less blocky-looking picture than LCDs
    • Highly scalable, with less weight gain per increase in size (from less than 30 in (760 mm) wide to the world's largest at 150 in (3,800 mm)).
    Cons:
    • Large pixel pitch, meaning either low resolution or a large screen. As such, color plasma displays are only produced in sizes over 32 inches.
    • Image flicker due to being phosphor-based
    • Heavy weight
    • Glass screen can induce glare and reflections
    • High operating temperature and power consumption
    • Only has one native resolution. Displaying other resolutions requires a video scaler, which degrades image quality at lower resolutions.
    • Fixed bit depth. Plasma cells can only be on or off, resulting in a more limited color range than LCDs or CRTs.
    • Can suffer image burn-in. This was a severe problem on early plasma displays, but much less on newer ones
    • Cannot be used with light guns/pens
    • Dead pixels are possible during manufacturing

    Problems

    Phosphor burn-in

    Phosphor burn-in is localized aging of the phosphor layer of a CRT screen where it has displayed a static image for long periods of time. This results in a faint permanent image on the screen, even when turned off. In severe cases, it can even be possible to read some of the text, though this only occurs where the displayed text remained the same for years.
    Burn-in is most commonly seen in the following applications:
    • Point-of-service applications
    • Arcade games
    • Security monitors
    Screensavers were developed as a means to avoid burn-in, which was a widespread problem on IBM Personal Computer monochrome monitors in the 1980s. Monochrome displays are generally more vulnerable to burn-in because the phosphor is directly exposed to the electron beam while in colour displays, the shadow mask provides some protection. Although still found on newer computers, screen savers are not necessary on LCD monitors.
    Phosphor burn-in can be "fixed" by running a CRT with the brightness at 100% for several hours, but this merely hides the damage by burning all the phosphor evenly. CRT rebuilders can repair monochrome displays by cutting the front of the picture tube off, scraping out the damaged phosphor, replacing it, and resealing the tube. Colour displays can theoretically be repaired, but it is a difficult, expensive process and is normally only done on professional broadcasting monitors (which can cost up to $10,000).

    Plasma burn-in

    Burn-in re-emerged as an issue with early plasma displays, which are more vulnerable to this than CRTs. Screen savers with moving images may be used with these to minimize localized burn. Periodic change of the color scheme in use also helps.

     Glare

    Glare is a problem caused by the relationship between lighting and the screen, or by using monitors in bright sunlight. Matte-finish LCDs and flat-screen CRTs are less prone to reflected glare than conventional curved CRTs or glossy LCDs; aperture grille CRTs, which are curved on one axis only, are also less prone to it than CRTs curved on both axes.
    If the problem persists despite moving the monitor or adjusting the lighting, a filter using a mesh of very fine black wires may be placed on the screen to reduce glare and improve contrast. These filters were popular in the late 1980s, though they also reduce light output.
    Such a filter only works against reflected glare; direct glare (such as sunlight) will completely wash out most monitors' internal lighting and can only be dealt with by using a hood or a transreflective LCD.

    Colour misregistration

    With the exception of correctly aligned video projectors and stacked LED displays, most display technologies, especially LCD, have an inherent misregistration of the color channels; that is, the centers of the red, green, and blue dots do not line up perfectly. Sub-pixel rendering depends on this misalignment; technologies making use of it include the Apple II (1977), and more recently Microsoft (ClearType, 1998) and XFree86 (X Rendering Extension).

    Incomplete spectrum

    RGB displays produce most of the visible colour spectrum, but not all. This can be a problem where good colour matching to non-RGB images is needed. This issue is common to all monitor technologies that use the RGB model. Recently, Sharp introduced a four-colour TV (red, green, blue, and yellow) to improve on this.

    Display interfaces

    Computer terminals

    Early CRT-based VDUs (visual display units) without graphics capabilities, such as the DEC VT05, gained the label "glass teletypes" because of their functional similarity to their electromechanical predecessors.
    Some historic computers had no screen display, using a teletype, modified electric typewriter, or printer instead.

    Composite signal

    Early home computers such as the Apple II and the Commodore 64 used a composite signal output to drive a TV or color composite monitor (a TV with no tuner). This resulted in degraded resolution due to compromises in the broadcast TV standards used. This method is still used with video game consoles. The Commodore monitor had S-Video input to improve resolution, but this was not common on televisions until the advent of HDTV.

    Digital displays

    Early digital monitors are sometimes known as TTLs because the voltages on the red, green, and blue inputs are compatible with TTL logic chips. Later digital monitors support LVDS, or TMDS protocols.

    TTL monitors

     IBM PC with green monochrome display.

    Monitors used with the MDA, Hercules, CGA, and EGA graphics adapters of early IBM PCs (personal computers) and clones were controlled via TTL logic. Such monitors can usually be identified by a male DE-9 (often incorrectly called DB-9) connector on the video cable. The disadvantage of TTL monitors was the limited number of colors available, due to the low number of digital bits used for video signaling.
    Modern monochrome monitors use the same 15-pin SVGA connector as standard color monitors. They are capable of displaying 32-bit grayscale at 1024x768 resolution, making them able to interface with modern computers.
    TTL monochrome monitors only made use of five of the nine pins. One pin was used as a ground, and two pins were used for horizontal/vertical synchronization. The electron gun was controlled by two separate digital signals, a video bit and an intensity bit, to control the brightness of the drawn pixels. Only four shades were possible: black, dim, medium, or bright.
    CGA monitors used four digital signals to control the three electron guns used in color CRTs, in a signaling method known as RGBI, or Red Green and Blue, plus Intensity. Each of the three RGB colors can be switched on or off independently. The intensity bit increases the brightness of all guns that are switched on, or if no colors are switched on the intensity bit will switch on all guns at a very low brightness to produce a dark grey. A CGA monitor is only capable of rendering 16 colors. The CGA monitor was not exclusively used by PC based hardware. The Commodore 128 could also utilize CGA monitors. Many CGA monitors were capable of displaying composite video via a separate jack.
    EGA monitors used six digital signals to control the three electron guns in a signaling method known as RrGgBb. Unlike CGA, each gun is allocated its own intensity bit. This allowed each of the three primary colors to have four different states (off, soft, medium, and bright) resulting in 64 colors.
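    The colour counts quoted above follow directly from counting the digital signal combinations. The short Python sketch below is illustrative only; it counts raw combinations and ignores CGA's special handling of the "intensity only" dark-grey case.

        # Counting colours for the RGBI and RrGgBb signalling schemes described above.
        from itertools import product

        # CGA-style RGBI: R, G, B each on/off, plus one shared intensity bit.
        rgbi = set(product((0, 1), repeat=4))
        print("RGBI combinations  :", len(rgbi))      # 2**4 = 16 colours

        # EGA-style RrGgBb: each primary has its own 2-bit level
        # (off, soft, medium, bright).
        rrggbb = set(product(range(4), repeat=3))
        print("RrGgBb combinations:", len(rrggbb))    # 4**3 = 64 colours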
    Although not supported in the original IBM specification, many vendors of clone graphics adapters have implemented backwards monitor compatibility and auto detection. For example, EGA cards produced by Paradise could operate as an MDA, or CGA adapter if a monochrome or CGA monitor was used in place of an EGA monitor. Many CGA cards were also capable of operating as MDA or Hercules card if a monochrome monitor was used.

     

    Single color screens

    Green and amber phosphors were used on most monochrome displays in the 1970s and 1980s. White was uncommon because it was more expensive to manufacture, although Apple used it on the Lisa and early Macintoshes.

    Modern technology

     

    Analog monitors

    Most modern computer displays can show the various colors of the RGB color space by changing red, green, and blue analog video signals in continuously variable intensities. These are almost exclusively progressive scan. Although televisions used an interlaced picture, interlacing was too flickery for computer use. In the late 1980s and early 1990s, some VGA-compatible video cards in PCs used interlacing to achieve higher resolution, but the advent of SVGA quickly put an end to them. While many early plasma and liquid crystal displays had exclusively analog connections, all signals in such monitors pass through a completely digital section prior to display.
    While many similar connectors (13W3, BNC, etc.) were used on other platforms, the IBM PC and compatible systems standardized on the VGA connector in 1987.
    CRTs remained the standard for computer monitors through the 1990s. The first standalone LCD displays appeared in the early 2000s and over the next few years, they gradually displaced CRTs for most applications. First-generation LCD monitors were only produced in 4:3 aspect ratios, but current models are generally 16:9. The older 4:3 monitors have been largely relegated to point-of-service and some other applications where widescreen is not required.

    Digital and analog combination

    The first popular external digital monitor connectors, such as DVI-I and the various breakout connectors based on it, included both analog signals compatible with VGA and digital signals compatible with new flat-screen displays in the same connector. Older low-end LCD monitors had only VGA inputs, while higher-end monitors gained DVI once it became available; LCD monitors without a digital input are now uncommon.

    Digital monitors

    Monitors are being made which have only a digital video interface. Some digital display standards, such as HDMI and DisplayPort, also specify integrated audio and data connections. Many of these standards enforce DRM, a system intended to deter copying of entertainment content.

    Configuration and usage

    Multiple monitors

    More than one monitor can be attached to the same device. Each display can operate in two basic configurations:
    • The simpler of the two is mirroring (sometimes called cloning), in which at least two displays show the same image. It is commonly used for presentations. Hardware with only one video output can be tricked into doing this with an external splitter device, commonly built into many video projectors as a pass-through connection.
    • The more sophisticated of the two, extension, allows each monitor to display a different image, so as to form a contiguous area of arbitrary shape. This requires software support and extra hardware, and may be locked out on "low end" products by crippleware.
    • Primitive software that is incapable of recognizing multiple displays requires spanning, in which a very large virtual display is created and then split into multiple video outputs for separate monitors. Hardware with only one video output can be made to do this with an expensive external splitter device; this is most often used for very large composite displays made from many smaller monitors placed edge to edge. (A small geometry sketch of these three configurations follows this list.)
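    The following Python sketch, using two hypothetical monitor resolutions that are not part of the original text, illustrates the desktop geometry implied by the three configurations above.

        # Desktop geometry for mirroring, extension, and spanning (illustrative only).
        monitors = [(1920, 1080), (1280, 1024)]   # two hypothetical displays

        # Mirroring: every display shows the same image, so the usable desktop
        # is limited to the smaller of the two.
        mirror = (min(w for w, _ in monitors), min(h for _, h in monitors))

        # Extension: each monitor shows a different part of one contiguous
        # desktop; here they are simply placed side by side.
        extend = (sum(w for w, _ in monitors), max(h for _, h in monitors))

        # Spanning: one large virtual display is created first and then split
        # across the outputs; side by side it has the same size as extension.
        span = extend

        print("mirroring:", mirror)
        print("extension:", extend)
        print("spanning :", span)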

    Multiple video sources

    Multiple devices can be connected to the same monitor using a video switch. In the case of computers, this usually takes the form of a keyboard-video-mouse (KVM) switch, which is designed to switch all of the user interface devices for a workstation between different computers at once.

    Virtual displays


    Screenshot of workspaces laid out by Compiz

    Much software and video hardware supports the ability to create additional, virtual pieces of desktop, commonly known as workspaces. Spaces is Apple's implementation of virtual displays.

    Additional features

    Power saving

    Most modern monitors will switch to a power-saving mode if no video-input signal is received. This allows modern operating systems to turn off a monitor after a specified period of inactivity. This also extends the monitor's service life.
    Some monitors will also switch themselves off after a time period on standby.
    Most modern laptops provide a method of screen dimming after periods of inactivity or when the battery is in use. This extends battery life and reduces wear.

    Integrated accessories

    Many monitors have other accessories (or connections for them) integrated. This places standard ports within easy reach and eliminates the need for another separate hub, camera, microphone, or set of speakers.

    Glossy screen

    Some displays, especially newer LCD monitors, replace the traditional anti-glare matte finish with a glossy one. This increases saturation and sharpness but reflections from lights and windows are very visible.

    Directional screen

    Narrow viewing angle screens are used in some security conscious applications.

    Autostereoscopic screen

    A directional screen which generates 3D images without headgear.

    Touch screen

    These monitors use touching of the screen as an input method. Items can be selected or moved with a finger, and finger gestures may be used to convey commands. The screen will need frequent cleaning due to image degradation from fingerprints.

    Tablet screens

    A combination of a monitor with a graphics tablet. Such devices typically respond only to pressure from one or more special styluses rather than to ordinary touch. Newer models, however, can detect touch at any pressure and often detect tilt and rotation as well.
    Touch and tablet screens are used on LCD displays as a substitute for the light pen, which can only work on CRTs.

     

     

     

    Printer

    In computing, a printer is a peripheral which produces text and/or graphics from documents stored in electronic form, usually on physical print media such as paper or transparencies. Many printers are primarily used as local peripherals, and are attached by a printer cable or, in most newer printers, a USB cable to a computer which serves as a document source. Some printers, commonly known as network printers, have built-in network interfaces, typically wireless and/or Ethernet based, and can serve as a hard-copy device for any user on the network. Individual printers are often designed to support both local and network-connected users at the same time. In addition, a few modern printers can directly interface to electronic media such as memory cards, or to image capture devices such as digital cameras and scanners; some printers are combined with scanners and/or fax machines in a single unit, and can function as photocopiers. Printers that include non-printing features are sometimes called multifunction printers (MFP), multi-function devices (MFD), or all-in-one (AIO) printers. Most MFPs include printing, scanning, and copying among their features.
    Consumer and some commercial printers are designed for low-volume, short-turnaround print jobs, requiring virtually no setup time to achieve a hard copy of a given document. However, printers are generally slow devices (30 pages per minute is considered fast, and many inexpensive consumer printers are far slower than that), and the cost per page is relatively high. This is offset by the on-demand convenience and by project management costs being more controllable than with an outsourced solution. The printing press remains the machine of choice for high-volume, professional publishing. However, as printers have improved in quality and performance, many jobs which used to be done by professional print shops are now done by users on local printers; see desktop publishing. The world's first computer printer was a 19th-century mechanically driven apparatus invented by Charles Babbage for his Difference Engine.
    A virtual printer is a piece of computer software whose user interface and API resemble that of a printer driver, but which is not connected with a physical computer printer.

    Printing technology

    Printers are routinely classified by the printer technology they employ; numerous such technologies have been developed over the years. The choice of engine has a substantial effect on what jobs a printer is suitable for, as different technologies offer different levels of image or text quality, print speed, cost, and noise; in addition, some printer technologies are inappropriate for certain types of physical media, such as carbon paper or transparencies.
    A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface.
    Cheques should either be printed with liquid ink or on special cheque paper with toner anchorage. For similar reasons carbon film ribbons for IBM Selectric typewriters bore labels warning against using them to type negotiable instruments such as cheques. The machine-readable lower portion of a cheque, however, must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly.

    Modern print technology

    The following printing technologies are routinely found in modern printers:

    Toner-based printers

    A laser printer rapidly produces high quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor.
    Another toner-based printer is the LED printer which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum.

    Liquid inkjet printers

    Inkjet printers operate by propelling variably-sized droplets of liquid or molten material (ink) onto almost any sized page. They are the most common type of computer printer used by consumers.

    Solid ink printers

    Solid ink printers, also known as phase-change printers, are a type of thermal transfer printer. They use solid sticks of CMYK-coloured ink, similar in consistency to candle wax, which are melted and fed into a piezo-crystal-operated print head. The print head sprays the ink onto a rotating, oil-coated drum. The paper then passes over the print drum, at which time the image is transferred, or transfixed, to the page. Solid ink printers are most commonly used as colour office printers, and are excellent at printing on transparencies and other non-porous media. Solid ink printers can produce excellent results. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. In addition, this type of printer is available from only one manufacturer, Xerox, as part of its Phaser office printer line, and through various Xerox concessionaires. Previously, solid ink printers were manufactured by Tektronix, which sold its printing business to Xerox in 2001.

    Dye-sublimation printers

    A dye-sublimation printer (or dye-sub printer) is a printer which employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper or canvas. The process is usually to lay one colour at a time using a ribbon that has colour panels. Dye-sub printers are intended primarily for high-quality colour applications, including colour photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers.

    Inkless printers

    Thermal printers

    Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colours can be achieved with special papers and different temperatures and heating rates for different colours; these coloured sheets are not required in black-and-white output. One example is the ZINK technology.

    UV printers

    Xerox is working on an inkless printer which will use a special reusable paper coated with a few micrometres of UV-light-sensitive chemicals. The printer uses a special UV light bar which is able to write to and erase the paper. As of early 2007 this technology was still in development, and the text on the printed pages lasted only 16-24 hours before fading.

     

    Obsolete and special-purpose printing technologies

     An Epson MX-80

    The following technologies are either obsolete or limited to special applications, though most were, at one time, in widespread use.
    Impact printers rely on a forcible impact to transfer ink to the media, similar to the action of a typewriter. All but the dot matrix printer rely on the use of formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions in the same character position. Impact printer varieties include typewriter-derived printers, teletypewriter-derived printers, daisy wheel printers, dot matrix printers, and line printers. Dot matrix printers remain in common use in businesses where multi-part forms are printed, such as car rental services.
    Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper, but not impact, per se, and special purpose pens that are mechanically run over the paper to create text and images.

    Typewriter-derived printers

    Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric typewriter were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second.

    Teletypewriter-derived printers

    The common teleprinter could easily be interfaced to the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS.

    Daisy wheel printers

    Daisy-wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because, during their heyday, they could produce text which was as clear and crisp as a typewriter, though they were nowhere near the quality of printing presses. The fastest letter-quality printers printed at 30 characters per second.

    Dot-matrix printers

    In the general sense many printers rely on a matrix of pixels, or dots, that together form the larger image. However, the term dot matrix printer is specifically used for impact printers that use a matrix of small pins to create precise dots. The advantage of dot-matrix over other impact printers is that they can produce graphical images in addition to text; however the text is generally of poorer quality than impact printers that use letterforms (type).
                     A Tandy 1000 HX with a Tandy DMP-133 dot-matrix printer.




    Dot-matrix printers can be broadly divided into two major classes:
    • Ballistic wire printers (discussed in the dot matrix printers article)
    • Stored energy printers
    Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head.
    At one time, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers would have either 9 or 24 pins on the print head. 24-pin print heads were able to print at a higher quality. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favor for general use.
    Some dot matrix printers, such as the NEC P6300, can be upgraded to print in colour. This is achieved through the use of a four-colour ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Colour graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, colour graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long in high-resolution mode.
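    The slowdown quoted above is essentially multiplicative, as this back-of-the-envelope Python sketch shows; the page time and the high-resolution penalty are assumed figures for illustration only.

        # Why four-pass colour printing on a dot matrix printer is so slow.
        mono_page_minutes = 2.0      # assumed time for one monochrome graphics page
        passes_per_colour_page = 4   # one pass per ribbon colour, as described above
        high_res_penalty = 2         # assumed: high-resolution mode doubles pass time

        colour_standard = mono_page_minutes * passes_per_colour_page
        colour_high_res = colour_standard * high_res_penalty
        print(f"monochrome page      : {mono_page_minutes:.0f} min")
        print(f"colour, standard res : {colour_standard:.0f} min (4x slower)")
        print(f"colour, high res     : {colour_high_res:.0f} min (8x slower)")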
    Dot matrix printers are still commonly used in low-cost, low-quality applications like cash registers, or in demanding, very high volume applications like invoice printing. The fact that they use an impact printing method allows them to be used to print multi-part documents using carbonless copy paper, like sales invoices and credit card receipts, whereas other printing methods are unusable with paper of this type. Dot-matrix printers are now (as of 2005) rapidly being superseded even as receipt printers.

    Line printers

    Line printers, as the name implies, print an entire line of text at a time. Three principal designs existed. In drum printers, a drum carries the entire character set of the printer repeated in each column that is to be printed. In chain printers, also known as train printers, the character set is arranged multiple times around a chain that travels horizontally past the print line. In either case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon which then presses against the character form and the impression of the character form is printed onto the paper.
    Comb printers, also called line matrix printers, represent the third major design. These printers were a hybrid of dot matrix printing and line printing. In these printers, a comb of hammers printed a portion of a row of pixels at one time, such as every eighth pixel. By shifting the comb back and forth slightly, the entire pixel row could be printed, continuing the example, in just eight cycles. The paper then advanced and the next pixel row was printed. Because far less motion was involved than in a conventional dot matrix printer, these printers were very fast compared to dot matrix printers and were competitive in speed with formed-character line printers while also being able to print dot matrix graphics.
    Line matrix printers are still widely used in the automotive, logistics, and banking industries for high-speed and barcode printing. They are known as robust and durable printers with the lowest price per page, label, or other item. Printronix and TallyGenicom are among the leading manufacturers today.
    Line printers were the fastest of all impact printers and were used for bulk printing in large computer centres. They were virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many computer operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers.

    Pen-based plotters

    A plotter is a vector graphics printing device which operates by moving a pen over the surface of paper. Plotters have been used in applications such as computer-aided design, though they are rarely used now and are being replaced with wide-format conventional printers, which nowadays have sufficient resolution to render high-quality vector graphics using a rasterized print engine. It is commonplace to refer to such wide-format printers as "plotters", even though such usage is technically incorrect. There are two types of plotters, flat bed and drum.

    Sales

    Since 2005, the world's top selling brand of inkjet and laser printers has been HP which now has 46% of sales in inkjet and 50.5% in laser printers.

    Other printers

    A number of other sorts of printers are important for historical reasons, or for special purpose uses:
    • Digital minilab (photographic paper)
    • Electrolytic printers
    • Spark printer
    • Barcode printers, which use multiple technologies, including thermal, inkjet, and laser printing
    • Billboard / sign paint spray printers
    • Laser etching (product packaging) industrial printers
    • Microsphere (special paper)

    Printing mode

    The data received by a printer may be:
    • A string of characters
    • A bitmapped image
    • A vector image
    Some printers can process all three types of data, others not.
    • Character printers, such as daisy wheel printers, can handle only plain text data or rather simple point plots.
    • Pen plotters typically process vector images. Inkjet based plotters can adequately reproduce all three.
    • Modern printing technology, such as laser printers and inkjet printers, can adequately reproduce all three. This is especially true of printers equipped with support for PostScript and/or PCL; which includes the vast majority of printers produced today.
    Today it is common to print everything (even plain text) by sending ready bitmapped images to the printer, because it allows better control over formatting.  Many printer drivers do not use the text mode at all, even if the printer is capable of it.

    Monochrome, colour and photo printers

    A monochrome printer can only produce an image consisting of one colour, usually black. A monochrome printer may also be able to produce various tones of that color, such as a grey-scale. A colour printer can produce images of multiple colours. A photo printer is a colour printer that can produce images that mimic the colour range (gamut) and resolution of prints made from photographic film. Many can be used on a standalone basis without a computer, using a memory card or USB connector.

    The printer manufacturing business

    Often the razor and blades business model is applied. That is, a company may sell a printer at cost, and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it.
    Other manufacturers, in reaction to the challenges of this business model, choose to make more money on the printer and less on the ink, promoting the latter through their advertising campaigns. This creates two clearly different propositions: "cheap printer, expensive ink" or "expensive printer, cheap ink". Ultimately, the consumer's decision depends on their reference interest rate or time preference. From an economic viewpoint, there is a clear trade-off between cost per copy and the cost of the printer.
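    The trade-off can be made concrete with a simple total-cost-of-ownership calculation. The Python sketch below uses entirely hypothetical prices; the point is only that the cheaper printer wins at low page counts and the cheaper ink wins at high ones.

        # "Cheap printer, expensive ink" vs "expensive printer, cheap ink".
        def total_cost(printer_price, cost_per_page, pages):
            return printer_price + cost_per_page * pages

        cheap  = dict(printer_price=50.0,  cost_per_page=0.08)   # assumed figures
        pricey = dict(printer_price=200.0, cost_per_page=0.02)   # assumed figures

        for pages in (500, 3000, 10000):
            a = total_cost(pages=pages, **cheap)
            b = total_cost(pages=pages, **pricey)
            winner = "cheap printer" if a < b else "expensive printer"
            print(f"{pages:>6} pages: cheap={a:7.2f}  expensive={b:7.2f}  -> {winner}")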

    Printing speed

    The speed of early printers was measured in units of characters per second. More modern printers are measured in pages per minute (PPM). These measures are used primarily as a marketing tool and are not as well standardised as toner yields. Pages per minute usually refers to sparse monochrome office documents rather than dense pictures, which usually print much more slowly, especially colour images. PPM figures most often refer to A4 paper in Europe and letter paper in the United States, resulting in a 5-10% difference.
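    Much of that difference comes simply from page length: an A4 sheet is longer than a US letter sheet, so at a fixed paper-feed speed fewer A4 pages come out per minute. The Python sketch below assumes a constant, hypothetical feed speed purely to show the effect.

        # Same engine, different paper sizes: A4 vs US letter throughput.
        paper_length_mm = {"A4": 297.0, "Letter": 279.4}
        feed_speed_mm_per_s = 120.0    # hypothetical constant paper-feed speed

        for name, length in paper_length_mm.items():
            ppm = 60.0 / (length / feed_speed_mm_per_s)
            print(f"{name:6}: {ppm:5.1f} pages per minute")

        ratio = paper_length_mm["A4"] / paper_length_mm["Letter"]
        print(f"A4 pages are ~{(ratio - 1) * 100:.1f}% longer, so the A4 rating is lower")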




    CPU


    Pronounced as separate letters it is the abbreviation for central processing unit. The CPU is the brains of the computer. Sometimes referred to simply as the central processor, but more commonly called processor, the CPU is where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system.
    On large machines, CPUs require one or more printed circuit boards. On personal computers and small workstations, the CPU is housed in a single chip called a microprocessor. Since the 1970s, the microprocessor class of CPUs has almost completely overtaken all other CPU implementations.
    The CPU itself is an internal component of the computer. Modern CPUs are small and square and contain multiple metallic connectors or pins on the underside. The CPU is inserted directly into a CPU socket, pin side down, on the motherboard. Each motherboard will support only a specific type or range of CPU so you must check the motherboard manufacturer's specifications before attempting to replace or upgrade a CPU. Modern CPUs also have an attached heat sink and small fan that go directly on top of the CPU to help dissipate heat.
    Two typical components of a CPU are the following:


    Arithmetic logic unit
    In computing, an arithmetic logic unit (ALU) is a digital circuit that performs arithmetic and logical operations. The ALU is a fundamental building block of the central processing unit (CPU) of a computer, and even the simplest microprocessors contain one for purposes such as maintaining timers. The processors found inside modern CPUs and graphics processing units (GPUs) accommodate very powerful and very complex ALUs; a single component may contain a number of ALUs.
    Mathematician John von Neumann proposed the ALU concept in 1945, when he wrote a report on the foundations for a new computer called the EDVAC. Research into ALUs remains an important part of computer science, falling under Arithmetic and logic structures in the ACM Computing Classification System.

    Control unit

    A control unit in general is a central (or sometimes distributed but clearly distinguishable) part of the machinery that controls its operation, provided that a piece of machinery is complex and organized enough to contain any such unit. One domain in which the term is specifically used is the area of computer design. In the automotive industry, the control unit helps maintain various functions of the motor vehicle.


    Main memory


    Primary storage (or main memory or internal memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
    Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
    This led to modern random-access memory (RAM). It is small-sized, light, but quite expensive at the same time. (The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered).
    Traditionally there are two more sub-layers of primary storage, besides main large-capacity RAM:
    • Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic and logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are the fastest of all forms of computer data storage.
    • Processor cache is an intermediate stage between the ultra-fast registers and the much slower main memory. It exists solely to increase the performance of the computer: the most actively used information in main memory is duplicated in the cache, which is faster but of much smaller capacity; the cache, in turn, is slower but much larger than the processor registers. Multi-level hierarchical cache setups are also commonly used, with the primary cache being the smallest and fastest and located inside the processor, and the secondary cache being somewhat larger and slower.
    Main memory is directly or indirectly connected to the central processing unit via a memory bus. This is actually two buses: an address bus and a data bus. The CPU first sends a number through the address bus, called the memory address, that indicates the desired location of data. Then it reads or writes the data itself using the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM that recalculates the actual memory address, for example to provide an abstraction of virtual memory or to perform other tasks.
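    The read/write cycle described above can be caricatured in a few lines of Python. This toy model is not real hardware behaviour: memory is a list indexed by the number "sent over the address bus", and a fixed-offset object stands in for the MMU's address translation.

        # Toy model of a memory read/write via an address bus, data bus, and MMU.
        class Memory:
            def __init__(self, size):
                self.cells = [0] * size
            def read(self, address):           # value returns over the "data bus"
                return self.cells[address]
            def write(self, address, value):
                self.cells[address] = value

        class SimpleMMU:
            """Stand-in for a real MMU: maps a virtual address to a physical one."""
            def __init__(self, base):
                self.base = base
            def translate(self, virtual_address):
                return self.base + virtual_address

        ram = Memory(1024)
        mmu = SimpleMMU(base=256)
        physical = mmu.translate(5)            # CPU "sends" virtual address 5
        ram.write(physical, 42)
        print("CPU reads", ram.read(physical), "from physical address", physical)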
    As the RAM types used for primary storage are volatile (cleared at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
    Many types of "ROM" are not literally read only, as updates are possible; however, writing is slow and the memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; rather, they use large capacities of secondary storage, which is non-volatile as well, and not as costly.
    Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.



    Secondary storage


    Secondary storage (also known as external memory or auxiliary storage), differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfers the desired data using intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down—it is non-volatile. Per unit, it is typically also two orders of magnitude less expensive than primary storage. Consequently, modern computer systems typically have two orders of magnitude more secondary storage than primary storage and data is kept for a longer time there.
    In modern computers, hard disk drives are usually used as secondary storage. The time taken to access a given byte of information stored on a hard disk is typically a few thousandths of a second, or milliseconds. By contrast, the time taken to access a given byte of information stored in random access memory is measured in billionths of a second, or nanoseconds. This illustrates the significant access-time difference which distinguishes solid-state memory from rotating magnetic storage devices: hard disks are typically about a million times slower than memory. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. With disk drives, once the disk read/write head reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to access. As a result, in order to hide the initial seek time and rotational latency, data is transferred to and from disks in large contiguous blocks.
    When data reside on disk, block access that hides latency is the key to designing efficient external memory algorithms. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory.
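    A crude cost model makes the point about block access. In the Python sketch below (the latency and transfer figures are assumed, though of a realistic order of magnitude), reading 100 MB as thousands of scattered 4 KB requests pays the seek cost thousands of times, while one contiguous read pays it once.

        # Random small reads vs one large contiguous (block) read from disk.
        seek_ms = 8.0             # assumed average seek + rotational latency
        transfer_mb_per_s = 100   # assumed sequential transfer rate
        total_mb = 100            # amount of data to read

        def read_time_s(accesses, megabytes):
            return (accesses * seek_ms + megabytes / transfer_mb_per_s * 1000) / 1000

        random_accesses = total_mb * 1024 // 4          # 4 KB per random read
        print(f"random 4 KB reads  : {read_time_s(random_accesses, total_mb):8.1f} s")
        print(f"one contiguous read: {read_time_s(1, total_mb):8.1f} s")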
    Some other examples of secondary storage technologies are: flash memory (e.g. USB flash drives or keys), floppy disks, magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega Zip drives.
    The secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, providing also additional information (called metadata) describing the owner of a certain file, the access time, the access permissions, and other information.
    Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As primary memory fills up, the system moves the least-used chunks (pages) to secondary storage devices (to a swap file or page file), retrieving them later when they are needed. The more of these retrievals from slower secondary storage are required, the more overall system performance degrades.
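    The paging behaviour described above can be sketched with a tiny least-recently-used model. The Python fragment below is purely illustrative: "RAM" holds only three pages, and anything pushed out lands in a dictionary standing in for the swap file.

        # Toy least-recently-used paging: evict old pages to a "swap file".
        from collections import OrderedDict

        RAM_CAPACITY = 3                     # hypothetical: RAM holds 3 pages
        ram = OrderedDict()                  # page number -> contents, in LRU order
        swap_file = {}                       # pages evicted to secondary storage

        def access_page(page, contents=""):
            if page in ram:
                ram.move_to_end(page)        # mark as most recently used
                return ram[page]
            if page in swap_file:            # page fault: retrieve it from swap
                contents = swap_file.pop(page)
            if len(ram) >= RAM_CAPACITY:     # RAM full: evict least recently used
                victim, data = ram.popitem(last=False)
                swap_file[victim] = data
            ram[page] = contents
            return contents

        for p in (1, 2, 3, 1, 4, 2):         # later pages push out the least used
            access_page(p, contents=f"data for page {p}")
        print("in RAM :", list(ram))         # most recently used pages stay resident
        print("in swap:", list(swap_file))   # the least-used page was paged out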








    Hardware


    Computer port

    In computer hardware, a port serves as an interface between the computer and other computers or peripheral devices. Physically, a port is a specialized outlet on a piece of equipment to which a plug or cable connects. Electronically, the several conductors making up the outlet provide a signal transfer between devices.

    Physical shape

    Hardware ports may be physically male or female, but female ports are much more common.
    Computer ports in common use cover a wide variety of shapes such as round (PS/2, etc.), rectangular (FireWire, etc.), square (Telephone plug), trapezoidal (D-Sub — the old printer port was a DB-25), etc. There is some standardization to physical properties and function. For instance, most computers have a keyboard port (currently a round DIN-like outlet referred to as PS/2), into which the keyboard is connected.

    Electrical signal transfer

    Electronically, hardware ports can almost always be divided into two groups based on the signal transfer:
    • Serial ports send and receive one bit at a time via a single wire pair (Ground and +/-).
    • Parallel ports send multiple bits at the same time over several sets of wires.
    After ports are connected, they typically require handshaking, where transfer type, transfer rate, and other necessary information is shared before data are sent.
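    The serial/parallel distinction above is easy to picture in code. The Python sketch below is conceptual only, ignoring start/stop bits and real handshaking: it shifts a byte out one bit per tick for the serial case and presents all eight bits at once for the parallel case.

        # One byte sent serially (bit by bit) versus in parallel (all bits at once).
        def send_serial(byte):
            """Yield the bits of one byte, least significant first, one per tick."""
            for i in range(8):
                yield (byte >> i) & 1

        def send_parallel(byte):
            """Return all eight bits at once, as they would appear on eight wires."""
            return [(byte >> i) & 1 for i in range(8)]

        value = 0b01100001                   # ASCII 'a'
        print("serial  :", list(send_serial(value)), "(8 clock ticks)")
        print("parallel:", send_parallel(value), "(1 clock tick)")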
    Hot-swappable ports can be connected while equipment is running. About the only port on personal computers that is not hot-swappable is the keyboard PS/2 connector. Hot swapping a keyboard on many computer models can cause permanent damage to the motherboard.
    Plug-and-play ports are designed so that the connected devices automatically start handshaking as soon as the hot-swapping is done. USB ports and FireWire ports are plug-and-play.
    Auto-detect or auto-detection ports are usually plug-and-play, but they offer another type of convenience. An auto-detect port may automatically determine what kind of device has been attached, but it also determines what purpose the port itself should have. For example, some sound cards allow plugging in several different types of audio speakers; then a dialogue box pops up on the computer screen asking whether the speaker is left, right, front, or rear for surround sound installations. The user's response determines the purpose of the port, which is physically a 1/8" tip-ring-sleeve (TRS connector) minijack. Some auto-detect ports can even switch between input and output based on context.
    As of 2006, manufacturers have nearly standardized colors associated with ports on personal computers, although there are no guarantees. The following is a short list:
    • Orange, purple, or grey: Keyboard PS/2
    • Green: Mouse PS/2
    • Blue or magenta: Parallel printer DB-25
    • Amber: Serial DB-25 or DB-9
    • Pastel pink: Microphone 1/8" stereo (TRS) minijack
    • Pastel green: Speaker 1/8" stereo (TRS) minijack
    FireWire ports used with video equipment (among other devices) can be either 4-pin or 6-pin. The two extra conductors in the 6-pin connection carry electrical power. This is why a self-powered device such as a camcorder often connects with a cable that is 4-pins on the camera side and 6-pins on the computer side, the two power conductors simply being ignored. This is also why laptop computers usually have only 4-pin FireWire ports, as they cannot provide enough power to meet requirements for devices needing the power provided by 6-pin connections.
    Optical (light) fiber, microwave, and other technologies (i.e., quantum) have different kinds of connections, as metal wires are not effective for signal transfers with these technologies. Optical connections are usually a polished glass or plastic interface, possibly with an oil that lessens refraction between the two interface surfaces. Microwaves are conducted through a pipe, which can be seen on a large scale by examining microwave towers with "funnels" on them leading to pipes.
    Hardware port trunking (HPT) is a technology that allows multiple hardware ports to be combined into a single group, effectively creating a single connection with a higher bandwidth, sometimes referred to as a double-barrel approach. This technology also provides a higher degree of fault tolerance because a failure on one port may just mean a slow-down rather than a dropout. By contrast, in Software Port Trunking (SPT), two agents (websites, channels, etc.) are bonded into one with the same effectiveness; i.e., ISDN B1 (64K) plus B2 (64K) equals data throughput of 128K.
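    The bandwidth arithmetic behind trunking and bonding is simple addition, as this small Python sketch shows; the 100 Mbit/s port speeds are assumed for illustration, while the ISDN figures come from the text above.

        # Aggregated bandwidth of bonded/trunked links, and graceful degradation.
        isdn_kbps = {"B1": 64, "B2": 64}
        print(f"bonded ISDN throughput: {sum(isdn_kbps.values())} kbit/s")   # 128

        trunk_ports_mbps = [100, 100, 100, 100]          # hypothetical 4-port trunk
        print(f"4-port trunk bandwidth: {sum(trunk_ports_mbps)} Mbit/s")
        # If one port fails, the trunk slows down rather than dropping out.
        print(f"after one port failure: {sum(trunk_ports_mbps[:-1])} Mbit/s")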

     

    Expansion card

    The expansion card (also expansion board, adapter card or accessory card) in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard to add functionality to a computer system.
    One edge of the expansion card holds the contacts (the edge connector) that fit exactly into the slot. They establish the electrical contact between the electronics (mostly integrated circuits) on the card and on the motherboard.
    Connectors mounted on the bracket allow the connection of external devices to the card. Depending on the form factor of the motherboard and case, around one to seven expansion cards can be added to a computer system. In the case of a backplane system, up to 19 expansion cards can be installed. There are also other factors involved in expansion card capacity. For example, most graphics cards on the market as of 2010 are dual slot graphics cards, using the second slot as a place to put an active heat sink with a fan.
    Some cards are "low-profile" cards, meaning that they are shorter than standard cards and will fit in a lower height computer chassis. (There is a "low profile PCI card" standard[1] that specifies a much smaller bracket and board area). The group of expansion cards that are used for external connectivity, such as a network, SAN or modem card, are commonly referred to as input/output cards (or I/O cards).
    The primary purpose of an expansion card is to provide or expand on features not offered by the motherboard. For example, the original IBM PC did not provide graphics or hard drive capability as the technology for providing that on the motherboard did not exist. In that case, a graphics expansion card and an ST-506 hard disk controller card provided graphics capability and hard drive interface respectively.
    In the case of expansion of on-board capability, a motherboard may provide a single serial RS232 port or Ethernet port. An expansion card can be installed to offer multiple RS232 ports or multiple and higher bandwidth Ethernet ports. In this case, the motherboard provides basic functionality but the expansion card offers additional or enhanced ports.

    History

    The first microcomputer to feature a slot-type expansion card bus was the Altair 8800, developed 1974-1975. Initially, implementations of this bus were proprietary (such as the Apple II and Macintosh), but by 1982 manufacturers of Intel 8080/Zilog Z80-based computers running CP/M had settled around the S-100 standard. IBM introduced the XT bus with the first IBM PC in 1981; it was then called the PC bus, as the IBM XT, using the same bus (with slight exceptions), was not introduced until 1983. XT (a.k.a. 8-bit ISA) was replaced with ISA (a.k.a. 16-bit ISA), originally known as the AT bus, in 1984. IBM's MCA bus, developed for the PS/2 in 1987, was a competitor to ISA, also their design, but fell out of favor due to ISA's industry-wide acceptance and IBM's closed licensing of MCA. EISA, the 32-bit extended version of ISA championed by Compaq, was used on some PC motherboards until 1997, when Microsoft declared it a "legacy" subsystem in the PC 97 industry white paper. Proprietary local buses (q.v. Compaq) and then the VESA Local Bus standard were late-1980s expansion buses that were tied, but not exclusive[2][3][4], to the 80386 and 80486 CPU bus. The PC104 bus is an embedded bus that copies the ISA bus.
    Intel launched their PCI bus chipsets along with the P5-based Pentium CPUs in 1993. The PCI bus was introduced in 1991 as a replacement for ISA. The standard (now at version 3.0) is found on PC motherboards to this day. The PCI standard supports bridging: as many as ten daisy-chained PCI buses have been tested. CardBus, using the PCMCIA connector, is a PCI format that attaches peripherals to the host PCI bus via a PCI-to-PCI bridge. CardBus is being supplanted by the ExpressCard format. Intel introduced the AGP bus in 1997 as a dedicated video acceleration solution. AGP devices are logically attached to the PCI bus over a PCI-to-PCI bridge. Though termed a bus, AGP usually supports only a single card at a time (due to legacy BIOS support issues). From 2005 PCI Express has been replacing both PCI and AGP. This standard, approved in 2004, implements the logical PCI protocol over a serial communication interface. PC104-Plus, Mini PCI, or PCI-104 are often added for expansion on small form factor boards such as Micro ITX.
    The USB format has become a de facto expansion bus standard especially for laptop computers. All the functions of add-in card slots can currently be duplicated by USB, including Video [5][6], networking, storage and audio. USB 2.0 is currently part of the ExpressCard interface and USB 3.0 is part of the ExpressCard 2.0 standard.
    FireWire, or IEEE 1394, is a serial expansion bus originally promoted by Apple Inc. for computer expansion, replacing the SCSI bus. Also adopted for PCs and often used for storage and video cameras, it has applications in networking, video, and audio.
    After the S-100 bus, the discussion above mentions only buses used on IBM-compatible/Windows-Intel PCs. Most other computer lines that were not IBM compatible, including those from Apple Inc. (Apple II, Macintosh), Tandy, Commodore, Amiga, and Atari, offered their own expansion buses. Apple used a proprietary system with seven 50-pin slots for Apple II peripheral cards, then later used NuBus for its Macintosh series until 1995, at which time it switched to a standard PCI bus. Generally, PCI expansion cards will function on any CPU platform if there is a software driver for that type. PCI video cards and other cards that contain a BIOS are problematic, although video cards conforming to VESA standards may be used for secondary monitors. DEC Alpha, IBM PowerPC, and NEC MIPS workstations used PCI bus connectors[7].
    Even many video game consoles, such as the Sega Genesis, included expansion buses; at least in the case of the Genesis, the expansion bus was proprietary, and in fact the cartridge slots of many cartridge based consoles (not including the Atari 2600) would qualify as expansion buses, as they exposed both read and write capabilities of the system's internal bus. However, the expansion modules attached to these interfaces, though functionally the same as expansion cards, are not technically expansion cards, due to their physical form.
    For their 1000 EX and 1000 HX models, Tandy Computer designed the PLUS expansion interface, an adaptation of the XT-bus supporting cards of a smaller form factor. Because it is electrically compatible with the XT bus (a.k.a. 8-bit ISA or XT-ISA), a passive adapter can be made to connect XT cards to a PLUS expansion connector. Another feature of PLUS cards is that they are stackable. Another bus that offered stackable expansion modules was the "sidecar" bus used by the IBM PCjr. This may have been electrically the same as or similar to the XT bus; it most certainly had some similarities since both essentially exposed the 8088 CPU's address and data buses, with some buffering and latching, the addition of interrupts and DMA provided by Intel add-on chips, and a few system fault detection lines (Power Good, Memory Check, I/O Channel Check). Again, PCjr sidecars are not technically expansion cards, but expansion modules, with the only difference being that the sidecar is an expansion card enclosed in a plastic box (with holes exposing the connectors).

    Expansion slot standards

    Expansion card types






    Storage


    What is computer data storage?


    Computer data storage, often called storage or memory, refers to computer components and recording media that retain digital data used for computing for some interval of time. Computer data storage provides one of the core functions of the modern computer, that of information retention. It is one of the fundamental components of all modern computers, and coupled with a central processing unit (CPU, a processor), implements the basic computer model used since the 1940s.
    In contemporary usage, memory usually refers to a form of semiconductor storage known as random-access memory, typically DRAM (Dynamic-RAM) but memory can refer to other forms of fast but temporary storage. Similarly, storage today more commonly refers to storage devices and their media not directly accessible by the CPU (secondary or tertiary storage) — typically hard disk drives, optical disc drives, and other devices slower than RAM but more permanent.[1] Historically, memory has been called main memory, real storage or internal memory while storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
    The contemporary distinctions are helpful, because they are also fundamental to the architecture of computers in general. The distinctions also reflect an important and significant technical difference between memory and mass storage devices, which has been blurred by the historical usage of the term storage. Nevertheless, this article uses the traditional nomenclature.

    Purpose of storage

    Many different forms of storage, based on various natural phenomena, have been invented. So far, no practical universal storage medium exists, and all forms of storage have some drawbacks. Therefore a computer system usually contains several kinds of storage, each with an individual purpose.
    A digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, using eight million bits, or about one megabyte, a typical computer could store a short novel.
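    The byte arithmetic above is easy to verify. The Python sketch below encodes a short string at one byte (8 bits) per character and then scales up to a short novel of roughly one million characters, matching the "about one megabyte" figure; the novel length is an assumption for illustration.

        # Characters -> bits, and the size of a short novel.
        sample = "Computer data storage"
        bits = "".join(f"{byte:08b}" for byte in sample.encode("ascii"))
        print(f"{len(sample)} characters -> {len(bits)} bits")

        novel_chars = 1_000_000            # assumed length of a short novel
        novel_bits = novel_chars * 8
        print(f"short novel ~ {novel_bits:,} bits ~ {novel_bits // 8 // 1_000_000} MB")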
    Traditionally the most important part of every computer is the central processing unit (CPU, or simply a processor), because it actually operates on data, performs any calculations, and controls all the other components.
    Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators or simple digital signal processors. Von Neumann machines differ in that they have a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
    In practice, almost all computers use a variety of memory types, organized in a storage hierarchy around the CPU, as a trade-off between performance and cost. Generally, the lower a storage medium is in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary and off-line storage is also guided by cost per bit.
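    The following small Python sketch makes the hierarchy trade-off concrete by listing each level with a rough latency and relative cost per bit; the figures are order-of-magnitude assumptions for illustration only, not measurements from this text.

        # Each tuple: (level, typical access latency, relative cost per bit).
        hierarchy = [
            ("primary (RAM)",            "~100 nanoseconds",    "highest"),
            ("secondary (hard disk)",    "~10 milliseconds",    "lower"),
            ("tertiary (tape library)",  "seconds to a minute", "lower still"),
            ("off-line (shelved media)", "minutes (human)",     "lowest"),
        ]
        for level, latency, cost in hierarchy:
            print(f"{level:26s} latency {latency:22s} cost per bit: {cost}")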

    Hierarchy of storage



    Primary storage (main memory)
    Secondary storage
    Tertiary storage
    Off-line storage


    Tertiary storage




     A large tape library: tape cartridges are placed on shelves at the front, with a robotic arm moving at the back; the visible height of the library is about 180 cm.
    Tertiary storage, or tertiary memory, provides a third level of storage. Typically it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; this data is often copied to secondary storage before use. It is primarily used for archiving rarely accessed information, since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
    When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
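    The retrieval sequence just described can be sketched in a few lines of Python; the catalog contents and the robot functions here are hypothetical stand-ins for a real tape library's control software.

        catalog = {"payroll-2009.dat": "TAPE-0042"}       # file -> tape cartridge

        def robot_mount(tape_id):
            print(f"robotic arm: fetching {tape_id} and loading it into a drive")

        def robot_dismount(tape_id):
            print(f"robotic arm: returning {tape_id} to its library slot")

        def read_from_tertiary(filename):
            tape_id = catalog[filename]                   # 1. consult the catalog
            robot_mount(tape_id)                          # 2. mount the medium
            data = f"<contents of {filename}>"            # 3. read (often copied to secondary storage)
            robot_dismount(tape_id)                       # 4. return the medium
            return data

        print(read_from_tertiary("payroll-2009.dat"))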

    Off-line storage

    Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
    Off-line storage is used to transfer information, since the detached medium can be easily physically transported. Additionally, in case a disaster, for example a fire, destroys the original data, a medium in a remote location will probably be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is accessed seldom or never, off-line storage is less expensive than tertiary storage.
    In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and removable hard disk drives are used to a much lesser extent. In enterprise use, magnetic tape is predominant. Older examples are floppy disks, Zip disks and punched cards.



    Characteristics of storage


     A 1 GB DDR RAM memory module (detail)

    Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.

    Volatility

    Non-volatile memory 
    Will retain the stored information even if it is not constantly supplied with electric power. It is suitable for long-term storage of information.
    Volatile memory 
    Requires constant power to maintain the stored information. The fastest memory technologies of today are volatile ones (not a universal rule). Since primary storage is required to be very fast, it predominantly uses volatile memory.

    Differentiation


    Dynamic random access memory 
    A form of volatile memory which also requires the stored information to be periodically re-read and re-written, or refreshed, otherwise it would vanish.
    Static random-access memory 
    A form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied. (It loses its content if power is removed).

     Mutability

    Read/write storage or mutable storage 
    Allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage also for secondary storage.
    Read only storage 
    Retains the information stored at the time of manufacture, and write once storage (Write Once Read Many) allows the information to be written only once at some point after manufacture. These are called immutable storage. Immutable storage is used for tertiary and off-line storage. Examples include CD-ROM and CD-R.
    Slow write, fast read storage 
    Read/write storage which allows information to be overwritten multiple times, but with the write operation being much slower than the read operation. Examples include CD-RW and flash memory.

    Accessibility

    Random access 
    Any location in storage can be accessed at any moment in approximately the same amount of time. This characteristic is well suited to primary and secondary storage.
    Sequential access 
    The accessing of pieces of information will be in a serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. This characteristic is typical of off-line storage. A short sketch contrasting the two access patterns is given below.
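    A minimal Python sketch of the two access patterns, using an ordinary file of fixed-size records; the file name and record size are illustrative assumptions.

        import os

        path, record_size = "records.bin", 64

        # Create a demo file of 1,000 fixed-size records.
        with open(path, "wb") as f:
            for i in range(1000):
                f.write(i.to_bytes(8, "little").ljust(record_size, b"\0"))

        with open(path, "rb") as f:
            # Random access: jump straight to record 750 in roughly constant time.
            f.seek(750 * record_size)
            record = f.read(record_size)

            # Sequential access: to reach record 750, everything before it is read first.
            f.seek(0)
            for _ in range(751):
                record = f.read(record_size)

        os.remove(path)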

    Addressability

    Location-addressable 
    Each individually accessible unit of information in storage is selected with its numerical memory address. In modern computers, location-addressable storage is usually limited to primary storage, accessed internally by computer programs, since location-addressability is very efficient but burdensome for humans.
    File addressable 
    Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. The underlying device is still location-addressable, but the operating system of the computer provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems.
    Content-addressable 
    Each individually accessible unit of information is selected based on (part of) the contents stored there. Content-addressable storage can be implemented using software (a computer program) or hardware (a computer device), with hardware being the faster but more expensive option. Hardware content-addressable memory is often used in a computer's CPU cache. A small sketch of the three addressing styles follows.
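    The sketch below illustrates the three addressing styles with plain Python data structures standing in for real storage hardware; the names and values are purely illustrative.

        import hashlib

        # Location-addressable: a unit is selected by its numerical address.
        memory = bytearray(16)
        memory[3] = 0x7F                  # write to address 3
        value = memory[3]                 # read from address 3

        # File-addressable: a human-readable name is mapped to the data by the file system.
        files = {"report.txt": b"quarterly results"}
        data = files["report.txt"]

        # Content-addressable: a unit is selected by (a digest of) its contents.
        store = {}
        content = b"quarterly results"
        key = hashlib.sha256(content).hexdigest()
        store[key] = content              # store by content hash
        same = store[hashlib.sha256(b"quarterly results").hexdigest()]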


    Capacity

    Raw capacity 
    The total amount of stored information that a storage device or medium can hold. It is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes).
    Memory storage density 
    The compactness of stored information. It is the storage capacity of a medium divided by a unit of length, area or volume (e.g. 1.2 megabytes per square inch). A short worked example follows.
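    A short worked example of the density calculation, in Python; the capacity and recording-area figures are illustrative assumptions.

        raw_capacity_mb = 10.4            # total stored information, in megabytes
        recording_area_sq_in = 8.7        # usable recording area, in square inches

        density = raw_capacity_mb / recording_area_sq_in
        print(f"storage density = {density:.1f} megabytes per square inch")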

    Performance

    Latency 
    The time it takes to access a particular location in storage. The relevant unit of measurement is typically the nanosecond for primary storage, the millisecond for secondary storage, and the second for tertiary storage. It may make sense to separate read latency and write latency, and in the case of sequential access storage, minimum, maximum and average latency.
    Throughput 
    The rate at which information can be read from or written to the storage. In computer data storage, throughput is usually expressed in megabytes per second (MB/s), though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Accessing media sequentially, as opposed to randomly, typically yields maximum throughput. A rough measurement sketch is given below.
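    As a rough illustration, the following Python sketch estimates sequential read throughput on the machine it runs on; the file name and size are assumptions, and operating-system caching will inflate the result on repeated runs.

        import os, time

        path = "throughput_test.bin"
        size = 50 * 1024 * 1024                          # 50 MB test file
        with open(path, "wb") as f:
            f.write(os.urandom(size))

        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(1024 * 1024):                   # read sequentially in 1 MB chunks
                pass
        elapsed = time.perf_counter() - start

        print(f"read {size / 1e6:.0f} MB in {elapsed:.2f} s = {size / 1e6 / elapsed:.0f} MB/s")
        os.remove(path)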

    Energy use

    • Storage devices that reduce fan usage, shut down automatically during inactivity, or use low-power hard drives can reduce energy consumption by up to 90 percent.
    • 2.5-inch hard disk drives often consume less power than larger ones. Low-capacity solid-state drives have no moving parts and consume less power than hard disks. Also, memory may use more power than hard disks.



    Fundamental storage technologies

    As of 2008, the most commonly used data storage technologies are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies have also been used in the past or are proposed for development.

    Semiconductor

    Semiconductor memory uses semiconductor-based integrated circuits to store information. A semiconductor memory chip may contain millions of tiny transistors or capacitors. Both volatile and non-volatile forms of semiconductor memory exist. In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor memory or dynamic random access memory. Since the turn of the century, a type of non-volatile semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers.

    Magnetic


    Magnetic media, by year of introduction:
    Wire (1898) • Tape (1928) • Drum (1932) • Ferrite core (1949) • Hard disk (1956) • Stripe card (1956) • MICR (1956) • Thin film (1962) • CRAM (1962) • Twistor (~1968) • Floppy disk (1969) • Bubble (~1970) • MRAM (1995) • Racetrack (2008)

    Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads, which may contain one or more recording transducers. A read/write head covers only part of the surface, so the head, the medium or both must be moved relative to one another in order to access data. In modern computers, magnetic storage mainly takes the form of the hard disk drive for secondary storage, with floppy disks used for off-line storage and magnetic tape for tertiary and off-line storage.
    In early computers, magnetic storage was also used for primary storage, in the form of magnetic drum memory, core memory, core rope memory, thin-film memory, twistor memory or bubble memory. Also, unlike today, magnetic tape was often used for secondary storage.

    Optical


    Optical media, by year of introduction:


    CD (1982): CD-R (1988) · CD-RW (1997)
    DVD (1995): DVD-RW (1999) · DVD+RW (2001) · DVD+R (2002) · DVD+R DL (2004) · DVD-R DL (2005)
    Other:
    Microform (1870) · Optical tape (20th century) · Optical disc (20th century) · Laserdisc (1978) · UDO (2003) · ProData (2003) · UMD (2004) · HD DVD (2006) · Blu-ray Disc (2006)
    Magneto-optic Kerr effect (1877): MO disc (1980s) · MiniDisc (1992) · Hi-MD (2004)
    Optical Assist:
    Laser turntable (1986) · Floptical (1991) · Super DLT (1998)
    Optical storage, typically in the form of the optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The forms currently in common use are the CD, DVD and Blu-ray Disc families, used mainly for tertiary and off-line storage.
    Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage.
    3D optical data storage has also been proposed.

    Paper


    media


    Writing on papyrus (c.3000 BCE) · Paper (105 CE)


    Punched tape (1846) · Book music (1863) · Ticker tape (1867) · Piano roll (1880s) · Punched card (1890) · Edge-notched card (1896) · Optical mark recognition · Optical character recognition (1929) · Barcode (1948) · Paper disc (2004)
    Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. A few technologies allow people to make marks on paper that are easily read by machine—these are widely used for tabulating votes and grading standardized tests. Barcodes made it possible for any object that was to be sold or transported to have some computer readable information securely attached to it.

    Uncommon

    Vacuum tube memory 
    A Williams tube used a cathode ray tube, and a Selectron tube used a large vacuum tube, to store information. These primary storage devices were short-lived in the market, since the Williams tube was unreliable and the Selectron tube was expensive.
    Electro-acoustic memory 
    Delay line memory used sound waves in a substance such as mercury to store information. Delay line memory was dynamic volatile, cycle sequential read/write storage, and was used for primary storage.
    Optical tape 
    A medium for optical storage, generally consisting of a long and narrow strip of plastic onto which patterns can be written and from which the patterns can be read back. It shares some technologies with cinema film stock and optical discs, but is compatible with neither. The motivation behind developing this technology was the possibility of far greater storage capacities than either magnetic tape or optical discs.
    Phase-change memory 
    Uses different mechanical phases of phase-change material to store information in an X-Y addressable matrix, and reads the information by observing the varying electrical resistance of the material. Phase-change memory would be non-volatile, random-access read/write storage, and might be used for primary, secondary and off-line storage. Most rewritable and many write-once optical discs already use phase-change material to store information.
    Holographic data storage 
    Stores information optically inside crystals or photopolymers. Holographic storage can utilize the whole volume of the storage medium, unlike optical disc storage, which is limited to a small number of surface layers. Holographic storage would be non-volatile, sequential-access, and either write-once or read/write storage. It might be used for secondary and off-line storage. See Holographic Versatile Disc (HVD).
    Molecular memory 
    Stores information in polymers that can hold an electric charge. Molecular memory might be especially suited for primary storage. The theoretical storage capacity of molecular memory is 10 terabits per square inch.


    Related technologies

     

    Network connectivity

    Secondary or tertiary storage may connect to a computer over a computer network. This concept does not pertain to primary storage, which is shared between multiple processors to a much lesser degree.
    • Direct-attached storage (DAS) is traditional mass storage that does not use any network. It is still the most popular approach. The term was coined retroactively, alongside NAS and SAN.
    • Network-attached storage (NAS) is mass storage attached to a computer which another computer can access at file level over a local area network, a private wide area network, or in the case of online file storage, over the Internet. NAS is commonly associated with the NFS and CIFS/SMB protocols.
    • Storage area network (SAN) is a specialized network that provides other computers with storage capacity. The crucial difference between NAS and SAN is that the former presents and manages file systems to client computers, while the latter provides access at the block-addressing (raw) level, leaving it to the attaching systems to manage data or file systems within the provided capacity. SAN is commonly associated with Fibre Channel networks. The sketch below illustrates the file-level versus block-level distinction.
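    A brief Python sketch of the distinction; the mount point and device path are hypothetical, and on a real system the NAS share and the SAN volume would have to be set up by an administrator first.

        # File-level access (NAS-style): the client names a file, and the
        # server's file system locates the blocks.
        with open("/mnt/nas_share/report.txt", "rb") as f:     # hypothetical NFS/SMB mount
            data = f.read()

        # Block-level access (SAN-style): the client addresses raw blocks itself
        # and must run its own file system (or database) on top of them.
        with open("/dev/sdb", "rb") as dev:                    # hypothetical SAN block device
            dev.seek(4096 * 100)                               # jump to block 100 (4 KiB blocks)
            block = dev.read(4096)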

     

    Robotic storage

    Large quantities of individual magnetic tapes and optical or magneto-optical discs may be stored in robotic tertiary storage devices. In the tape storage field these are known as tape libraries, and in the optical storage field as optical jukeboxes (or optical disk libraries, by analogy). The smallest forms of either technology, containing just one drive device, are referred to as autoloaders or autochangers.
    Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media into built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are its possible expansion options: adding slots, modules, drives and robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, with up to 1,000 slots.
    Robotic storage is used for backups and for high-capacity archives in the imaging, medical and video industries. Hierarchical storage management is the best-known archiving strategy: long-unused files are automatically migrated from fast hard disk storage to libraries or jukeboxes, and retrieved back to disk if they are needed again.
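    The migration step of hierarchical storage management can be sketched in Python as follows; the directory names and the 180-day threshold are illustrative assumptions rather than part of any particular product.

        import os, shutil, time

        FAST_TIER = "/data/fast"            # hypothetical hard-disk storage
        ARCHIVE_TIER = "/data/archive"      # hypothetical library/jukebox mount
        THRESHOLD = 180 * 24 * 3600         # migrate files unused for ~180 days

        def migrate_cold_files():
            now = time.time()
            for name in os.listdir(FAST_TIER):
                path = os.path.join(FAST_TIER, name)
                if os.path.isfile(path) and now - os.path.getatime(path) > THRESHOLD:
                    shutil.move(path, os.path.join(ARCHIVE_TIER, name))
                    print("migrated:", name)

        migrate_cold_files()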













