Copyright © 2010 K-Tutorials.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".

The contents are revised periodically; the document version is updated with each revision.

Document version: 1.0

Monday, October 25, 2010

Computer Fundamentals

Computer
A computer is a programmable machine that can perform precise arithmetic and logic operations. It can be modeled as a finite state machine: a machine that, at any moment, is in exactly one of a finite number of possible states.

A list of instructions can be submitted to the computer in the form of a program. A task is performed by executing the corresponding program.

A collection of related programs is known as software.
The physical circuitry and components are known as hardware.

Hardware can be categorized into five functional blocks:

  1. Input Unit (I/P)
  2. Arithmetic and Logic Unit (ALU)
  3. Control Unit (CU)
  4. Memory
  5. Output Unit (O/P)

1. Input Unit (I/P)

The input unit is the device used to enter data and instructions into the computer. The data read from an input device is stored in the computer’s memory. E.g.: keyboard, joystick, mouse, light pen, etc.
2. Arithmetic and Logic Unit (ALU)

The Arithmetic and Logic Unit (ALU) performs arithmetic and logic operations. The arithmetic operations include addition, subtraction, multiplication and division. Logical operations include the Boolean functions such as AND, OR and NOT; these are used, for example, in conditional branching.
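
For instance, a small C sketch (an illustrative example of my own; the values are arbitrary) shows how arithmetic operators map onto the ALU's arithmetic operations, and how logical operators are used in conditional branching:

    #include <stdio.h>

    int main(void)
    {
        int a = 15, b = 4;

        /* Arithmetic operations are carried out by the ALU. */
        printf("sum = %d, difference = %d\n", a + b, a - b);
        printf("product = %d, quotient = %d\n", a * b, a / b);

        /* Logical operations (AND, OR, NOT) drive conditional branching. */
        if (a > 0 && !(b > 10))
            printf("a is positive and b is not greater than 10\n");

        return 0;
    }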

3. Control Unit (CU)

The Control Unit (CU) coordinates the activities of the various components of a computer. The Arithmetic and Logic Unit (ALU) and Control Unit (CU) together comprise the Central Processing Unit (CPU). A CPU fabricated on a single chip is called a microprocessor. The speed of a computer depends on the clock frequency of the microprocessor, which governs how many instructions it can execute per second.

In addition to the main processor (CPU), there are coprocessors that speed up the operations of the main processor (for example, parts of the chipset used on a computer's motherboard act as coprocessors). They perform activities such as floating-point operations, memory management, and input/output management.

4. Memory

The information (instructions and data) required by the computer is stored in memory. A computer's memory is built from semiconductor material and stores information in binary form. Binary information is composed of two symbols, ‘0’ and ‘1’, called binary digits (bits). Memory is organized into equal-sized units (usually groups of 8 bits, called bytes). These units are arranged in sequence and identified by numbers called addresses (memory addresses). The memory of a computer can be divided into the distinct parts below.
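
A short C sketch (illustrative; the variables are arbitrary) can make bytes and addresses concrete, using sizeof to count the bytes an object occupies and the & operator to obtain its memory address:

    #include <stdio.h>

    int main(void)
    {
        char c = 'A';   /* one byte (8 bits) */
        int  n = 42;    /* typically four bytes */

        /* sizeof reports how many bytes an object occupies;
           &x yields the memory address where it is stored. */
        printf("c occupies %zu byte(s) at address %p\n", sizeof c, (void *)&c);
        printf("n occupies %zu byte(s) at address %p\n", sizeof n, (void *)&n);
        return 0;
    }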

* Registers

Registers are locations within the microprocessor where data is stored temporarily during processing. These internal high-speed registers hold data and operands during arithmetic and logic operations. Some registers are accessible to the programmer through instructions; others are reserved for the CPU's own use.
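
In C, for example, the register storage-class specifier lets the programmer hint that a variable should be kept in a CPU register. A minimal sketch (modern compilers usually make this decision themselves and are free to ignore the hint):

    #include <stdio.h>

    int main(void)
    {
        /* Hint that i is heavily used and may be kept in a
           CPU register; the compiler may ignore this. */
        register int i;
        long sum = 0;

        for (i = 0; i < 1000; i++)
            sum += i;

        printf("sum = %ld\n", sum);
        return 0;
    }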

* Internal Cache

Cache is a small, high-speed memory that holds frequently used data. Using a cache avoids repeatedly reading data from the slower main memory. Internal cache is located within the microprocessor (designated L1, L2 and L3 cache).

* External Cache

External cache supplements the internal cache and is used when the internal cache is absent or insufficient. It is placed between the CPU and main memory (the cache size is given in the microprocessor's specification). As cache size increases, the performance of the microprocessor also increases.
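
To see why cache matters, consider a small C sketch (the array size here is an arbitrary choice for illustration): traversing a two-dimensional array row by row uses each fetched cache line fully, while traversing it column by column tends to cause many more cache misses and runs slower on real hardware.

    #include <stdio.h>

    #define ROWS 1024
    #define COLS 1024

    static int a[ROWS][COLS];

    int main(void)
    {
        long sum = 0;
        int i, j;

        /* Row-major traversal: consecutive elements are adjacent
           in memory, so each cache line is fully used. */
        for (i = 0; i < ROWS; i++)
            for (j = 0; j < COLS; j++)
                sum += a[i][j];

        /* Column-major traversal jumps COLS elements at a time,
           typically causing far more cache misses. */
        for (j = 0; j < COLS; j++)
            for (i = 0; i < ROWS; i++)
                sum += a[i][j];

        printf("sum = %ld\n", sum);
        return 0;
    }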

* Main Memory

Main memory stores the data and instructions required by the microprocessor. Main memory is also called RAM (Random Access Memory). Microprocessor instructions can directly access main memory locations. Main memory is fast but expensive. However, it is volatile; the stored contents are lost when the power supply is cut off.

E.g.: SDRAM, DDR1, DDR2, DDR3 RAM, etc. (main memory types used in today’s computers).
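
A running program obtains its working storage from main memory. A minimal C sketch (the names and sizes are illustrative) that requests a block of RAM at run time and releases it:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Request a block of main memory (RAM) at run time. */
        int *values = malloc(10 * sizeof *values);
        if (values == NULL) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }

        for (int i = 0; i < 10; i++)
            values[i] = i * i;

        printf("values[5] = %d\n", values[5]);

        /* The contents exist only while the program runs and the
           power is on; main memory is volatile. */
        free(values);
        return 0;
    }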

* Secondary Memory

All the data and programs required by a computer cannot be stored in main memory, because it is small and volatile. Secondary memory is needed to store data and programs that are not currently required by the microprocessor. Secondary memory is slower but less expensive than main memory; it is also non-volatile and larger in capacity. The processor can directly access the levels of the memory hierarchy only up to main memory; it cannot access secondary memory directly. Therefore, data from secondary memory has to be brought into main memory before the processor can use it. Memory management techniques are required to transfer information between main memory and secondary memory, and this function is performed by the operating system.

E.g.: hard disk drives, CD/DVD-ROMs, floppy disks, etc.
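
A minimal C sketch (the file name data.txt is hypothetical) showing data being brought from secondary memory into main memory so the processor can use it:

    #include <stdio.h>

    int main(void)
    {
        /* data.txt is a hypothetical file on secondary storage. */
        FILE *fp = fopen("data.txt", "r");
        char buffer[256];

        if (fp == NULL) {
            fprintf(stderr, "cannot open data.txt\n");
            return 1;
        }

        /* fread copies bytes from the disk (secondary memory) into
           buffer, which lives in main memory. */
        size_t n = fread(buffer, 1, sizeof buffer - 1, fp);
        buffer[n] = '\0';
        printf("read %zu bytes: %s\n", n, buffer);

        fclose(fp);
        return 0;
    }
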
[Figure: Typical modern computer memory hierarchy]

5. Output Unit (O/P)

Just as input devices are used to supply the computer with data, there should be some means for the computer to communicate with the user. The information generated by the computer is displayed using an output device.

E.g.: printers, monitors, plotters, etc.

[Figure: Typical personal computer with peripherals (input and output devices)]


[Figure: Typical computer storage types]




Friday, September 24, 2010

History of 'C' Language


Before we start writing any complex program in C, we must understand what C really is, how it came into existence, and how it differs from other languages of its time. ‘C’ seems a strange name for a programming language. In this tutorial I will talk about these issues and then move on to the structure of a typical C program.

The root of many modern languages is ALGOL, introduced in the early 1960s. ALGOL was the first computer programming language to bring the concept of structured programming to the computer science community. It never became popular worldwide, but it was widely used in Europe.
In 1967, Martin Richards developed a language called BCPL (Basic Combined Programming Language), used primarily for writing system software.

The development of UNIX in the C language made it uniquely portable and improvable.

The first version of UNIX was written in the low-level PDP-7 assembly language. Soon after, a language called TMG was created for the PDP-7 by R. M. McClure. Ken Thompson set out to use TMG to build a FORTRAN compiler, but in 1970 he instead ended up developing a compiler for a new high-level language he called B, based on the earlier BCPL language developed by Martin Richards. Where it might take several pages of detailed PDP-7 assembly code to accomplish a given task, the same functionality could typically be expressed in a higher-level language like B in just a few lines. B was thereafter used for further development of the UNIX system, which made the work much faster and more convenient.

When the PDP-11 computer arrived at Bell Labs, Dennis Ritchie built on B to create a new language called C, which inherited Thompson's taste for concise syntax and combined high-level functionality with the detailed features required to program an operating system.

C is a programming language that was born at AT&T’s Bell Laboratories in the USA in 1972. It was written by Dennis Ritchie. The language was created for a specific purpose: to build the UNIX operating system (which runs on many computers, is one of the most widely used network operating systems today, and is at the heart of the Internet, with rich security features). From the beginning, C was intended to be useful: to allow busy programmers to get things done.

Most of the components of UNIX were eventually rewritten in C, culminating with the kernel itself in 1973. Because of its convenience and power, C went on to become the most popular programming language in the world over the next quarter century.

This development of UNIX in C had two important consequences:
  • Portability. It made it much easier to port UNIX to newly developed computers, because it eliminated the need to translate the entire operating system into the new machine's assembly language by hand:
    • First, write a C-to-assembly language compiler for the new machine.
    • Then use the new compiler to automatically translate the UNIX C language source code into the new machine's assembly language.
    • Finally, write only a small amount of new code where absolutely required by hardware differences with the new machine.
  • Improvability. It made UNIX easy to customize and improve by any programmer who could learn the high-level C programming language. Many did learn C, and went on to experiment with modifications to the operating system, producing many useful new extensions and enhancements.

Because C is such a powerful, dominant and flexible language, its use quickly spread beyond Bell Labs. In the late 1970s, C began to replace widespread, well-known languages of that time such as PL/I and ALGOL.

During the 1970s, C evolved into what is now known as “traditional C”. The language became more popular after the publication of the book “The C Programming Language” by Brian Kernighan and Dennis Ritchie in 1978; the book was so popular that the language came to be known as “K&R C” in the programming community.

Programmers everywhere began using it to write all sorts of programs. Soon, however, different organizations began creating their own versions of C, each with subtle differences. This posed a serious problem for system developers. To solve it, the American National Standards Institute (ANSI) formed a committee in 1983 to establish a standard definition of C. The committee approved a version of C in 1989 which is known as ANSI C. With few exceptions, every modern C compiler can adhere to this standard. ANSI C was then approved by the International Organization for Standardization (ISO) in 1990 (ANSI/ISO C).
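
One well-known difference the standard settled is how function parameters are declared. The sketch below (my own illustration; the function names are arbitrary, and modern compilers may warn about or reject the old style) contrasts traditional K&R C with ANSI C:

    #include <stdio.h>

    /* Traditional (K&R) style: parameter types are declared
       between the parameter list and the function body. */
    int add_kr(a, b)
    int a;
    int b;
    {
        return a + b;
    }

    /* ANSI C style: the prototype declares parameter types in the
       parameter list itself, so the compiler can check every call. */
    int add_ansi(int a, int b)
    {
        return a + b;
    }

    int main(void)
    {
        printf("%d %d\n", add_kr(2, 3), add_ansi(2, 3));
        return 0;
    }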

As noted above, the C language is so named because its predecessor was called B, a language developed by Ken Thompson of Bell Labs.
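
As promised at the start, here is the structure of a typical C program, shown as a minimal ANSI C example:

    /* hello.c - the structure of a typical C program */
    #include <stdio.h>              /* preprocessor directive: pull in I/O declarations */

    int main(void)                  /* execution starts at main() */
    {
        printf("Hello, world!\n");  /* library call that writes to the output unit */
        return 0;                   /* report success to the operating system */
    }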