THE KEY
OF
SUCCESS

(Concept, fundamentals, logic, computer, research)










Dedicated to “Ma Saraswati”





Preface

This book is written for the first-year students of computer applications at MLSU. The Information Technology material is intended for anyone interested in knowing about computers, and is also useful to CA students for their ITSM subject. “THE KEY OF SUCCESS” was written especially for BCA first-year students. More specifically, different classes of readers can benefit from this book-
It can be used as a textbook for the first course in computers taught in diploma and bachelor’s programmes in computer science, computer applications, and information technology.

It can be used as a textbook for the first course in computers taught to B. Sc. (IT) and B. Com. students.

This book covers four subjects: Information Technology, Problem Solving through C, Computer Organization, and Physics.

It is hoped that students will find the book very helpful.
Although every care has been taken to avoid mistakes and misprints, it is very difficult to claim perfection.




Authors.

Special Thanks to “Pratik Badala [PK]"

Website (Blog) :-   www.allnotesonbca.webs.com
                                www.bcanotes1styear.webs.co











BCA 101 Information Technologies

      UNIT - 1



Computer Basics

 Algorithm

Planning the program involves defining its logic (the correct sequence of instructions needed to solve the problem at hand). The term algorithm refers to the logic of a program. It is a step-by-step description of how to arrive at the solution to a given problem: a sequence of instructions that, when executed in the specified order, produces the desired results.
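
For illustration, here is a simple algorithm expressed as a short C program (a minimal sketch of our own; the function name and values are purely illustrative): finding the largest of three numbers.

#include <stdio.h>

/* Algorithm: assume the first number is the largest, then
   compare it against the others, one step at a time. */
int largest(int a, int b, int c)
{
    int max = a;              /* Step 1: assume a is the largest   */
    if (b > max) max = b;     /* Step 2: replace it if b is larger */
    if (c > max) max = c;     /* Step 3: replace it if c is larger */
    return max;               /* Step 4: report the result         */
}

int main(void)
{
    printf("%d\n", largest(12, 7, 25));   /* prints 25 */
    return 0;
}

Executed in the specified sequence, these steps always arrive at the desired result, which is exactly what the definition above demands.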

Rules for developing algorithm

There are no set rules for developing algorithms. While writing an efficient algorithm, the following points must be kept in mind:

       every procedure should carefully specify its input and output requirements
       the meaning of each variable should be clearly defined
       the flow of the program should generally be forward, except for normal looping and unavoidable instances

Techniques of problem solving

Problem solving is an art, in that it requires enormous intuitive power, and a science, in that it takes a pragmatic approach.

Here is a rough outline of a general problem-solving approach.

      Write out the problem statement. Include information on what you are to solve, and consider why you need to solve the problem.
      Make sure you are solving the real problem, as opposed to the perceived problem. Check to see that you define and solve the real problem.
      Draw and label a sketch. Define and name all variables and/or symbols. Show numerical values of variables if known.
      Identify and name
      relevant principles, theories and equations
      systems and subsystems
      dependent and independent variables
      knowns and unknowns
      inputs and outputs
      necessary information
      List the assumptions and approximations involved in solving the problem. Question the assumptions and then state which ones are the most reasonable for your purposes.
      Check to see whether the problem is under-specified or over-specified. If it is under-specified, figure out how to find the missing information; if it is over-specified, identify the extra information that is not needed.
      Relate the problem to similar problems or to past experience.
      Use an algorithm.
      Examine and evaluate the answer to see whether it makes sense.


 What is a Computer


When we think of a computer, we generally picture computer hardware: the monitor, the keyboard, and the electronic circuitry contained within the rectangular case. There is more to a computer than this, however. The missing element is software–the instructions that tell the computer how to operate the hardware. All computers must have these two components to function. However, it is software that gives the computer one of its most distinguishing characteristics—the ability to program a single machine to perform many different functions.
In general terms, a computer is a machine operating under the control of instructions stored in its own memory. These operations and instructions enable the computer to receive data from a user (input), transform and manipulate the data according to specified rules (process), and produce results (output). Additionally, data, instructions, and information are stored (storage) for future retrieval and use.  Many computers are also capable of another task: communicating directly with other machines.
The user performs the input task with a device such as a keyboard, mouse, or digital scanner. These devices allow the user to enter data and instructions into the computer. A secondary storage system stores and retrieves additional data and instructions that may also be used in the input and processing stages. This system might include magnetic or optical devices, such as CD-ROMs, hard disks, floppy disks, and tapes. The central processing system, which manipulates the data, is perhaps the most important part of the computer. This system is the “brain” of the computer in that it enables the computer to transform unorganized inputs into useful information. The central processing system includes the central processing unit (CPU) and the primary memory. The computer’s output system displays the results of the data manipulation. The output system might include a monitor, a printer, a plotter, a voice output device, or microfilm/microfiche equipment. A final element of a computer is the communication system, which passes information from computer to computer over communication media. Each of these systems is discussed in more detail below.
As noted above, computers come in many types. It would be difficult to adequately cover the variations in the components of these different computer types in a brief introduction. Therefore, we will confine the following discussion to personal computers (PCs). However, most of the discussion, especially as relating to basic computer operations, is easily transferable to other computer types.

Characteristics of Computer

No matter where computers are found or how they are applied, they’re used for input, processing, output and storage. But computers wouldn’t be worth the trouble without the following characteristics:
      Computers are fast. Many of today’s computers can perform hundreds of millions of processing operations in one second.
      Computers are reliable. Today’s computers may run day in and day out for years without failure.
      Computers are accurate. The computer’s physical processing circuits rarely make errors. Computers make errors, of course, but these are almost always due to faulty programs or incorrect data input.
      Computers can store massive amounts of information. Today’s personal computer can be equipped with disks capable of storing more than forty billion characters (letters or numbers). That capacity is enough to store the complete works of William Shakespeare, an unabridged English dictionary, a 32-volume encyclopedia, a world atlas and almanac, dozens of computer programs, and all our written work from the third grade through graduate school, with room for more.
      Computers can move information very quickly from one place to another. Using an experimental connection that may soon play a role in the Information Superhighway, one computer can send the entire text of the Encyclopedia Britannica to another linked computer in less than one second.
A computer-literate person knows that the computer is a tool for creating useful information that can be printed, communicated to others, and stored for future use.

The computer has some similarity to the television (TV). It lets you see pictures, hear music, etc. just as you can from the television screen, but with the big difference that TVs do not take any input from the user, as computers do. In the following segment we will look at the various advantages provided by computer systems.

The fact that computers have today made their impact on the world can hardly be questioned. However, the question that you may ask here is – “How do you benefit by using a computer?”

      Speed

A computer is a fast device. Inside a system, operations occur through electronic circuits in which signals flow at close to the speed of light. It can perform billions of operations per second.

      Accurate

The computer’s physical processing circuits rarely make errors. In fact, most computer errors are caused by mistakes made by the users, such as faulty programs or incorrect data input. Such wrong output is an instance of the GIGO (Garbage In, Garbage Out) principle.

      Diligence

Unlike humans, computers simply do not get tired. The same task, or repetitive work, does not in the very least affect computers.

      Multipurpose

This is one of the most wonderful features of computers. A computer can handle several tasks at the same time: one moment it is preparing a letter, while at the same time it can be saving another document, and along with that you can listen to music. The system is flexible enough to perform various tasks at the same time.

      Storage

A computer can store huge amounts of data. Today’s computers are provided with disks capable of storing more than one billion characters i.e. letters or numbers.

      Move Information

It lets you move information quickly from one place to another using computer networks and storage devices like a floppy disk or a CD-ROM that can be easily carried from one place to another.

Note: Within the limitations set by their designers, computers can do whatever they are instructed to do, but no more. They cannot, so far, “think” for themselves.




Problem-solving Using Computers

      The ability to process the input according to specified instructions: the user not only inputs data but also the instructions specifying what is to be done with the data. It is because of this property that a computer behaves as a general-purpose machine – a machine that can solve any problem if it is instructed appropriately.

      How computers can be used to solve problems
      Identify the problem to be solved: identify and define it.
      Develop a method for solving the problem: one must find a way (manually) to solve the problem. The solution is called an algorithm.
      Analyze the various algorithms to obtain the most suitable one.
      Express the solution in clear steps: many tools exist for this purpose, such as flowcharts, decision tables, pseudocode, etc.
      Translate the algorithm into a computer programming language; the result is known as a computer program. Programming languages are of such a nature that both the computer and the programmer can understand what is written.
      The program is submitted to the computer for execution. It is translated into the machine (native) language of the computer, in terms of currents/voltages, and the translated form is what the computer actually executes.

     The essence of the above discussion is that using a computer for problem solving involves 2 clear-cut activities:-
      development of a suitable algorithm
      conversion of the algorithm into a program

Problem definition
It is the first and most important step. The information needs of a software system can be determined by:
1) Studying the existing system
2) Using questionnaires
3) Setting out the tentative information needs
4) Reviewing more detailed statements of information needs with users
Whichever of these steps are used, the requirements are determined first. Analysis of the problem produces a clear, accurate problem definition.
Problem solving
Become aware of the problem.

Gather information to increase understanding of the problem situation.
Define the problem in terms of objectives, variables and their interactions.
Develop alternative solutions.
Develop evaluation criteria.
Choose the most appropriate solution.
Implement the solution.
Monitor and control the implementation.


Goals and objectives
All systems, by definition, have a purpose: to accomplish meaningful goals and objectives. The goal is to meet the needs the user expects the software to satisfy.
      Mission:- a broad statement of the purpose of the organization, e.g., to grow continuously through the efforts of everyone in the organization.
      Goals:- a general statement of what is to be accomplished, e.g., reduce the time taken to respond to requests without increasing the number of people involved.
      Objectives:- statements of measurable results to be achieved in a specified time frame, e.g., reduce order processing time from 5 days to 3 days.

Problem Identification and definition
      Identifying key persons involved
      Identifying key variables & their inter-relationships
      Specifying the objectives of the solution


Similarities between problems
1)    Speed of processing:- the speed at which a computer-based data processing system can respond adds value to these systems.
2)    Volume of work:- the volume of information to be handled at each location should be calculated, both for normal and peak hours, to avoid bottlenecks in operation.
       The steps in this analysis are:-
      Calculate the number of messages flowing to and from every point in the system.
      Calculate the total transmission time, using the data from the above point plus the transmission speed of the type of communication being analyzed.
      Calculate the volumes for the peak periods.
3)   Reliability: it includes
      the robustness of the design,
      the availability of alternative computing facilities in the event of a breakdown,
      the provision of sufficient equipment and staff to handle peak loads.
4) Accuracy: this determines how accurately the data is being processed by the machine.
5)  Security:- of the many aspects of security, those which particularly concern the systems analysts are:-
     a) Confidentiality
     b) Privacy
     c) Security of data
6) Cost:- cost is associated with the activities of development as well as operation of a data processing system. Operation includes evaluation and maintenance of the system by the users. The real cost of processing includes five different categories:
a) Manpower
b) Hardware
c) Software
d) Consumables & supplies
e) Overhead costs    


Data Representation

Representation of Characters in computers

Introduction
Even though many people used to think of computers as "number crunchers", people figured out long ago that it's just as important to handle character data.
Character data isn't just alphabetic characters, but also numeric characters, punctuation, spaces, etc. Most keys on the central part of the keyboard (except shift, caps lock) are characters.
As we've discussed with signed and unsigned ints, characters need to be represented. In particular, they need to be represented in binary. After all, computers store and manipulate 0's and 1's (and even those 0's and 1's are just abstractions---the implementation is typically voltages).
Unsigned binary and two's complement are used to represent unsigned and signed int respectively, because they have nice mathematical properties, in particular, you can add and subtract as you'd expect.
However, there aren't such properties for character data, so assigning binary codes for characters is somewhat arbitrary. The most common character representation is ASCII, which stands for American Standard Code for Information Interchange.
There are two reasons to use ASCII. First, we need some way to represent characters as binary numbers (or, equivalently, as bit string patterns). There's not much choice about this since computers represent everything in binary.
If you've noticed a common theme, it's that we need representation schemes for everything. However, most importantly, we need representations for numbers and characters. Once you have that (and perhaps pointers), you can build up everything you need.
The other reason we use ASCII is because of the letter "S" in ASCII, which stands for "standard". Standards are good because they allow for common formats that everyone can agree on.
Unfortunately, there's also the letter "A", which stands for American. ASCII is clearly biased for the English language character set. Other languages may have their own character set, even though English dominates most of the computing world (at least, programming and software).
Nice Properties
Even though character sets don't have mathematical properties, there are some nice aspects of ASCII. In particular, the lowercase letters are contiguous ('a' through 'z' maps to 97 through 122 in decimal). The uppercase letters are also contiguous ('A' through 'Z' maps to 65 through 90). Finally, the digits are contiguous ('0' through '9' maps to 48 through 57).
Since they are contiguous, it's usually easy to determine whether a character is lowercase or uppercase (by checking if the ASCII code lies in the range of lower or uppercase ASCII codes), or to determine if it's a digit, or to convert a digit in ASCII to an int value.
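
Because the codes are contiguous, those tests reduce to simple range comparisons. The following C sketch (our own example; it deliberately avoids the standard ctype.h helpers to show the ranges at work) classifies a character and converts a digit character to its int value:

#include <stdio.h>

int main(void)
{
    char ch = 'G';

    if (ch >= 'a' && ch <= 'z')          /* codes 97..122 */
        printf("%c is lowercase\n", ch);
    else if (ch >= 'A' && ch <= 'Z')     /* codes 65..90  */
        printf("%c is uppercase\n", ch);

    char digit = '7';
    if (digit >= '0' && digit <= '9')    /* codes 48..57  */
        printf("value = %d\n", digit - '0');   /* prints: value = 7 */

    return 0;
}
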
ASCII Code (Decimal)
This chart can be found by typing man ascii.
0 nul   16 dle   32 sp    48 0     64 @     80 P     96 `    112 p
1 soh   17 dc1   33 !     49 1     65 A     81 Q     97 a    113 q
2 stx   18 dc2   34 "     50 2     66 B     82 R     98 b    114 r
3 etx   19 dc3   35 #     51 3     67 C     83 S     99 c    115 s
4 eot   20 dc4   36 $     52 4     68 D     84 T    100 d    116 t
5 enq   21 nak   37 %     53 5     69 E     85 U    101 e    117 u
6 ack   22 syn   38 &     54 6     70 F     86 V    102 f    118 v
7 bel   23 etb   39 '     55 7     71 G     87 W    103 g    119 w
8 bs    24 can   40 (     56 8     72 H     88 X    104 h    120 x
9 ht    25 em    41 )     57 9     73 I     89 Y    105 i    121 y
10 nl    26 sub   42 *     58 :     74 J     90 Z    106 j    122 z
11 vt    27 esc   43 +     59 ;     75 K     91 [    107 k    123 {
12 np    28 fs    44 ,     60 <     76 L     92 \    108 l    124 |
13 cr    29 gs    45 -     61 =     77 M     93 ]    109 m    125 }
14 so    30 rs    46 .     62 >     78 N     94 ^    110 n    126 ~
15 si    31 us    47 /     63 ?     79 O     95 _    111 o    127 del
The characters between 0 and 31 are generally not printable (control characters, etc). 32 is the space character.
Also note that there are only 128 ASCII characters. This means only 7 bits are required to represent an ASCII character. However, since the smallest size representation on most computers is a byte, a byte is used to store an ASCII character. The MSb of an ASCII character is 0.
Sometimes ASCII has been extended by using the MSb.
ASCII Code (Hex)
This chart can be found by typing man ascii.
00 nul   10 dle   20 sp    30 0     40 @     50 P     60 `     70 p
01 soh   11 dc1   21 !     31 1     41 A     51 Q     61 a     71 q
02 stx   12 dc2   22 "     32 2     42 B     52 R     62 b     72 r
03 etx   13 dc3   23 #     33 3     43 C     53 S     63 c     73 s
04 eot   14 dc4   24 $     34 4     44 D     54 T     64 d     74 t
05 enq   15 nak   25 %     35 5     45 E     55 U     65 e     75 u
06 ack   16 syn   26 &     36 6     46 F     56 V     66 f     76 v
07 bel   17 etb   27 '     37 7     47 G     57 W     67 g     77 w
08 bs    18 can   28 (     38 8     48 H     58 X     68 h     78 x
09 ht    19 em    29 )     39 9     49 I     59 Y     69 i     79 y
0a nl    1a sub   2a *     3a :     4a J     5a Z     6a j     7a z
0b vt    1b esc   2b +     3b ;     4b K     5b [     6b k     7b {
0c np    1c fs    2c ,     3c <     4c L     5c \     6c l     7c |
0d cr    1d gs    2d -     3d =     4d M     5d ]     6d m     7d }
0e so    1e rs    2e .     3e >     4e N     5e ^     6e n     7e ~
0f si    1f us    2f /     3f ?     4f O     5f _     6f o     7f del
The difference in the ASCII code between an uppercase letter and its corresponding lowercase letter is 20 in hex (32 in decimal).
This makes it easy to convert lower to uppercase (and back) in hex (or binary).
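
Since hex 20 corresponds to a single bit (b5), case conversion is one bitwise operation. A minimal C sketch of the trick (our own example):

#include <stdio.h>

int main(void)
{
    char lower = 'g';                 /* 0x67 */
    char upper = lower & ~0x20;       /* clear bit b5: 0x47 = 'G'    */
    char again = upper |  0x20;       /* set bit b5 back: 0x67 = 'g' */
    printf("%c %c %c\n", lower, upper, again);   /* prints: g G g */
    return 0;
}
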
Char as a one byte int
It turns out that C supports two char types: char (which is usually considered "signed") and unsigned char, which is unsigned.
This may seem like a particularly odd feature. What does it mean to have a signed or unsigned char?
This is where it's useful to think of a char as a one byte int. When you want to cast char to an int (of any size), the rules for sign-extension may apply. In particular, if the MSb of a char is 1, then casting it to an int may cause this 1 to sign extend, which may be surprising if you're not expecting it.
Of course, you may observe "but how would the MSb get the value 1?". If you recall, char is one of the data types that you can manipulate with bitwise and bitshift operators. This means you can set or clear any bit of a char. In particular, you can set or clear the MSb of a char.
Another way you might get 1 for the MSb is casting an int down to a char. Usually, this means truncating off the upper bytes, leaving the least significant byte. Since an int can have any bit pattern, there's a possibility that the least significant byte has a 1 in bit position b7.
You should think of a char as both an ASCII character and an 8 bit int. This duality is important because char is the only data type that is 1 byte.
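
The following minimal C sketch (our own; whether plain char is signed is implementation-defined, though it is signed on most PC compilers) demonstrates the sign-extension surprise described above:

#include <stdio.h>

int main(void)
{
    char c = 0;
    c |= 0x80;               /* set the MSb of the char (bit b7)        */

    int i = c;               /* if plain char is signed, the 1 in the
                                MSb sign-extends, so i becomes -128     */

    unsigned char u = 0x80;
    int j = u;               /* unsigned char zero-extends, so j is 128 */

    printf("i = %d, j = %d\n", i, j);
    return 0;
}
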
Other Character Codes
While ASCII is still popularly used, another character representation that was used (especially at IBM) was EBCDIC, which stands for Extended Binary Coded Decimal Interchange Code (yes, the word "code" appears twice). This character set has mostly disappeared. EBCDIC does not store characters contiguously, which can create problems when alphabetizing "words".
One problem with ASCII is that it's biased to the English language. This generally creates some problems. One common solution is for people in other countries to write programs in ASCII.
Other countries have used different solutions, in particular, using 8 bits to represent their alphabets, giving up to 256 letters, which is plenty for most alphabet based languages (recall you also need to represent digits, punctuation, etc).
However, Asian languages, which are word-based rather than character-based, often have more words than 8 bits can represent. In particular, 8 bits can only represent 256 values, which is far smaller than the number of words in such languages.
Thus, a newer character set called Unicode is now becoming more prevalent. This is a 16 bit code, which allows for about 65,000 different representations. This is enough to encode the popular Asian languages (Chinese, Korean, Japanese, etc.). It also turns out that ASCII codes are preserved. What does this mean? To convert ASCII to Unicode, take all one byte ASCII codes, and zero-extend them to 16 bits. The result is the Unicode version of the ASCII characters.
The biggest consequence of using Unicode instead of ASCII is that text files double in size. The second consequence is that endianness begins to matter again. With single bytes, there's no need to worry about endianness; however, you have to consider it with two byte quantities.
While C and C++ still primarily use ASCII, Java has adopted Unicode. This means that Java must provide a byte type, because char in Java is no longer a single byte. Instead, it's a 2 byte Unicode representation.
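
A minimal C sketch of the zero-extension rule described above (our own example; unsigned short stands in for a 16-bit Unicode code unit):

#include <stdio.h>

int main(void)
{
    unsigned char  ascii   = 'A';     /* one byte: 0x41                   */
    unsigned short unicode = ascii;   /* zero-extended to 16 bits: 0x0041 */
    printf("ASCII 0x%02X -> Unicode 0x%04X\n", ascii, unicode);
    return 0;
}
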
ASCII files
It's easy to fool yourself into thinking that numbers written in a file are actually the internal representation. For example, if you write 123 in a file using a text editor, is that really how the integer 123 is stored?
The file does NOT store the integer 123. Instead, it stores the ASCII codes for the characters '1', '2', and '3' (which are 31, 32, 33 in hex, or 0011 0001, 0011 0010, 0011 0011 in binary).
ASCII files store bytes. Each byte is the ASCII code for some character in the character set. You can think of a text editor as a translator. It translates those binary numbers into symbols on the screen. Thus, when it sees 41 in hex, that's the ASCII code for 'A', and thus 'A' gets displayed.
Some people think that if they type 0's and 1's in a text editor, they are writing out bits into a binary file. This is not true. The file contains the ASCII code for the character '0' and the character '1'.
There are hex editors which allow you to either type in binary or more commonly in hex. Those hex pairs are translated to binary. Thus, when you write F3, the binary number 1111 0011 is written to the file (the space is placed there only to make the binary number easy to read).
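
The point can be verified in a few lines of C (our own sketch; "num.txt" is just an example file name). The program writes the text 123 to a file and then reads the file back byte by byte, showing that it holds the ASCII codes 0x31, 0x32, 0x33 rather than a binary integer:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("num.txt", "w");
    if (f == NULL) return 1;
    fputs("123", f);                  /* writes three bytes: 0x31 0x32 0x33 */
    fclose(f);

    f = fopen("num.txt", "r");
    if (f == NULL) return 1;
    int byte;
    while ((byte = fgetc(f)) != EOF)
        printf("character '%c' has code 0x%02X\n", byte, byte);
    fclose(f);
    return 0;
}
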
Representations of Integers
Representing a number in a computer
Representing (or encoding) a number means to express it in binary form. Representing numbers in a computer is necessary in order for it to be able to store and manipulate them. However, the problem is that a mathematical number can be arbitrarily large, while the representation of a number in a computer must occupy a predefined number of bits. The key, then, is being able to predefine a number of bits, and how to interpret them, so that they can represent the figure as efficiently as possible. For this reason, it would be wasteful to use 16 bits to encode a character (65,536 possibilities) when fewer than 256 distinct characters are typically needed.
Representation of a natural number
A natural number is a positive integer or zero. The choice of how many bits to use depends on the range of numbers to be used. To encode the natural numbers between 0 and 255, all that is needed is 8 bits (a byte), as 2^8 = 256. Generally speaking, n-bit encoding can be used for representing natural numbers between 0 and 2^n - 1.
To represent a natural number, having defined how many bits will be used to code it, arrange the bits into a binary cell, with each bit placed according to its binary weight from right to left, then "fill" the unused bits with zeroes.
Representation of an integer
An integer is a whole number which may be negative. The number must therefore be encoded in such a way as to tell whether it is positive or negative, and to follow the rules of addition. The trick involves using an encoding method called two's complement.
      A positive integer or zero will be represented in binary (base 2) as a natural number, except that the highest-weighted bit (the bit on the far left) represents the plus or minus sign. So for a positive integer or zero, this bit must be set to 0 (which corresponds to a plus sign, as 1 is a minus sign). Thus, if a number is encoded using 4 bits, the largest positive value possible will be 0111 (or 7 in decimal).
Generally, the largest positive integer encoded using n bits will be 2^(n-1) - 1.
      A negative integer is encoded using two's complement.

The principle of two's complement:
 Choose a negative number.
      Take its absolute value (its positive equivalent)
      Represent it in base 2 using n-1 bits
      Switch each bit to its complement (i.e. the zeroes are all replaced by ones and vice versa)
      Add 1
Note that adding a number to its two's complement gives 0 (the final carry is discarded)
Let's see this demonstrated in an example:
 We want to encode the value -5 using 8 bits. To do so:
      write 5 in binary: 00000101
      switch it to its complement: 11111010
      add 1: 11111011
      the 8-bit binary representation of -5 is 11111011
Comments:
The highest-weighted bit is 1, so it is indeed a negative number.
 If you add 5 and -5 (00000101 and 11111011) the sum is 0 (with a carry of 1 that is discarded).
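
The same steps can be checked in C. This sketch (our own) builds the two's complement of 5 with a bitwise complement plus 1, then verifies that the 8-bit sum of 5 and -5 wraps around to 0:

#include <stdio.h>

int main(void)
{
    unsigned char pos = 5;                           /* 00000101          */
    unsigned char neg = (unsigned char)(~pos + 1);   /* flip bits, add 1:
                                                        11111011 = 0xFB   */
    printf("-5 as 8 bits: 0x%02X\n", neg);           /* prints 0xFB       */

    unsigned char sum = pos + neg;    /* the carry out of bit 7 is discarded */
    printf("5 + (-5) = %d\n", sum);   /* prints 0                           */
    return 0;
}
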
Representation of a real number
The goal is to represent a number with a decimal point in binary (for example, 101.01, which is not read one hundred one point zero one because it is in fact a binary number, i.e. 5.25 in decimal) using the form 1.XXXXX... × 2^n (in our example, 1.0101 × 2^2). IEEE standard 754 defines how to encode a real number.
This standard offers a way to code a number using 32 bits, and defines three components:
      the plus/minus sign is represented by one bit, the highest-weighted bit (furthest to the left)
      the exponent is encoded using 8 bits immediately after the sign
      the mantissa (the bits after the decimal point) with the remaining 23 bits
Thus, the coding follows the form:
 seeeeeeeemmmmmmmmmmmmmmmmmmmmmmm
      The s represents the sign bit.
      each e represents an exponent bit
      each m represents a mantissa bit
However, there are some restrictions for exponents:
      The exponent 00000000 is forbidden
      The exponent 11111111 is also forbidden. However, these reserved values are sometimes used to report errors; this numeric configuration is called NaN, for Not a Number.
      127 (01111111) must be added to the exponent in order to obtain its encoded binary form. Since the stored exponent must lie between 00000001 and 11111110, the actual exponents can range from -126 to +127.
Thus, the formula for expressing real numbers is:
(-1)^S * 2^( E - 127 ) * ( 1 + F )
where:
      S is the sign bit, and so 0 is understood as positive ( (-1)^0 = 1 ).
      E is the exponent to which 127 must be added to obtain the encoded equivalent.
      F is the fraction part, the only one which is expressed, and which is added to 1 to perform the calculation.
Here is an example:
The value 525.5 is to be encoded.
      525.5 is positive, so the first bit will be 0.
      Its representation in base 2 is: 1000001101.1
      By normalizing it, we get: 1.0000011011*2^9
      Adding 127 to the exponent, which is 9, gives 136, or in base 2: 10001000
      The mantissa is composed of the decimal part of 525.5 in normalised base 2, which is 0000011011.
      As the mantissa must take up 23 bits, zeroes must be added to complete it:
00000110110000000000000
      The binary representation of 525.5 under IEEE standard 754 is therefore:
0 1000 1000 00000110110000000000000
 0100 0100 0000 0011 0110 0000 0000 0000 (44 03 60 00 in hexadecimal)
Here is another example, this time using a negative real number :
 The value -0.625 is to be encoded.
      The s bit is 1, as 0.625 is negative.
      0.625 is written in base 2 as follows: 0.101
      We want to write it in the form 1.01 x 2^-1
      Consequently, the exponent is worth 01111110, as 127 - 1 = 126 (or 01111110 in binary)
      The mantissa is 01000000000000000000000 (only the digits after the decimal point are represented, as the whole number is always equal to 1)
      The binary representation of the number -0.625 under IEEE standard 754 is:
1 0111 1110 01000000000000000000000
1011 1111 0010 0000 0000 0000 0000 0000 (BF 20 00 00 in hexadecimal) .
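
On a machine that follows IEEE standard 754 (as virtually all PCs do), the three fields can be pulled out of a float directly. This C sketch (our own; it assumes 32-bit float and unsigned int, and uses the common union idiom to view the bits) extracts the sign, exponent, and mantissa of 525.5:

#include <stdio.h>

int main(void)
{
    union { float f; unsigned int u; } v;   /* same 32 bits, two views */
    v.f = 525.5f;

    unsigned int sign     = v.u >> 31;          /* 1 bit                 */
    unsigned int exponent = (v.u >> 23) & 0xFF; /* 8 bits, biased by 127 */
    unsigned int mantissa = v.u & 0x7FFFFF;     /* 23 bits               */

    printf("sign=%u exponent=%u (actual %d) mantissa=0x%06X\n",
           sign, exponent, (int)exponent - 127, mantissa);
    /* for 525.5: sign=0 exponent=136 (actual 9) mantissa=0x036000 */
    return 0;
}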

Representation of Fractions

A decimal fraction is a fraction whose denominator is not given explicitly, but is understood to be an integer power of ten. Decimal fractions are commonly expressed using decimal notation in which the implied denominator is determined by the number of digits to the right of a decimal separator, the appearance of which (e.g., a period, a raised period (•), a comma) depends on the locale (for examples, see decimal separator). Thus for 0.75 the numerator is 75 and the implied denominator is 10 to the second power, viz. 100, because there are two digits to the right of the decimal separator. In decimal numbers greater than 1 (such as 3.75), the fractional part of the number is expressed by the digits to the right of the decimal (with a value of 0.75 in this case). 3.75 can be written either as an improper fraction, 375/100, or as a mixed number, 3 75/100.
Decimal fractions can also be expressed using scientific notation with negative exponents, such as 6.023×10^-7, which represents 0.0000006023. The 10^-7 represents a denominator of 10^7. Dividing by 10^7 moves the decimal point 7 places to the left.
Decimal fractions with infinitely many digits to the right of the decimal separator represent an infinite series. For example, 1/3 = 0.333... represents the infinite series 3/10 + 3/100 + 3/1000 + ... .
Another kind of fraction is the percentage (Latin per centum meaning "per hundred", represented by the symbol %), in which the implied denominator is always 100. Thus, 51% means 51/100. Percentages greater than 100 or less than zero are treated in the same way, e.g. 311% equals 311/100, and -27% equals -27/100.
The related concept of per mille, meaning parts per thousand, has an implied denominator of 1000, while the more general parts-per notation, as in 75 parts per million, means that the proportion is 75/1,000,000.
Whether common fractions or decimal fractions are used is often a matter of taste and context. Common fractions are used most often when the denominator is relatively small. By mental calculation, it is easier to multiply 16 by 3/16 than to do the same calculation using the fraction's decimal equivalent (0.1875). And it is more accurate to multiply 15 by 1/3, for example, than it is to multiply 15 by any decimal approximation of one third. Monetary values are commonly expressed as decimal fractions, for example $3.75. However, as noted above, in pre-decimal British currency, shillings and pence were often given the form (but not the meaning) of a fraction, as, for example 3/6 (read "three and six") meaning 3 shillings and 6 pence, and having no relationship to the fraction 3/6.

Hexadecimal Representation of Numbers

In mathematics and computer science, hexadecimal (also base 16, or hex) is a positional numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0-9 to represent values zero to nine, and A, B, C, D, E, F (or alternatively a-f) to represent values ten to fifteen. For example, the hexadecimal number 2AF3 is equal, in decimal, to (2×16^3) + (10×16^2) + (15×16^1) + (3×16^0), or 10995.
Each hexadecimal digit represents four binary digits (bits), and the primary use of hexadecimal notation is a human-friendly representation of binary-coded values in computing and digital electronics. One hexadecimal digit represents a nibble, which is half of an octet or byte (8 bits). For example, byte values can range from 0 to 255 (decimal), but may be more conveniently represented as two hexadecimal digits in the range 00 to FF. Hexadecimal is also commonly used to represent computer memory addresses.
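
A quick C illustration of the nibble-to-digit correspondence (our own sketch): printing a byte with %02X yields exactly two hex digits, one per nibble.

#include <stdio.h>

int main(void)
{
    unsigned char byte = 0xF3;        /* 1111 0011 in binary    */
    unsigned char hi = byte >> 4;     /* upper nibble: 1111 = F */
    unsigned char lo = byte & 0x0F;   /* lower nibble: 0011 = 3 */
    printf("%02X = %X and %X\n", byte, hi, lo);   /* prints: F3 = F and 3 */
    return 0;
}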

Decimal to Binary Conversion

Converting Numbers from One Base into Another
Decimal to Binary or Hexadecimal
A decimal number can be converted easily into an equivalent number of any base by the use of the division/remainder technique. This two-step technique is illustrated in Figure 5. Follow these steps to convert decimal to binary.

Use the two-step division/remainder technique to convert a decimal number into an equivalent number of any base.

Step 1 Divide the number (19, in this example) repeatedly by 2, and record the remainder of each division. In the first division, 2 goes into 19 nine times with a remainder of 1. The remainder is always one of the binary digits--0 or 1. In the last division you divide 1 by the base (2) and the remainder is 1.

Step 2 Rotate the remainders as shown in Figure 5; the result (10011) is the binary equivalent of a decimal 19.

Figure 6 illustrates how the same division/remainder technique is used to convert a decimal 453 into its hexadecimal equivalent (1C5). In a decimal-to-hex conversion, the remainder is always one of the 16 hex digits.


The two-step division/remainder technique is used to convert a decimal number to its hex equivalent.
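
The division/remainder technique translates directly into code. This C sketch (our own) converts a decimal number to any base from 2 to 16 by collecting remainders and printing them in reverse, reproducing the 19 -> 10011 and 453 -> 1C5 examples above:

#include <stdio.h>

/* Repeatedly divide by the base; the remainders, read in
   reverse order, are the digits of the converted number. */
void convert(unsigned int n, unsigned int base)
{
    const char digits[] = "0123456789ABCDEF";
    char buf[64];
    int i = 0;

    do {
        buf[i++] = digits[n % base];   /* record this remainder */
        n /= base;                     /* divide and repeat     */
    } while (n > 0);

    while (i > 0)                      /* print remainders in reverse */
        putchar(buf[--i]);
    putchar('\n');
}

int main(void)
{
    convert(19, 2);     /* prints 10011 */
    convert(453, 16);   /* prints 1C5   */
    return 0;
}
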
Binary to Decimal and Hexadecimal
To convert from binary to decimal, multiply the 1s in a binary number by their position values, then sum the products (see Figure 7). In Figure 7, for example, binary 11010 is converted into its decimal equivalent (26).


Multiply the 1s in a binary number by their position values.
The easiest conversion is binary to hex. To convert binary to hex, simply begin with the 1s position on the right and segment the binary number into groups of four digits each (see Figure 8). Refer to the equivalence table in Figure 2, and assign each group of four binary digits a hex equivalent. Combine your result, and the conversion is complete.
Hexadecimal to Binary
To convert hex numbers into binary, perform the grouping procedure for converting binary to hex in reverse (see Figure 8).

Place the binary digits in groups of four, then convert the binary number directly to hexadecimal.
Hexadecimal to Decimal
Use the same procedure as that used for binary-to-decimal conversions (see Figure 7) to convert hex to decimal.  Figure 9 demonstrates the conversion of a hex 3E7 into its decimal equivalent of 999.


Multiply the digits in a hexadecimal number by their position values.
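
In C, the standard library routine strtol applies this position-value rule for us. The sketch below (our own) reproduces the 11010 -> 26 and 3E7 -> 999 conversions worked in the figures above:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* strtol interprets the string in the given base by summing
       digit x base^position, exactly the rule described above. */
    long from_binary = strtol("11010", NULL, 2);    /* 26  */
    long from_hex    = strtol("3E7",   NULL, 16);   /* 999 */
    printf("%ld %ld\n", from_binary, from_hex);
    return 0;
}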

Error-detecting codes
Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver.
Error correction is the detection of errors and reconstruction of the original, error-free data.
Error detection is most commonly realized using a suitable hash function (or checksum algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided.
There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors).
Random-error-correcting codes based on minimum distance coding can provide a suitable alternative to hash functions when a strict guarantee on the minimum number of errors to be detected is desired. Repetition codes, described below, are special cases of error-correcting codes: although rather inefficient, they find applications for both error correction and detection due to their simplicity.
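
As a toy illustration of the tag-and-recompute idea (our own sketch, not a production algorithm; real systems prefer CRCs or cryptographic hashes), here is a one-byte checksum in C:

#include <stdio.h>

/* Sum all bytes modulo 256. The sender appends this tag to the
   message; the receiver recomputes it and compares the two. */
unsigned char checksum(const unsigned char *data, int len)
{
    unsigned int sum = 0;
    for (int i = 0; i < len; i++)
        sum += data[i];
    return (unsigned char)(sum & 0xFF);
}

int main(void)
{
    unsigned char msg[] = { 'H', 'E', 'L', 'L', 'O' };
    printf("tag = 0x%02X\n", checksum(msg, 5));
    return 0;
}
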
The most famous early systematic use of error detection was by Jewish scribes in the precise copying of the Jewish bible, beginning before Christ. An emphasis on minute details of words and spellings evolved into the idea of a perfect text in 135 CE, and with it increasingly forceful strictures that a deviation in even a single letter would make a Torah scroll invalid. The scribes used methods such as summing the number of words per line and per page (Numerical Masorah), and checking the middle paragraph, word and letter against the original. The page was thrown out if a single mistake was found, and three mistakes on a single page would result in the entire manuscript being destroyed (the equivalent of retransmission on a telecommunications channel). The effectiveness of their methods was verified by the accuracy of copying through the centuries demonstrated by discovery of the Dead Sea Scrolls in 1947–1956, dating from c.150 BCE-75 CE.

                                       


Input & Output Devices


Computer Input Devices

Keyboard
The keyboard is one of the most common input devices. In addition to keys that represent letters, numbers, and special characters, keyboards often contain function keys, to which the user can assign personalized commands, and special-purpose keys, such as the enter key. Some keyboards contain a numeric keypad (a set of keys in the form of a calculator keypad) that is used for entering numbers conveniently and easily. Keyboards may also have Internet keys, CD/DVD controls, and multimedia keys.  While most keyboards are connected to the computer by a cable, there are wireless keyboards that transmit their input by means of infrared signals.
Mouse
Connected to many modern microcomputers is a hand-sized input device called a mouse, or pointing device. A mechanical mouse contains a ball that rolls along a flat surface, moving a cursor (pointer) on the computer screen. An optical mouse uses light signals to detect the movement of the mouse and thereby move the pointer on the screen.  Through the use of one to four buttons on top of the mouse (and a wheel on some versions of the mouse), the user can select and manipulate objects represented on the screen as icons (symbols). The mouse is a convenient and effective way to interact with the graphical user interfaces (GUIs) that dominate the current computer environment.
Touchpad
This input device is a small rectangular surface that operates by means of pressure and motion, and allows the user to perform all of the functions of a mouse by touch. Moving a fingertip across the touchpad moves the cursor across the screen. Tapping the touchpad acts as a click or double-click. Touchpads are most commonly found on laptop computers.
Pointing Stick
Like the touchpad, this pressure-sensitive input device fills the role of the mouse in some laptops. It is shaped like a pencil eraser and fits between keys in the keyboard. Moving the pointing stick moves the cursor. This device is mostly found on laptop computers.
Trackball
The trackball is basically an upside-down mouse. Instead of rolling the ball on a surface as with the mouse, the user rotates the trackball with the hand directly to position the cursor on the screen.
Touch Screen
This input device is often used in public places, such as shopping centers, hotels, and airports, where users may not have much experience working with computers. The user simply touches the screen for the desired menu item.
Light Pen. This light-sensitive device is used for entering data by writing on a computer screen. Engineers, designers, and architects use light pens to draw and modify shapes on the screen.
Graphics Tablet. A graphics tablet is a flat rectangular board used to input drawings or other graphical data. The user draws on the graphics tablet with an electronic pen or a mouse-like device (called a puck). The drawing movements are converted into digital signals that are sent to the computer as input. Architects and mapmakers commonly use this input device.
Other Input Methods

Speech Recognition Devices:  These devices recognize the words spoken by users. Sample voice patterns are analyzed, digitized, and stored in a database. Later, during operations, actual voice inputs are compared with the stored voice pattern. Speech or voice recognition technology is best suited to structured input applications in work environments where the user’s hands are occupied. For example, a receiving clerk may enter data about items received with the following phrases:
                “Invoice 998810”
                “Item 56213”
                “Quantity 100”
                “Condition Good”
From these inputs, the speech recognition system will create a record of items received by a receiving clerk.
          Audio Input Devices. In addition to speech recognition devices, sound may be entered into computers with audio input devices. These devices allow music, speech, and other sound effects to be used as input. Sound may be entered through a microphone, electronic piano keyboard, tape player, and CD player. For high quality sound recording, a sound card needs to be installed in the computer. For some electronic input devices a special serial port, called a musical instrument digital interface (MIDI), is required.
          Video Input Devices and Digital Cameras. Video input devices and digital cameras allow optical and graphical images to be entered into the computer for storage and processing. Digital cameras allow the user to store still-pictures as digital files (for example, .jpg’s and .gif’s) that can be used by the computer like other graphic images. They may be edited, e-mailed, printed, faxed, inserted into documents, and displayed on the monitor. Video input devices enable the user to store and process full-motion images generated from camcorders and VCRs.
Automatic Teller Machines (ATMs). ATMs have revolutionized banking by extending “bankers’ hours” to 24 hours a day. ATM users can withdraw cash, transfer funds, and make deposits at their convenience. Most ATM devices employ some sort of card input that contains information about the user. The user inserts the card and is prompted to enter a personal identification number (PIN) to authorize access. If the user fails to enter the correct PIN after three attempts, the ATM will assume an unauthorized person is attempting an illegal access. It will then implement a predetermined action. This may be to lock the user out of the system or have the ATM destroy or confiscate the access card. In any case, the user will need to appear in person at the bank and reestablish access authority.
Point-of-Sale (POS) Devices. POS terminals are found in many kinds of retail businesses; for example, grocery stores. Most POS terminals use some type of optical scanning device, usually involving laser beams, for reading the product code on an item. The product code, sometimes called a bar code, is the primary key for retrieving the price and description data from the database.

Computer Output Units

Like the input system, the output system provides an interface through which computers and humans can interact with one another. However, now the human is on the receiving end of the process. Output devices are used to display input data that has been processed by the computer into useful output information. Through its output system, the computer presents the data it contains in a form that the user can understand.  The presentation is typically in one of four forms: graphics (e.g., charts and photographs), text (e.g., words and numbers), video (e.g., photographic images in motion), or audio (e.g., music and speech). Probably the most common output devices are monitors for producing temporary displays (soft copies) and printers for producing paper copies of documents (hard copies).
Monitors
 A monitor, or screen, can display graphics, text, and video output. To display video in the monitor, the computer needs a video card (or graphics card) for converting the digital signal (i.e., data in the form of 0s and 1s) from the processor to an analog signal (i.e., data in the form of a continuous electrical wave) that the monitor can interpret.
Monitors are described in two ways. The first is by the number of colors that the screen can display, which can range from 4 to 65,536. The second is by the quality, or resolution, of the screen, which is measured in picture elements (pixels). Pixels are tiny dots of light that combine to form images on the screen. Typical screens have 320 × 200 pixels, 640 × 480 pixels, or 1,024 × 768 pixels. The more pixels a screen has, the greater its resolution. Both color and resolution are important for determining the performance and quality of a computer monitor.
The IBM-compatible color/graphics monitors have evolved from the Color Graphics Adapter (CGA) technology, which featured 4 colors and a resolution of 320 × 200, through the Enhanced Graphics Adapter (EGA) and the Video Graphics Array (VGA) technologies. The current state of the art for the personal computer is the Super Video Graphics Array (SVGA); for these monitors, the number of colors varies from 256 to 65,536, and the resolution varies from 800 × 600 to 1,600 × 1,200.
Monitors come in three main varieties: cathode-ray tubes (CRT), flat-screen liquid-crystal displays (LCD), and gas-plasma displays. All of these are now capable of either monochrome (black-and-white) or color displays. Desktop PCs usually come with CRTs, although flat-screen LCDs are becoming more common. Flat-screen LCDs are generally used in notebooks and laptops. The common desktop CRT sizes range from 15 to 21 inches. Laptop LCDs sizes are commonly 14.1 to 15.1 inches. Gas-plasma displays are capable of larger displays with good resolution and can be in excess of 42 inches.
Printers
 It was once thought that as computers became more common in business and in the home, we would move toward a “paperless society.” That is, all data would be circulated and used in electronic form, eliminating the need for printed material. This vision has yet to be realized. In businesses, homes, and most settings where computers are found, users still produce hard copy (information in printed form). For most applications, soft copy (information in electronic form) is not adequate.
To produce hard copy, a computer must be connected to a printer. This is typically done through a parallel port, which enables the computer to communicate with the printer one byte at a time. Microcomputers may have one or more parallel ports.
When microcomputers first invaded the workplace, the most popular type of printer was the dot-matrix printer, which can still be found in service today. These inexpensive printers are appropriate for tasks that do not require high-quality images. As its name implies, the dot-matrix printer produces a series of dots in a matrix to form both characters and images. A print head containing 9, 18, or 24 tiny pins strikes an inked ribbon to create the dots that make up the image on paper. The number of pins determines the quality of the output; the greater the number, the smoother the output.
In the 1980s, business people who wanted better-quality output than a dot-matrix printer could provide began to use daisy-wheel printers. The printing element of this type of printer is a round plastic disk that resembles a daisy, with petals protruding from the center. The “petals” each contain a letter, number, or other character. When printing, the daisy wheel rotates to bring the desired character into position, and a “hammer” strikes the wheel, forcing the character to press against an inked ribbon and mark the paper. Daisy-wheel printers do not produce graphic images.
Thanks to its decreasing cost and very high image quality, the laser printer has become the printer of choice for most microcomputer applications. The laser printer can produce both characters of any size and high-quality graphic images. A laser beam draws the desired image on the printer’s metal drum. The drum attracts toner (powdered ink) where the light strikes it, and the image is transferred from the drum to the paper using heat.
A fourth type of printer used with microcomputers is the ink-jet printer, which produces letter-quality images by spraying droplets of ink onto the surface of the paper. It has many of the advantages of the laser printer, including high-quality output and the ability to produce both characters and graphic images. In addition, ink-jet printers are often less expensive than laser printers and can produce color images of very good quality.
The resolution quality of both laser printers and ink-jet printers is measured in the number of dots per inch (dpi) the printer is able to produce when generating characters: the higher the dpi, the greater the resolution of the characters. The average PC printer today provides 300 to 600 dpi. High-quality laser printers go up to 1200 dpi, while high-quality ink-jet printers range up to 1440 dpi. 
Chain printers are often used with mainframe computers. Chain printers are extremely fast, producing up to 3,000 lines per minute. A chain with characters on it revolves in front of the paper; when the desired character passes by, the hammer in that position presses the paper and inked ribbon against the chain.
Other Output Devices
Plotters
 Plotters are used in some settings to produce architectural drawings, bar charts, and high-quality graphic images. They often use paper that is larger than most printers can handle. There are three types of plotters. Pen plotters use colored pens positioned so that a mechanical arm can access them. The mechanical arm draws the graphic image with one colored pen, then selects another, repeating the process until the image is complete. Pen plotters are widely used and produce very high quality images. Electrostatic plotters use electrostatic charges to create images on specially treated paper. This paper must then be developed like a photograph to produce high-resolution images. Direct-image plotters, also called thermal plotters, create images on heat sensitive paper. This process is limited to two-color output.
Speakers and headsets are the two most common devices used for audio output.  A voice output device produces voice commands by imitating phonic sounds. These prerecorded phonic sounds produce synthetic-sounding words. Some telephone companies use voice output to automate directory assistance, and some automobiles contain voice output devices that remind drivers to fasten their seat belts.
Projectors are used to display graphics, text, and video output from a computer. In the case of projectors, the output is displayed onto a screen for an audience to view instead of on a monitor for a single user's viewing.  Like monitors, projectors are attached to the computer through a video port. Projector technology varies widely, but the two most common types are LCD projectors and digital light processing (DLP) projectors.

                             UNIT - 2


Computer Memory

Memory Cell

      T cells activate the immune system and directly attack pathogens, while B cells produce substances called antibodies, which can disable or kill pathogens. A memory cell starts its life in the bone marrow, where lymphocytes are made.
      B-cells are the cells that make antibodies. The antibodies neutralize antigens and make memory cells that are specific to each type of antigen. When the same type of antigen invades again, these cells remember the antigen and respond quicker.
T Memory Cell
      The function of these cells is characterized by the memory present in acquired immunity. Once a memory cell, either a T cell or a B cell, has been exposed to a specific pathogen, it will react much more rapidly when it encounters the same pathogen in the future.
      The cell division creates more T cells, which results in additional first responders to inform other immune response cells of the problem in the body. T helper cells also release substances called lymphokines and chemokines. Both substances are used to notify other immune response cells of the presence of a bacterium, virus, or other infection agent.
B Memory Cell
      The function of these cells is characterized by the memory present in acquired immunity. Once a memory cell, either a T cell or a B cell, has been exposed to a specific pathogen, it will react much more rapidly when it encounters the same pathogen in the future.
      Providing continued immunity long after the infection is gone, memory cells do not succumb to apoptosis, or programmed cell death, as plasma cells do. B memory cells have a gene activated within them, allowing them to live longer, so that if the particular microbe attempts to attack again later, the response will be faster.


Memory Organization


Read Only Memory (ROM)

The CPU can only fetch or read instructions from Read Only Memory (ROM). ROM comes with instructions permanently stored inside, and these instructions cannot be overwritten by the computer’s CPU. ROM is used for storing special sets of instructions which the computer needs when it starts up.

When we switch the computer off, the contents of the ROM do not get erased but remain stored permanently; therefore it is non-volatile. Read Only Memory (ROM), also known as firmware, is an integrated circuit programmed with specific data when it is manufactured. ROM chips are used not only in computers, but in most other electronic items as well.

We have learnt that everything in RAM is erased when the power is turned off. How, then, does the computer start again? The answer to this is read only memory (ROM). The ROM contains permanently recorded instructions that are vital for starting a computer.

Read-only-memory (ROM) is a chip or a group of chips containing programs and instructions for the CPU. Only the CPU has access to this memory. For example, when we turn on the computer, the CPU gets its first instructions from the ROM. These instructions are burnt into the chip with the help of special equipment before the chip is installed inside the computer. Users do not have access to the ROM, so they can't change these instructions.

One set of instructions found in ROM is called the ROM-BIOS, which stands for Read-Only Memory Basic Input Output Services. This set of programs performs the most basic control and supervisory operations for the computer.

Instructions in ROM can only be read, and it is not possible to write fresh information to it; hence the name Read-Only Memory.

Random Access Memory

RAM is often referred to as Main Memory or the Primary Storage.

It is that part of the computer's memory where information is stored temporarily, for as long as you are working on it, i.e. it holds the programs and their data only while the computer is on.

The data can be read or modified, i.e. you can read from or write to RAM. Hence, it is called Read/Write Memory. RAM can be considered a collection of cells, and we can access any cell at any given time; that is why it is named Random Access Memory.

The amount of RAM can vary from 640 KB to 128 MB. 

SIMMs

RAM is made up of small circuit boards called SIMMs (Single Inline Memory Modules). A SIMM is a kind of circuit board that holds a number of RAM chips. These SIMMs are designed to be easily removed or added in the computer. In case you need more RAM, you can add more SIMMs. A SIMM card is inserted into a SIMM socket found on the motherboard.

RAM stores information as tiny electric charges, which is part of what makes RAM so fast. But when the power is switched off, all the data stored in RAM is lost.

Working of RAM

When the computer is switched on, certain operating system files are automatically loaded from the hard disk (a storage device) into RAM. These files remain in RAM as long as the computer is running. The processor then acts on data while it is in RAM. The contents of RAM are lost when the computer is switched off; thus, RAM is said to be a volatile type of memory.

Let's understand the concept with the help of an example.

    Suppose we want to open a particular data file for editing. We enter a command from the keyboard, and the processor (CPU) processes the command.

    Next, the processor calls for the data file to be copied from a storage device, such as the hard disk, into memory (RAM), where it is held temporarily until the CPU needs it.

    RAM then supplies the data to the CPU much more quickly than the storage device could. Remember, the data in RAM is not permanent, so if you want to keep it for future use, you must save it from RAM to a storage device.
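
This last point, saving from volatile RAM to a storage device, is exactly what standard file I/O does. A minimal C sketch (the file name notes.txt is just illustrative):

    #include <stdio.h>

    int main(void)
    {
        char document[] = "Work kept only in RAM is lost at power-off.";

        /* Saving copies the bytes from volatile RAM to non-volatile
           storage; "notes.txt" is an illustrative file name. */
        FILE *fp = fopen("notes.txt", "w");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }
        fputs(document, fp);
        fclose(fp);    /* after this, the data survives a power cycle */
        return 0;
    }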

Cache Memory

Cache memory is very fast RAM. It is used to store frequently used instructions. Computers that have cache memory process data faster than computers without it because data travels faster from cache to the CPU than from RAM to the CPU. Some cache is built directly in the processor chip, while other cache is in a chip external to the processor chip.
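
Cache pays off most when a program works on nearby data. The sketch below sums the same 2-D array twice; on most machines the row-major loop runs noticeably faster because it uses every byte of each cache line fetched from RAM, while the column-major loop keeps missing the cache (the exact speedup depends on the hardware):

    #include <stdio.h>

    #define N 1024
    static int grid[N][N];

    /* Row-major traversal touches consecutive addresses, so each
       cache line fetched from RAM is fully used before the next. */
    long sum_row_major(void)
    {
        long sum = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += grid[i][j];
        return sum;
    }

    /* Column-major traversal jumps N*sizeof(int) bytes between
       accesses, so it keeps missing the cache on most machines. */
    long sum_col_major(void)
    {
        long sum = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += grid[i][j];
        return sum;
    }

    int main(void)
    {
        printf("%ld %ld\n", sum_row_major(), sum_col_major());
        return 0;
    }
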
Physical Devices

Flash memory devices come mainly in the form of small, portable devices often referred to as memory sticks or keychain drives, and as part of digital cameras. They are non-volatile devices which operate on the principle that electric charges are used to represent data in binary format. These charges can remain unchanged almost indefinitely, but changes (such as writing to the device) limit the life span of the device (of the order of 100,000 writes of 8 MB each).
Memory sticks plug directly into a USB-port at the back of your computer. The power it needs is supplied by the USB-port. It is sometimes convenient to use a USB-extension cable to bring the device within easy reach.
Memory sticks usually have write-protect switches, which should be turned off if you want to use the device for both reading and writing; if the switch is in the "on" position, the device is read-only. The devices are usually equipped with an LED (light-emitting diode) which indicates that the device is operational. The LED also flickers while data is being transferred to or from the device.
Digital cameras use flash memory for image storage. The memory is powered by the camera and connected to the computer's USB port via an electronic interface. To connect to the computer, the camera's power must be turned on for the duration of the transfer, and should be turned off only after the device has been unmounted from the computer. It is important to remember that the device draws power from the camera's battery, so the operation should be completed as quickly as possible.

Magnetic Hard Disks

A hard disk drive (HDD) is a data storage device used for storing and retrieving digital information using rapidly rotating discs (platters) coated with magnetic material. An HDD retains its data even when powered off. Data is read in a random-access manner, meaning individual blocks of data can be stored or retrieved in any order rather than just sequentially. An HDD consists of one or more rigid ("hard") rapidly rotating discs (platters) with magnetic heads arranged on a moving actuator arm to read and write data to the surfaces.
Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and PCs. More than 200 companies have produced HDD units, though most current units are manufactured by Seagate, Toshiba and Western Digital. Worldwide revenues from HDD shipments were expected to reach $38 billion in 2012, up about 19% from $32 billion in 2011.
The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB; where 1 gigabyte = 1 billion bytes). Typically, some of an HDD's capacity is unavailable to the user due to use by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. Performance is specified by the time to move the heads to a file (average access time) plus the time it takes for the file to move under its head (average latency, a function of the physical rotational speed in revolutions per minute) and the speed at which the file is transmitted (data rate).
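
Both points, the decimal units and the rotational latency, can be checked with a few lines of arithmetic. A short C sketch using the standard conversions (1 GiB = 2^30 bytes; average latency is half a revolution):

    #include <stdio.h>

    int main(void)
    {
        /* A "1 TB" drive uses decimal units: 10^12 bytes. Operating
           systems often report binary units (1 GiB = 2^30 bytes),
           which is why the same drive shows up as roughly 931 GiB. */
        double tb_bytes = 1e12;
        printf("1 TB = %.1f GiB\n", tb_bytes / (1024.0 * 1024 * 1024));

        /* Average rotational latency is half a revolution:
           60 / rpm gives seconds per revolution. */
        double rpm = 7200.0;
        printf("avg latency at %.0f rpm = %.2f ms\n",
               rpm, 0.5 * (60.0 / rpm) * 1000.0);
        return 0;
    }
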
The two most common form factors for modern HDDs are 3.5-inch in desktop computers and 2.5-inch in laptops. HDDs are connected to systems by standard interface cables such as SATA (Serial ATA), USB or SAS (Serial attached SCSI) cables.
As of 2012, the primary competing technology for secondary storage is flash memory in the form of solid-state drives (SSDs). HDDs are expected to remain the dominant medium for secondary storage due to predicted continuing advantages in recording capacity and price per unit of storage, but solid state drives are replacing rotating hard drives especially in portable electronics where speed, physical size, and durability are more important considerations than price and capacity.

Floppy Disk Drives

The floppy disk was originally introduced by IBM in the 8-inch format as a paper-tape replacement for loading microcode programs into the System/370 mainframe computer. The device was also used as a punch-card replacement for key-punch machines like the IBM 3740: in batch processing, data entered by key-punch operators at data entry stations was captured on 8-inch diskettes, which were then loaded into a magazine, much like a CD changer, that picked up each diskette in turn, read in the data and processed it. Alan Shugart, then an engineer at IBM, recognized with the introduction of the first hobby 8-bit microcomputers, such as the Apple II, the potential of the floppy disk as a random-access storage device, and formed Shugart Associates to address this market. For this early PC market a smaller 5.25-inch format was introduced with a capacity of 90 KB. However, storage demands grew quickly, and users took to cutting a notch in the diskette jacket and flipping the diskette over to use the other side. Subsequently, double-sided and higher-density drives were developed.



The diskette is inserted into the drive through a slot in the front panel. Operating a manual lever clamps the hub of the diskette to the drive and loads the read/write heads by placing them in contact with the surfaces of the diskette. To reduce wear and tear on the heads and media, the drive motor is not activated until a read or write command is received, and it stops again after a short delay following the last read/write operation. An open-loop head-positioning mechanism is used: the head is usually driven by a stepper motor, through either a lead-screw assembly or the winding and unwinding of a metal band. An optical zero-track sensor identifies track 00, and subsequent tracks are reached by counting steps from it. Should a read, write or seek failure occur, the system resets: the head is retracted to track 00 and the positioning of the head is re-executed. An index LED/detector assembly provides the synchronization for the sector counts.
Compact Disk Read Only Memory

Compact Disk Read Only Memory (CD-ROM) is an optical storage medium. Data is pressed into the disc at manufacture as microscopic pits and lands in a reflective layer and is read back by a laser beam in the drive; as the name indicates, the disc is read-only, so its contents cannot be altered by the user.

A standard disc holds about 650-700 MB of data. Because the format is standardized, the same disc can be read on any computer with a CD-ROM drive, which made CD-ROMs a popular medium for distributing software, reference works and multimedia: a single inexpensive disc carries the equivalent of hundreds of floppy disks.

Magnetic Tape Drives

A magnetic tape drive is used for storage and retrieval of data on tape. It has read/write heads; as the tape ribbon passes under them, data can either be read and transmitted to primary storage, or transmitted from primary storage and written to the tape, by giving the appropriate command (read or write) to the tape drive. As with an audio tape recorder or videocassette recorder, a magnetic tape reel, cartridge or cassette must first be loaded onto a tape drive attached to a computer for processing. Once loaded, the magnetic tape is said to be on-line: it can now be used for storage or retrieval of data by the computer. When processing is complete, the tape is removed from the tape drive for off-line storage; that is, it is stored away from the computer system, and the data on it is not accessible until it is loaded again on the tape drive.
Processor


Description of a Processor

Reduced instruction set computing, or RISC

Reduced instruction set computing, or RISC (pron.: /ˈrɪsk/), is a CPU design strategy based on the insight that simplified (as opposed to complex) instructions can provide higher performance if this simplicity enables much faster execution of each instruction. A computer based on this strategy is a reduced instruction set computer, also called RISC. The opposing architecture is known as complex instruction set computing, i.e. CISC.

Various suggestions have been made regarding a precise definition of RISC, but the general concept is that of a system that uses a small, highly-optimized set of instructions, rather than a more specialized set of instructions often found in other types of architectures. Another common trait is that RISC systems use the load/store architecture, where memory is normally accessed only through specific instructions, rather than accessed as part of other instructions like add.

Although a number of systems from the 1960s and 70s have been identified as forerunners of RISC, the modern version of the design dates to the 1980s. In particular, two projects at Stanford University and the University of California, Berkeley are most associated with the popularization of the concept. Stanford's design went on to be commercialized as the successful MIPS architecture, while Berkeley's RISC gave its name to the entire concept and was commercialized as the SPARC. Another success from this era was IBM's effort that eventually led to the Power Architecture. As these projects matured, a wide variety of similar designs flourished in the late 1980s and especially the early 1990s, representing a major force in the Unix workstation market as well as in embedded processors in laser printers, routers and similar products.

Well-known RISC families include DEC Alpha, AMD 29k, ARC, ARM, Atmel AVR, Blackfin, Intel i860 and i960, MIPS, Motorola 88000, PA-RISC, Power (including PowerPC), SuperH, and SPARC. In the 21st century, the use of ARM architecture processors in smartphones and tablet computers such as the iPad and Android tablets provided a wide user base for RISC-based systems. RISC processors are also used in supercomputers such as the K computer, the fastest on the TOP500 list in 2011 and second on the 2012 list.


A complex instruction set computer

A complex instruction set computer (CISC, pron.: /ˈsɪsk/) is a computer where single instructions can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) and/or are capable of multi-step operations or addressing modes within single instructions. The term was retroactively coined in contrast to reduced instruction set computer (RISC).

Examples of CISC instruction set architectures are System/360 through z/Architecture, PDP-11, VAX, Motorola 68k, and x86.
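
The practical difference shows up in how a single C statement decomposes. The comments below are illustrative pseudo-assembly, not the exact syntax of any particular ISA:

    #include <stdio.h>

    int add_in_memory(int *a, int *b, int *c)
    {
        /* CISC style: one instruction may read memory, add, and write
           the result back, e.g.
               ADD (a), (b), (c)

           RISC (load/store) style: memory is touched only by loads
           and stores, so the same statement becomes several simple
           instructions:
               LOAD  r1, (b)
               LOAD  r2, (c)
               ADD   r3, r1, r2
               STORE r3, (a)                                       */
        *a = *b + *c;
        return *a;
    }

    int main(void)
    {
        int a, b = 2, c = 3;
        printf("%d\n", add_in_memory(&a, &b, &c));
        return 0;
    }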



EPIC

Over the past two and a half decades, the computer industry has grown accustomed to, and has come to take for granted, a spectacular rate of increase in microprocessor performance, all of it achieved without requiring a fundamental rewriting of programs in parallel form, without using a different algorithm or language, and often without even recompiling the program. For the time being at least, instruction-level parallel processing has established itself as the only viable approach to providing continuously increasing performance without fundamentally rewriting applications. The Explicitly Parallel Instruction Computing (EPIC) style of architecture was developed, starting in the late 1980s, to enable higher levels of instruction-level parallelism without unacceptable hardware complexity. This section outlines the philosophy underlying EPIC, the challenges that follow from adopting that philosophy, and the set of architectural features that together characterize the EPIC style of architecture and can be selectively included in a specific EPIC instruction set architecture.


Machine Language and Instruction set

Machine Language

Machine language is built up from discrete statements or instructions. Depending on the processor architecture, a given instruction may specify:

    particular registers for arithmetic, addressing, or control functions;
    particular memory locations or offsets;
    particular addressing modes used to interpret the operands.

More complex operations are built up by combining these simple instructions, which (in a von Neumann architecture) are executed sequentially, or as otherwise directed by control flow instructions.
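
Each item in the list above has a counterpart in an ordinary compiled statement. In the sketch below, the comments point out which operand a compiler would typically hold in a register, which lives at a memory location, and which addressing mode is involved (details vary by compiler and architecture):

    #include <stdio.h>

    int main(void)
    {
        int a[4] = {10, 20, 30, 40};
        int i = 2, x;

        /* One C statement exercises all three operand kinds listed
           above:
             - registers, typically holding i and the running sum;
             - a memory location addressed in indexed mode, a[i];
             - an immediate constant, 5, embedded in the instruction. */
        x = a[i] + 5;

        printf("%d\n", x);
        return 0;
    }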

Instruction Set

Every CPU has a built-in ability to execute a set of machine instructions called its instruction set. Most CPUs have 200 or more instructions (such as add, subtract, and compare) in their instruction set. The machine language for a processor (CPU) is based on the list of instructions supported by the CPU in its instruction set. Since each processor has a unique instruction set, machine language programs written for one computer will generally not run on another computer with a different CPU.

CPUs made by different manufacturers have different instruction sets; in fact, different CPU models from the same manufacturer may have different instruction sets. However, manufacturers tend to group their CPUs into "families" having similar instruction sets. When a new CPU is developed, it is usually ensured that its instruction set includes all the instructions of its predecessors plus some new ones.

Processors used in desktops and lap tops

The laptop processor is typically a lower-powered processor than those found in similar desktop machines, although some laptop makers have used desktop processors in their laptops to get a performance edge. Processors designed for mobile computing generally consume less power, and thus run cooler than desktop processors. This is important because you want your battery to last a long time and you also don’t want to burn a hole through your lap. Desktop PCs typically have massive heat-sinks and fans to cool them down, but it’s hard to fit that stuff into a tiny laptop, and fans drain laptop batteries quickly.
A useful term to understand is Thermal Design Power, or TDP. This indicates the maximum amount of wattage that the cooling solution for that processor will need to dissipate. It’s not like it will always dissipate that amount of wattage, but, generally, if you want a cooler processor, go for the lowest TDP you can get.
Performance characteristics that differentiate laptop CPUs include the number of cores they contain, the cache memory, and the bus speed, as well as the speed of the chip itself. Mainstream processors are moving toward dual cores. You can still get single-core CPUs today, but there are performance benefits to dual-core, even if they are not huge. Cache memory is speedy local memory that allows the CPU to avoid accessing slower main memory as often, and bus speed determines how fast the processor talks to the rest of the computer. In addition to the raw processor performance characteristics, different cores have different abilities. To get a true understanding of performance you will have to look at specialized comparisons or benchmarks among different laptop models. This is often hard to do with laptops, as it is not easy to equalize the other components in a test setup.
Words on Recommendations
If you want the best performance, Intel currently rules things with its Core 2 Duo processors; AMD is just not up to par. With that said, if you want a bargain, you may have to buy a laptop with an older Intel chip or an AMD chip. For laptop users, performance usually isn't the top concern. The CPU choice, however confusing (and it will be, certainly if you are looking to save some cash), may not really matter all that much unless you will be using your laptop as your primary machine and you are doing some multimedia work or gaming on it. If so, stick with Intel's Core 2 Duo and go for the T7xxx series.
If you are looking to cheap out, or, to put it nicely, are more fiscally conservative, you will be faced with the prospect of going with AMD processors and/or outdated Intel chips that are not on Intel’s roadmap anymore. All of them will work fine, but you may not get the top-end performance that a Geek craves.

Specifications of desktop and laptop computers currently available in the market
Specifications of processor

With advanced features for remote management and reliability, these development kits meet the requirements of a broad range of embedded applications, including retail and transaction solutions, signage, gaming platforms, industrial automation and medical equipment. The processors are based on the 2nd generation Intel® Core™ microarchitecture on 32nm process technology, featuring new levels of performance for embedded, compute-intensive applications. Paired with the Intel® QM67 Express chipset, these platforms provide excellent media, graphics and I/O flexibility, stunning and seamless visual experiences, and error-correcting code (ECC) memory. The next-generation graphics engine, Intel® HD Graphics 3000, doubles graphics performance compared with previous Intel® Core™ processor-based platforms. Full integration of the CPU, media/graphics capabilities and memory controller reduces overall platform footprint and provides faster performance as well as board real-estate savings. Additionally, developers can create one board design and scale a product line between quad-, dual- and single-core processors using the same socket. Thermal design power (TDP) options include 17W, 25W and 45W. This platform is part of Intel's comprehensive validation process, enabling rapid deployment of next-generation platforms to help developers maximize competitive advantage and minimize development risks. This and other development kits from Intel provide a working system with a range of performance options that can be modified or used immediately for product development, and allow software vendors to test BIOS and operating system software.




AMD FX Processors

We call it the new AMD FX 8-Core Processor Black Edition, and it's unlocked for your overclocking pleasure. Experience unmatched multitasking and pure core performance with the industry's first 32nm 8-core desktop processor. Get the speed you crave with AMD Turbo CORE Technology to push your core frequencies to the limit when you need it most. Go beyond the limits of maximum speed with the easy-to-use AMD OverDrive™ and AMD Catalyst Control Center™ software suites. But the best part of all? You'll get all this impressive performance at an unbelievable price. You'll be asking yourself "what competition?" in no time.
AMD FX 8-Core Processors

    The industry’s first and only native 8-core desktop processor for unmatched multitasking and pure core performance with all-new “Bulldozer” architecture.
    New 32 nanometer die shrink designed to reduce leakage for improved efficiency, increased clock rate headroom and better thermals.

AMD Turbo CORE Technology

    The AMD FX Processors come equipped with AMD Turbo CORE Technology. AMD Turbo CORE Technology is a performance boosting technology that helps increase performance on the applications that need it the most.

New Instruction Capabilities

    AVX
        Advanced Vector Extensions increase parallelism tailored for scientific and 3D applications that use heavy floating point calculations
    FMA4 and XOP
        Floating-point vector multiply-accumulate improves throughput and performance on many vector functions (integer and floating point)
    AES
        Advanced Encryption Standard support noticeably increases performance on the latest encryption applications like TrueCrypt and benchmarks like PCMark

AMD Balanced Smart Cache

    Shared L3 cache (up to 8 MB)
        Improved scheduling and pre-fetch capabilities
        64-ways (16-ways/sub-cache)
        Increased data queue sizes
        Coherency for 8-cores




                                                    UNIT - 3


Computer Architecture

Processor to Memory communication

Many papers have examined the distribution of instructions in typical RISC or ILP processors. Their results differ even when run on the same test suites, such as SPEC92 or SPEC95, because of differences in the input data chosen for the tests or in the subset used to produce the average. Total dynamically committed instructions, and dynamically committed loads and stores, were analyzed in all tests. From these tests it follows that, on average, loads account for 22% to 30% of committed instructions and stores for 8% to 12%. We performed SPEC95 benchmark tests with the SimpleScalar tool set to examine how the processor and memory communicate; these experiments yielded an average of 25% loads and 11% stores out of total committed instructions. A subset of the LINPACK and NASCE tests was performed on our own simulator.
Interrupt

When receiving data or a change in status from I/O ports, we have two methods available to us. We can poll the port, which involves reading the status of the port at fixed intervals to determine whether any data has been received or a change of status has occurred; if so, we branch to a routine to service the port's request. As you can imagine, polling the port consumes quite some time, time that could be spent doing other things such as refreshing the screen or displaying the time. A better alternative is to use interrupts. Here the processor gets on with tasks such as refreshing the screen and displaying the time, and when an I/O port or device needs attention because a byte has been received or its status has changed, it sends an Interrupt Request (IRQ) to the processor. Once the processor receives an interrupt request, it finishes its current instruction, places a few things on the stack, and executes the appropriate Interrupt Service Routine (ISR), which can remove the byte from the port and place it in a buffer. Once the ISR has finished, the processor returns to where it left off. Using this method, the processor doesn't waste time checking whether your I/O device needs attention; instead, the device interrupts the processor when it needs attention.
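
A minimal C sketch of the difference (the pretend_isr function stands in for the hardware interrupt so the example can run anywhere; real ISR registration is system-specific):

    #include <stdio.h>

    /* Flag a (hypothetical) interrupt service routine would set when
       the port has received a byte; "volatile" warns the compiler the
       value can change outside normal program flow. */
    volatile int byte_ready = 0;
    volatile unsigned char port_byte = 0;

    /* Stand-in for the hardware ISR so the sketch runs anywhere. */
    void pretend_isr(unsigned char b) { port_byte = b; byte_ready = 1; }

    int main(void)
    {
        pretend_isr(65);              /* hardware "interrupts" us     */

        /* Polling would instead busy-wait: while (!byte_ready) ;     */

        if (byte_ready) {             /* interrupt style: check a flag */
            byte_ready = 0;           /* set by the ISR, then go on    */
            printf("received byte: %c\n", port_byte);
        }
        return 0;
    }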

The Programmable Interrupt Controller (PIC) handles hardware interrupts. Most PCs have two of them, located at different addresses. One handles IRQs 0 to 7 and the other IRQs 8 to 15, giving a total of 15 usable IRQ lines, since the second PIC is cascaded into the first through IRQ2. Most of the PIC's initialization is done by the BIOS, so we only have to worry about a couple of operations. The PIC provides a facility for masking individual IRQs so that those requests never reach the processor: we write to Operation Control Word 1 (OCW1) to set which IRQs to mask and which not to. As there are two PICs located at different addresses, we must first determine which PIC we need to use. The first PIC, located at base address 0x20, controls IRQ 0 to IRQ 7.
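
A sketch of OCW1 masking, assuming a DOS-era compiler such as Turbo C that provides inportb()/outportb() in <dos.h> (modern protected-mode operating systems do not let user programs touch I/O ports directly):

    /* DOS-era sketch; will not build or run on a modern protected-
       mode OS, where direct port I/O from user code is forbidden. */
    #include <dos.h>

    #define PIC1_DATA 0x21   /* mask register (OCW1) of the first PIC */

    /* Masking a line: set its bit in OCW1 so that IRQ never reaches
       the processor; clearing the bit unmasks it again. */
    void mask_irq(int irq)       /* irq 0..7 on the first PIC */
    {
        outportb(PIC1_DATA, inportb(PIC1_DATA) | (1 << irq));
    }

    void unmask_irq(int irq)
    {
        outportb(PIC1_DATA, inportb(PIC1_DATA) & ~(1 << irq));
    }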

Multiprogramming

Computer multiprogramming is the allocation of computing resources to more than one concurrent application, job or user ("program" in this nomenclature). It is distinguishable from monoprogramming (or uniprogramming) in which only a single program may be run at a time.

In this context, the root word "program" does not necessarily refer to a compiled application; rather, it means any set of commands submitted for execution by a user or operator. Such a "program" could include a script or job-control stream and any included calls to macro-instructions, system utilities or application program modules. An entire, interactive, logged-in user session can be thought of as a "program" in this sense.

Initially, multiprogramming was sought in order to optimize use of a computer system, since time and processing resources were often wasted when a single job waited for human interaction. Multiprogramming was developed as a feature of operating systems in the late 1950s and came into common use in mainframe computing in the mid- to late 1960s. This followed the development of hardware systems that possessed the requisite circuit logic and instruction sets to facilitate the transfer of control between the operating system and one or more independent applications, users or job streams. The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, non-existent and invisible to them.
Processor Features

SAN FRANCISCO, Sept. 13, 2010 - Intel Corporation revealed today several significant design features for the company's 2011 2nd Generation Intel® Core™ processor family at the Intel Developer Forum. The new and enhanced features will further extend Intel's chip performance and battery life leadership, and add a number of visually related features built right into the chips.



Codenamed "Sandy Bridge," the chips will be based on Intel's first new "visibly smart" microarchitecture produced on the company's cutting-edge manufacturing factories, or "fabs," at 32-nanometer (nm is a billionth of a meter) process technology with second- generation high-k metal gate transistors.



"The way people and businesses are using computers is evolving at an explosive rate, fueling demand for an even more powerful and visually appealing experience," said Dadi Perlmutter, executive vice president and general manager of the Intel Architecture Group. "Our upcoming 2nd Generation Intel Core processor family represents the biggest advance in computing performance and capabilities over any previous generation. In addition to offering these features inside Intel-based laptops, we plan to scale these advances across our server data center and embedded computing product portfolio."



The processor family will include a new "ring" architecture that allows the built-in processor graphics engine to share resources such as cache, or a memory reservoir, with the processor's core to increase a device's computing and graphics performance while maintaining energy efficiency.



The 2nd Generation Intel Core processor also includes an enhanced version of Intel® Turbo Boost Technology. This feature automatically shifts or reallocates processor cores and processor graphics resources to accelerate performance, tailoring a workload to give users an immediate performance boost when needed.



Laptops and PCs powered by the 2nd Generation Intel Core processor family are expected to be available early next year.



New Visual Experience - Visibly Smart

Intel's new processor graphics delivers enhanced visual features focused on the areas where most users are computing today:  HD video, 3-D, mainstream gaming, multi-tasking and online socializing and multimedia.



To obtain and view video faster, Perlmutter demonstrated hardware accelerated video editing using the architecture's dedicated silicon for media processing, which allows users to quickly convert video to other formats.



The 2011 chips also come with Intel Advanced Vector Extensions (AVX). AVX delivers improved performance, rich functionality and the ability to better manage, rearrange and sort data. A new 256-bit instruction set accelerates floating point intensive applications such as digital photo editing and content creation.



Intel also demonstrated a dual processor, next-generation Intel® Xeon® processor server running Vidyo* video conferencing software that utilizes the 32 threads available on the system, and takes advantage of the AES New Instructions set (AESNI). Next-generation Xeon™ processors for 2 socket servers and workstations run 8 cores and 16 threads per processor and are on schedule for production in the second half of 2011.


Reduced Instruction Set Computers (RISC)

RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly optimized set of instructions, rather than the more specialized set of instructions often found in other types of architectures.

The first RISC projects came from IBM, Stanford, and UC-Berkeley in the late 70s and early 80s. The IBM 801, Stanford MIPS, and Berkeley RISC 1 and 2 were all designed with a similar philosophy which has become known as RISC. Certain design features have been characteristic of most RISC processors.
1. One-cycle execution time: RISC processors have a CPI (clock cycles per instruction) of one cycle, owing to the optimization of each instruction on the CPU and to pipelining.
2. Pipelining: a technique that allows the simultaneous execution of parts, or stages, of instructions, so that instructions are processed more efficiently.
3. Large number of registers: the RISC design philosophy generally incorporates a large number of registers in order to reduce the amount of interaction with memory.

Virtual Memory

Virtual memory is a technique that allows processes that may not be entirely in the memory to execute by means of automatic storage allocation upon request. The term virtual memory refers to the abstraction of separating LOGICAL memory—memory as seen by the process—from PHYSICAL memory—memory as seen by the processor. Because of this separation, the programmer needs to be aware of only the logical memory space while the operating system maintains two or more levels of physical memory space.
The virtual memory abstraction is implemented by using secondary storage to augment the processor's main memory. Data is transferred from secondary to main storage as and when necessary, and the data replaced is written back to secondary storage according to a predetermined replacement algorithm. If the data swapped is of a fixed size, this swapping is called paging; if variable sizes are permitted and the data is split along logical lines such as subroutines or matrices, it is called segmentation. Some operating systems combine segmentation and paging. A program-generated address (1), or "logical address," consisting of a logical page number plus the location within that page (x), must be interpreted or "mapped" onto an actual (physical) main-memory address by the operating system, using an address-translation function or mapper (2). If the page is present in main memory, the mapper substitutes the physical page-frame number for the logical number (3). If the mapper detects that the requested page is not present in main memory, a fault occurs and the page must be read into a frame in main memory from secondary storage (4, 5).
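
Steps (1) to (3) are mostly arithmetic. With a fixed page size, the logical address splits into a page number and an offset; the sketch below assumes 4 KB pages and an illustrative page-table entry mapping logical page 5 to physical frame 9:

    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* assumption: 4 KB pages */

    int main(void)
    {
        unsigned long logical = 20491;  /* a program-generated address */

        /* Steps (1)-(2): split the logical address. */
        unsigned long page   = logical / PAGE_SIZE;  /* logical page  */
        unsigned long offset = logical % PAGE_SIZE;  /* place in page */

        /* Step (3): suppose the mapper says logical page 5 lives in
           physical frame 9 (an illustrative page-table entry). */
        unsigned long frame = 9;
        unsigned long physical = frame * PAGE_SIZE + offset;

        printf("page %lu, offset %lu -> physical %lu\n",
               page, offset, physical);
        return 0;
    }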



Chapter 2

Software Concepts

Software is the set of instructions (also called a program) that guides the hardware through its job. The following section explores additional aspects of software.

System Software

System software is a set of programs that manage the resources of a computer system. It is a collection of system programs that perform a variety of functions: file editing, resource accounting, I/O management, storage and memory management, and access management. System software can be broadly classified into three types. System control programs control the execution of programs, manage the storage and processing resources of the computer, and perform other management and monitoring functions; the most important of these programs is the operating system, and other examples are database management systems (DBMS) and communication monitors. System support programs provide routine service functions to other computer programs and to computer users, e.g. utilities, libraries, performance monitors and job accounting. System development programs assist in the creation of application programs, e.g. language translators, such as a BASIC interpreter, and application generators.


Application Software

Application software enables the user to direct the computer’s processing system in the tasks of manipulating and transforming input data into useful output information. Furthermore, it allows the user to alter the information generated by the processing system; e.g., how the information is presented. This is the type of software with which most users interact. It is the usual interface between user and computer.  Rarely do users directly manipulate systems software, especially the operating systems software.
Application software can be written for a specific user’s application (custom software), or it can be mass-produced for general use (commercial or packaged software). Naturally, custom software is usually far more expensive than commercial software. An accounting package written for a specific company might cost many thousands of dollars, whereas a commercial accounting package might cost only a few hundred dollars at a retail store. The advantage of custom software is that it is tailored to the user’s specific needs and can be seamlessly integrated into the user’s existing software. Not only is commercial software less costly, it is also available immediately, and the package can be evaluated before being purchased.
Application software comes in an incredible variety. It is available for business, personal, educational, communication, and graphic design purposes—to name the more usual categories. There is almost certainly a software package somewhere available to suit any need. If not, there are programmers ready to be hired to build it. For our purposes, we will limit our discussion to the four types of application software most likely to be useful to accounting and business students: word processing, spreadsheet, database, and presentation graphics.  These four applications are frequently sold together in a single software package. Three of the most popular packages are Corel WordPerfect Suite, Microsoft Office 2000, and Lotus SmartSuite. In addition to the four "standard" applications, these packages usually include email, Internet, video processing, and desktop publishing applications.  
          Word processing programs allow the user to quickly and easily create and revise texts on the computer screen. By using word processing applications, the user can format documents with ease, changing font size, margins, color, etc.  Different types of documents—e.g., letters, memos, and reports—are often preformatted in the application. PC-based word processing software is so capable and inexpensive that, in most businesses, it has become the usual tool for creating documents, even when more powerful mainframes and minicomputers are available.
          Spreadsheet programs are especially useful in business and accounting. The electronic spreadsheet consists of rows and columns of data, which the user can easily edit, copy, move, or print. Using numeric data entered in the spreadsheet, the computer can perform numerous mathematical calculations automatically, many of impressive sophistication (e.g., statistical, logical, and engineering functions). One of the spreadsheet program’s most powerful features for business purposes is that it enables the user to do “what-if” analyses on existing data and to input different data for various scenarios. Non-numeric data (e.g., names and dates) may also be entered in a spreadsheet. Spreadsheets can perform some non-mathematical operations (e.g., sorting and filtering) on this data, although this type of analysis is not a spreadsheet's strength.
          Database software allows the user to enter, store, maintain, retrieve, and manipulate data. In some ways, databases pick up where spreadsheets leave off, although a fairer assessment is probably that the relationship between the two types of software is reciprocal. Database software is certainly more efficient and effective at handling non-numeric data than is spreadsheet software; conversely, numeric data is usually easier to manipulate in a spreadsheet. In most databases, data is entered into tables of rows and columns, similar to spreadsheets. Unlike spreadsheets, these tables can be connected into relationships that give users incredible versatility in what they can do with that data. For example, data, both numeric and non-numeric, from several individual tables may be retrieved and used together in calculations, with the results presented in a business-style report.
          Presentation graphics software enables users to design professional-quality presentations for business and educational purposes. The presentations usually consist of formatted slides for projecting onto a screen from a computer projector or overhead projector, or for display on a large monitor. These presentations may also be used for online meetings and Web broadcasts. The slides can be designed to include backgrounds, graphic images, charts, clip art, shading, animation, and audio effects, and, of course, text, which can sometimes get lost in all of the embellishments.

Programming Languages

Software programs must be written in programming languages. Programmers, i.e. people trained in the use of a programming language, write programs.

Programming Languages And Language Translators
Computer programmers use artificial languages, known as programming languages, to write the instructions that tell computers what to do. Programming languages have evolved over time to become more like the natural languages that human beings speak. This section traces the evolution from machine language to fifth-generation language.
Machine Language
 Programs for the first computers were written in strings of binary digits ("bits," consisting of 0s and 1s). Thus, this first programming language is often referred to as the first-generation language (or 1GL). It is also called the machine language because computers—past and present—require this type of instruction in order to perform their operations as machines. Instructions (and data) are represented ultimately as bits because these strings of 0s and 1s correspond to the actual binary on-off flow of electrical current through the computer's circuitry. Because machine language is so far removed from natural language, it has a number of inherent problems as a programming language. It is time-consuming and tedious for humans to work in machine language, and errors in machine-language programs are difficult to find.
Assembly Language
Assembly language (also referred to as the second-generation language or 2GL) was the next step in the evolution of programming languages. In assembly language, commands are written with mnemonic codes rather than numerical codes, and a translator called an assembler converts the source code (the programmer's code) into an object module (machine language). For higher-level languages, translation can be done in two ways: either an interpreter converts the program line by line as it is being run, or a compiler converts the entire program at one time before it is run. Interpreters are often used by beginning programmers who are learning a language for the first time; compilers are used in professional settings where speed and security are important.
Assemblers, interpreters and compilers are system programs that fall under the general category of language translators. Each programming language requires a specific language translator to convert it to machine language.
Assembly languages are specific to a particular processor and give the programmer control over the lower-level operations of the computer. Compared to third-generation languages, discussed next, assembly language requires more detail in programming.
Third-Generation Languages
The evolution of programming languages toward user-friendliness continued with the development of third-generation languages (3GL). Third-generation languages, such as FORTRAN, COBOL, Pascal, Java, PL/1, and C, are procedural languages. Program instructions are executed in a precise sequence to accomplish a task. These languages use recognizable statements like PRINT, INPUT, SORT, and IF, which must be compiled into detailed machine language instructions. The linkage editor inserts pre-written routines called library programs after compilation to produce an executable program called the load module.
Some of the most common third-generation programming languages are described next.
BASIC (Beginner’s All-purpose Symbolic Instruction Code) was designed as a programming language for novices. The language uses an interpreter that evaluates each line for syntax errors, which helps beginning programmers. The language became very popular for microcomputer use in the late 1970s and early 1980s.
FORTRAN (Formula Translation) was developed in 1956 to provide scientists, engineers, and mathematicians a programming language that is rich in scientific and mathematical operations.
COBOL (Common Business Oriented Language) was designed for such business applications as inputting records from a data file and manipulating, storing, and printing them. A tremendous number of programs have been written in COBOL since its inception in the early 1960s. COBOL still maintains a significant presence. Each business day, billions of lines of COBOL code are executed.
IBM developed PL/1 (Programming Language 1) in 1964. This language combines the mathematical features found in FORTRAN with the record-processing features found in COBOL.
Pascal was written to take advantage of the programming technique called structured programming, in which programs are divided into modules that are controlled by a main module. The language was very popular in the 1980s for teaching structured programming and advanced programming techniques in computer science courses.
In the 1970s, AT&T Bell Labs developed a programming language called C that could be run on various types of computers. Source code written for a microcomputer could thus easily be converted into source code for a mainframe.
Java was developed in the mid 1990s by Sun Microsystems. It is based on a new programming technique called object-oriented programming. Object-oriented programming allows the programmer to define not only the characteristics of data but also the data's associated procedures. This type of programming is especially beneficial in a networked environment because it allows computers to quickly transmit computations to each other, not just data requiring subsequent computation.
Fourth-Generation Languages
 The development of applications written in third-generation languages takes a considerable amount of time, often several months to several years. Increasingly, users need software that allows them to develop simple applications quickly. Fourth-generation languages (4GL) were developed to meet this need. They are declarative, not procedural, languages. With the earlier generations of procedural languages, the user/programmer had to delineate the step-by-step procedure for the computer to follow to achieve a certain result. With a fourth-generation language, however, the user simply tells the computer what end result is desired, and the computer decides the steps needed to achieve that goal. Fourth-generation languages have also been designed to be easy to learn and use. In addition, they relieve professional programmers from the increasing demands to develop new programs and maintain existing ones.
Fourth-generation languages are found in a variety of applications, including statistical packages, data base management systems, and graphical packages. Statistical packages perform a full range of statistical analyses and enable the user to produce reports of the results. Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) are examples of powerful statistical packages that are available on mainframe computers, minicomputers, and microcomputers.
Data base management systems usually contain a 4GL query language that allows the user to retrieve data from and store data to the database. Relational data base management systems have been standardized on a query language called Structured Query Language (SQL). By using either a menu-driven interface or simple commands, the end user can develop advanced queries to the database without a programmer’s assistance.
Fifth-Generation Languages
 Fifth-generation languages (5GL) attempt to make the task of programming even more user-friendly than the 4GLs did. This is achieved by removing most of the verbal aspects from programming: instead, 5GLs use a visual or graphical environment that allows the user to design the program with minimal use of programming words. For example, visual programming allows the user to drag icons together in a windowed environment in order to assemble a program component. The 5GL development interface then automatically creates the source code, which is typically compiled with a 3GL or 4GL compiler. Enabling users to design something as complex as a computer program by means of graphical symbols is a difficult undertaking, and not all attempts at developing a workable 5GL have been successful. Currently, however, Microsoft, Borland and IBM make 5GL visual programming products for developing Java applications that appear successful. The remarkable evolution of computer languages from strings of 0s and 1s to graphical icons says a lot about the ability of computers to inspire creativity and genius.

