UNIT - 4



Operating System

History and evolution of operating systems

The earliest computers (large, expensive mainframes) had primitive operating systems and were difficult to communicate with.  Input to the machines came through awkward devices such as punched cards, paper tape, primitive consoles and switches; output was similarly limited.
There are still large mainframes in use today (by banks, airlines etc.); they run systems such as MVS and VME, which are very complicated and costly (often costing more than the computer itself).
Because computers were then much too expensive to be used by just one person, the problem was to produce multi-user, multi-access systems so that many users could share the system.
This resulted in the MULTICS system in the 1960’s.
However, this system proved to be too unreliable and complex, so a ‘sawn-off’ version of MULTICS called UNIX was produced in the early 1970’s.
UNIX could support several users, had good utilities and fairly good security, and was ideal for the new ‘minicomputers’ then being produced.  It was very successful and continues in use to this day.  Its other advantage was that it was an academically produced non-commercial product that was essentially free and available for users to modify, as they wanted.  It was written in, and is closely associated with, the C programming language.
However, it is largely a text/command-based system without any graphical component, and with its strange-sounding commands it is quite difficult to learn.
In the 1990’s, a free UNIX-like system called LINUX was produced.  It offers a ‘user-friendly’ graphical interface similar to that of Windows and is today the main (and basically only) alternative to Windows.  It is also good for today’s client-server networks, though it has a rather ‘nerdy’ image.
In the late 1970’s, the first microcomputers (desktops) appeared.  They used a system called CP/M, which allowed the user to perform basic functions such as saving files, running programs etc.
When IBM introduced the PC in 1981, they developed DOS in cooperation with the then new Microsoft Corporation.  DOS was a simple text/command based system for a single user, with little security, but was hugely popular, and is still used today.
IBM later produced a more sophisticated graphical system called OS/2 but this failed in competition with Windows (see below).
In the 1980’s the Apple computer company introduced a very innovative graphical user interface, easy to use, with features still used today, such as icons, folders, mouse actions etc.  The current Apple operating system is OS/X.
The Microsoft Corporation ‘stole’ this idea and started producing versions of Windows that imitated the ‘look and feel’ of the Apple interface; Windows became accepted as the standard PC operating system with version 3.1 in 1992.  Through the later versions – 95, 98, 2000, XP and now Vista – Windows has come to dominate the market for PC operating systems (more than 95%).

Main Function of Operating System
An operating system includes various programs and utilities required to make your computer work.  Operating systems are "exclusive". If your computer system uses the Microsoft Windows operating system, for example, it will not run software specifically designed for Macintosh computers, and vice versa. 
Today, most popular operating systems rely on a graphics-based design that lets you use a mouse and point-and-click procedures to perform almost all operations.
Most operating systems perform the functions given below. A separate module of the operating system software performs each of these functions:
      Process Management: The process management module takes care of the creation and deletion of processes, the scheduling of system resources to the processes requesting them, and provides mechanisms for synchronization and communication among processes.
      Memory Management: The memory management module takes care of the allocation and de-allocation of memory space to programs in need of this resource.
      File Management: The file management module takes care of file-related activities such as the organization, storage, retrieval, naming, sharing, and protection of files.
      Security: The security module protects the resources and information of a computer system against destruction and unauthorized access.
      Command Interpretation: The command interpretation module takes care of interpreting user commands and directing system resources to process them. With this mode of interaction with a system, users are not concerned with the hardware details of the system. (A minimal sketch of such a command loop is given below.)
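
The command interpretation module can be pictured as a simple read-and-dispatch loop that maps each command name to a service routine. The Python sketch below is only an illustration of that idea; the command names (dir, pwd, exit) and their handlers are invented for the example and do not belong to any particular operating system.

import os

def list_directory(args):
    # Service routine: list the contents of a directory (default: the current one).
    for name in os.listdir(args[0] if args else "."):
        print(name)

def show_working_directory(args):
    # Service routine: print the current working directory.
    print(os.getcwd())

# Table mapping command names to the routines that service them.
COMMANDS = {"dir": list_directory, "pwd": show_working_directory}

def command_loop():
    while True:
        line = input("> ").strip()
        if not line:
            continue
        if line == "exit":
            break
        name, *args = line.split()
        handler = COMMANDS.get(name)
        if handler:
            handler(args)                  # direct system resources to process the command
        else:
            print("Unknown command:", name)

if __name__ == "__main__":
    command_loop()
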
Multitasking
Multitasking refers to the running of multiple independent computer programs on the same computer, giving the appearance that the computer is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time sharing, which means that each program uses a share of the computer's time to execute.
An operating system kernel contains a piece of software called a scheduler which determines how much time each program will spend executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. At a later time control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This so-called passing of control between the kernel and applications is called a context switch.
An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
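
One rough way to picture cooperative multitasking is a round-robin loop in which each task runs until it voluntarily hands control back. The Python sketch below uses generators to stand in for programs; it is only an illustration of the idea, since a real kernel switches processor state rather than generator frames.

from collections import deque

def task(name, steps):
    # A cooperative task: it does a little work, then yields control back.
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # explicit, voluntary return of control

def scheduler(tasks):
    ready = deque(tasks)           # simple round-robin ready queue
    while ready:
        current = ready.popleft()
        try:
            next(current)          # let the task run until it yields
            ready.append(current)  # put it back for another turn
        except StopIteration:
            pass                   # task finished; do not re-queue it

scheduler([task("A", 3), task("B", 2)])

A badly behaved task that never yields would starve every other task, which is exactly the weakness of the cooperative model described above.
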
The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt: a protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed.
On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. Windows NT was the first version of Microsoft Windows that enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).

Multiprocessing
All processes share the same Lisp address space, sometimes called the Lisp world. Each process has its own execution stack (i.e. "call" stack) and dynamic variable bindings. All processes share everything else in the Lisp world, including packages, streams, global function bindings, and global values of special variables. (The global value of a special variable is its outermost value, outside any dynamic binding.) The Allegro CL compiler, interpreter, top level, and other subsystems all lambda bind their necessary special variables, and Allegro CL code itself is reentrant. Therefore, multiple processes can correctly maintain any number of correct and independent ongoing computations simultaneously within the single Lisp world.
There are two implementations of multiprocessing in Allegro CL, the native threads model and the virtual threads model. The symbol :os-threads appears on the *features* list of the implementation using the native threads model and does not appear on the *features* list of the implementation using the virtual threads model. In earlier releases, the programming interfaces of the two models had significant differences; these differences are almost all gone in release 7.0. In the native threads model, each process within Lisp is implemented by an operating system thread, and management tasks such as scheduling are done by the operating system. In the virtual threads model, all processes are implemented internally by Lisp. Although there are many subtle differences between these two implementations, in actual practice it is very rare for application code to have any dependency on these differences. Most code developed on one will run without modification on the other. This has been verified during the porting of several preexisting multiprocess-intensive subsystems.

Time Sharing
Time Sharing Operating System, or TSOS, was an operating system for RCA mainframe computers of the Spectra 70 series.
RCA was in the computer business until 1971, when it sold its computer business to Sperry Corporation; Sperry continued to offer TSOS, renaming it VS/9. In the mid-seventies, an enhanced version of TSOS called BS2000 was offered by the German company Siemens.
While Sperry (Unisys after the company was renamed) discontinued VS/9 in the early 80's, BS2000, now called BS2000/OSD, is still offered by Fujitsu Siemens Computers and used by their mainframe customers, primarily in Europe.
TSOS was the first operating system that supported virtual addressing of main storage. As the name suggests, it provided time sharing features. Beyond that, it provided a single user interface for both time sharing and batch work, which was a big advantage over IBM's OS/360 and its successors MVS, OS/390 and z/OS.

Real-time OS with Examples
A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.
An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.
These are systems that respond to input immediately. This category includes operating systems intended mainly for controlling and monitoring external activities with timing constraints. They are used for tasks such as navigation, in which the computer must react to a steady flow of new information without interruption. Most general-purpose operating systems like DOS and UNIX are not real-time because they can take a few seconds, or even minutes, to react. Some examples of real-time operating systems are C Executive, CCP, CTRON, and DESRT etc.


Database Management Systems

Data Base Management System

A database is a collection of raw facts and figures arranged in a proper manner so that appropriate records can be maintained. The software that manages this arrangement is known as a DBMS (Database Management System). In every DBMS, tables are maintained to record data. Data is always represented through relations, and various kinds of relations can be established between the tables.
A table is a collection of related information stored so that it is available to many users for different purposes. The content of the database is obtained by combining data from all the different sources in an organization, so that the data available to all users is non-redundant. The database management system is the major software component of a database system.

Capabilities of data base system
      Creation of files, addition of data, modification and deletion of data are important functions a database system should perform.
      Retrieval, collection and deletion of data should be simple.
      The data should be stored in the form of tables and sorted or indexed according to the user's needs.
      Various reports can be produced from the system in a standardized way or in a user-specified way.
      Mathematical functions can be performed and data can be manipulated easily.
      There must be data integrity, i.e. accuracy and consistency of the data in the database.

BASIC CONCEPT OF DBMS
DATA: Data is a collection of meaningful facts and figures expressed in a form the user can understand; it can be treated as a resource and processed for further usage.

INFORMATION: Refined data is known as information. Any input that goes into a process generates information.

PROCESS: Any method of refining data into information is known as a process.

DATA BASE:  A collection of related sets of data along with the necessary details is known as a database.


Database Models

Database models are categorized as conceptual models and implementation models.

The conceptual model focuses on the logical nature of the data representation. Therefore, the conceptual model is concerned with what is represented in the database, rather than with how it is represented.

Data modeling under conceptual model is achieved in two levels - the first level builds the conceptual model of the data, using E-R Modeling/diagram. The second level organizes this model better, by removing redundancies, through a process called Normalization. The normalized model is then converted into the Physical Database.

Conceptual Model - First level

Entity-Relationship Model:  Entity-Relationship modeling is a technique for the analysis and logical modeling of a system's data requirements.

The ER Diagram helps to identify the database's main entities and their relationships. Because the ER Diagram components are graphically represented, their role is more easily understood. Using the ER Diagram, it's easy to map it to the relational database model's tables and attributes. This mapping process uses a series of well-defined steps to generate all the required database structures.

It uses three basic concepts, namely, entities, their attributes and the relationships that exist between the entities. It uses graphical notations for representing these.



      Entity:  An entity is any object, place, person, concept, activity about which an enterprise records data. It is an object, which can have instances or occurrences. Each instance should be capable of being uniquely identified. Each entity has certain properties or attributes associated with it and operations applicable to it. An entity is represented by a rectangle in the E-R Model.
Categories of objects under which an entity is identified:

Category          Examples
Physical object   Employee, Machine, Book, Client, Student, Item
Abstract object   Account, Department
Event             Application, Reservation, Invoice, Contract
Location          Building, City, State

      Attributes:  Attributes are data elements that describe an entity. If an attribute itself has further attributes that describe it, then it is not an attribute of that entity but an entity in its own right. Attributes can either be listed next to the entities, or placed in circles and attached to the entities.
Similarly, attributes can also describe properties of relations.

Example
Entity     Attributes
Customer   Name, Address, Status
Book       ISBN, Title, Author, Price
Order      Order Number, Order Date, Order Placed By

      Relationship:  This is an association between entities. It is represented by a diamond in the ER Diagram. For example, there is a relationship between student and course. The relationship represents the fact that a student enrolls in a course.

Entity    Attributes
Course    Code, Duration, Start Date, Room No.
Student   ID, Name, Address, DOB

Degree of Relationship

The Degree of Relationship indicates the link between two entities for a specified occurrence of each. The degree of relationship is also called "Cardinality". Conceptual models use three types of relationships to describe associations among data: one-to-one, one-to-many and many-to-many.
      One to one (1:1)
      One to many (1:N) (N stands for many)
      Many to many (M:N)
One to one Relationship (1:1):  One order requisition raises one purchase order. One purchase order is raised by one order requisition.

For one occurrence of the first entity, there can be at most one related occurrence of the second entity and vice-versa.

One to many relationship (1:N):  One employee works in at most one department. One department can have many employees. 

For one occurrence of the first entity, there can exist many related occurrences of the second entity, and for every occurrence of the second entity there exists only one associated occurrence of the first.

Many to Many relationship (M:N):  One order may contain many items. One item can be contained in many orders.


For one occurrence of the first entity, there exist many related occurrences of the second entity, and for every occurrence of the second entity there may exist many associated occurrences of the first.
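
In a relational database these cardinalities are normally realised with foreign keys, and a many-to-many relationship needs an extra linking (junction) table. The sketch below, using Python's built-in sqlite3 module, shows one possible way to express the order/item example; the table and column names are simply chosen for the illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_no    INTEGER PRIMARY KEY,
                     order_date  TEXT);
CREATE TABLE items  (item_code   INTEGER PRIMARY KEY,
                     description TEXT);
-- The M:N relationship between orders and items is resolved by a junction table:
-- one row per (order, item) pair.
CREATE TABLE order_items (
    order_no  INTEGER REFERENCES orders(order_no),
    item_code INTEGER REFERENCES items(item_code),
    quantity  INTEGER,
    PRIMARY KEY (order_no, item_code)
);
""")

A 1:N relationship needs only a foreign key on the "many" side (for example, a department_id column in an employee table), and a 1:1 relationship can be modelled with a unique foreign key.
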

Example

An organization has employees assigned to specific departments. The employees may work on several projects at the same time.

The projects use parts, which are supplied by different suppliers and stored in various warehouses.


Conceptual Model - Second Level

Normalization:  Normalization is the process of refining the data model built by the Entity Relationship Diagram. The normalization technique logically groups the data over a number of tables, which are independent and contain no duplicate data. The entities or tables resulting from normalization contain simple data items, with relationships being represented by replication of key data item(s).

The normalization process when properly executed eliminates uncontrolled data redundancies, thus eliminating the data anomalies and the data integrity problems that are produced by such redundancies.

Normalization produces a carefully controlled redundancy that lets us properly link database tables.

Normalization structures data so as to minimize duplication and inconsistencies. The process usually involves breaking down a single table into two or more tables and defining relationships between those tables. Normalization is usually done in stages, with each stage applying more rigorous rules to the types of information that can be stored in a table. Full adherence to normalization principles increases the efficiency of a particular database.

The first three levels in normalizing a database are:

First Normal Form (1NF):  First Normal Forms are groups of records, such as student lists, in which each field (column) contains unique and non-repeating information. 1NFs are the least structured. For example, suppose we have a students table with the following structure:

Student ID, Name, Date of Birth, Course Advisor, Course Advisor's Telephone, Course ID 1, Course Description 1, Course Instructor 1, Course ID 2, Course Description 2, Course Instructor 2


The repeating course fields are in conflict with first normal form. To fix the problems created by such repeating fields, we should place the course information in a separate course table, and then provide a linking field (most likely Student ID) between the Students Table and the Course Table.

Second Normal Form (2NF):  Second Normal Forms break down First Normal Forms, separating them into different tables by defining successively finer interrelationships between fields. In 2NF, no non-key field may depend on only part of the primary key.

For example, say we create a course table with the structure:

Student ID, Course ID, Course Fee, Course Description, Course Instructor

We can create a unique primary key by combining StudentID + CourseID. StudentID is not unique in itself, as one student may take multiple courses; similarly, CourseID is not unique in itself, as many students may take the same course. However, each student will only be taking a particular course once at any one time, so the combination of StudentID + CourseID gives us a unique primary key.

Now, in 2NF, no non-key fields (course description, course instructor) may depend on a portion of the primary key. That, however, is exactly what we have here: the course instructor and course description are the same for any course, regardless of the student taking the course.

To fix this and put the database in second normal form, we create a third table, so our database structure now looks like this (with key fields marked with *):
Student (student ID*, name, date of birth, course advisor, course advisor's telephone)
Student Courses (student ID*, course ID*)
Courses (course ID*, course description, course instructor)



Third Normal Form (3NF):  Like the Second Normal Forms, the Third Normal Forms also break down the First Normal Forms, separating them into different tables by defining successively finer interrelationships between fields. Third Normal Forms do not include fields that provide information about fields other than the key field.

In our students table, for example, each field should provide information about the particular student referred to by the key field, studentID. That certainly applies to the student's name and date of birth. But the advisor's name and telephone describe the advisor rather than the student. So, to put this database in third normal form, we need to place the advisor's information in a separate table.

Students (Student ID*, Name, Date of Birth, Course Advisor ID)
Student Courses (Student ID*, Course ID*)
Courses (Course ID*, Course Description, Course Instructor)
Course Advisor (Advisor ID*, Advisor Name, Advisor's Telephone)
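
The third normal form design above can be written directly as table definitions. The SQL below (run here through Python's built-in sqlite3 module) is one possible rendering; the column types are assumptions made for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE course_advisors (
    advisor_id   INTEGER PRIMARY KEY,
    advisor_name TEXT,
    advisor_tel  TEXT);

CREATE TABLE students (
    student_id        INTEGER PRIMARY KEY,
    name              TEXT,
    date_of_birth     TEXT,
    course_advisor_id INTEGER REFERENCES course_advisors(advisor_id));

CREATE TABLE courses (
    course_id          INTEGER PRIMARY KEY,
    course_description TEXT,
    course_instructor  TEXT);

-- One row per enrolment; the composite key is StudentID + CourseID as discussed above.
CREATE TABLE student_courses (
    student_id INTEGER REFERENCES students(student_id),
    course_id  INTEGER REFERENCES courses(course_id),
    PRIMARY KEY (student_id, course_id));
""")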




Boyce-Codd Normal Form (BCNF):  A table is in Boyce-Codd Normal Form (BCNF) when it is in 3NF and every determinant in the table is a candidate key. A determinant is any attribute whose value determines other values within a row. For example, if the table is in 3NF and it contains a nonprime attribute that determines a prime attribute, the BCNF requirements are not met.

This description clearly yields the following conclusions:
      If a table is in 3NF and it contains only one candidate key, 3NF and BCNF are equivalent.
      BCNF can be violated only if the table contains more than one candidate key. Putting it another way, there is no way that the BCNF requirement can be violated if there is only one candidate key.



 Computer Generation & Classifications


First Generation (1940-1956) Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate and in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions.
First generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.

Below we will see more detailed the development of this generation:
This era of modern computing began with a flurry of development before and during World War II, as electronic circuit elements replaced mechanical equivalents and digital calculations replaced analog calculations. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and the ENIAC were built by hand using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium. Machines of this kind are often described as the "first computers".
There were three parallel streams of computer development in the World War II era; the first stream was largely ignored, and the second was deliberately kept secret. The first was the German work of Konrad Zuse. The second was the secret development of the Colossus computers in the UK. Neither of these had much influence on the various computing projects in the United States. The third stream of computer development, Eckert and Mauchly's ENIAC and EDVAC, was widely publicized.
Alan Turing's 1936 paper proved enormously influential in computing and computer science. Turing provided a definition of a universal computer which executes a program stored on tape; this construct came to be called a Turing machine. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
For this computing machine to be a practical general-purpose computer, there must be some convenient read-write mechanism, punched tape, for example. With knowledge of Alan Turing's theoretical 'universal computing machine', John von Neumann defined an architecture which uses the same memory both to store programs and data: virtually all contemporary computers use this architecture (or some variant). While it is theoretically possible to implement a full computer entirely mechanically, electronics made possible the speed and, later, the miniaturization that characterize modern computers.
In 1939, development began at IBM's Endicott laboratories on the Harvard Mark I. Known officially as the Automatic Sequence Controlled Calculator, the Mark I was a general purpose electro-mechanical computer built with IBM financing and with assistance from IBM personnel, under the direction of Harvard mathematician Howard Aiken. Its design was influenced by Babbage's Analytical Engine, using decimal arithmetic and storage wheels and rotary switches in addition to electromagnetic relays. It was programmable via punched paper tape, and contained several calculation units working in parallel. Later versions contained several paper tape readers and the machine could switch between readers based on a condition. The Mark I was moved to Harvard University and began operation in May 1944.
After the Mark I, other machines were developed:
George Stibitz is internationally recognized as one of the fathers of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator that he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to calculate using binary form.
The Atanasoff-Berry Computer was the world's first electronic digital computer. The design used over 300 vacuum tubes and employed capacitors fixed in a mechanically rotating drum for memory. Though the ABC machine was not programmable, it was the first to use electronic tubes in an adder.
But the machine most representative of this generation was the US-built ENIAC (Electronic Numerical Integrator and Computer), the first electronic general-purpose computer. It combined, for the first time, the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine.  High speed memory was limited to 20 words (about 80 bytes).
ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, and contained over 18,000 vacuum tubes. One of the major engineering feats was to minimize tube burnout, which was a common problem at that time. The machine was in almost constant use for the next ten years.

Even before the ENIAC was finished, Eckert and Mauchly recognized its limitations and started the design of a stored-program computer, EDVAC (Electronic Discrete Variable Automatic Computer). John von Neumann was credited with a widely circulated report describing the EDVAC design in which both the programs and working data were stored in a single, unified store. This basic design, denoted the von Neumann architecture, would serve as the foundation for the worldwide development of ENIAC's successors. In this generation of equipment, temporary or working storage was provided by acoustic delay lines, which used the propagation time of sound through a medium such as liquid mercury (or through a wire) to briefly store data. A series of acoustic pulses was sent along a tube; after a time, as a pulse reached the end of the tube, the circuitry detected whether it represented a 1 or a 0 and caused the oscillator to re-send it. Others used Williams tubes, which use the ability of a small cathode-ray tube (CRT) to store and retrieve data as charged areas on the phosphor screen. By 1954, magnetic core memory was rapidly displacing most other forms of temporary storage, and dominated the field through the mid-1970s.
EDVAC was the first stored-program computer designed, but it was not the first to run; a stored program was running by 1949 on the Manchester Mark 1 computer, a complete system using Williams tubes and magnetic drum memory and introducing index registers.
With EDVAC, the first generation of computers ends and the second generation, based on transistors, begins.

Second Generation (1956-1963) Transistors
From 1955 the transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers.
Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. Also the first computers of this generation were developed for the atomic energy industry.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology.
 The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
                                              
The first transistorised computer was built at the University of Manchester and was operational by 1953; a second version was completed there in April 1955. The later machine used 200 transistors and 1,300 solid-state diodes and had a power consumption of 150 watts. However, it still required valves to generate the clock waveforms at 125 kHz and to read and write on the magnetic drum memory.
Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. Transistors greatly reduced computers' size, initial cost, and operating cost.
Transistorized electronics improved not only the CPU (Central Processing Unit), but also the peripheral devices. The IBM 350 RAMAC, introduced in 1956, was the world's first disk drive, and disk data storage units became a standard part of second generation systems.
A removable disk stack can be easily exchanged with another stack in a few seconds. Even if the removable disks' capacity is smaller than fixed disks,' their interchangeability guarantees a nearly unlimited quantity of data close at hand. Magnetic tape provided archival capability for this data, at a lower cost than disk.
Many second generation CPUs delegated peripheral device communications to a secondary processor. For example, while the communication processor controlled card reading and punching, the main CPU executed calculations and binary branch instructions.

During the second generation, remote terminal units came into wider use; telephone connections provided sufficient speed for early remote terminals and allowed hundreds of kilometers of separation between remote terminals and the computing center. Eventually these stand-alone computer networks would be generalized into an interconnected network of networks, the precedent for what we now know as the internet.
Third Generation (1964-1971) Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
The explosion in the use of computers began with "third-generation" computers, making use of Jack St. Clair Kilby's and Robert Noyce's independent invention of the integrated circuit (or microchip), which later led to the invention of the microprocessor by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Computers became small, low-cost machines that could be owned by individuals and small businesses. Microcomputers, the first of which appeared in the 1970s, became ubiquitous in the 1980s and beyond. Steve Wozniak, co-founder of Apple Computer, is sometimes erroneously credited with developing the first mass-market home computers.
Fourth Generation (1971-80’s) Microprocessors

The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer—from the central processing unit and memory to input/output controls—on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of the mouse, handheld devices, etc.
Fifth Generation (Present and Beyond) Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.

Moore’s Law

Moore's law is the observation that over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The period often quoted as "18 months" is due to Intel executive David House, who predicted that period for a doubling in chip performance (being a combination of the effect of more transistors and their being faster).
The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper. The paper noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue "for at least ten years". His prediction has proven to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.
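
As a rough back-of-the-envelope check, doubling every two years means the transistor count grows as N(t) = N0 x 2^(t/2), where t is the number of years elapsed. The short sketch below simply evaluates that formula; the starting figure of 2,300 transistors (the Intel 4004 of 1971) is used purely for illustration.

def moore_estimate(start_count, start_year, year, doubling_period=2):
    # Estimated transistor count, assuming a doubling every `doubling_period` years.
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Illustration only: 2,300 transistors in 1971 projected forward 40 years.
print(round(moore_estimate(2300, 1971, 2011)))   # about 2.4 billion, the right order of magnitude for 2011-era chips
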
The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras. All of these are improving at (roughly) exponential rates as well. This exponential improvement has dramatically enhanced the impact of digital electronics in nearly every segment of the world economy. Moore's law describes a driving force of technological and social change in the late 20th and early 21st centuries.
This trend has continued for more than half a century. Sources in 2005 expected it to continue until at least 2015 or 2020. However, the 2010 update to the International Technology Roadmap for Semiconductors has growth slowing at the end of 2013, after which time transistor counts and densities are to double only every three years.

Classification of computers


Types of Computers

1. Analog Computers

Analog computers are used to process continuous data; they represent variables by physical quantities. Any computer that solves problems by translating physical conditions such as flow, temperature, pressure, angular position or voltage into related mechanical or electrical circuits uses those circuits as an analog of the physical phenomenon being investigated. In general, an analog computer accepts analog quantities and produces analog values as output; it measures continuously. Analog computers are very fast and produce their results quickly, but the results are only approximately correct. Analog computers are generally special purpose computers.
2. Digital Computers

Digital computers represent physical quantities with the help of digits or numbers. These numbers are used to perform arithmetic calculations and also to make logical decisions to reach a conclusion, depending on the data they receive from the user.

3. Hybrid Computers

Various specially designed computers combine both digital and analog characteristics, bringing together the advantages of analog and digital computers when working as a system. Hybrid computers are used extensively in process control systems where it is necessary to have a close representation of the physical world.

The hybrid system provides the good precision that can be attained with analog computers and the greater control that is possible with digital computers, plus the ability to accept the input data in either form.

Classification of Computers According to Size

1. Super Computers

Large scientific and research laboratories, as well as government organizations, have extraordinary demands for processing data, requiring tremendous processing speed and memory that no other category of computer can provide. The very large computers used to meet these needs are called supercomputers. These computers are extremely expensive and their speed is measured in billions of instructions per second.
2. Main Frame Computers

After supercomputers, the largest, fastest and most expensive computers are called mainframe computers. These computers are used in large companies, factories, organizations etc. Mainframe computers are very expensive; they can cost more than 20 million rupees. On such a computer, 150 users are able to work on one CPU. Mainframes are able to process 1 to 8 bits at a time. They have several hundred megabytes of primary storage and operate at speeds measured in nanoseconds.

3. Mini Computers

Mini computers are smaller than mainframes, both in size and in other respects such as speed, storage capacity and other services. They are versatile enough to be fitted wherever they are needed. Their speeds are rated between one and fifty million instructions per second (MIPS). They have primary storage in the hundred to three hundred megabyte range, with direct access storage devices.
4. Micro Computers

These are the smallest range of computers. They were introduced in the early 70's, having less storage space and processing speed. The microcomputers of today are equivalent to the minicomputers of yesterday in terms of performance and processing. They are also called "computers on a chip" because their entire circuitry is contained in one tiny chip. Microcomputers have a wide range of applications, including use as portable computers that can be plugged into any wall socket.

5. Laptop Computers

The smallest computers in size have been developed as laptops. This type of small computer looks like an office briefcase and is called a "LAPTOP" computer. Laptops are also termed "PORTABLE COMPUTERS". Due to their small size and light weight, they have become popular among computer users. Businessmen find laptops very useful during travel and when they are far away from their desktop computers. A typical laptop computer has all the facilities available in a microcomputer. The smallest laptops are called "PALMTOPS".

Distributed Computer System

Ideas derived from social simulation models can directly inform the design of distributed computer systems. This is particularly the case when systems are “open”, in the sense of having no centralised control, where traditional design approaches struggle. In this chapter we indicate the key features of social simulation work that are valuable for distributed systems design. We also discuss the differences between social and biological models in this respect. We give examples of socially inspired systems from the currently active area of peer-to-peer systems and finally we discuss open areas for future research in the area.

Introduction

Massive and open distributed computer systems provide a major application area for ideas and techniques developed within social simulation and complex systems modelling. In the early years of the 21st century there has been an explosion in global networking infrastructure in the form of wired and wireless broadband connections to the internet encompassing both traditional general purpose computer systems, mobile devices and specialist appliances and services. The challenge is to utilise such diverse infrastructure to provide novel services that satisfy user needs reliably. Traditional methods of software design and testing are not always applicable to this challenge. Why is this? And what can social simulation and complexity perspectives bring to addressing the challenge? This chapter answers these questions by providing a general overview of some of the major benefits of approaching design from a socially inspired perspective, in addition to examples of applications in the area of peer-to-peer (P2P) systems and protocols. Finally we will speculate on possible future directions in the area.


This chapter is not an exhaustive survey of the area; for example, we have not discussed the application of social network analysis techniques to web graphs and social networks that are constructed within, or facilitated by, distributed software systems (Staab et al 2005). Both these are active areas. Further, we have not discussed the active research area based on randomized "Gossiping" approaches, where information is diffused over networks through randomized copying of information between adjacent nodes (Wang et al 2007).

Parallel computing

Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multicore processors.
Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors, for accelerating specific tasks.
Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
Amdahl's law gives the maximum possible speed-up of a program as a result of parallelization: if a fraction P of a program's running time can be parallelized, the speed-up on N processors is at most 1 / ((1 - P) + P/N).
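
The formula can be evaluated directly; the values in the short sketch below are chosen only to show how quickly the serial fraction limits the achievable speed-up.

def amdahl_speedup(parallel_fraction, processors):
    # Upper bound on speed-up when `parallel_fraction` of the running time can be parallelized.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

# Even with 1000 processors, a program that is 90% parallel speeds up by less than 10x.
print(round(amdahl_speedup(0.90, 1000), 2))   # about 9.91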


                                 UNIT - 5

Chapter 1
 Computer Communication

Introduction to Computer Communication
Data communication deals with the means and methods of transferring data from one location to another. A computer network is a network of computers that are geographically distributed but connected in a manner that enables meaningful transmission and exchange of data; sharing information resources (hardware and software) and processing load is the main objective of a computer network.
Communication is the process of transferring information from one point to another. There are three basic elements of any communication system:
      Source (Sender): the sender that creates and sends a message.
      Transmission Medium: the medium that carries the message.
      Sink (Receiver): the receiver that receives the message.
Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. Well-known communications protocols include Ethernet, a hardware and link layer standard that is ubiquitous in local area networks, and the Internet protocol suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, as well as host-to-host data transfer and application-specific data transmission formats.

Introduction to Computer Network
A computer network, or simply a network, is a collection of computers and other hardware interconnected by communication channels that allow sharing of resources and information. When at least one process in one device is able to send/receive data to/from at least one process residing in a remote device, the two devices are said to be in a network. A network is a group of devices connected to each other. Networks may be classified according to a wide variety of characteristics, such as the medium used to transport the data, the communications protocol used, scale, topology, benefit, and organizational scope.
Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these disciplines.
A computer network is a group of computers connected to each other electronically. This means that the computers can "talk" to each other and that every computer in the network can send information to the others. Usually, this means that the speed of the connection is fast - faster than a normal connection to the Internet. Some basic types of computer networks include:
      A local area network (often called a LAN) connects two or more computers, and may be called a corporate network in an office or business setting.
      An "internetwork", sometimes called a Wide Area Network (because of the wide distance between networks) connects two or more smaller networks together. The largest internetwork is called the Internet.
Computers can be part of several different networks. Networks can also be parts of bigger networks. The local area network in a small business is usually connected to the corporate network of the larger company. Any connected machine at any level of the organization may be able to access the Internet, for example to demonstrate computers in the store, display its catalogue through a web server, or convert received orders into shipping instructions.
Microsoft Windows, Linux and most other operating systems use TCP/IP for networking. Apple Macintosh computers used AppleTalk in the past, but they use TCP/IP now.
To set up a network an appropriate medium is required. This can be wired or wireless. Twisted-pair, co-axial and fiber-optic are examples of cable, while infra-red, Bluetooth, radio-wave, micro-wave etc. are wireless media used for networking. When you are working with a simple LAN, computers, media and peripherals are sufficient. But when you are working over a wider range you have to use some additional devices such as a bridge, gateway or router to connect different small or large networks. And obviously a protocol must be maintained.
To set up a network you have to select an appropriate topology to arrange the hardware devices using the media. Topologies generally used are bus-topology, ring-topology, star-topology, tree-topology, object-oriented topology etc. Among these star-topology and tree-topology are most popular nowadays.
Why do we need Networks?

Networks were created to cut down costs by sharing resources. Any basic network has two components: a server and a client. A server is typically a computer of higher configuration which distributes, or rather shares, various resources (printers, scanners, files) with its various clients. The clients are computers of lower configuration which sometimes rely totally on the server for their working; such clients are called Dumb Terminals/Nodes. Clients that don't require the server for their basic functioning are called Smart Terminals/Workstations and can operate on their own.
Types of Networks

Different types of (private) networks are distinguished based on their size (in terms of the number of machines), their data transfer speed, and their reach. Private networks are networks that belong to a single organization. There are usually said to be three categories of networks:
·         LAN (local area network)
·         MAN (metropolitan area network)
·         WAN (wide area network)
There are two other types of networks: TANs (Tiny Area Network), which are the same as LANs but smaller (2 to 3 machines), and CANs (Campus Area Networks), which are the same as MANs (with bandwidth limited between each of the network's LANs).

LAN
LAN stands for Local Area Network. It's a group of computers which all belong to the same organization, and which are linked within a small geographic area using a network, and often the same technology (the most widespread being Ethernet).
A local area network is a network in its simplest form. Data transfer speeds over a local area network can reach up to 10 Mbps (such as for an Ethernet network) and 1 Gbps (as with FDDI or Gigabit Ethernet). A local area network can reach as many as 100, or even 1000, users.
By expanding the definition of a LAN to the services that it provides, two different operating modes can be defined:
· A "peer-to-peer" network, in which communication is carried out from one computer to another without a central computer, and where each computer has the same role.
· A "client/server" environment, in which a central computer provides network services to users (a minimal sketch of this mode is given below).
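
A minimal sketch of the client/server mode, using Python's standard socket module: one central program listens and answers while a client connects to it and requests a service. The address, port number and messages are arbitrary choices for the illustration, not part of any standard.

import socket

HOST, PORT = "127.0.0.1", 5000     # illustrative address; any free port would do

def server():
    # Central computer: waits for a client and echoes its message back.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"server received: " + data)

def client():
    # Client: connects to the central computer and requests a service.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello")
        print(cli.recv(1024).decode())

Running server() in one process and client() in another reproduces the client/server mode in miniature; in a peer-to-peer network each machine would contain both roles.
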

MAN
MANs (Metropolitan Area Networks) connect multiple geographically nearby LANs to one another (over an area of up to a few dozen kilometers) at high speeds. Thus, a MAN lets two remote nodes communicate as if they were part of the same local area network.
A MAN is made from switches or routers connected to one another with high-speed links (usually fiber optic cables).

WAN
A WAN (Wide Area Network or extended network) connects multiple LANs to one another over great geographic distances.
The speed available on a WAN varies depending on the cost of the connections (which increases with distance) and may be low.
WANs operate using routers, which can "choose" the most appropriate path for data to take to reach a network node.
The most well-known WAN is the Internet.
Network Models and Their Importance
Implementing network communication technology as one complete whole would be very difficult. To simplify it, layered network models such as the ISO/OSI model are used. All network models are based on the same principle: each layer uses the services of the layer below it and provides services to the layer above it, and a given layer can only communicate with the same layer on the destination device.

OSI Reference Model


The Open Systems Interconnection (OSI) Reference Model was created to provide common standards for data communication, so that, irrespective of the network, two peers are able to communicate with each other.

The OSI model divides a communication session into 7 distinct layers.
      Application layer                 Layer 7
      Presentation layer                Layer 6
      Session layer                     Layer 5
      Transport layer                   Layer 4
      Network layer                     Layer 3
      Data Link layer                   Layer 2
      Physical layer                    Layer 1
At the sender's end the data is passed from the top-most layer to the bottom-most, and at the receiver's end the data is passed from the bottom-most layer to the top-most.
      Layer 1 - Physical Layer:  The Physical Layer is responsible for transmitting and receiving the frames of data, which are created and given to the physical layer by the Data Link Layer.
      Layer 2 - Data Link Layer:  The Data Link Layer has two basic responsibilities. First, on the transmitting side, the Data Link Layer is responsible for breaking the data to be sent into smaller pieces called frames. Frames also contain information about the destination client, so as to ensure that the data reaches its correct destination. Second, on the receiver's side, it is responsible for receiving the frames and, after checking the contents of each frame received, sending back an acknowledgement to ensure reliable transmission of the data.
      Layer 3 - Network Layer:  The Network Layer is responsible for establishing the connection path between the sending and the receiving peer. This layer cannot detect any transmission errors that might occur.
      Layer 4 - Transport Layer:  The Transport Layer is responsible for the integrity of the transmission and is capable of ensuring it across LAN segments. The Transport Layer can detect packets/frames discarded by the routers and generate a retransmit request.
      Layer 5 - Session Layer:  The Session Layer manages the flow of communication, which can be uni-directional or bi-directional. This flow of communication is called a session.
      Layer 6 - Presentation Layer:  The Presentation Layer is responsible for the encoding of data. Not all systems follow the same encoding scheme, so it is the Presentation Layer's responsibility to translate the data between these encoding schemes. Common encodings are ASCII (American Standard Code for Information Interchange) and EBCDIC (Extended Binary Coded Decimal Interchange Code).
      Layer 7 - Application Layer:  The Application Layer is the topmost layer in the OSI model. It is responsible for providing the interface between the applications and the computer's network services (a simple encapsulation sketch follows below).
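
One way to picture how data moves down the sender's stack and up the receiver's is successive encapsulation: each layer wraps the data from the layer above with its own header, and the matching layer on the other side removes it. The Python sketch below is only a toy model of that idea; real protocol headers are binary structures, not text labels, and the physical layer deals in signals rather than headers.

LAYERS = ["application", "presentation", "session", "transport",
          "network", "data link", "physical"]

def send(payload):
    # Walk down the sender's stack, wrapping the payload with one "header" per layer.
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload                      # what notionally travels over the medium

def receive(frame):
    # Walk up the receiver's stack, stripping each layer's "header" in reverse order.
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"expected {prefix}"
        frame = frame[len(prefix):]
    return frame

wire = send("hello")
print(wire)             # [physical][data link]...[application]hello
print(receive(wire))    # hello
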
Protocols

The following are the Different types of protocol:
Internet Protocol (IP):  This protocol is used for routing; it is connectionless and datagram (packet) oriented.
Routing Information Protocol:  Used to keep routing tables updated in routers and hosts by periodic broadcasts from routers.
Address Resolution Protocol (ARP): Broadcasts on subnet by hosts or routers seeking physical node addresses given IP addresses.
Internet Control Message Protocol (ICMP):  Carries messages for flow control, echo and route redirection. "Ping" is a popular use of ICMP echo messages.
Transmission Control Protocol (TCP):  Provides connection-oriented transport services host-to-host (end-to-end) across the internet. Sequencing, acknowledgement and flow control are some of these services. It supports process services such as FTP, SMTP and TELNET.
User Datagram Protocol: Provides connectionless transport services (no sequencing, acknowledgement or flow control) for those process protocols that do not require these services. Supports SNMP and NFS processes, for example.
File Transfer Protocol (FTP):  Copies files from one host to another. The user must work through established accounts on both hosts.
Simple Mail Transfer Protocol (SMTP):  Host-to-host electronic mail transfer. Connections are established locally on both hosts.
Simple Network Management Protocol (SNMP): This protocol specifies the management of network nodes that have agents running in them, managed from nodes acting as network managers. Data is kept in a Management Information Base (MIB) database.
Network File System (NFS): Logically attaches portions of file systems on remote NFS server to local file system. Specification licensed by Sun. Uses RPC and XDR.
Remote Procedure Calls: A redirector that filters calls by processes on one host to be executed on another. Local calls are passed to local operating system, network calls are sent via TCP/IP to remote system for execution.
External Data Representation: C language routines that allow machine-independent formatting of data allowing sharing of information.
Transport Layer Interface (TLI): A library of functions that provides an interface to the transport layer of the OSI model and complies with the ISO Transport Service Definition. Consider it an application programming interface: it is not the transport provider itself but an interface to it.
An Application Programming Interface (API): API provides network I/O (open, read, write, close) to remote file systems. Typically runs over TCP and IP.
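The contrast drawn above between TCP (connection-oriented, acknowledged) and UDP (connectionless) can be seen directly with Python's standard socket module. The sketch below is illustrative only; the loopback address and the port numbers 50007/50008 are arbitrary choices, not part of any protocol specification.

# A minimal sketch contrasting UDP (connectionless) with TCP (connection-
# oriented) on the loopback interface, using only the standard library.
import socket

# --- UDP: no connection, no built-in acknowledgement or ordering ----------
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind(("127.0.0.1", 50007))

udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(b"datagram, fire and forget", ("127.0.0.1", 50007))

data, sender = udp_rx.recvfrom(1024)
print("UDP received:", data, "from", sender)
udp_tx.close(); udp_rx.close()

# --- TCP: a connection is set up first; delivery is acknowledged ----------
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 50008))
tcp_srv.listen(1)

tcp_cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_cli.connect(("127.0.0.1", 50008))     # connection established here
conn, addr = tcp_srv.accept()
tcp_cli.sendall(b"stream, delivered in order")
print("TCP received:", conn.recv(1024), "from", addr)

conn.close(); tcp_cli.close(); tcp_srv.close()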
LAN (Local Area Network)
A LAN (Local Area Network) is a network established to meet the needs of a small area such as a home, an office, an airport or a small college computer lab. The idea behind a LAN is to take advantage of fast data transfer over a small geographical area and to establish better interconnection between individual computers. LANs are preferred because of their fast access and affordability, and they are employed to build sophisticated networks for data sharing.
LAN Technologies – Ethernet
Ethernet, Fast Ethernet and Gigabit Ethernet are the LAN technologies most commonly used today. Ethernet Version 1 was developed by Xerox Corporation during the early 1970s. Later, in 1982, Xerox, Intel and DEC (Digital Equipment Corporation) together released Ethernet Version 2. Since then, Ethernet has been the most popular LAN technology used in networking.
Advantages of Ethernet are
• Low cost components
• Easy to install
• Easy to troubleshoot
All the devices (Servers, Workstations, Printers, and Scanners etc) connected in an Ethernet network share a common transmission medium. Ethernet uses Carrier Sense Multiple Access/Collision Detection (CSMA/CD) for determining when a computer is free to transmit data on to the access medium. Using Carrier Sense Multiple Access/Collision Detection (CSMA/CD), all computers monitor the transmission medium and wait until the medium is free before transmitting. If two computers try to transmit at the same time, a collision occurs. The computers then stop, wait for a random time interval, and attempt to transmit again.
Collisions are common in Ethernet networks, and network infrastructure devices like hubs usually have a small light on their front panel that blinks when collisions happen in the network.
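The wait-and-retry behaviour of CSMA/CD described above can be illustrated with a toy simulation. The collision probability, slot counts and station name below are invented for the demonstration and do not model real Ethernet timing.

# A toy simulation of CSMA/CD back-off: a station that collides waits a
# random number of slots and retries. Illustrative only.
import random

def transmit(station: str, attempt: int) -> bool:
    """Pretend the shared medium collides with some (hypothetical) probability."""
    collision = random.random() < 0.5            # invented collision chance
    if collision:
        # Binary exponential back-off: wait a random number of slots.
        slots = random.randint(0, 2 ** attempt - 1)
        print(f"{station}: collision on attempt {attempt}, backing off {slots} slot(s)")
        return False
    print(f"{station}: frame sent successfully on attempt {attempt}")
    return True

random.seed(42)                                  # reproducible demo output
for attempt in range(1, 6):
    if transmit("Station A", attempt):
        break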
Original Ethernet operates at a speed of 10 Mbps (megabits per second). Ethernet can use a variety of media. Modern Ethernet networks typically operate at baseband speeds of either 100 Mbps (Fast Ethernet) or 1000 Mbps (Gigabit Ethernet).
Fast Ethernet cannot operate over network infrastructure devices (hubs, switches and network cards) designed only for a 10 Mbps Ethernet network. However, many hubs, switches and Ethernet network cards are capable of operating at either 10 Mbps or 100 Mbps (10/100).
An even faster version of Ethernet is Gigabit Ethernet, which provides a data transmission speed of 1,000 Mbps. Gigabit Ethernet was first designed and developed as a high-speed backbone medium for large LANs, but almost all new LANs are Gigabit Ethernet capable, and Category 5e or Category 6 UTP cable can be used as the Gigabit Ethernet medium.
An even faster version again, 10 Gigabit Ethernet, is now available. It works over both fiber optic and copper media.


How LAN Works
Early LANs were formed using coaxial cable. Coax is an electrical cable used to carry radio signals; its basic application is connecting radio transmitters to their antennas, but it has also been used for internet connections and for distributing cable television channels. A LAN is set up by connecting two or more computers with a physical connection so that they can share files and data. The most common way of building a LAN is Ethernet, usually described as a major data link and physical layer technology and often associated with the name of engineer Robert Metcalfe. Ethernet has proved to be an effective transfer medium for internet protocols and today commonly delivers data rates of 100 Mbps and above. Ethernet cable runs are usually limited to about 100 meters, but this is not an absolute limit; the network can be extended as needed with additional equipment. When you set up a LAN, operating system support such as that in Microsoft Windows, which provides internet connection sharing features, helps in building the network.
Active Network Components
Network Switches:
Network Switches are the basic components of an IP Network. All the network endpoints (like PC’s, Laptops, Printers, etc) connect to these switches. As the name goes, they switch (distribute) the data received from one node to any other node in the network. The network switches come in a variety of configurations, and the popular ones are mentioned below:
8 Ports – 10/100/1000 Mbps
16 Ports – 10/100/1000 Mbps
24 Ports – 10/100/1000 Mbps
48 Ports – 10/100/1000 Mbps
Network switches may also come with 10/100 Mbps ports and with PoE/non-PoE port combinations. They also vary in functionality – managed, semi-managed and unmanaged switches. There are even 24/48 port optical switches, which accept that many optical connections in addition to the 2/4 optical ports that normal switches have.
The numbers (8, 16, etc.) in the above list refer to the number of copper UTP connectors the switch has, and the switch can connect to that many network devices. Each such port supports a maximum speed of 10/100 or 10/100/1000 Mbps depending on the endpoint connecting to it (it can auto-negotiate to the highest speed supported by the endpoint). Some ports support PoE (Power over Ethernet), a technology that carries power as well as data to the endpoints, so that the endpoints need not connect to a separate power source (for example, wireless access points and IP phones).
Some network switches are of the unmanaged type – you can connect computers to them and connect them to neighbouring switches to extend the network, but beyond that, few management features are supported. The advantage of unmanaged switches is their cost – they are inexpensive.
Some network switches are of the semi-managed type – they come with a web-browser-based management interface, limited QoS configuration, VLAN configuration, 802.1x support and other limited management features needed for the critical functions of the network. These features are limited to what the manufacturer provides. Such switches are slightly more expensive than the unmanaged variety but less expensive than fully manageable switches.
Some network switches are fully manageable – they allow VLANs to be configured per port, allow VLAN trunking, support web-based management, support the SNMP/RMON protocols so that each port can be monitored by an SNMP-based network management system, support RSTP (Rapid Spanning Tree Protocol) so that alternate cabling paths can be created for uplinks, support link aggregation so that a couple of cables from individual ports can connect to the uplink switch at double the speed, support port mirroring for management/call recording, support stacking, and offer many other useful features that help in maintaining a network.
Layer 3 Switches:
As the network becomes bigger and bigger, it becomes difficult to manage all the nodes using a single Layer 2 network segment. One of the main problems with such unsegmented networks is broadcast traffic, which can create performance bottlenecks on large networks. Another issue is the spread of viruses and botnets – with a segmented network, these remain mostly within their segments. That is why VLANs are advocated on large networks, segmenting the network by location, department, application, etc.
But the devices in one segment of the network invariably need to communicate with the other segments – especially in centralized networks where all the servers sit in a common VLAN and the nodes communicating with them are in different VLANs. In such cases there needs to be a Layer 3 network device that performs seamless inter-VLAN routing without affecting the performance of the network – this is exactly why Layer 3 switches are required. They are capable of performing both Layer 2 switching and Layer 3 routing at line rate. They also allow flexible network-wide security policies to be configured and perform Layer 3 QoS functions, which are critical in converged networks carrying a substantial amount of real-time traffic that requires low latency.
Passive Network Components
Structured Cabling has become quite common for inter-connecting the various active devices in an IP network. So the following passive components are commonly utilized in an IP Network for Structured Cabling:
      Cat 6 UTP (Un-shielded Twisted Pair) Copper Cables – These are the network cables that connect a PC/ endpoint to a network switch. Sometimes, they are also used to provide inter-connectivity between switches as long as the distance is not greater than 90 meters, which is the distance they support for transmitting data without using any repeater (repeater function is provided by using network switches).
      Cat 6 UTP Patch Cords – These are one meter/ 2 meter factory crimped cables with RJ-45 connectors installed at both ends. Actually, the Cat 6 Cables are not recommended to be directly terminated in either the network switch or the PC/endpoint. Only the patch cords terminate on both devices and connect to the Cat 6 Network cable through an I/O Box and UTP patch panel.
      Network Rack – Network racks are either wall-mounted or floor-standing types depending upon their size. Common sizes of network racks range from 6U to 42U. All network equipment is designed in multiples of 1U so that it can be accommodated into these racks with standard fittings. Racks generally have a width of 19”. Network racks come with a glass door, lock and key, fans for cooling, trays, power supplies, cable managers and other accessories.
      I/O Box and Face Plate: The I/O Box and Face Plate are kept near the computers and a UTP patch cord is used to connect the Face Plate with the network port in the PC. The Cat 6 UTP cable which comes from the switch terminates in to a permanent connection behind the I/O Box.
      UTP Patch Panel: The UTP Patch Panel is used for terminating all the Cat 6 Cables that come from various PC’s/endpoints in the network (Actually I/O Box) to the rack. These Cables are permanently connected behind the UTP Patch Panel and UTP Patch Cords connect from the respective ports in front to the network switches. This allows for flexible moves, adds and changes without disturbing the switch ports. All the ports in the patch panel are labelled for easy identification of which node they are connected to.
      Optical Fiber Cables: For carrying data over 90 meters, optical fiber cables are used. These cables use light for transmission of data instead of the electrical signals used by UTP cables. They can carry data for longer distances – even a few kilometers – without the signal having to be repeated in between. There are two types of cable – single mode (used for higher bandwidth requirements over longer distances) and multi mode (used for shorter distances). They connect directly to the fiber patch panel at either end. They usually come in multiples of 6 cores – 6 core, 12 core and 24 core being common. For each connection, two cores are used – one to transmit and one to receive.
      Fiber Patch Panel/ Patch Cords: The Optical Fiber Cables are terminated on either end using the Fiber Patch Panel, Pigtails and Coupler assembly. Actually each core of the Fiber Cable is spliced to fit in to the Fiber Patch Panel. A Fiber Patch Cord connects to the Patch Panel and the Fiber interface of the Network Switch. The Fiber interface is usually an SFP Port over which a Fiber Module is inserted (Mini-Gbic interface). This Fiber Module can connect to the fiber patch cord directly.

Internet

The internet is a network linking many different types of computer all over the world.
The internet is a communication technology that influences us on a large scale. The internet, also called the Net, is the world's largest network. It is not a single network but a global interconnected network of networks providing free exchange of information.
Network
A network is basically a collection of computers that allows various users to communicate and share resources.

Origin of Internet
ARPANET (Advanced Research Projects Agency Network) is the ancestor of the internet. The project was an experiment in reliable networking and linked defense and military research contractors. The funds were provided by the United States Department of Defense (DOD) in 1969. The ARPA agency is now called DARPA (the D stands for Defense). As the network grew too vast to manage, it was split into MILNET, with military sites, and a smaller ARPANET with non-military sites.
National Science Foundation
Unable to use the ARPANET, the NSF (National Science Foundation) built its own network, NSFNET. NSFNET was connected to regional networks, which in turn connected users in different regions. NSFNET in the United States provided service for education and research only.
Client and Server
A computer which provides services that can be used by other computers is called a server. Services like FTP, Archie and the World Wide Web are provided by servers. When we connect our computer to these services, our computer becomes a client which requests the services. The request travels over the network to the server, which interprets it and takes the desired action.
Server: A server is a computer in a network that is shared by multiple users. There are many kinds of servers in dedicated server networks – for example, file servers, print servers and communication servers.
Client Computer: In a communications network, the client computer is the machine that requests services from the supplying machine, the server.
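As a small illustration of these two roles, the Python sketch below (standard library only) starts a tiny web server in a background thread and then acts as a client requesting a page from it. The address 127.0.0.1 and port 8000 are arbitrary examples.

# A minimal sketch of the client/server idea: a tiny HTTP server plays the
# "server" role in a background thread, and urllib plays the "client".
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("127.0.0.1", 8000), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client sends a request; the server interprets it and returns a response.
with urllib.request.urlopen("http://127.0.0.1:8000/") as response:
    print("Status:", response.status)
    print("First bytes of the reply:", response.read(80))

server.shutdown()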
Host and Terminals
          The host command displays the Internet Protocol (IP) address of a machine and is one of the important commands. Internet addresses are stored in the form of numbers; these numbers correspond to, but do not replace, the characters in the textual address.
Terminals
A computer terminal is an electronic or electromechanical hardware device that is used for entering data into, and displaying data from, a computer or a computing system. Early terminals were inexpensive devices but very slow compared to punched cards or paper tape for input; as the technology improved and video displays were introduced, terminals pushed these older forms of interaction out of the industry. A related development was timesharing systems, which evolved in parallel and made up for any inefficiencies of the user's typing ability with the ability to support multiple users on the same machine, each at their own terminal.
The function of a terminal is confined to display and input of data; a device with significant local programmable data processing capability may be called a "smart terminal" or fat client. A terminal that depends on the host computer for its processing power is called a dumb terminal or thin client. A personal computer can run terminal emulator software that replicates the function of a terminal, sometimes allowing concurrent use of local programs and access to a distant terminal host system.

TCP/IP
TCP/IP stands for Transmission Control Protocol/Internet Protocol. It is the family of protocols used to organize computers and communication devices into a network. Data is sent in the form of packets, which is advantageous because packets can be routed and retransmitted independently. The main job of the Internet Protocol is to take raw data and transmit it to its destination. The Transmission Control Protocol takes care of the data transmission itself: it checks whether the transmitted data has arrived correctly, and it provides reliability, sequencing and flow control.
      Reliability – The data sent has to be received at the other end. We should be prepared to send the data till we get a confirmation from the receiving end specifying that the data has reached.
      Sequencing – Data should reach the destination in a proper order. The protocol assigns a number to each data byte and each packet identifies the sequence numbers.
      Flow control – If the sending computer transmits data faster than the receiving computer can process it, the protocol slows the sender down so that the receiver is not overwhelmed.

A set of rules has been established to take care of the communication between two computers connected to the Internet. These rules cover a wide range of functions and are grouped into protocols. Such a group of protocols is called the Internet Protocol Suite, often simply referred to as TCP/IP.

Four Layers of TCP/IP
      The link layer, or the data-link layer or network interface layer, includes the device driver in the operating system and corresponding network interface card in the computer.
      The network layer (or internet layer) handles the movement of packets around the network.
      The transport layer provides a flow of data between two hosts, for the application layer above.
      The application layer handles the details of the particular application.

Introduction to World Wide Web

 The Internet Application that is drawing the most attention is the WWW or World Wide Web. It is a series of servers that are interconnected through Hypertext.

The World Wide Web enables you to access information on the Internet without the use of complicated commands. By linking resources throughout the Internet, the Web brings a world of information to your doorstep.

WWW is an information retrieval system based on Hypertext. In hypertext selected words or areas on the screen are expandable, leading to more details about a subject.

World Wide Web uses Hyperlinks to display and retrieve information. WWW hypertext documents contain embedded links to information spread throughout the world.

The concept of the World Wide Web is similar to the concept of the Internet. If you recall, the Internet is basically a bunch of computers connected to each other through communication lines. Each computer on the Internet stores information, and that information can be stored in different ways. One way information is stored is on a Web page.

Hypertext and Hyperlink
            Hyperlinks are also known as links. Each link in a web page refers to another web page. When we click on a link, the browser fetches the page connected to the link, which in turn can contain more links to other places on the Net. A hyperlink can be a word, a group of words, a picture or a part of a picture. Hyperlinks help in retrieving information from different places and make it easy to retrace paths, as each linked page contains a reference to the previous page. In this system, all related information can be listed under a main topic. The system of interlinked documents is known as Hypertext. Hypertext is the method of presenting information through highlighted items. A hypertext system allows links to be created from two topics to the same document, connecting the pieces of information and thereby surfacing related information. The links between the pieces of information enable visualizing the web that has been created.
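As a small illustration of hypertext, the Python sketch below uses the standard html.parser module to pull the hyperlinks (the href values of anchor tags) out of a fragment of HTML. The page content and URLs are invented for the demonstration.

# A minimal sketch: extract the hyperlinks embedded in a hypertext document.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                        # an anchor tag is a hyperlink
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = """
<p>Read the <a href="https://example.com/osi">OSI notes</a> or go
back to the <a href="index.html">home page</a>.</p>
"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)    # ['https://example.com/osi', 'index.html']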

Uniform Resource Locator
            A Uniform Resource Locator (URL) is a way of naming resources and is used mainly to link pages in the WWW. It gives us the type and the location of the resource. A Uniform Resource Locator identifies a particular web page. URLs are an important part of the web, as they are the codes that name all the pages on the web.
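A URL can be taken apart to see the type and location information it carries. The sketch below uses Python's standard urllib.parse module; the URL itself is an invented example.

# A minimal sketch of what a URL encodes: scheme (type), host (location),
# path (which resource), plus optional query and fragment.
from urllib.parse import urlparse

url = "https://www.example.com:443/notes/unit4.html?topic=osi#layers"
parts = urlparse(url)

print("Scheme  :", parts.scheme)    # https  -> how to fetch the resource
print("Host    :", parts.hostname)  # www.example.com -> where it lives
print("Port    :", parts.port)      # 443
print("Path    :", parts.path)      # /notes/unit4.html -> which resource
print("Query   :", parts.query)     # topic=osi
print("Fragment:", parts.fragment)  # layers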
Web Browsers
            A web browser is a client program that enables you to search through the information provided by a specific type of server.
Example: Internet Explorer
            Internet Explorer is a commercial World Wide Web browser distributed by Microsoft Corporation. It is a full-featured graphical browser that provides access to multimedia resources on the World Wide Web.
Like other Web browsers, Internet Explorer displays HTML files containing links to resources throughout the Internet and enables you to access those resources by selecting Hypertext links.
It offers an easy-to-use, customizable browser-the latest in communications tools, access to all the popular multimedia formats, and support for ActiveX and Java.
To use Internet Explorer, you must have domain access to the Internet. Once the connection is established, launch Internet Explorer by double-clicking the icon present on your desktop.

Other browsers include Mozilla Firefox, Google Chrome, Opera and Safari.

IP Address
IP stands for Internet Protocol, so an IP address is an Internet Protocol address. What does that mean? An Internet Protocol is a set of rules that govern Internet activity and facilitate completion of a variety of actions on the World Wide Web. Therefore an Internet Protocol address is part of the systematically laid out interconnected grid that governs online communication by identifying both initiating devices and various Internet destinations, thereby making two-way communication possible.
An IP address consists of four numbers, each of which contains one to three digits, with a single dot (.) separating each number or set of digits. Each of the four numbers can range from 0 to 255. Here's an example of what an IP address might look like: 78.125.0.209. This innocuous-looking group of four numbers is the key that empowers you and me to send and retrieve data over our Internet connections, ensuring that our messages, as well as our requests for data and the data we've requested, will reach their correct Internet destinations. Without this numeric protocol, sending and receiving data over the World Wide Web would be impossible.
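The dotted-quad format described above can be checked programmatically. The sketch below uses Python's standard ipaddress module; the candidate strings are examples chosen to show one valid and two invalid addresses.

# A minimal sketch: validate the "four numbers, each 0-255" IPv4 format.
import ipaddress

for candidate in ["78.125.0.209", "300.1.2.3", "not.an.ip.addr"]:
    try:
        addr = ipaddress.IPv4Address(candidate)
        print(f"{candidate!r} is a valid IPv4 address ({int(addr)} as a 32-bit number)")
    except ipaddress.AddressValueError:
        print(f"{candidate!r} is not a valid IPv4 address")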
IP addresses can be either static or dynamic. Static IP addresses never change. They serve as a permanent Internet address and provide a simple and reliable way for remote computers to contact you. Static IP addresses reveal such information as the continent, country, region, and city in which a computer is located; the ISP (Internet Service Provider) that services that particular computer; and such technical information as the precise latitude and longitude of the country, as well as the locale, of the computer. Many websites provide IP address look-up services to their visitors, free of charge. If you're curious about your own IP address, you can locate these websites by performing a Google search.
Dynamic IP addresses are temporary and are assigned each time a computer accesses the Internet. They are, in effect, borrowed from a pool of IP addresses that are shared among various computers. Since a limited number of static IP addresses are available, many ISPs reserve a portion of their assigned addresses for sharing among their subscribers in this way. This lowers costs and allows them to service far more subscribers than they otherwise could.
Static IP addresses are generally preferable for such uses as VOIP (Voice over Internet Protocol), online gaming, or any other purpose where users need to make it easy for other computers to locate and connect to them. Easy access can also be facilitated when using a dynamic IP address through the use of a dynamic DNS service, which enables other computers to find you even though you may be using a temporary, one-time IP address. This often entails an extra charge, however, so check with your ISP.
Static IP addresses are considered somewhat less secure than dynamic IP addresses, since they are easier to track for data mining purposes. However, following safe Internet practices can help mitigate this potential problem and keep your computer secure no matter what type of IP address you use.

Domain Name
              Domain Names consist of a number of sub domain names, with the most general information to the right and the most specific information to the left. The sub domain names are separated by a dot. 
A domain name server translates a domain name into an IP address. User IDs associated with a computer identify the people who use the Internet.
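The translation a domain name server performs can be requested from Python with the standard socket module. The sketch below assumes a working internet connection and uses example.com purely as a well-known demonstration domain.

# A minimal sketch of a DNS lookup: translate a name into an IP address.
import socket

domain = "example.com"
try:
    print(domain, "resolves to", socket.gethostbyname(domain))
except socket.gaierror as err:
    print("Lookup failed:", err)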
Tips on Selecting a Domain Name:
      Make Domain Names smaller and easier to remember.
      Be Creative and look for a Domain Name that relates most to what your website deals with.
      Do not go for names that read like personal or hobby sites.
      Add keywords that best explain what your site caters for.
      If there is a network of sites that you would like to build, think of a common keyword to use as a suffix or a prefix.

Internet Service Provider
              The Internet Service Provider (ISP) is a company that offers a connection to the Internet via an Internet account. ISPs generally do not provide information on the Internet, nor do they own the Internet. The ISP has a computer network that is connected to the Internet, and it gives the user an account on that network. The ISP provides several features. To distinguish one user from another, the Internet Service Provider gives every user a user ID and a password. The user ID is generally the user's name and should be easily identifiable as belonging to the user.

The password should be from six to twelve characters in length and should ideally contain at least one number or punctuation character. The password should never be the user's name. This minimizes the chance that others will be able to break into the account. The provider may assign the user a temporary password which can be changed to a permanent one at a later date. The domain is the name of the Internet provider's network. It also defines how an e-mail is being sent. For example, the internet mail address may look like userid@domain.com. Some persons prefer to have a custom domain name instead of the name of the provider's domain. In such cases, the ISP searches for names that have not already been taken by others (the user cannot choose ibm.com) and then applies the name. The domain name then points to the provider's computers unless the user moves to another provider.

The factors that determine the choice of service providers are:
      Service area
      Types of connections offered
      Target market of provider
      Support
      Training
      Reliability
      Security
      Cost
Internet Security
            Today the world is becoming web-centric. Everything available in the market is just a mouse click away. Thus, as web transactions grow, merchants' concern regarding fraud on the part of buyers is increasing. To deal with this problem, the solution lies in a three-party protocol, with a trusted third party certifying the identity of the buyer and the seller through techniques that validate and authenticate both parties. There are two possibilities in this regard:
      The SET (Secure Electronic Transaction) protocol, which includes a method for authenticating all parties with third-party certification, or
      SSL, which could be extended to include trusted third-party, certificate-based authorization mechanisms.
      In the coming lessons I’ll discuss SSL vs. SET in detail.
SET
            Now that we know a little about Secure Electronic Transaction, let’s examine how it came into being.

In 1995, the leading credit-card associations, MasterCard and Visa, were faced with a dilemma: software vendors in the E-commerce market were creating competing standards for accepting credit cards over the Internet.  In one camp was Microsoft, and in the other camp, IBM.  Seeing this as a potential block to widespread adoption of credit-card use over the Internet, the credit card associations agreed to put competition aside and persuaded the software vendors to do likewise.  The result was the SET Consortium, an international initiative which eventually included other credit-card companies such as American Express and Novus-Discover, as well as nearly every major E-commerce software and security vendor.

SSL
            The Secure Socket Layer protocol was developed by Netscape and is the most popular process in use to protect sensitive information such as payment data. SSL encrypts data between the customer and the merchant. The data is encrypted before being transmitted over the Internet through a communication connection constructed by SSL. The setup of this connection is called the handshake. When SSL is invoked, the handshake exchanges identifying information between the two parties, selects one of several encryption algorithms to be used, and creates the necessary session-specific encryption keys. For this, the customer's browser must locate the merchant's public key, which is stored at the merchant's web commerce site. Using this public key, the customer's browser creates an encrypted message which only the merchant can read. At this point the handshake process is complete.

 Now the customer’s browsers and merchant’s server exchange data encrypted, using conventional secret key encryption before being transported over the insecure network. During the entire process any third person browsing net is not exposed to the encrypted data because of communication connection constructed by SSL. Now we have covered both SET & SSL and in the next topic we will draw a line between SET & SSL.

            Now that we know what SET and SSL are, we are in a position to compare them.
      Certification: In SSL, certification is pair-wise and the consumer may not have a certificate, whereas in SET certification of all parties is done by trusted third parties.
      Authentication: SSL has no mechanism to authenticate the parties, whereas in SET both the customer and the merchant are authenticated.
      Non-repudiation: SSL has no mechanism to capture the customer's commitment, whereas in SET the customer digitally signs the commitment to purchase and pay.
      Risk of merchant fraud: In SSL the customer gives key financial data to the merchant, with the attendant risk of merchant fraud, but in SET the customer gives key financial data to the payment gateway.
      Liability for customer fraud: The merchant is liable in case of fraud under SSL, whereas the financial institution is liable in a similar case under SET.
      Infrastructure: SSL infrastructure is already in place in businesses and web servers, whereas SET has been proven in pilot demonstrations, is available as packaged software or software tools, and is not yet widely deployed.
      Anonymity versus auditability: SSL allows each party to certify itself at the onset of a transaction, but the assurance of identity is weak; SET requires all parties to be certified repeatedly throughout the transaction.
      User acceptance: SSL is standardized by the IETF for interoperability, whereas SET is standardized by SET Co. SSL is widely used by customers on the web; SET is not yet in widespread use.
Internet Requirement
Identifying the Various Hardware Components

To set up a network of computers, the following hardware components are required at a minimum.
      Access device/Network Interface card
      Cables to transmit the data
      Repeaters and Routers
The Access Device/Network Interface Card (NIC):  The Network Interface Card is the basic component that allows a computer to communicate over a network. The NIC's main job is to take the data broken up by the software and send it across the network over the cables.

The Cables: Cables are another integral part of the network since they are responsible for the actual transportation of data. Various types of cables can be used in setting up a network:
      Co-axial Cables
      Twisted Pair Cables
      Fiber Optic Cables
Identifying the Software Components

Software also plays a vital role in setting up a network as much as the hardware. The basic software components required are:
Multi-user Operating Systems:  For general communication between computers, we require software that can support multiple users and establish communication between them. Operating systems designed specifically for networking include Windows NT, SCO UNIX, Linux and Sun Solaris. Operating systems that support minimum networking requirements include Windows 95/98 and Windows 3.11 for Workgroups.
Device Drivers:  Device drivers are sets of commands that the operating system uses to control a specific hardware device. The device driver for the Network Interface Card is therefore required so that the operating system can use the card with ease.
Protocol Suites:  Protocols are the main software component of a network. A protocol provides the rules and specifications for communication between computers and also provides a unique identity to each computer in the network, so that data sent to a particular computer does not go to the wrong one. The most widely used protocol suites today are TCP/IP, IPX/SPX, AppleTalk and NetBEUI.

Search Engine
Internet Search Engines are powerful online resources that help you find very specific pieces of information. When you pose a query to a search engine, it matches your query words against the records in its database to present a listing of possible documents meeting your request.
 The most popular search engine is Google.
Search engines are usually characterized by very large indexed databases, which contain pointers to millions of URLs.

Engines use two methods to rank Web pages: 
      The Text of the page and
      The META Tags.

META tags are specific HTML codes that briefly describe the page and give keywords for the search engine to use.
                                                                                       
Many Search Engines develop their vast databases by using a software application to automate the exploration of the Internet.  
      These applications (known as robots, spiders, and Web crawlers) visit Web pages, copy them into a local database, and then explore all the links referenced in the freshly copied page. 
      You can also "invite" the search engine to explore your site.  As the web page information is completed, it is indexed and made available to the Internet through a search interface. 
      Each search engine may offer different options and parameters that can be used to search its database. It creates a "search results" web page of locations that contain information on the specific topic.
      A Search Engine locates database entries that match your designated keywords.
      Each search engine has specific parameters that affect how it responds to keyword search inquiries.

Net Surfing
NET SURFING means searching the Internet for a web site or for information.
To find the required information on the internet, there are many search engines maintained by websites such as Yahoo, Google, Rediff, MSN, Indiatimes and many others.
All these services are free of cost; all you pay for is your computer and your internet connection (ISP).
In the search box of, say, Yahoo, Rediff or Google, type the words you are searching for (for example, computers and networks) and click the Go or Search button. The engine will bring up many related links; click any link that you feel satisfies your query. This is how you get the information.

Internet Services
           
      Electronic Mail
            It is the most used feature on the Internet and all service providers give you an E-mail account as soon as you sign up. It lets you send messages, documents, even voice and video to people anywhere in the world.  Many businesses are finding that Email is becoming more important than the Fax machine for them to stay in contact with their customers.
Electronic Mail (E-mail) is a service for exchanging messages among computers. It has become very popular because, unlike in regular mail, there is no delay. Postage is not required.
Most Internet access providers provide Email accounts for their dial-up customers. However, corporations with leased lines usually provide their own Email gateways for their employees.
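For illustration, the sketch below shows how a message could be handed to a mail server with Python's standard smtplib module. The server name, port, addresses and credentials are placeholders; substitute the values supplied by your own provider.

# A minimal sketch of sending an e-mail over SMTP. All names below are
# placeholders, not real accounts or servers.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "student@example.com"
msg["To"] = "friend@example.org"
msg["Subject"] = "Hello from Python"
msg.set_content("E-mail sent programmatically over SMTP.")

with smtplib.SMTP("smtp.example.com", 587) as server:   # provider's SMTP host
    server.starttls()                                    # switch to an encrypted channel
    server.login("student@example.com", "app-password")  # placeholder credentials
    server.send_message(msg)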
      File Transfer Protocol
File Transfer Protocol (FTP) is a set of conventions for copying files from one place to another on the Net, and the programs that implement it are also commonly called FTP. The protocol checks that the information is sent and received correctly. We can download programs, pictures and text to the computer by using FTP.
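For illustration, the sketch below shows a file download over FTP using Python's standard ftplib module. The host name, login details, directory and file name are placeholders, not a real server.

# A minimal sketch of an FTP download. ftp.example.com, the credentials,
# the directory and the file name are placeholders.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login("anonymous", "guest@example.com")   # many servers allow anonymous login
    ftp.cwd("/pub")                               # change to a remote directory
    with open("readme.txt", "wb") as local_file:
        ftp.retrbinary("RETR readme.txt", local_file.write)
    print("Downloaded readme.txt")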
      Telnet
Telnet is a utility, which allows you to logon to another system and use various services available on that host.
It acts like an Internet tool that lets you travel from your own workstation out into cyberspace. People in one location frequently want to use a computer in another location. For example, they may want to read their email on a business trip, access data in a branch or the main office, or even look at a library catalogue to see newly published books.
Telnet lets you log in to another computer on the Internet and issue commands from your computer. You can log in to almost any computer on the Internet that provides public access, including a remote computer on which you have an account. Whether you Telnet using domain access or terminal emulation, your computer is emulating a terminal.
      When you use Domain Access to connect to the Internet, an application program normally executes on your computer.
      When you access the Internet using Terminal Emulation, the command you enter at your terminal executes an application program on your host computer.
Domain Access
The term refers to a computer that is connected to the Internet in such a way that it can run Internet client applications directly. With domain access, your computer normally has a unique IP address. It does not rely on a host computer as it does in terminal emulation access.
Terminal Emulation
It is an operating mode in which your computer acts as a terminal rather than an intelligent device. In terminal emulation, you can input commands through your computer's keyboard and view the output on the screen. However, another computer, called the host, actually runs the programs you require.
      Usenet
Usenet is a system comprising thousands of distributed bulletin boards called newsgroups, which are similar to mailing lists. In a Usenet newsgroup, articles are distributed to the interested groups, and the messages are read with a program called a newsreader. Internet services such as SLIP/PPP accounts, shell accounts, America Online, CompuServe and the Microsoft Network (MSN) support Usenet. The type of Usenet access and the information available vary depending on the Internet provider or the commercial account.
Case Study
A case study is a puzzle that has to be solved. The first thing to remember about writing a case study is that the case should have a problem for the readers to solve. The case should have enough information in it that readers can understand what the problem is and, after thinking about it and analyzing the information, come up with a proposed solution. Writing an interesting case study is a bit like writing a detective story: you want to keep your readers very interested in the situation. A good case is more than just a description. It is information arranged in such a way that the reader is put in the same position as the case writer was at the beginning, when he or she was faced with a new situation and asked to figure out what was going on. A description, on the other hand, arranges all the information, comes to conclusions and tells the reader everything, so the reader really doesn't have to work very hard. When you write a case, here are some hints on how to do it so that your readers will be challenged, will "experience" the same things you did when you started your investigation, and will have enough information to come to some answers.

Intranet
Intranet is a network, similar to the Internet, within a company or organization. Intranets offer many of the features and services available on the Internet, such as a Web system, e-mail, newsgroups and chat. Intranets allow company information and resources to be shared among employees. Intranets are particularly useful for companies that have offices in multiple locations or whose employees often work in groups.
Extranet
Many companies are now making parts of their private intranet available on the Internet, creating an extranet. An extranet allows users outside the company, such as customers and other businesses, to access specific company information. Extranets are not for the general public and security features, such as firewalls and encryption software, are often used to protect extranet information from unauthorized access.


Cyber Law


Introduction to Cyber Law

Cyber law (also referred to as cyberlaw) is a term used to describe the legal issues related to the use of communications technology, particularly "cyberspace", i.e. the Internet. It is less a distinct field of law, in the way that property or contract law are, than an intersection of many legal fields, including intellectual property, privacy, freedom of expression, and jurisdiction. In essence, cyber law is an attempt to apply laws designed for the physical world to human activity on the Internet.
Issues of jurisdiction and sovereignty have quickly come to the fore in the era of the Internet. The Internet does not tend to make geographical and jurisdictional boundaries clear, but Internet users remain in physical jurisdictions and are subject to laws independent of their presence on the Internet. As such, a single transaction may involve the laws of at least three jurisdictions: 1) the laws of the state/nation in which the user resides, 2) the laws of the state/nation that apply where the server hosting the transaction is located, and 3) the laws of the state/nation which apply to the person or business with whom the transaction takes place. So a user in one of the United States conducting a transaction with another user in Britain through a server in Canada could theoretically be subject to the laws of all three countries as they relate to the transaction at hand.

Jurisdiction is an aspect of state sovereignty and it refers to judicial, legislative and administrative competence. Although jurisdiction is an aspect of sovereignty, it is not coextensive with it. The laws of a nation may have extra-territorial impact extending the jurisdiction beyond the sovereign and territorial limits of that nation. This is particularly problematic as the medium of the Internet does not explicitly recognize sovereignty and territorial limitations. There is no uniform, international jurisdictional law of universal application, and such questions are generally a matter of conflict of laws, particularly private international law. An example would be where the contents of a web site are legal in one country and illegal in another. In the absence of a uniform jurisdictional code, legal practitioners are generally left with a conflict of law issue.

Another major problem of cyber law lies in whether to treat the Internet as if it were physical space (and thus subject to a given jurisdiction's laws) or to act as if the Internet is a world unto itself (and therefore free of such restraints). Those who favor the latter view often feel that government should leave the Internet community to self-regulate. John Perry Barlow, for example, has addressed the governments of the world and stated, "Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different" (Barlow, A Declaration of the Independence of Cyberspace [1]). A more balanced alternative is the Declaration of Cybersecession: "Human beings possess a mind, which they are absolutely free to inhabit with no legal constraints. Human civilization is developing its own (collective) mind. All we want is to be free to inhabit it with no legal constraints. Since you make sure we cannot harm you, you have no ethical right to intrude our lives. So stop intruding!" [2]. Other scholars argue for more of a compromise between the two notions, such as Lawrence Lessig's argument that "The problem for law is to work out how the norms of the two communities are to apply given that the subject to whom they apply may be in both places at once" (Lessig, Code 190).

Though rhetorically attractive, cyber secession initiatives have had little real impact on the Internet or the laws governing it. In practical terms, a user of the Internet is subject to the laws of the state or nation within which he or she goes online. Thus, in the U.S., Jake Baker faced criminal charges for his e-conduct (see Free Speech), and numerous users of peer-to-peer file-sharing software were subject to civil lawsuits for copyright infringement. This system runs into conflicts, however, when these suits are international in nature. Simply put, legal conduct in one nation may be decidedly illegal in another. In fact, even different standards concerning the burden of proof in a civil case can cause jurisdictional problems. For example, an American celebrity, claiming to be insulted by an online American magazine, faces a difficult task of winning a lawsuit against that magazine for libel. But if the celebrity has ties, economic or otherwise, to England, he or she can sue for libel in the British court system, where the standard of “libelous speech” is far lower.

1. Law: Statutes and court decisions enacted by governments ("East Coast Code") apply to conduct on the internet just as they apply to conduct in the physical world.

2. Architecture: West Coast Code: these mechanisms concern the parameters of how information can and cannot be transmitted across the internet. Everything from internet filtering software (which searches for keywords or specific URLs and blocks them before they can even appear on the computer requesting them), to encryption programs, to the very basic architecture of the TCP/IP protocol, falls within this category of regulation. It is arguable that all other modes of regulation either rely on, or are significantly supported by, regulation via West Coast Code.

3. Norms: As in all other modes of social interaction, conduct is regulated by social norms and conventions in significant ways. While certain activities or kinds of conduct online may not be specifically prohibited by the code architecture of the internet, or expressly prohibited by applicable law, nevertheless these activities or conduct will be invisibly regulated by the inherent standards of the community, in this case the internet “users.” And just as certain patterns of conduct will cause an individual to be ostracized from our real world society, so too certain actions will be censored or self-regulated by the norms of whatever community one chooses to associate with on the internet.

4. Markets: Closely allied with regulation by virtue of social norms, markets also regulate certain patterns of conduct on the internet. While economic markets will have limited influence over non-commercial portions of the internet, the internet also creates a virtual marketplace for information, and such information affects everything from the comparative valuation of services to the traditional valuation of stocks. In addition, the increase in popularity of the internet as a means for transacting all forms of commercial activity, and as a forum for advertisement, has brought the laws of supply and demand into cyberspace.
While there is some United States law that restricts access to materials on the internet, it does not truly filter the internet. Many Asian and Middle Eastern nations use various combinations of code-based regulation (one of Lessig's four methods of net regulation) to block material that their governments have deemed inappropriate for their citizens to view. China and Saudi Arabia are two examples of nations that have achieved high degrees of success in regulating their citizens' access to the internet (for further reading, please see ONI's studies on both).

Cyber Crime

            Cyber crime encompasses any criminal act dealing with computers and networks (called hacking). Additionally, cyber crime also includes traditional crimes conducted through the Internet. For example, hate crimes, telemarketing and Internet fraud, identity theft, and credit card account thefts are considered to be cyber crimes when the illegal activities are committed through the use of a computer and the Internet.
Cyber Contract

A legal contract made between parties using the cyber medium, covering e-commerce, e-services, e-governance and so forth, to operate in an online mode, is known as a cyber contract. In India, not much attention is given to drafting a proper contract framework appropriate to the transaction (Vivek, S. Ed., 2005). From time to time, attempts have been made by various researchers to make a globally enforceable cyber contract. Robert L. Percival (Renault, O., & Percival, R. L., 2004) has tried to learn from the increasing wealth of judicial decisions in both Canada and the United States considering the creation and enforceability of electronic contracts, coupled with electronic commerce legislative direction. This has resulted in the identification of certain steps and measures that drafters of electronic and online agreements can take, or avoid, in order to achieve greater certainty that such electronic agreements will stand as enforceable contractual instruments. Governments are challenging fundamental legal concepts such as contracts, to develop flexible frameworks that protect traditional contract law while recognizing and expanding it to include technology's borderless capabilities and maintain integrity for all legal players like judges, lawyers, legislators, and business people (Hill, J. E., 2003). In the current scenario, business and legal limitations revolve around paper technology, which poses a challenge to conducting business in today's information economy. Electronic contracts, with their uniqueness, create tremendous uncertainty in international legal and business environments because the law is slow to respond to new technology (Sabet, R.V., 1996). The status of law to monitor cyber commerce is often inadequate.
Cyber Privacy

Cyber-privacy is mostly defined by the services used by individuals. Whatever agreements individuals accept when they sign up for a service usually sign away their right to privacy. In other cases, the reasonable expectation of privacy is not reasonable if the behaviors or communications in question were knowingly exposed to public view.

IT Act

The Information Technology Act 2000 (also known as ITA-2000, or the IT Act) is an Act of the Indian Parliament (No 21 of 2000) notified on October 17, 2000. This act is being opposed by Save Your Voice campaign and other civil society organizations in India.

The United Nations General Assembly by resolution A/RES/51/162, dated the 30 January 1997 has adopted the Model Law on Electronic Commerce adopted by the United Nations Commission on International Trade Law. This is referred to as the UNCITRAL Model Law on E-Commerce.

Following the UN Resolution India passed the Information Technology Act 2000 in May 2000 and notified it for effectiveness on October 17, 2000.

The Information Technology Act 2000 has been substantially amended through the Information Technology (Amendment) Act 2008, which was passed by the two houses of the Indian Parliament on December 23 and 24, 2008. It received Presidential assent on February 5, 2009 and was notified for effectiveness on October 27, 2009.

A complete history of how the current (2008) version of the Information Technology Act evolved over the period between 1998 and 2009 is available at the reference link given under external links below.



The Information Technology Act 2000 consisted of 94 sections segregated into 13 chapters. Four schedules formed part of the Act.

In the 2008 version of the Act, there are 124 sections (excluding 5 sections that have been omitted from the earlier version) and 14 chapters. Schedules I and II have been replaced, and Schedules III and IV are deleted.
