DNS Server and Name Space distribution

The information contained in the domain name space must be stored. However, it is neither efficient nor reliable to store this information on a single computer system: inefficient because requests from all over the world would put a heavy load on the system, and unreliable because any failure would make the data unavailable.

DNS Servers
The solution to this problem is to distribute the information among many computers called DNS servers. One way to do this is to divide the whole name space into many domains based on the first level. In other words, we let the root stand alone and create as many domains (subtrees) as there are first-level nodes. Because a domain created this way could be very large, DNS allows domains to be divided further into smaller domains (subdomains). Each server can be responsible (authoritative) for either a large or a small domain. In other words, we have a hierarchy of servers in the same way that we have a hierarchy of names.

Zone
What a server is responsible for, or has authority over, is called a zone. If a server accepts responsibility for a domain and does not divide the domain into smaller domains (subdomains), the "domain" and the "zone" refer to the same thing. The server makes a database called a zone file and keeps all the information for every node under that domain. However, if a server divides its domain into subdomains and delegates part of its authority to other servers, "domain" and "zone" refer to different things. The information about the nodes in the subdomains is stored in the servers at the lower levels, with the original server keeping some sort of reference to these lower-level servers. Of course, the original server does not free itself from responsibility totally: it still has a zone, but the detailed information is kept by the lower-level servers.

A server can also divide part of its domain and delegate responsibility while keeping part of the domain for itself. In this case, its zone consists of detailed information for the part of the domain that is not delegated and references to those parts that are delegated.
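As a rough illustration (this is a toy model, not real DNS server code, and all names in it are hypothetical), a delegating server's zone can be thought of as detailed records for the part it keeps plus referrals for the parts it delegates:

```python
# Toy model of a zone with partial delegation (illustration only).
# The server keeps a detailed record for "mail.example.com" but has
# delegated "eng.example.com" to another (hypothetical) name server.
zone = {
    "records": {
        "mail.example.com": "192.0.2.10",
    },
    "delegations": {
        "eng.example.com": "ns1.eng.example.com",  # a referral, not an answer
    },
}

def lookup(zone, name):
    """Return ('answer', ip) if the zone holds the record itself, or
    ('referral', server) if the name falls under a delegated subdomain."""
    if name in zone["records"]:
        return ("answer", zone["records"][name])
    for subdomain, server in zone["delegations"].items():
        if name == subdomain or name.endswith("." + subdomain):
            return ("referral", server)
    return ("not found", None)

print(lookup(zone, "mail.example.com"))      # answered directly from this zone
print(lookup(zone, "www.eng.example.com"))   # referred to the delegated server
```

The point of the sketch is that the delegating server still answers for its zone, but for delegated names it can only hand back a reference to a lower-level server.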

Root Server
A root server is a server whose zone consists of the whole tree. A root server usually does not store any information about domains but delegates its authority to other servers, keeping references to those servers. Currently there are 13 root servers, each replicated at many sites, covering the whole domain space. The servers are distributed all around the world.

Primary and Secondary Servers
DNS defines two types of servers: primary and secondary. A primary server is a server that stores the file about the zone for which it is an authority. It is responsible for creating, maintaining, and updating the zone file. It stores the zone file on a local disk.

A secondary server is a server that transfers the complete information about a zone from another server (primary or secondary) and stores the file on its local disk. The secondary server neither creates nor updates the zone files. If updating is required, it must be done by the primary server, which sends the updated version to the secondary.

The primary and secondary servers are both authoritative for the zone they serve. The idea is not to put the secondary server at a lower level of authority but to create redundancy for the data, so that if one server fails, the other can continue serving clients. Note also that a server can be a primary server for one zone and a secondary server for another zone.

Zone Transfer
A primary server loads all information from the disk file; the secondary server loads all information from the primary server. When the secondary server downloads information from the primary, it is called a zone transfer.

Related Posts:
  • Domain Name Space
  • DNS In The Internet

Domain Name Space

Domain names must be arranged in a hierarchical structure for proper administration, and the domain name space is devised for this purpose. In this design, the domain names are arranged in an inverted-tree structure with the root at the top. The tree can have only 128 levels: level 0 (root) to level 127. The root glues the whole tree together, and each level of the tree defines a hierarchical level.

Domain Name
Each node in the tree has a domain name. A full domain name is a sequence of labels separated by dots (.). Domain names are always read from the node up to the root.

Label
Each node in the domain name space tree has a label, which is a string with a maximum of 63 characters. The root label is a null string (empty string). DNS requires that children of a node (nodes that branch from the same node) have different labels, which guarantees the uniqueness of domain names.
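The two label rules above can be sketched as a pair of checks (hypothetical helper functions, not part of any DNS library):

```python
def valid_label(label):
    """A label is a string of at most 63 characters;
    only the root's label may be the null (empty) string."""
    return len(label) <= 63

def children_unique(labels):
    """Children of the same node must all have different labels,
    which is what guarantees domain-name uniqueness."""
    return len(labels) == len(set(labels))

print(valid_label("challenger"))        # True: well under 63 characters
print(valid_label("x" * 64))            # False: exceeds the 63-character limit
print(children_unique(["atc", "cs"]))   # True: distinct sibling labels
print(children_unique(["atc", "atc"]))  # False: duplicate sibling labels
```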

Domain
A domain is a subtree of the domain name space. The name of the domain is the domain name of the node at the top of the subtree. A domain may itself be divided into domains (or subdomains as they are sometimes called).

A domain name can be broadly classified as one of two types:
  • Fully Qualified Domain Name (FQDN)
  • Partially Qualified Domain Name (PQDN)
Fully Qualified Domain Name (FQDN)
If a label is terminated by a null string, it is called a fully qualified domain name (FQDN). An FQDN is a domain name that contains the full name of the host. It contains all the labels, from the most specific to the most general, that uniquely define the name of the host. For example, the domain name
challenger.atc.fhda.edu.
is the FQDN of a computer named challenger installed at the Advanced Technology Center (ATC) at De Anza College. A DNS server can only match an FQDN to an address. Note that the name must end with a null label (string), but because null here means nothing, the name ends with a dot (.).

Partially Qualified Domain Name (PQDN)
If a label is not terminated by a null string, it is called a partially qualified domain name (PQDN). A PQDN starts from a node, but it does not reach the root. It is used when the name to be resolved belongs to the same site as the client. Here the resolver can supply the missing part, called the suffix, to create an FQDN. For example, if a user at the fhda.edu site wants to get the IP address of the challenger computer, he or she can define the partial name
challenger

The DNS client adds the suffix atc.fhda.edu. before passing the name to the DNS server.
The DNS client normally holds a list of suffixes; a null suffix is used when the user already defines an FQDN.
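The suffix-completion step can be sketched in a few lines (a simplified illustration; real resolvers try several suffixes from a configured search list, and the suffix below mirrors the article's example):

```python
def qualify(name, suffixes):
    """Turn a PQDN into an FQDN by appending a configured suffix.
    A name ending with a dot is already an FQDN and is left unchanged
    (the effect of the null suffix described in the text)."""
    if name.endswith("."):
        return name  # already fully qualified
    return name + "." + suffixes[0]

suffixes = ["atc.fhda.edu."]  # the resolver's configured suffix list
print(qualify("challenger", suffixes))            # challenger.atc.fhda.edu.
print(qualify("challenger.atc.fhda.edu.", suffixes))  # unchanged
```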


Domain Name System (DNS)

The Domain Name System (DNS) is used to identify any entity connected to the Internet: a computer, another resource such as a printer, or a website hosted on a server. Each resource connected to the network is assigned a unique IP address, which uniquely identifies the connection of a host to the Internet. However, people prefer to use names instead of addresses. Just imagine if you had to type 192.168.010.010 instead of your website's name to access it. Not only would it be difficult to remember, but it would also be difficult to maintain. Therefore, a mapping system was created to map IP addresses to more convenient logical names. This system is called the Domain Name System (DNS). The logical name for a website is called its domain name.

When the Internet was small, mapping was done using a host file. The host file had only two columns, comprising name and address. Every host could store the host file on a disk and update it periodically from a master host file. When a program or a user wanted to map a name to an address, the host consulted the host file and found the mapping.
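The two-column host-file lookup can be sketched in a few lines (the addresses below are documentation examples, not real hosts):

```python
# A minimal sketch of early host-file name resolution.
HOSTS_FILE = """\
192.0.2.1    alpha
192.0.2.2    beta
"""

def parse_hosts(text):
    """Build a name -> address mapping from two-column host-file lines."""
    table = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            address, name = parts[0], parts[1]
            table[name] = address
    return table

table = parse_hosts(HOSTS_FILE)
print(table["alpha"])  # 192.0.2.1
```

The sketch also makes the scaling problem obvious: the whole table must live on every host and be re-downloaded on every change, which is exactly why DNS replaced it.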

Today, however, it is impossible to have one single host file relating every address to a name and vice versa. The host file would be too large to store on every host. In addition, it would be impossible to update all the host files in the world every time there is a change.

One solution would be to store the entire host file on a single computer and allow every computer that needs mapping access to this centralized information. But we know that this would create a huge amount of traffic on the Internet.
Another solution, the one used today, is to divide this huge amount of information into smaller parts and store each part on a different computer. In this method, the host that needs mapping can contact the closest computer (in a data center) holding the mapping information.

Name Space
To be unambiguous, the names assigned to resources (machines and websites) should be carefully selected from a name space with complete control over the binding between the names and IP addresses. In other words, the names should be unique because the addresses are unique. A name space that maps each address to a unique name can be organized in two ways: flat and hierarchical.

Flat Name Space
In a flat name space, a name is assigned to an address. A name in this space is a sequence of characters without structure. The names may or may not have a common section; if they do, it has no meaning. The main disadvantage of a flat name space is that it cannot be used in a large system such as the Internet, because it must be centrally controlled to avoid ambiguity and duplication.

Hierarchical Name Space
In a hierarchical name space, each name is made up of several parts. The first part can define the nature of the organization, the second part can define the name of the organization, the third part can define departments in the organization, and so on. In this case, the authority to control and assign the name space can be decentralized. A central authority can assign the part of the name that defines the nature and the name of the organization. The responsibility for the rest of the name can be given to the organization itself. The organization can add suffixes (or prefixes) to the name to define its hosts or resources.
The management of the organization need not worry that the prefix chosen for a host is taken by another organization because, even if part of an address is the same, the whole address is different. For example, assume two colleges and a company each call one of their computers challenger. The first college is given a name by the central authority, such as fhda.edu, the second college is given the name berkeley.edu, and the company is given the name smart.com. When each of these organizations adds the name challenger to the name it has already been given, the end result is three distinguishable names: challenger.fhda.edu, challenger.berkeley.edu, and challenger.smart.com. The names are unique without the need to be assigned by a central authority. The central authority controls only part of the name, not the whole name.
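The decentralized-uniqueness argument above can be checked mechanically, using the article's own example names:

```python
# Each organization independently names a host "challenger"; attaching the
# centrally assigned suffix keeps every full name unique.
org_suffixes = ["fhda.edu", "berkeley.edu", "smart.com"]
full_names = ["challenger." + suffix for suffix in org_suffixes]

print(full_names)
print(len(full_names) == len(set(full_names)))  # True: all names distinct
```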


Protocols and Standards, Organizations, Forums

Network protocols and standards are what I will be describing in this article. First we define protocol, which is synonymous with "rule". Then we discuss standards, which are agreed-upon rules.

Protocols
In computer networks, communication occurs between entities in different systems. An entity is anything capable of sending or receiving information. However, two entities cannot simply send bit streams to each other and expect to be understood. For communication to occur, the entities must agree on a protocol.
A protocol is a set of rules that governs data communication. A protocol defines what is communicated, how it is communicated, and when it is communicated. The key elements of a protocol are syntax, semantics, and timing.
  • Syntax. Syntax refers to the structure or format of the data, meaning the order in which they are presented. For example, a simple protocol might expect the first 8 bits of data to be the address of the sender, the second 8 bits to be the address of the receiver, and the rest of the stream to be the message itself.
  • Semantics. Semantics refers to the meaning of each section of bits. How is a particular pattern to be interpreted, and what action is to be taken based on that interpretation? For example, does an address identify the route to be taken or the final destination of the message?
  • Timing. Timing refers to two characteristics: when data should be sent and how fast it can be sent. For example, if a sender produces data at 100 Megabits per second (Mbps) but the receiver can process data at only 1 Mbps, the transmission will overload the receiver and data will be largely lost.
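The syntax example above (first 8 bits the sender's address, next 8 bits the receiver's address, the rest the message) can be sketched as a parser for that made-up frame format; this is purely illustrative and not any real protocol:

```python
def parse_frame(frame: bytes):
    """Split a toy frame following the syntax example in the text:
    byte 0 = sender address (8 bits), byte 1 = receiver address (8 bits),
    remaining bytes = the message itself."""
    sender = frame[0]
    receiver = frame[1]
    message = frame[2:]
    return sender, receiver, message

frame = bytes([0x0A, 0x0B]) + b"hello"
sender, receiver, message = parse_frame(frame)
print(sender, receiver, message)  # 10 11 b'hello'
```

Both sides must agree on this layout in advance; if the receiver interpreted the first byte as part of the message, communication would fail even though every bit arrived intact.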
Standards
Standards are essential in creating and maintaining an open and competitive market for equipment manufacturers and also in guaranteeing national and international interoperability of data and telecommunications technology and processes. They provide guidelines to manufacturers, vendors, government agencies, and other service providers to ensure the kind of interconnectivity necessary in today's marketplace and in international communication.
Data communication standards fall into two categories: de facto (meaning "by fact" or "by convention") and de jure (meaning "by law" or "by regulation").
  • De facto. Standards that have not been approved by an organized body but have been adopted as standards through widespread use are de facto standards. De facto standards are often established originally by manufacturers that seek to define the functionality of a new product or technology.
  • De jure. De jure standards are those that have been legislated by an officially recognized body.

Standards and Organizations
Standards are developed through the cooperation of standards creation committees, forums, and government regulatory agencies. Some of the standards organizations are:
  • International Organization for Standardization (ISO). http://www.iso.org/
  • International Telecommunication Union-Telecommunication Standardization Sector (ITU-T). http://www.itu.int/ITU-T
  • American National Standards Institute (ANSI).
  • Institute of Electrical and Electronics Engineers (IEEE). http://www.ieee.org/
  • Electronic Industries Association (EIA).

Forums
Telecommunications technology development is moving faster than the ability of standards committees to ratify standards. Standards committees are procedural bodies and by nature slow-moving. To accommodate the need for working models and agreements, and to facilitate the standardization process, many special-interest groups have developed forums made up of representatives from interested corporations. The forums work with universities and users to test, evaluate, and standardize new technologies. By concentrating their efforts on a particular technology, the forums are able to speed acceptance and use of those technologies in the telecommunications community. The forums present their conclusions to the standards bodies. Some important forums for the telecommunications industry include the following:
  • Frame Relay Forum. The Frame Relay Forum was formed by Digital Equipment Corporation, Northern Telecom, Cisco, and StrataCom to promote the acceptance and implementation of frame relay. Today, it has around 40 members representing North America, Europe, and the Pacific Rim. Issues under review include flow control, encapsulation, translation, and multicasting. The forum's results are submitted to the ISO.
  • ATM Forum. http://www.atmforum.com/ The ATM Forum promotes the acceptance and use of Asynchronous Transfer Mode (ATM) technology. The ATM Forum is made up of customer premises equipment (e.g., PBX systems) vendors and central office (e.g., telephone exchange) providers. It is concerned with the standardization of services to ensure interoperability.

Regulatory Agencies
All communications technology is subject to regulation by government agencies such as the Federal Communications Commission (FCC) in the United States. The purpose of these agencies is to protect the public interest by regulating radio, television, and wire/cable communications.

  • Federal Communications Commission (FCC). http://www.fcc.gov/  The Federal Communications Commission (FCC) has authority over interstate and international commerce as it relates to communications.


Internet History, Timeline and Important Events, ARPANET

ARPANET
In the mid-1960s, mainframe computers used in research organizations were unable to communicate with each other because they came from different manufacturers. The Advanced Research Projects Agency (ARPA) in the Department of Defense (DOD) was interested in finding a way for these computers to connect to each other so that research work could be shared among researchers, thereby reducing costs and duplication of effort.

In 1967, at an Association for Computing Machinery (ACM) meeting, ARPA presented its ideas for ARPANET, a small network of connected computers. The idea was that each host computer (not necessarily from the same manufacturer) would be attached to a specialized computer, called an interface message processor (IMP). The IMPs, in turn, would be connected to each other. Each IMP had to be able to communicate with other IMPs as well as with its own attached host.

By 1969, ARPANET was a reality. Four nodes, at the University of California at Los Angeles (UCLA), the University of California at Santa Barbara (UCSB), Stanford Research Institute (SRI), and the University of Utah, were connected via the IMPs to form a network. Software called the Network Control Protocol (NCP) provided communication between the hosts.

Birth of the Internet (Timeline)
In 1972, Vint Cerf and Bob Kahn, both of whom were part of the core ARPANET group, collaborated on what they called the Internetting Project. They wanted to link different networks together so that a host on one network could communicate with a host on another network. There were many problems to overcome: diverse packet sizes, diverse interfaces, and diverse transmission rates, as well as different reliability requirements. Cerf and Kahn devised the idea of a device called a gateway to serve as the intermediary hardware transferring packets from one network to another.

Cerf and Kahn's landmark 1973 paper outlined the protocols to achieve end-to-end delivery of packets. This was a new version of NCP (Network Control Protocol). This paper on the Transmission Control Protocol (TCP) included concepts such as encapsulation, the datagram, and the functions of a gateway. A radical idea was the transfer of responsibility for error correction from the IMP to the host machine. Around this time, responsibility for the ARPANET was handed over to the Defense Communications Agency (DCA).

In October 1977, an internet consisting of three different networks ( ARPANET, packet radio, and packet satellite) was successfully demonstrated. Communication between networks was now possible.

Shortly thereafter, the authorities made a decision to split TCP into two protocols: the Transmission Control Protocol (TCP) and the Internetworking Protocol (IP). IP would handle datagram routing, while TCP would be responsible for higher-level functions such as segmentation, reassembly, and error detection. The combination became known as TCP/IP.

In 1981, under a DARPA contract, UC Berkeley modified the UNIX operating system to include TCP/IP. This inclusion of network software along with a popular operating system did much to further the popularity of networking. The open (non-manufacturer-specific) implementation of Berkeley UNIX gave every manufacturer a working code base on which to build products.

In 1983, authorities abolished the original ARPANET protocols, and TCP/IP became the official protocol for the ARPANET. Those who wanted to use the Internet to access a computer on a different network had to be running TCP/IP.

MILNET
In 1983, ARPANET was split into two networks: MILNET for military users and ARPANET for nonmilitary users.

CSNET
Another milestone in Internet history was the creation of CSNET in 1981. CSNET was a network sponsored by the National Science Foundation (NSF). The network was conceived by universities that were ineligible to join ARPANET due to an absence of defense ties to DARPA. CSNET was a less expensive network; there were no redundant links and the transmission rate was slower. It featured connections to ARPANET and Telenet, the first commercial packet data service.
By the mid-1980s, most U.S. universities with computer science departments were part of CSNET. Other institutions and companies were also forming their own networks and using TCP/IP to interconnect. The term Internet, originally associated with government-funded connected networks, now referred to the connected networks using TCP/IP protocols.

NSFNET
With the success of CSNET, the NSF in 1986 sponsored NSFNET, a backbone that connected five supercomputer centers located throughout the United States. Community networks were allowed access to this backbone, a T1 line with a 1.544 Mbps data rate, thus providing connectivity throughout the United States.
In 1990, ARPANET was officially retired and replaced by NSFNET. In 1995, NSFNET reverted to its original concept of a research network.

ANSNET
In 1991, the U.S. government decided that NSFNET was not capable of supporting the rapidly increasing Internet traffic. Three companies, IBM, Merit, and MCI, filled the void by forming a nonprofit organization called Advanced Network and Services (ANS) to build a new, high-speed Internet backbone called ANSNET.

The Internet today is not a simple architecture. It is made up of many wide and local area networks (WANs and LANs) joined by connecting devices and switching stations (nodes). The Internet is continuously evolving, and many new users are added each day. Today, most end users who want an Internet connection use the services of Internet Service Providers (ISPs). There are international service providers, national service providers (SprintLink, PSINet, UUNet Technology, AGIS, and Internet MCI, providing Internet access at network access points, or NAPs), regional service providers, and local service providers.


Here is a summary of the important events in Internet history:
  • 1969. Four Node ARPANET established.
  • 1970. ARPA hosts implement NCP.
  • 1973. Development of TCP/IP suite begins.
  • 1977. An internet tested using TCP/IP.
  • 1978. UNIX distributed to academic/research sites.
  • 1981. CSNET established.
  • 1983. TCP/IP becomes the official protocol for ARPANET.
  • 1983. MILNET established.
  • 1986. NSFNET established.
  • 1990. ARPANET decommissioned and replaced by NSFNET.
  • 1995. NSFNET goes back to being a research network.
  • 1995. Companies known as Internet Service Providers (ISPs) started.


Network, Internet: What It Is? Definition and Brief History

A network is a group of connected, communicating devices such as computers and printers. An internet (note the lowercase i) is two or more networks that can communicate with each other. A network has a certain architecture, called the topology of the network.

The most notable internet is called the Internet (uppercase I), a collaboration of hundreds of thousands of interconnected networks. Private individuals as well as various organizations such as government agencies, schools, research facilities, corporations, and libraries in more than 100 countries use the Internet. Millions of people are users. Yet this extraordinary communication system came into being only in 1969.

The Internet has revolutionized many aspects of our daily lives. It has affected the way we do business as well as the way we spend our leisure time. Count the ways you have used the Internet recently. Perhaps you have sent an email to a business associate, paid a utility bill, read a newspaper from a distant city, or looked up a local movie schedule - all by using the Internet. Or maybe you have researched a medical topic, booked a hotel reservation, chatted with a fellow Trekkie, or comparison-shopped for a car.

The Internet is a communication system that has brought a wealth of information to our fingertips and organized it for our use. The Internet is a structured, organized system. Before we discuss how it works and its relationship to TCP/IP, we first give a brief history of the Internet and the important events in its history.
