by Fahad Saeed, Fahad Ali Arshad and Salman Javed
About the author:
The authors are undergraduate students at the Department of Electrical Engineering, UET, Lahore, Pakistan. Their research interests include parallel & distributed computing, distributed OS and databases, wireless ad-hoc and sensor networks.
Design and Implementation of a Linux based 802.11 ad-hoc wireless network
The design and development of an ad-hoc wireless network for a computer laboratory is described.
The objective of this laboratory is to permit students to sit on any machine they wish but still be able to work with their personal files, as would be the case in a conventional setup. However, all this is done without employing a central file server or network information system. The objective of this work is to show that an ad-hoc network can be used for a conventional laboratory in which nodes may or may not be mobile. By eliminating central file servers we eliminate single points of failure and make the system very reliable. The use of wireless networking also eliminates the cost and disruption of wiring, hubs, switches, sockets and their associated problems. We describe how administration and software updates are performed using wormlike propagation.
Is there any real need for servers? Can a computer network laboratory work without any centralized coordination? Can an ad-hoc network provide all the services a client demands? With the increase in computing power, networks are becoming intelligent. Intelligent end nodes can collaborate in a distributed environment to run applications more efficiently than the traditional client/server model.
In this article we will demonstrate how Linux can be used to develop an 802.11 ad-hoc wireless network for a large computer laboratory. Instead of the traditional client/server model, we have developed an autonomous wireless laboratory that works without any servers. This concept removes the much dreaded single point of failure in the network. We will include details of how careful Linux scripting (Perl and Bash), programming and configurability can be used to develop a distributed network with features similar to the client/server model. One of the major concerns in a laboratory in ad-hoc mode is the problem of administration. In a distributed topology there is no single point of control, which makes life difficult for the administrator. We will show how Linux scripts can be used to address various administrative tasks.
Network-based applications may execute on a single machine or be distributed over multiple machines. Client/server computing is an example of a distributed arrangement in which part of an application (the front end) executes on the workstation to provide an interface for the user, and another part (the back end) executes on a server to do the actual work, such as searching a database, processing programs/files etc.
An ad-hoc network is a collection of autonomous nodes or terminals that communicate with each other maintaining connectivity in a decentralized manner. Each network node acts as a server when it is providing services to other nodes, and as a client when it is receiving services from others. Each node is in itself an entity, which can enter or leave an ad-hoc network without disrupting any network behavior. Each node is a host as well as a router and the nodes collaborate and control the network in a distributed manner.
Wireless ad-hoc networks are formed when an ad-hoc collection of devices equipped with wireless communication capabilities happen to be in proximity to each other. Clearly, each pair of such devices whose distance is less than their transmission range can communicate directly with each other. Moreover, if some devices occasionally volunteer to act as forwarders, it is possible to form a multiple hop ad-hoc network. An important distinguishing element of these networks from “standard” networks is that they do not rely on any pre-existing infrastructure or centralized control. They operate in a dynamic environment with possible mobile wireless nodes. These nodes communicate using radio interfaces and may be subject to noise, fading and interference.
The basic difference between the client/server model and the ad-hoc model is that in an ad-hoc model the interacting processes can be client, server, or both while in a C/S model one process assumes the role of a service provider while the other assumes the role of a service consumer. Also, in the client/server architecture, some of the processing is always done by the server which is the central point of control. If the server goes down the network goes down.
We have set up a wireless ad-hoc lab at the Electrical Engineering Department UET Lahore. Its main purpose is to give a working environment for the students so that they can work on their programming assignments/projects. There is no central NIS server that maintains the user accounts, yet a student is not confined to any single machine.
In this ad-hoc network the data is imported to the local machine via NFS and the computing is done locally. The network traffic is considerably reduced as the nodes (the clients) do not need to transfer all the raw data to a centralized processing center (the server) as in traditional client/server model. This ad-hoc network can provide all the services that are required for the working of this laboratory.
Our assumptions are typical of remote execution systems and are not overly restrictive or extensive.
On each terminal we created four user accounts: home-user, backup, admin and student. Furthermore, three directories are created, namely /home/script, /home/script1 and /home/localwork. /home/script contains the scripts scriptm4, database, testlogin, unmount and back. /home/script1 contains remote1, remote2, and loggedip as shown in Fig. 3. The /home/localwork directory is where the remote files are mounted. We elucidate by considering the example of home-user u02026 (with IP 192.168.0.26), implying that the backup account on this terminal is u02025.
Edit the /home/student/.bash_profile of the student account as follows.
cd /home/script
bash /home/script/scriptm4
The script scriptm4 is run whenever the user account student is logged in.
Now edit /home/student/.bash_logout as follows:
bash /home/script/unmount
clear
The script unmount runs on logout.
The /home/u02026/.bash_profile looks like this:
bash /home/script1/remote1
exit
And /home/u02025/.bash_profile looks like this:
bash /home/script1/remote2
exit
The rdiff-backup utility is used for the backup procedures. We added the cron jobs for the backup by editing /var/spool/cron/u02026:
0 13 * * * rdiff-backup /home/u02026 192.168.0.27::/home/u02026
0 13 * * * rdiff-backup 192.168.0.27::/home/u02026 /home/u02026
Editing configuration files on a large number of terminals is laborious, so we have developed a script (autoset1) that automates the configuration tasks.
Listing: /home/auto/autoset1
This script is run once as root and is placed in /home/auto along with all the above-mentioned scripts. It interactively asks for the information that has to be configured for the terminal, such as the home username, backup username and cron job timings. Apart from the above-mentioned settings, a Perl subscript ipinrc determines the neighboring terminals' IP addresses, which are later used for key generation/sharing and other administrative tasks.
Listing: /home/auto/ipinrc
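The ipinrc listing is not reproduced in this article. Under the assumption that terminals are numbered consecutively on a single /24 subnet, the core of what it computes can be sketched in a few lines of shell (the function name and addressing scheme are illustrative, not the actual Perl script):

```shell
# neighbors: print the two adjacent terminal IPs, assuming consecutive
# numbering on one /24 subnet. This is an illustrative reconstruction.
neighbors() {
    local ip=$1 base last
    base=${ip%.*}        # e.g. 192.168.0
    last=${ip##*.}       # e.g. 26
    echo "$base.$((last - 1)) $base.$((last + 1))"
}
```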
With the initial configurations set on all the terminals, we now explain how the ad-hoc nodes will use the scripts to set up a working environment for the user.
Any student who wishes to work logs in to the account "student", which is an open account. Logging in to "student" initiates the script scriptm4, which prompts the student to enter the particular user id assigned to him or her. The subscript testlogin verifies the userid against the database file.
Listing: /home/script/testlogin
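The testlogin listing itself is not shown here; a minimal sketch of the lookup it performs might look as follows (the two-column database layout is an assumption for illustration):

```shell
# lookup_ip: print the IP registered for a userid in the database file.
# Assumes one "userid ip" pair per line -- an illustrative format, since
# the real layout of /home/script/database is not reproduced here.
lookup_ip() {
    local userid=$1 dbfile=$2
    awk -v id="$userid" '$1 == id { print $2; exit }' "$dbfile"
}
```

An empty result then indicates an invalid userid.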
The following listing shows the sample database file present on each terminal.
Listing: /home/script/database
The IP address corresponding to a valid userid is extracted from the database file. The script then authenticates the user by doing ssh to the home machine (ssh userid@user_home_ip). A valid user's password will allow the user to log into his/her home account. This in turn initiates the script remote1 present in the .bash_profile of the home account on the remote/home machine.
A subscript named loggedip runs in remote1. It extracts the IP address of the machine that last successfully logged in.
Listing: /home/script1/loggedip
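One way this extraction could be done (an assumed sketch; the actual listing is not reproduced, and the column layout of `last` output varies between systems) is to parse the output of `last -i`:

```shell
# extract_login_ip: read `last -i` output on stdin and print the source
# IP of the most recent login. Assumes the IP is the third column, which
# holds for common Linux `last` implementations but is not guaranteed.
extract_login_ip() {
    awk 'NR==1 { print $3 }'
}
# typical use inside loggedip:  last -i | extract_login_ip
```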
After this, the script remote1 edits /etc/exports to export the home directory to the IP address where the user is sitting, and the ssh connection is terminated with control returned to the script scriptm4 on the local machine.
Listing: /home/script1/remote1
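The export step of remote1 might be sketched as follows (an assumption for illustration; the exports path is passed as a parameter so the helper can be exercised on a scratch file, and the export options are typical rather than taken from the article):

```shell
# add_export: grant the visiting machine NFS access to the home directory.
# Illustrative sketch -- not the article's actual remote1 listing.
add_export() {
    local home=$1 client_ip=$2 exports=${3:-/etc/exports}
    # avoid duplicate entries when the same client logs in again
    grep -q "^$home $client_ip" "$exports" ||
        echo "$home $client_ip(rw,sync)" >> "$exports"
}
# afterwards:  exportfs -r    # make the NFS server re-read /etc/exports
```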
Finally, with the correct export permissions on the remote machine, the home directory is mounted via NFS and a console opens for the user to start work. These background processes are completely transparent to the user.
Listing: /home/script/scriptm4
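The final step of scriptm4 could be sketched like this (an assumed reconstruction; the helper names and mount options are illustrative, and only the string-building part is meant to be exact):

```shell
# nfs_source: build the NFS mount source string, e.g. 192.168.0.26:/home/u02026
nfs_source() {
    printf '%s:/home/%s\n' "$2" "$1"
}

# mount_home: mount the freshly exported home over NFS at /home/localwork
# and drop the student into a shell there. Illustrative sketch only.
mount_home() {
    local userid=$1 home_ip=$2
    mount -t nfs "$(nfs_source "$userid" "$home_ip")" /home/localwork &&
        cd /home/localwork && exec bash
}
```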
At logout the script unmount is invoked. It makes sure that all the NFS mounts are unmounted.
Listing: /home/script/unmount
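A minimal sketch of what such a cleanup could look like (assumed; the actual listing is not reproduced) uses a lazy detach so that logout succeeds even if the home machine has meanwhile gone away:

```shell
# umount_all: lazily detach any NFS mount left under /home/localwork.
# Illustrative sketch; -l (lazy) detaches even if the server is unreachable.
umount_all() {
    if grep -q ' /home/localwork nfs' /proc/mounts 2>/dev/null; then
        umount -l /home/localwork
    fi
}
```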
Each machine maintains a backup copy of its user's home directory on a neighboring machine, called the backup terminal for this machine. The distributed backup procedure uses the cron and rdiff-backup utilities.
rdiff-backup backs up one directory to another, possibly over a network. The target directory ends up being a copy of the source directory, but extra reverse diffs are stored in a special subdirectory of the target, so you can still recover files lost some time ago. The idea is to combine the best features of a mirror and an incremental backup. rdiff-backup also preserves subdirectories, hard links, device files, permissions, uid/gid ownership, and modification times. In addition, rdiff-backup can operate in a bandwidth-efficient manner over a pipe, like rsync. Thus you can use rdiff-backup and ssh to securely back up any directory to a remote location, and only the differences will be transmitted. Finally, rdiff-backup is easy to use and its settings have sensible defaults.
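The reverse diffs make point-in-time recovery a one-liner; for example (paths and times are illustrative, not taken from the lab setup):

```shell
# restore the state of three days ago from the backup terminal;
# -r / --restore-as-of accepts time specs such as 3D, 1W or an ISO date
rdiff-backup -r 3D 192.168.0.27::/home/u02026 /home/u02026/restored
```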
cron is a powerful task scheduler present in Linux that allows for the execution of commands at times specified by the user. The configuration file /var/spool/cron/user specifies the times at which the required commands are to be executed. Users can set up this file using the command crontab. There are normally six fields in one entry:
minute hour dom month dow cmd
e.g. 0 13 * * * rdiff-backup 192.168.0.27::/home/u02026 /home/u02026
The backup procedure is carried out at times when the lab is not in use. The cron job timings are staggered so that only one or two terminals perform backup procedures simultaneously, keeping the backups bandwidth efficient.
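Staggering can be achieved simply by giving each terminal a different minute field in its per-user crontab; the entries below are illustrative, not the lab's actual configuration:

```
# /var/spool/cron/u02026 on terminal .26
0 13 * * * rdiff-backup /home/u02026 192.168.0.27::/home/u02026
# /var/spool/cron/u02027 on terminal .27
10 13 * * * rdiff-backup /home/u02027 192.168.0.28::/home/u02027
```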
An automatically backed-up copy of the home directory is kept on a second distant machine. If the first home fails, the second home is mounted. The use of autofs ensures mounting/unmounting of the appropriate file systems on demand.
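A replicated-server autofs map can express this primary/fallback mounting; the fragment below is an assumption about how such a setup could look, not the lab's actual configuration:

```
# /etc/auto.master
/home/localwork  /etc/auto.home  --timeout=60

# /etc/auto.home -- autofs tries the listed locations in order,
# so the backup terminal is used when the primary home is down
u02026  -fstype=nfs  192.168.0.26:/home/u02026  192.168.0.27:/home/u02026
```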
One of the major concerns in the development of a laboratory in ad-hoc mode is the problem of administration. In a distributed topology there is no single point of control. There is a need to develop an elegant way that assists the administrator to carry out the important administrative tasks, e.g. transfer of a single file to all the nodes, execution of a command on all the nodes etc.
We have devised a distributive scheme to carry out the administrative tasks. On each terminal, an admin account has been created as mentioned previously.
SSH is a replacement for standard unencrypted utilities such as telnet, rlogin and rsh. It allows the user to execute commands remotely. There are numerous authentication methods for ssh, e.g. rhosts-RSA authentication, RSA authentication, and password authentication using /etc/passwd. RSA authentication does not require the user to enter a password, but the session is still encrypted. RSA public key cryptography is used during the handshake between the client and the server to authenticate the client machine.
Each terminal's admin account shares an ssh key with the admin accounts of its neighbors, as shown in Fig. 4. The key generation and sharing with the neighboring terminals has to be done once, after the terminals have been prepared as described above. The script keygen2 generates an ssh key for a terminal, makes the necessary modifications to the .ssh directory of the admin user and shares the key with the neighboring terminals. This allows the admin account to access the neighboring admin accounts without any passwords, while still providing the encrypted communication necessary for wireless networks. This passwordless access to neighboring terminals is essential for the operation of the fileworm and commandworm scripts discussed below.
Listing: /home/admin/keygenerator
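The core of such a key setup might look as follows (an assumed sketch; the neighbor IPs are illustrative and the actual keygen2 listing is not reproduced here):

```shell
# generate a passwordless RSA key for admin (runs once per terminal)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# install the public key on each neighboring terminal's admin account;
# ssh-copy-id appends it to the neighbor's ~/.ssh/authorized_keys
for n in 192.168.0.25 192.168.0.27; do
    ssh-copy-id "admin@$n"
done
```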
In our scenario, the instructor often needs to transfer a file over the network for the students to work on. This could be accomplished in a variety of ways, e.g. the file could be transferred one by one to all the nodes from a single terminal, which utilizes the bandwidth inefficiently.
We have developed a method that transfers the file in minimum time and with optimum bandwidth usage. It works like a worm, replicating the file over the network. The file transfer can be initiated from any node in the ad-hoc network. The initiating node transfers the file to its neighboring nodes and these nodes in turn transfer it to their neighbors. Before transferring the file, each neighboring node is checked for the presence of the file in the particular directory; if the file is not present it is transferred, otherwise no action is taken. (The node that initiates the fileworm script is not itself checked for the file's availability.) This process continues until the file has been transferred to all terminals, as shown in Fig. 5.
Listing: /home/admin/fileworm
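A minimal sketch of this propagation, under the assumption that neighbors are reachable as admin@&lt;ip&gt; and that the script lives at /home/admin/fileworm on every node (NEIGHBORS, SSH and SCP are illustrative and overridable; this is not the article's actual listing):

```shell
NEIGHBORS=${NEIGHBORS:-"192.168.0.25 192.168.0.27"}
SSH=${SSH:-ssh}     # overridable so the logic can be exercised without a network
SCP=${SCP:-scp}

# fileworm: replicate $1 into directory $2 on every neighbor that does
# not already hold it, then re-launch the worm there.
fileworm() {
    local file=$1 dir=$2 name
    name=$(basename "$file")
    for n in $NEIGHBORS; do
        # a neighbor that already holds the file stops the worm there
        if ! $SSH "admin@$n" test -f "$dir/$name"; then
            $SCP "$file" "admin@$n:$dir/" &&
                $SSH "admin@$n" "bash /home/admin/fileworm '$dir/$name' '$dir'" &
        fi
    done
    wait    # let the parallel transfers finish before returning
}
```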
The commandworm script executes a single command on all the nodes. It works on the same principle as fileworm, but instead of transferring files it executes commands on all the nodes, e.g. shutting down all the nodes from one terminal. The execution can be initiated from any of the nodes: the initiating node executes the command on its neighbors, which in turn execute it on their neighbors. This continues until the command has been executed on all the nodes.
Listing: /home/admin/commandworm
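An assumed sketch of commandworm, mirroring the fileworm sketch: here a marker file tagged with a run id plays the role that file presence plays for fileworm, keeping each node from executing (and re-spreading) the same command twice. All names are illustrative:

```shell
NEIGHBORS=${NEIGHBORS:-"192.168.0.25 192.168.0.27"}
SSH=${SSH:-ssh}     # overridable so the logic can be exercised without a network

# commandworm: run $1 once on every node; $2 is a unique run id.
commandworm() {
    local cmd=$1 id=$2
    for n in $NEIGHBORS; do
        # a neighbor that already carries the marker stops the worm there
        if ! $SSH "admin@$n" test -f "/tmp/cw.$id"; then
            $SSH "admin@$n" "touch /tmp/cw.$id; $cmd; bash /home/admin/commandworm '$cmd' $id" &
        fi
    done
    wait
}
```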
With files shared among a large number of workstations, it becomes imperative that machines have their clocks synchronized so that file time stamps are globally comparable. Time synchronization helps in maintaining logs, implementing backup procedures etc. We simply set up all the machines to match their time with a single reference.
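The article does not name the tool used for this; one common approach is a cron entry on every terminal that periodically syncs the clock against a single reference machine (the address below is illustrative):

```
# hourly one-shot clock sync against the reference terminal
0 * * * * ntpdate 192.168.0.1
```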
This work is carried out under the supervision of Prof. Shahid Bokhari in the Computer Communications Laboratory at the Department of Electrical Engineering, UET, Lahore, Pakistan. This laboratory was set up with a grant from the Government of Punjab. Wireless networking of this laboratory was made possible by the generous support of the alumni of 86EE and 93EE.
© Fahad Saeed, Fahad Ali Arshad and Salman Javed
"some rights reserved" see linuxfocus.org/license/
2006-02-13, generated by lfparser version 2.54