Tuesday, November 5, 2013

Logo Designing

Some more logos from me.

Monday, October 28, 2013

Logo Designing

Below are some of the logos designed by me.

Wednesday, July 31, 2013

ACL (Access Control List)


Files and directories have permission sets for the owner of the file, the group associated with the file, and all other users for the system. However, these permission sets have limitations. For example, different permissions cannot be configured for different users. Thus, Access Control Lists (ACLs) were implemented.

The Red Hat Enterprise Linux kernel provides ACL support for the ext3 file system and NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba.

Along with support in the kernel, the acl package is required to implement ACLs. It contains the utilities used to add, modify, remove, and retrieve ACL information.

The cp and mv commands copy or move any ACLs associated with files and directories.

Mounting File Systems

Before using ACLs for a file or directory, the partition for the file or directory must be mounted with ACL support. If it is a local ext3 file system, it can be mounted with the following command:
mount -t ext3 -o acl device-name mount-point
For example:
mount -t ext3 -o acl /dev/VolGroup00/LogVol02 /work

Alternatively, if the partition is listed in the /etc/fstab file, the entry for the partition can include the acl option:

# vi /etc/fstab

LABEL=/        /     ext3      defaults,acl     1  1

:wq (save and exit)

# mount -o remount,rw /

# mkdir work

# cd work

# cat >aclwork.txt

Ctrl + D

Log in as another user and open the file.

# getfacl /root/work/aclwork.txt  (Shows the ACL permissions on the file.)

# setfacl -m u:user1:r-x /root/work/aclwork.txt  (Sets an ACL entry for user1 on the file.)

# getfacl /root/work/aclwork.txt
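
After setting the ACL, the getfacl output should look roughly like this (the exact entries depend on the file's original permissions):

# file: root/work/aclwork.txt
# owner: root
# group: root
user::rw-
user:user1:r-x
group::r--
mask::r-x
other::r--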

Log in as user1 and try to open the file.

# setfacl -x u:user1 /root/work/aclwork.txt  (Removes the ACL entry for user1 from the file.)

How to Use the vi Editor


The vi editor is available on almost all Linux/Unix systems. vi can be used from any type of terminal because it does not depend on arrow keys and function keys; it uses the standard alphabetic keys for commands.
vi is short for visual editor. It displays a window into the file being edited that shows 24 lines of text. vi is a text editor, not a "what you see is what you get" word processor. vi lets you add, change, and delete text, but does not provide formatting capabilities such as centering lines or indenting paragraphs.

This explains the basics of vi:
  • opening and closing a file
  • moving around in a file
  • elementary editing

You may use vi to open an already existing file by typing
      vi filename
where "filename" is the name of the existing file. If the file is not in your current directory, you must use the full pathname.
Or you may create a new file by typing
      vi newname
where "newname" is the name you wish to give the new file.
To open a new file called "test," enter
      vi test
On-screen, you will see blank lines, each with a tilde (~) at the left, and a line at the bottom giving the name and status of the new file:

~
      "test" [New file]

vi Modes

vi has two modes:
  • command mode
  • insert mode
In command mode, the letters of the keyboard perform editing functions (like moving the cursor, deleting text, etc.). To enter command mode, press the escape key.
In insert mode, the letters you type form words and sentences. Unlike many word processors, vi starts up in command mode.

Entering Text

In order to begin entering text in this empty file, you must change from command mode to insert mode. To do this, type
      i
Nothing appears to change, but you are now in insert mode and can begin typing text. In general, vi's commands do not display on the screen and do not require the Return key to be pressed.
Type a few short lines and press Return at the end of each line. If you type a long line, you will notice that vi does not word wrap; it merely breaks the line unceremoniously at the edge of the screen.
If you make a mistake, pressing Backspace or Delete may remove the error, depending on your terminal type.

Moving the Cursor

To move the cursor to another position, you must be in command mode. If you have just finished typing text, you are still in insert mode. Go back to command mode by pressing Esc. If you are not sure which mode you are in, press Esc once or twice until you hear a beep. When you hear the beep, you are in command mode.
The cursor is controlled with four keys: h, j, k, l.
     Key        Cursor Movement
     ---        ---------------
     h        left one space
     j        down one line
     k        up one line
     l        right one space
When you have gone as far as possible in one direction, the cursor stops moving and you hear a beep. For example, you cannot use l to move right and wrap around to the next line; you must use j to move down a line. See the section entitled "Moving Around in a File" for ways to move more quickly through a file.

Basic Editing

Editing commands require that you be in command mode. Many of the editing commands have a different function depending on whether they are typed as upper- or lowercase. Often, editing commands can be preceded by a number to indicate a repetition of the command.

Deleting Characters

To delete a character from a file, move the cursor until it is on the incorrect letter, then type
      x
The character under the cursor disappears. To remove four characters (the one under the cursor and the next three) type
     4x
To delete the character before the cursor, type
      X (uppercase)

Deleting Words

To delete a word, move the cursor to the first letter of the word, and type
      dw
This command deletes the word and the space following it.
To delete three words type
       3dw

Deleting Lines

To delete a whole line, type
       dd
The cursor does not have to be at the beginning of the line. Typing dd deletes the entire line containing the cursor and places the cursor at the start of the next line. To delete two lines, type
       2dd
To delete from the cursor position to the end of the line, type
       D (uppercase)

Replacing Characters

To replace one character with another:
  1. Move the cursor to the character to be replaced.
  2. Type r
  3. Type the replacement character.
The new character will appear, and you will still be in command mode.

Replacing Words

To replace one word with another, move to the start of the incorrect word and type
     cw
The last letter of the word to be replaced will turn into a $. You are now in insert mode and may type the replacement. The new text does not need to be the same length as the original. Press Esc to get back to command mode. To replace three words, type
     3cw

Replacing Lines

To change text from the cursor position to the end of the line:
  1. Type C (uppercase).
  2. Type the replacement text.
  3. Press Esc.

Inserting Text

To insert text in a line:
  1. Position the cursor where the new text should go.
  2. Type i
  3. Enter the new text.
The text is inserted BEFORE the cursor.
  4. Press Esc to get back to command mode.

Appending Text

To add text to the end of a line:
  1. Position the cursor on the last letter of the line.
  2. Type a
  3. Enter the new text.
This adds text AFTER the cursor.
  4. Press Esc to get back to command mode.

Opening a Blank Line

To insert a blank line below the current line, type
     o (lowercase)
To insert a blank line above the current line, type
     O (uppercase)

Joining Lines

To join two lines together:
  1. Put the cursor on the first line to be joined.
  2. Type J
To join three lines together:
  1. Put the cursor on the first line to be joined.
  2. Type 3J

Undoing

To undo your most recent edit, type
     u
To undo all the edits on a single line, type
     U (uppercase)
Undoing all edits on a single line only works as long as the cursor stays on that line. Once you move the cursor off a line, you cannot use U to restore the line.

Moving Around in a File

There are shortcuts to move more quickly through a file. All of these work in command mode.
     Key            Movement
     ---            --------
     w            forward word by word
     b            backward word by word
     $            to end of line
     0 (zero)     to beginning of line
     H            to top line of screen
     M            to middle line of screen
     L            to last line of screen
     G            to last line of file
     1G           to first line of file
     Ctrl-f       scroll forward one screen
     Ctrl-b       scroll backward one screen
     Ctrl-d       scroll down one-half screen
     Ctrl-u       scroll up one-half screen

Moving by Searching

To move quickly by searching for text, while in command mode:
  1. Type / (slash).
  2. Enter the text to search for.
  3. Press Return.
The cursor moves to the first occurrence of that text.
To repeat the search in a forward direction, type
     n
To repeat the search in a backward direction, type
     N
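
For example, typing

     /count

and pressing Return moves the cursor to the next occurrence of the word "count". Typing n then jumps to the following occurrence, and N searches back toward the top of the file.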

Closing and Saving a File

With vi, you edit a copy of the file, rather than the original file. Changes are made to the original only when you save your edits.
To save the file and quit vi, type
     ZZ
The vi editor is built on an earlier Unix text editor called ex. ex commands can be used within vi. ex commands begin with a : (colon) and end with a Return. The command is displayed on the status line as you type. Some ex commands are useful when saving and closing files.
To save the edits you have made, but leave vi running and your file open:
  1. Press Esc.
  2. Type :w
  3. Press Return.
To quit vi, and discard any changes you have made since last saving:
  1. Press Esc.
  2. Type :q!
  3. Press Return.
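You can also combine the two operations: typing :wq followed by Return saves the file and quits vi, which is equivalent to ZZ.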

Command Summary

STARTING vi

     vi filename    edit a file named "filename"
     vi newfile     create a new file named "newfile"

ENTERING TEXT

     i            insert text left of cursor
     a            append text right of cursor

MOVING THE CURSOR
     h            left one space
     j            down one line
     k            up one line
     l            right one space

BASIC EDITING
     x         delete character
     nx        delete n characters
     X         delete character before cursor
     dw        delete word
     ndw       delete n words
     dd        delete line
     ndd       delete n lines
     D         delete characters from cursor to end of line
     r         replace character under cursor
     cw        replace a word
     ncw       replace n words
     C         change text from cursor to end of line
     o         insert blank line below cursor
                  (ready for insertion)
     O         insert blank line above cursor
                  (ready for insertion)
     J         join succeeding line to current cursor line
     nJ        join n succeeding lines to current cursor line
     u         undo last change
     U         restore current line

MOVING AROUND IN A FILE
     w            forward word by word
     b            backward word by word
     $            to end of line
     0 (zero)     to beginning of line
     H            to top line of screen
     M            to middle line of screen
     L            to last line of screen
     G            to last line of file
     1G           to first line of file
     Ctrl-f       scroll forward one screen
     Ctrl-b       scroll backward one screen
     Ctrl-d       scroll down one-half screen
     Ctrl-u       scroll up one-half screen
     n            repeat last search in same direction
     N            repeat last search in opposite direction

CLOSING AND SAVING A FILE
     ZZ            save file and then quit
     :w            save file
     :q!            discard changes and quit file


Tuesday, July 30, 2013

LVM (Logical Volume Manager)


LVM is a tool for logical volume management, which includes allocating disks, striping, mirroring, and resizing logical volumes. With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes. LVM physical volumes can be placed on other block devices, which might span two or more disks.
The physical volumes are combined into volume groups, with the exception of the /boot/ partition. The /boot/ partition cannot be in a volume group because the boot loader cannot read it. If the root (/) partition is on a logical volume, create a separate /boot/ partition which is not part of a volume group.
Since a physical volume cannot span multiple drives, create one or more physical volumes per drive if you want a volume group to span more than one drive.



The volume groups can be divided into logical volumes, which are assigned mount points, such as /home and /, and file system types, such as ext2 or ext3. When "partitions" reach their full capacity, free space from the volume group can be added to the logical volume to increase the size of the partition. When a new hard drive is added to the system, it can be added to the volume group, and partitions that are logical volumes can be increased in size.
  

On the other hand, if a system is partitioned with the ext3 file system, the hard drive is divided into partitions of defined sizes. If a partition becomes full, it is not easy to expand the size of the partition. Even if the partition is moved to another hard drive, the original hard drive space has to be reallocated as a different partition or not used.

Create 3 partitions for implementing LVM using the fdisk command.

e.g. #fdisk /dev/sda

Press n to create the 3 new partitions, each 100 MB in size.

Press p to see the partition table.

Press t to change the partition id of all three partitions created by you to 8e (Linux LVM).

Press w to save and exit from the fdisk utility.

Use fdisk -l to list the partition table.
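
Then run partprobe so that the kernel re-reads the new partition table (the same step used in the RAID section below):

# partprobe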

Creating LVM

# pvcreate /dev/sda6 /dev/sda7 /dev/sda8
# pvdisplay

#vgcreate vg /dev/sda6 /dev/sda7 /dev/sda8
#vgdisplay vg

#lvcreate -L 10M -n data vg

-L is used to define size.
-n is used to define the name.

#mkfs.ext3 /dev/vg/data
#lvdisplay /dev/vg/data

#mkdir disk
#mount /dev/vg/data disk
#df -h disk

#lvextend -L +10M /dev/vg/data
#ext2online /dev/vg/data
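
Note: on most newer distributions the ext2online command has been replaced by resize2fs, which can grow the mounted file system in the same way:

#resize2fs /dev/vg/data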

#df -h disk

#umount disk

#vgchange -an vg  (optional) -a controls the availability of the logical volumes in the volume group for input and output.

#lvremove /dev/vg/data
Press y to continue
#lvdisplay

#vgremove vg
#vgdisplay

#pvremove /dev/sda6 /dev/sda7 /dev/sda8
#pvdisplay


Linux Tutorials

RAID (Redundant Array of Inexpensive Disks)
Configuring RAID

Create 3 partitions for implementing RAID using the fdisk command.

e.g. #fdisk /dev/sda

Press n to create the 3 new partitions, each 100 MB in size.

Press p to see the partition table.

Press t to change the partition id of all the three partitions created by you to fd (linux raid auto).

Press w to save and exit from the fdisk utility.

#partprobe

Use fdisk -l to list the partition table.

Creating RAID

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda6 /dev/sda7 /dev/sda8

Press y to create the arrays.


To see the details of the RAID, use the following commands:

# cat /proc/mdstat

# mdadm --detail /dev/md0
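
The output of /proc/mdstat should look something like the following (block counts and device order will vary):

Personalities : [raid5]
md0 : active raid5 sda8[2] sda7[1] sda6[0]
      204672 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>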

Creating the file system for your RAID devices

#mkfs.ext3 /dev/md0

Mounting the RAID partition

#mkdir data

# mount /dev/md0 data

#df -h /root/data  (Shows the space allocation.)

Crashing the raid devices

# mdadm --manage /dev/md0 --fail /dev/sda8

Removing raid devices

# mdadm --manage /dev/md0 --remove /dev/sda8


Adding raid devices

# mdadm --manage /dev/md0 --add /dev/sda8

View failed and working raid devices

# cat /proc/mdstat

# mdadm --detail /dev/md0

# tail /var/log/messages
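
To have the array reassembled automatically at boot, you can record its definition in mdadm's configuration file:

# mdadm --detail --scan >> /etc/mdadm.conf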

To remove the RAID, follow these steps:

1) Unmount the directory where the RAID is mounted.
e.g. umount data
2) Stop the device
e.g. mdadm --stop /dev/md0
3)  View the details of your raid level using following command:  -
#cat /proc/mdstat
#mdadm --detail /dev/md0

Saturday, July 27, 2013

B.Sc(IT) Linux Question Paper with Answer

What is RAID? What are its different types? What are different levels of RAID?
Ans:-RAID is an acronym for Redundant Array of Inexpensive, or Independent (depending on who you ask), Disks. There are two types of RAID that can be used on computer systems. These types are hardware RAID and software RAID. In addition, there are six different RAID levels commonly used regardless of whether hardware or software RAID is used. A brief explanation of hardware and software RAID is in order. Following this explanation is a description of the six RAID levels.


  • Hardware RAID — With hardware RAID, because the processing work is done on a discrete controller card in the server or at the level of the storage subsystem, there's no added load to the server processor and buses. There will likely be more advanced features, such as drives being hot-swappable in case of failure. Hardware RAID is more expensive than software RAID, but offers better performance and interoperability.

Whether software or hardware RAID is right for you depends on what you need to do and how much you want to pay. Hardware RAID will cost more, but it will also be free of software RAID's performance limitations.

  • Software RAID — Disks attached to servers can be turned into RAID arrays using built-in features on a number of operating systems. This is software RAID. All you need to do is connect the drives and configure the RAID level you want.

Software RAID does its processing on the server motherboard. This adds to the processing load and could slow down the RAID calculations and other operations carried out on that device. RAID 0 and RAID 1 place the lowest overhead on software RAID, but adding the parity calculations present in other RAID levels is likely to create a bigger impact on performance.
Numerous server OSes support RAID configuration, including those from Apple, Microsoft, various Linux flavours as well as OpenBSD, FreeBSD, NetBSD and Solaris Unix.
Software RAID is often specific to the OS being used, so it can't generally be used for partitions that are shared between operating systems.
The three most commonly used RAID levels are:

RAID level 0 — This RAID level requires at least two disks and uses a method called striping that writes data across both drives. There is no redundancy provided by this level of RAID, since the loss of either drive makes it impossible to recover the data. This level of RAID does give a speed increase in writing to the disks.

RAID level 1 — This RAID level requires at least two disks and uses a method called mirroring. With mirroring, the data is written to both of the drives. So, each drive is an exact mirror of the other one, and if one fails the other still holds all the data. There are two variants to level 1 with one variant using a single disk controller that writes to both disks as described above. The other variant uses two disk controllers, one for each disk. This variant of RAID level 1 is known as duplexing.

RAID level 5 — This RAID level, which is the most widely used, requires at least three disks and uses striping to write the data across the two disks, similar to RAID level 0. But unlike RAID level 0, this level of RAID uses the third disk to hold parity information that can be used to reconstruct the data from either, but not both, of the two disks after a single disk failure.


b] Write the name and purpose of any five services that can be started from xinetd.

  • chargen — Random character generator that sends its traffic over TCP
  • daytime-udp — Gives you the time over UDP
  • finger — User information lookup program
  • kshell — Restricts user access to the shell
  • rlogin — Service similar to Telnet, but enables trust relationships between machines
  • swat — Samba Web Administration tool
  • time — Gives you the time
  • chargen-udp — Random character generator that sends its traffic over UDP
  • echo — Echoes back all characters sent to it over TCP
  • gssftp — Kerberized FTP server
  • rsh — Remote shell
  • talk — A talk (real-time chat) server
  • time-udp — Gives you the time over UDP
  • comsat — Notifies users if they have new mail
  • echo-udp — Echoes back all characters sent to it over UDP
  • klogin — Kerberos's answer to rlogin
  • ntalk — A talk (real-time chat) server
  • rsync — Remote file transfer protocol
  • telnet — Telnet server
  • wu-ftpd — An FTP server
  • daytime — Gives you the time over TCP
  • eklogin — Encrypting kerberized rlogin server
  • krb5-telnet — Kerberized Telnet server
  • rexec — Provides remote execution facilities
  • sgi_fam — File-monitoring daemon
  • tftp — Trivial File Transfer Program
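
Each of these services is configured through a file under /etc/xinetd.d. As a rough sketch, a typical /etc/xinetd.d/tftp file looks like this (paths and arguments vary by distribution):

service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
}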


c. List and explain different types of domain name servers.

A top-level domain server, one that provides information about top-level domains such as .com, .edu, and .org, is typically referred to as a root name server. A search for www.muhlenberg.edu looks to the root name server for .edu for information. The root name server then directs the search to a lower-level domain name server until the information is found. You can see an example of this by using the dig command to search for the root name server for .edu.

The dig output shows the root name server that provides information for the .edu domain. You can continue the search for the second-level domain by adding the name of the domain you are looking for to the dig query.

After you have found the domain you are looking for, information about that domain is provided by its local domain name servers. The three types of local domain name servers are master (primary), slave (secondary), and caching-only servers.

Master — The master contains all the information about the domain and supplies this information when requested. A master server is listed as an authoritative server when it contains the information you are seeking and it can provide that information.

Slave— The slave is intended as a backup in case the master server goes down or is not available. This server contains the same information as the master and provides it when requested if the master server cannot be contacted.

Caching— A caching server does not provide information to outside sources; it is used to provide domain information to other servers and workstations on the local network. The caching server remembers the domains that have been accessed. Use of a caching server speeds up searches since the domain information is already stored in memory, and the server knows exactly where to go rather than having to send out a request for domain information. Where does the information that the master and slave servers provide come from? The server(s) have been configured to provide it when asked.

d] How can rc scripts be managed using chkconfig? Explain.

Fedora Core and Red Hat Enterprise Linux come with a useful tool called chkconfig. It helps the system administrator manage rc scripts and xinetd configuration files without having to manipulate them directly. It is inspired by the chkconfig command included in the IRIX operating system.

Type chkconfig --list to see all the services chkconfig knows about, and whether they are stopped or started in each runlevel.

The first column is the name of the installed service. The next seven columns each represent a runlevel, and tell you whether that service is turned on or off in that runlevel.
Since xinetd is started on the system whose chkconfig output is excerpted, the end of chkconfig's report lists which xinetd-started services are configured to begin at boot time.

To turn a service off or on using chkconfig, use this syntax:

chkconfig --level [0-6] servicename on|off|reset   (you must choose the runlevel)

So, to turn off the gpm daemon turned on previously, type:

chkconfig --level 2 gpm off

To turn on xinetd, type:

chkconfig xinetd on

Run chkconfig --list again to see if the service you changed has been set to the state you desire. Changes you make with chkconfig take place the next time you boot up the system. You can always start, stop, or restart a service immediately by running the service command from a terminal prompt.
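
For example, to restart the gpm daemon right away:

# service gpm restart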

II. Answer any three of the following:


a. Enumerate the duties of a Linux System Administrator.

1. Installing and Configuring Servers
2. Installing and Configuring Application Software
3. Creating and Maintaining User Accounts
4. Backing Up and Restoring Files
5. Monitoring and Tuning Performance
6. Configuring a Secure System
7. Using Tools to Monitor Security

Explain each of the above duties.


b. What is GRUB loader? How does it differ from Linux Loader?

Ans:-GRUB stands for GRand Unified Bootloader and is a GNU bootloader that can boot a variety of operating systems, from Linux, Mach4, vSTA, DOS, NT 3.51, and the *BSD variants to any generic OS you can add to it via its easy-to-use boot menu.
GRUB takes all the complexity out of trying to boot your OS by doing it for you. GRUB can handle a variety of filesystems, from MS-DOS FAT over Linux ext2fs (Second Extended File System) to BSD FFS, and can load kernels in various binary formats, including generic ELF, FreeBSD a.out, flat binary, and generic executables (any file with a valid multiboot header). One major advantage of GRUB (or more precisely, a multiboot standard bootloader) is that the kernel will be entered in a known state, which includes the A20 line having been enabled and protected mode having been entered. This takes a lot of the pain out of writing a kernel, rendering GRUB a very useful tool for the amateur, or anyone who wants to spend more time on the intricacies of the kernel rather than worrying about these generic start-up procedures.

Its config file is usually in /boot/grub and might be called grub.conf or menu.lst. When you change the config file, you merely reboot to read the changes. The structure of the config file is very different from that of LILO, even though they each convey essentially the same information. When you boot with GRUB, you have a vast array of options available to you. You can actually build a test startup config dynamically and boot from it, which is very handy if you have somehow really messed up your actual config file. GRUB can read most of the current filesystem types.

All boot loaders work in a similar way to fulfill a common purpose. But LILO and GRUB do have a number of differences:
* LILO has no interactive command interface, whereas GRUB does.
* LILO does not support booting from a network, whereas GRUB does.
* LILO stores information regarding the location of the operating systems it can load physically on the MBR. If you change your LILO config file, you have to rewrite the LILO stage one boot loader to the MBR. Compared with GRUB, this is a much more risky option, since a misconfigured MBR could leave the system unbootable. With GRUB, if the configuration file is configured incorrectly, it will simply default to the GRUB command-line interface.
* LILO only loads Linux and other boot loaders, while GRUB loads a large number of OSs.
* LILO works by loading itself into a space that will fit on the MBR. GRUB has two stages (because it's too overcomplicated to work as well, err I mean as easily, as LILO). It loads stage 1 off the MBR (usually) and stage 2 out of /boot, along with its config.

c. Explain bootstrapping in Linux.

The Linux kernel itself is a program, and the first bootstrapping task is to get this program into memory so that it can be executed. Linux implements a two-stage booting process. During the first stage, the system ROM loads a small boot program into memory from the disk, which in turn loads the kernel. Then, the kernel performs memory tests to find out how much RAM is available. Some of the kernel's internal data structures are statically sized, so the kernel sets aside a fixed amount of real memory for itself when it starts; this memory is used only by the kernel [kernel space] and users cannot use it. Then the kernel prints on the console the total amount of physical memory and the amount available to user processes.
One of the kernel's first tasks is to check out the hardware connected to it. When you design a kernel, you have to inform it which drivers are to be found; it will find them and print cryptic information about each of the devices on the console. The kernel probes the bus and tries to locate the drivers; devices that are not found are disabled and enabled later when found. Then, after the basic initialization, the kernel creates several spontaneous processes (so called because they are not created using fork()). init() [PID 1] is accompanied by several kernel and memory handling processes like kflushd, kupdate, kpiod, and kswapd. Of these, only init() is a full-fledged process; the others are parts of the kernel that have been dressed up to look like processes for scheduling and architectural reasons.
Once the spontaneous processes have been created, bootstrapping is complete. The remaining processes that handle other operations are started by init().
To enter single-user mode, we have to notify init() by setting a command-line flag. init() eventually turns control over to sulogin, which prompts for a password. You can continue with multi-user mode by pressing Ctrl-D. A shell appears in single-user mode, where we can use all the commands used in multi-user mode. Usually, most of the daemons won't run in single-user mode. We have to manually mount the filesystems that are not in /bin, /sbin, or /etc.
The fsck command is normally run during an automatic boot to check and repair filesystems. In single-user mode, you need to run the fsck command by hand. When the single-user shell exits, the system will attempt to boot into multi-user mode.

Finally, many of the start-up scripts are called by init(). getty is called as the last process by init(), through which we can log in to the system.

d. Explain the different directories in Linux.

The / directory is called the root directory and is at the top of the file system structure. In many systems, the / directory is the only partition on the system, and all other directories are mounted under it. The / directory mounted as the only partition, with all other directories contained within it. The primary purpose of the / directory is booting the system and correcting any problems that might be preventing the system from booting. According to the FHS, the / directory must contain, or have links to, the following directories:
■■ bin — This directory contains command files for use by the system administrator or other users. The bin directory cannot contain subdirectories.
■■ boot — On Red Hat systems, this is the directory containing the kernel, the core of the operating system. Also in this directory are files related to booting the system, such as the boot loader and the initial ramdisk.
■■ dev — This directory contains device nodes through which the operating system can access hardware and software devices on the system.
■■ etc — This directory and its subdirectories contain most of the system configuration files. If you have the X Window System installed on your system, the X11 subdirectory is located here. Networking and system-related files are in the subdirectory sysconfig. Another subdirectory of etc is the skel directory, which holds files used as templates to create files in users' home directories when the users are created.
■■ home — This directory contains the directories of users on the system. Subdirectories of home will be named for the user to whom they belong.
■■ initrd — This directory is used as a mount point when the system is booting. It doesn't contain any data, but it is very important that it be there. This directory is not part of the FHS.
■■ lib — The shared system files and kernel modules are contained in this directory and its subdirectories.
■■ media — This directory contains the mount points for removable media such as floppy drives, CD-ROM drives, and USB devices such as flash memory sticks, which are typically automounted by the system.
■■ mnt — This directory is the location of the mount point for temporary file systems, such as those on floppies or CDs, which traditionally have been manually mounted.
■■ opt — This directory and its subdirectories are often used to hold applications installed on the system.
■■ proc — This directory is a mount point for virtual information about currently running system processes. This directory is empty until the proc file system is mounted.
■■ root — This is the home directory of the root user. Don't confuse this with the / directory, which has the same name.
■■ sbin — Contained in this directory are system binaries used by the system administrator or the root user.
■■ selinux — This directory is similar to the /proc directory in that it contains information about the selinux stored in the memory of the running kernel.
■■ srv — This directory is intended to hold site-specific data for system provided services.
■■ sys — This directory is the mount point for a virtual file system of type sysfs that is used to hold information about the system and devices.
■■ tmp — This directory contains temporary files used by the system.
■■ usr — This directory is often mounted on its own partition. It contains shareable, read-only data. Subdirectories can be used for applications, typically under /usr/local.
■■ var — Subdirectories and files under var contain variable information, such as system logs and print queues.

e. Write a short note on ext3 file system.

The extended 3 file system is a new file system introduced in Red Hat 7.2. ext3 provides all the features of ext2, and also features journaling and backward compatibility with ext2. The backward compatibility enables you to still run kernels that are only ext2-aware with ext3 partitions. You can also use all of the ext2 file system tuning, repair, and recovery tools with ext3. You can upgrade an ext2 file system to an ext3 file system without losing any of your data. This upgrade can be done during an update to the operating system.
ext3 support comes in kernels provided with the latest Fedora and Red Hat distributions. If you download a kernel from somewhere else, you need to patch the kernel to make it ext3 aware, with the kernel patches that come from the Red Hat FTP site. It is much easier to just stick with kernels from Red Hat.

ext3's journaling feature reduces the time it takes to bring the file system back to a sane state if it has not been cleanly unmounted (that is, in the event of a power outage or a system crash). Under ext2, when a file system is uncleanly unmounted, the whole file system must be checked. This takes a long time on large file systems. On an ext3 system, the system keeps a record of uncommitted file transactions and applies only those transactions when the system is brought back up. So, a complete file system check is not required, and the system will come back up much faster.


A cleanly unmounted ext3 file system can be mounted and used as an ext2 file system. This capability can come in handy if you need to revert to an older kernel that is not aware of ext3. The kernel sees the ext3 file system as an ext2 file system. ext3's journaling feature involves a small performance hit to maintain the file system transaction journal. Therefore, it's recommended that you use ext3 mostly for your larger file systems, where the ext3 journaling performance hit is made up for in time saved by not having to run fsck on a huge ext2 file system.
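
The usual way to convert an existing ext2 file system to ext3 is to add a journal with tune2fs (the device name here is only an example), then change its fstab entry to ext3:

# tune2fs -j /dev/hda1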

f. What is partitioning of a hard disk? What are the different partitions to be created in Linux? What is swap partition?

Partitioning is a means to divide a single hard drive into many logical drives.

A partition is a contiguous set of blocks on a drive that are treated as an independent disk.

A partition table is an index that relates sections of the hard drive to partitions.
/boot partition – contains kernel images and grub configuration and commands
/ partition
/var partition
/home partition
Any other partition based on application (e.g /usr/local for squid)
swap partition — swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing. The size of your swap partition should be equal to twice your computer's RAM.
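
As a sketch, a swap partition (here /dev/sda9, an example device) is initialized and activated like this:

# mkswap /dev/sda9
# swapon /dev/sda9
# swapon -s   (lists the active swap areas)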

a. What are the files required to be changed when we set up a new system or move the system from one location to another?

You need to:
■■ Set up the IP addresses of your network interfaces. Make changes to:
/etc/sysconfig/network-scripts/ifcfg-eth0
■■ Set up the hostname of your machine. Make changes to:
/etc/sysconfig/network
/etc/hosts
■■ Set up the DNS servers to reference. Make changes to:
/etc/resolv.conf
■■ Make a local file of hostname to IP address mappings. Make changes to:
/etc/hosts
■■ Set up the device order from which hostnames are looked up. Make changes to:
/etc/nsswitch.conf
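
For example, a minimal static configuration in /etc/sysconfig/network-scripts/ifcfg-eth0 might look like this (all addresses are illustrative):

DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes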

b. Write a short note on IP addressing.

Ans:-Every computer that communicates over the Internet is assigned an IP address that uniquely identifies the device and distinguishes it from other computers on the Internet. An IP address consists of 32 bits, often shown as 4 octets of numbers from 0-255 represented in decimal form instead of binary form.

For example, the IP address 168.212.226.204 in binary form is 10101000.11010100.11100010.11001100.

But it is easier for us to remember decimals than it is to remember binary numbers, so we use decimals to represent the IP addresses when describing them. However, the binary number is important because that will determine which class of network the IP address belongs to. An IP address consists of two parts, one identifying the network and one identifying the node, or host. The Class of the address determines which part belongs to the network address and which part belongs to the node address. All nodes on a given network share the same network prefix but must have a unique host number.

Class A Network -- binary address start with 0, therefore the decimal number can be anywhere from 1 to 126. The first 8 bits (the first octet) identify the network and the remaining 24 bits indicate the host within the network. An example of a Class A IP address is 102.168.212.226, where "102" identifies the network and "168.212.226" identifies the host on that network.
Class B Network -- binary addresses start with 10, therefore the decimal number can be anywhere from 128 to 191. (The number 127 is reserved for loopback and is used for internal testing on the local machine.) The first 16 bits (the first two octets) identify the network and the remaining 16 bits indicate the host within the network. An example of a Class B IP address is 168.212.226.204 where "168.212" identifies the network and "226.204" identifies the host on that network.
Class C Network -- binary addresses start with 110, therefore the decimal number can be anywhere from 192 to 223. The first 24 bits (the first three octets) identify the network and the remaining 8 bits indicate the host within the network. An example of a Class C IP address is 200.168.212.226 where "200.168.212" identifies the network and "226" identifies the host on that network.
Class D Network -- binary addresses start with 1110, therefore the decimal number can be anywhere from 224 to 239. Class D networks are used to support multicasting.
Class E Network -- binary addresses start with 1111, therefore the decimal number can be anywhere from 240 to 255. Class E networks are used for experimentation. They have never been documented or utilized in a standard way.

c. What is Dynamic Host Configuration Protocol? Write a sample dhcpd.conf file.
Using Dynamic Host Configuration Protocol (DHCP), you can have an IP address and the other information automatically assigned to the hosts connected to your network. This method is quite efficient and convenient for large networks with many hosts, because the process of manually configuring each host is quite time-consuming. By using DHCP, you can ensure that every host on your network has a valid IP address, subnet mask, broadcast address, and gateway, with minimum effort on your part. While not absolutely necessary, you should have a DHCP server configured for each of your subnets. Each host on the subnet needs to be configured as a DHCP client. You may also need to configure the server that connects to your ISP as a DHCP client if your ISP dynamically assigns your IP address.

In Fedora Core and Red Hat Enterprise Linux the DHCP server is controlled by the text file /etc/dhcpd.conf. Listing 11-1 shows the configuration file for my system. Comment lines begin with a # sign.

#(The amount of time in seconds that the host can keep the IP address.)
default-lease-time 36000;
#(The maximum time the host can keep the IP address.)
max-lease-time 100000;
#domain name (The domain of the DHCP server.)
option domain-name "tactechnology.com";
#nameserver
option domain-name-servers 192.168.1.1;
#gateway/routers, can pass more than one:
#option routers 1.2.3.4,1.2.3.5;
option routers 192.168.1.1;
#netmask (The subnet mask of the network.)
option subnet-mask 255.255.255.0;
#broadcast address (The broadcast address of the network.)
option broadcast-address 192.168.1.255;
#specify the subnet the addresses get assigned in
#and define which addresses can be used/assigned
subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.1 192.168.1.126;
}
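
After editing /etc/dhcpd.conf, restart the server so the changes take effect:

# service dhcpd restart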

d. How is DHCP client configured? Explain.

First, you need to be sure that your NIC is properly configured and recognized by your system. After that, it is easy to tell your system to use DHCP to obtain its IP information. Follow these steps.
1. Using your favorite text editor, open the /etc/sysconfig/network-scripts/ifcfg-eth0 file.
2. Find the line bootproto=static.
3. Change static to dhcp.
4. Save your changes.
5. Restart the network by issuing the command service network restart, and your system will receive its IP information from the DHCP server.
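
After step 3, the relevant lines of ifcfg-eth0 read simply:

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes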

e. What are the factors deciding the NFS installation? What are the general rules for the design process of NFS?

When you are designing your NFS installation, you need to:
■■ Select the file systems to export
■■ Establish which users (or hosts) are permitted to mount the exported file systems
■■ Identify the automounting or manual mounting scheme that clients will use to access exported file systems
■■ Choose a naming convention and mounting scheme that maintains network transparency and ease of use
A few general rules exist to guide the design process. You need to take into account site-specific needs, such as which file systems to export, the amount of data that will be shared, the design of the underlying network, what other network services you need to provide, and the number and type of servers and clients.

f. Explain the commands associated with NFS.

Ans:-NFS uses standard client/server architecture. The server portion consists of the physical disks containing shared file systems and several daemons that make the shared file systems (or entire disks, for that matter) visible to and available for use by client systems on the network. This process is normally referred to as exporting a file system. Server daemons also provide for file locking and, optionally, quota management on NFS exports. NFS clients simply mount the exported file systems, colloquially but accurately called NFS mounts, on their local system just as they would mount file systems on local disks.

Setting up an NFS Share
Check that the required package is installed:
#rpm -qa nfs-utils

#vim /etc/exports
/nfsshare 192.168.0.0/255.255.255.0(sync)

:wq!

#/etc/init.d/nfs restart

#chkconfig nfs on

Check from the physical machine (remote testing):
#showmount -e 192.168.2.2 
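
To actually use the share, mount it from the client (using the server address from above):

# mount 192.168.2.2:/nfsshare /mnt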



a. Explain the [global] section of smb.conf file.

The first section of the smb.conf file is the [global] section. The [global] section contains settings that apply to the entire server and default settings that may apply to the other shares. The [global] section contains a list of options and values in the following format:
option = value
You have hundreds of options and values at your disposal; the most common ones are covered here. For a complete listing of options, refer to the smb.conf man page. Some of the more significant options are:
■■ workgroup = Tardis — This is the name of the workgroup shown in the identification tab of the network properties box on the Windows computer.
■■ smb passwd file = /etc/samba/smbpasswd — This shows the path to the location of the Samba password file. Be sure that you include this option/value pair in your smb.conf file.
■■ encrypt passwords = yes — Beginning with Windows NT service pack 3 and later, passwords are encrypted. If you are connecting to any systems running these versions of Windows, you should choose encrypted passwords.
■■ netbios name = RHL — This is the name by which the Samba server is known to the Windows computer.
■■ server string = Samba Server — This is shown as a comment on the Windows PC in the network browser.
■■ security = user — This is the level of security applied to server access. Other options are share, domain, and server. Share is used to make it easier to create anonymous shares that do not require authentication, and it is useful when the NetBIOS names of the Windows computers are different from other names on the Linux computer. Server is used to specify the server to use if the password file is on another server in the network. Domain is used if the clients are added to a Windows NT domain using smbpasswd, and login requests are executed by a Windows NT primary or backup domain controller.
■■ log file = /var/log/samba/log — This is the location of the log file.
■■ max log size = 50 — This is the maximum size in kilobytes that the file can grow to.
■■ socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192 — This enables the server to be tuned for better performance. TCP_NODELAY is a default value; the SO_RCVBUF and SO_SNDBUF values set send and receive buffers.

■■ dns proxy = No — This indicates that the NetBIOS name will not be treated like a DNS name and that there is no DNS lookup.
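
Putting the options above together, a minimal [global] section looks like this:

[global]
        workgroup = Tardis
        netbios name = RHL
        server string = Samba Server
        security = user
        encrypt passwords = yes
        smb passwd file = /etc/samba/smbpasswd
        log file = /var/log/samba/log
        max log size = 50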

b. Write and explain the default NTP configuration file.

The following shows ntpd's configuration file, /etc/ntp.conf, stripped of most comments and white space.
restrict default nomodify notrap noquery
restrict 127.0.0.1
# --- OUR TIMESERVERS -----
server pool.ntp.org
server pool.ntp.org
server pool.ntp.org
# --- GENERAL CONFIGURATION ---
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
broadcastdelay 0.008
keys /etc/ntp/keys
The first two entries, beginning with the restrict directive, are, not surprisingly, restrictions on the listed IP addresses or hostnames. The first entry uses the keyword default, which means an IP address and mask of 0.0.0.0. The option flags, nomodify, notrap, and noquery, prevent the listed IP address from modifying, logging, or querying the NTP service on the server. The second rule, restrict 127.0.0.1, permits all NTP activity over the loopback interface. All activity is permitted because there are no option flags specified. To deny all activity, you would use the ignore flag, but you shouldn't do this on the loopback interface because doing so would prevent certain NTP administrative functions (issued using the ntpdc command) from working properly.

The next three entries, beginning with the server directive, identify the time servers you want to use as reference clocks. In this case, ntpd is being configured to use the pool servers. Notice that the names are all pool.ntp.org. Even though the names are the same, the NTP server pool is configured to use DNS round robin, so three hostname lookups on the same name will return three different IP addresses. You can try this yourself to verify that round robin is working. Issue the command host pool.ntp.org at the command prompt and, unless your DNS client is broken, you should see output resembling the following:
$ host pool.ntp.org
pool.ntp.org has address 213.219.244.16
pool.ntp.org has address 216.27.185.42
pool.ntp.org has address 62.220.226.2
pool.ntp.org has address 69.37.143.241
pool.ntp.org has address 81.169.154.44
pool.ntp.org has address 82.219.3.1
pool.ntp.org has address 139.140.181.132
pool.ntp.org has address 146.186.218.60
pool.ntp.org has address 195.18.140.242
pool.ntp.org has address 203.217.30.156
pool.ntp.org has address 209.126.142.251
pool.ntp.org has address 212.23.29.225
pool.ntp.org has address 212.41.248.75
pool.ntp.org has address 212.254.25.164
pool.ntp.org has address 213.10.208.72
Normally, a hostname resolves to one and only one IP address, but when DNS round robin behavior is enabled, a single hostname can resolve to multiple IP addresses, the purpose being to equalize the load on any single system.

The general configuration section sets broad operational policies that control ntpd's overall behavior. The line server 127.127.1.0 instructs the NTP daemon to use the local clock (referred to as an undisciplined local clock) if no external reference clocks are accessible. You can use any address in the range 127.127.1.0 to 127.127.1.255, although the convention is to use 127.127.1.0. The line fudge 127.127.1.0 stratum 10 limits the use of the local clock by assigning it a very low place in the time server hierarchy, the intent being to prevent the local clock from interfering with other, likely more accurate time sources elsewhere on the network and to enable (or, perhaps, compel) ntpd to look pretty hard for other time sources before using the undisciplined local clock. In its normal operation, ntpd listens for broadcasts from other time servers when trying to find a reference clock. If it finds a time server declaring itself at a higher stratum than 10, ntpd will use the higher-stratum clock instead of the undisciplined local clock.
The directive driftfile /var/lib/ntp/drift specifies the name of the file that stores the oscillation frequency of the local clock. NTP uses this frequency, which varies slightly over time, to make appropriate adjustments to the system time. The broadcastdelay directive sets the number of seconds (0.008 in this case) used to calculate the network latency or delay between the local server and a remote reference server. On a LAN, values between 0.003 and 0.007 seconds are suitable, but when two servers must communicate across the Internet, it is often necessary to use a longer delay value. The last line, keys /etc/ntp/keys, tells NTP where to find the cryptographic keys used to encrypt exchanges between client and server machines. The purpose of encrypting the data exchange is to prevent an unauthorized reference server from accidentally or deliberately sending time signals to your local time server. Another reason to use encryption is when you enable remote NTP administration and want to make sure that only properly authorized and authenticated systems can perform remote administration.

c. Enumerate the steps to configure NTP server using autokey encryption.

1. Add the following lines to /etc/ntp.conf:
broadcast 224.0.1.1 autokey
crypto pw serverpassword
keysdir /etc/ntp
Replace serverpassword with a password of your choosing.
2. Generate the key files and certificates using the following commands:
# cd /etc/ntp
# ntp-keygen -T -I -p serverpassword
Using OpenSSL version 90701f
Random seed file /root/.rnd 1024 bytes
Generating IFF parameters (512 bits)...
IFF 0 60 81 1 49 111 2 1 2 3 1 2
Generating IFF keys (512 bits)...
Confirm g^(q - b) g^b = 1 mod p: yes
Confirm g^k = g^(k + b r) g^(q - b) r: yes
Generating new iff file and link
ntpkey_iff_ntpbeast.example.com- \
>ntpkey_IFFpar_ntpbeast.example.com.3318548048
Generating RSA keys (512 bits)...
RSA 0 24 112 1 11 135 3 1 4
Generating new host file and link
ntpkey_host_ntpbeast.example.com- \
>ntpkey_RSAkey_ntpbeast.example.com.3318548048
Using host key as sign key
Generating certificate RSA-MD5
X509v3 Basic Constraints: critical,CA:TRUE
X509v3 Key Usage: digitalSignature,keyCertSign
X509v3 Extended Key Usage: trustRoot
Generating new cert file and link
ntpkey_cert_ntpbeast.example.com->ntpkey_RSA- \
MD5cert_ntpbeast.example.com.3318548048
The output wraps (indicated by \ in the listing) because of page layout constraints.

3. If ntpd is running, restart it:

# service ntpd restart
Shutting down ntpd: [ OK ]
Starting ntpd: [ OK ]
If ntpd is not running, start it:
# service ntpd start
Starting ntpd: [ OK ]
4. Use the following chkconfig commands to make sure that ntpd starts at boot time in all multiuser run levels:
# chkconfig --level 0123456 ntpd off
# chkconfig --level 345 ntpd on
d. Explain the steps to configure a caching proxy server on Linux.
The configuration process includes the following steps:
1. Verifying the kernel configuration
2. Configuring Squid
3. Modifying the Netfilter configuration
4. Starting Squid
5. Testing the configuration
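
As a sketch for step 2, the heart of a minimal squid.conf is an ACL that allows your own network (addresses illustrative):

http_port 3128
acl our_network src 192.168.1.0/24
http_access allow our_network
http_access deny all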
e. How can we optimize NFS? Explain.

■■ Using a journaling file system offers two clear advantages for an NFS server. First, in the event of a crash, journaling file systems recover much more quickly than nonjournaling file systems. If you value your data, use a journaling file system on an NFS server. Second, journaling file systems need only update the journal to maintain data integrity, so an NFS server running a journaling file system "completes" I/O much faster because only the journal needs to be updated. After updating the journal, the server can safely issue an I/O completed reply to the clients. Meanwhile, the actual file system update occurs when the server is less busy.
■■ Spread NFS exported file systems across multiple disks and, if possible, multiple disk controllers. The purpose of this strategy is to avoid disk hot spots, which occur when I/O operations concentrate on a single disk or a single area of a disk. Similarly, distribute disks containing NFS exported file systems across multiple disk controllers. This measure reduces the amount of I/O traffic on any single controller, which improves the overall performance of the I/O subsystem.
■■ Replace IDE disks with serial ATA disks. If you have the budget for it, use FibreChannel disk arrays. FibreChannel, although markedly more expensive than IDE, serial ATA, and even SCSI, offers even faster performance. However, in small shops and for small servers, using FibreChannel is akin to killing gnats with a howitzer.
■■ If your NFS server is using RAID, use RAID 1/0 to maximize write speed and to provide redundancy in the event of a disk crash. RAID 5 seems compelling at first because it ensures good read speeds, which is important for NFS clients, but RAID 5's write performance is lackluster, and good write speeds are important for NFS servers. Write performance is critical because Linux's NFS implementation now defaults to synchronous mode (and has since about kernel version 2.4.7), meaning that NFS operations do not complete until the data is actually synced to disk.
■■ Consider replacing 10-Mbit Ethernet cards with 100-Mbit Ethernet cards throughout the network. Although only slightly more expensive than their 10-Mbit cousins, 100-Mbit cards offer considerably more throughput per dollar. The faster cards result in better network performance across the board, not just for NFS. Of course, to reap the benefits of 100- Mbit cards, they need to be used on clients and servers, and the gateways, routers, and switches must be capable of handling 100-MB speeds.
■■ In situations in which performance is the paramount concern, Gigabit Ethernet (1000 Mbit) is available, although expensive. Other high performance network options, such as Myrinet and SONET (Synchronous Optical Networking), exist as well but are typically used as cluster interconnect solutions rather than as the underlying protocols for general-purpose LANs or intranets.
■■ Replace hubs with switches. Network hubs, while less expensive than switches, route all network traffic across the same data channel. During periods of heavy activity, this single data channel can easily become saturated. Switches, on the other hand, transmit network packets across multiple data channels, reducing congestion and packet collisions and resulting in faster overall throughput.
■■ If necessary, dedicate one or more servers specifically to NFS work. CPU or memory-intensive processes, such as Web, database, and compute servers, can starve an NFS server for needed CPU cycles or memory pages.
■■ An increasingly inexpensive alternative is adding a NAS, or network attached storage, device to the network. A NAS device is effectively a large box of disks attached to the network by assigning the NAS its own IP address. NAS devices speed up file access because file I/O is moved off the departmental or workgroup server and because NAS devices usually have special-purpose high-speed I/O chips. Parking NFS exports on a NAS can improve NFS performance significantly.
■■ A common NFS optimization is to minimize the number of write-intensive NFS exports. Automounted home directories are useful, expected, and therefore hard to eliminate, but for other exports it might be optimal to stop sharing heavily used project directories and require people to access the remote system that houses them via SSH or Telnet, to minimize network traffic.
■■ In extreme cases, resegmenting the network might be the answer to NFS performance problems. Resegmenting the network to isolate NFS traffic on its own network segment reduces network saturation and congestion and allocates dedicated bandwidth to NFS traffic.
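As a concrete illustration of the synchronous default mentioned above, a minimal /etc/exports entry might look like the following sketch (the export path and client network are hypothetical):

/export/home    192.168.1.0/24(rw,sync,no_subtree_check)

The sync option makes the server commit data to stable storage before replying to the client, which is exactly why good write speed matters so much on an NFS server.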

f.Explain the use of the ssh, scp, and sftp services

Secure Shell, also known as SSH, is a secure Telnet replacement that encrypts all traffic, including passwords, using a public/private encryption key exchange protocol. It provides the same functionality as Telnet, plus other useful functions, such as traffic tunneling.
Secure Copy, also known as scp, is part of the SSH package. It is a secure alternative to RCP and FTP, because, like SSH, the password is not sent over the network in plain text. You can scp files to any machine that has an ssh daemon running.
Secure File Transfer Program, also known as sftp, is an FTP client that performs all its functions over SSH.
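A few examples of typical usage (the hostname, user, and file names here are hypothetical):

# ssh admin@server1.example.com
# scp report.txt admin@server1.example.com:/tmp/
# sftp admin@server1.example.com

In each case, authentication and all subsequent traffic are encrypted by the underlying SSH protocol.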

a.What are the files required for configuring the BIND server?

The three required files are:
■■ named.conf — Found in the /etc directory, this file contains global properties and sources of configuration files.
■■ named.ca — Found in /var/named, this file contains the names and addresses of root servers.
■■ named.local — Found in /var/named, this file provides information for resolving the loopback address for the localhost.
The two additional files required for the master domain server are:
■■ zone — This file contains the names and addresses of servers and workstations in the local domain and maps names to IP addresses.
■■ reverse zone — This file provides information to map IP addresses to names.
The following script is used to start the BIND server:
■■ /etc/rc.d/init.d/named — This is the BIND server initialization file used to start BIND. A sample file is installed with the RPM from the Installation CDs.
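As a rough sketch of how these files fit together, a minimal /etc/named.conf for a master server might contain entries such as the following (the example.com zone and its file name are hypothetical):

options {
        directory "/var/named";
};

zone "example.com" IN {
        type master;
        file "example.com.zone";
};

zone "0.0.127.in-addr.arpa" IN {
        type master;
        file "named.local";
};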

b.Explain the components of the email delivery process.

Several key components are essential for email to work properly, and as a system administrator it is your responsibility to configure the following items.
Programs:
■■ A mail user agent for users to be able to read and write email
■■ A mail transfer agent to deliver email messages between computers across a network
■■ A mail delivery agent to deliver messages to users’ mailbox files
■■ A mail-notification program to tell users that they have new mail (optional)
■■ The SMTP protocol for packaging and transferring email messages between MTAs

c.Explain POP3 and IMAP4.

The two principal protocols are Post Office Protocol version 3 (POP3) and Internet Message Access Protocol version 4 (IMAP4). POP3 was developed to solve the problem of what happens to messages when the recipient is not connected to the network. POP3 runs on a server that is connected to a network and that continuously sends and receives mail. The POP3 server stores any messages it receives until the message recipients request them.
The Internet Message Access Protocol version 4 (IMAP4) provides much more sophisticated email-handling functionality than SMTP or POP3 do. IMAP4 enables you to store email on a networked mail server, just as POP3 does. The difference is that POP3 requires you to download your email before your MUA reads it, whereas IMAP4 enables your email to reside permanently on a remote server, from which you can access your mail, and you can do so from your office, your home, your PDA, your cell phone, or anywhere else. Your MUA must understand IMAP4 to retrieve messages from an IMAP4 server.
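To see POP3’s store-and-retrieve model in action, you can speak the protocol by hand. The following session is only a sketch; the server name and mailbox are hypothetical, and the exact greeting varies by server:

# telnet mail.example.com 110
+OK POP3 server ready
USER alice
PASS secret
LIST
RETR 1
QUIT

IMAP4 servers listen on port 143 instead and use a richer command set, since messages remain on the server rather than being downloaded and deleted.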

d.Explain the different ways to maintain email security on Linux.

Protecting against Eavesdropping
Your mail message goes through more computers than just yours and your recipient’s because of store-and-forward techniques. All a cracker has to do to snoop through your mail is use a packet sniffer program to intercept passing mail messages. A packet sniffer is intended to be a tool that a network administrator uses to record and analyze network traffic, but the bad guys use them too. Dozens of free packet-sniffing programs are available on the Internet.
Using Encryption
Cryptography isn’t just for secret agents. Many email products enable your messages to be encrypted (coded in a secret pattern) so that only you and your recipient can read them. Lotus Notes provides email encryption, for example. One common method is to sign your messages using digital signatures, which makes it possible for people to confirm that a message purporting to come from you did in fact come from you. Another typical approach, which can be used with digital signatures, is to encrypt the email itself. Combining digital signatures with encryption protects both the confidentiality of your email and its authenticity. Fedora Core and RHEL ship with GNU Privacy Guard, or GPG, which provides a full suite of digital signature and encryption services.
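For example, GPG can sign and encrypt a message from the command line. This is a minimal sketch; it assumes a key pair already exists, and the recipient address is hypothetical:

# gpg --clearsign message.txt
(produces message.txt.asc, a readable, signed copy)
# gpg --encrypt --recipient bob@example.com message.txt
(produces message.txt.gpg, readable only by the recipient)
# gpg --decrypt message.txt.gpg
(the recipient decrypts it with their private key)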
Using a Firewall
If you receive mail from people outside your network, you should set up a firewall to protect your network. The firewall is a computer that prevents unauthorized data from reaching your network. For example, if you don’t want anything from ispy.com to penetrate your net, put your net behind a firewall. The firewall blocks out all ispy.com messages. If you work on one computer dialed in to an ISP, you can still install a firewall. Several vendors provide personal firewalls, and some of them are free if you don’t want a lot of bells and whistles.
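As an illustrative sketch using iptables (the address range is hypothetical), you could silently drop inbound SMTP traffic from an unwanted network:

# iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 25 -j DROP

A real firewall policy would, of course, be considerably more thorough than a single rule.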

e.Explain any five parameters of vsftpd.conf file.
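As a sketch, five commonly used parameters in /etc/vsftpd/vsftpd.conf are the following (default values vary by distribution):

anonymous_enable=YES    # whether anonymous FTP logins are accepted
local_enable=YES        # whether local system users may log in
write_enable=YES        # whether FTP write commands (STOR, DELE, and so on) are permitted
anon_upload_enable=NO   # whether anonymous users may upload files (requires write_enable)
xferlog_enable=YES      # whether uploads and downloads are recorded in the transfer log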


f.State and explain any five SSL-related configuration directives while running vsftpd over SSL.
Vsftpd’s Secure Sockets Layer (SSL) support convincingly answers one of the primary criticisms of FTP: passing authentication information in clear text. In fact, vsftpd can use SSL to encrypt FTP’s control channel, over which authentication information is passed, and FTP’s data channel, over which file transfers occur. To use SSL with vsftpd, you need to set at least ssl_enable=YES in /etc/vsftpd/vsftpd.conf. If you want to fine-tune vsftpd’s SSL-related behavior, become familiar with vsftpd’s SSL-related configuration directives, listed below.
DIRECTIVE                    DESCRIPTION
allow_anon_ssl=YES           Permits anonymous users to use SSL
dsa_cert_file=path           Specifies the location of the DSA certificate file (optional)
force_local_data_ssl=YES     Forces local users’ FTP sessions to use SSL on the data connection
force_local_logins_ssl=YES   Forces local users’ FTP sessions to use SSL for the login exchange
rsa_cert_file=path           Specifies the location of the RSA certificate file (the default is /usr/share/ssl/certs/vsftpd.pem)
ssl_ciphers=DES-CBC3-SHA     Specifies the SSL ciphers vsftpd will accept
ssl_enable=YES               Enables vsftpd’s SSL support (required for all other ssl_* directives)
ssl_sslv2=YES                Enables vsftpd’s SSL version 2 protocol support
ssl_sslv3=YES                Enables vsftpd’s SSL version 3 protocol support
ssl_tlsv1=YES                Enables vsftpd’s TLS version 1 protocol support
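Putting a few of these directives together, a minimal SSL-enabled fragment of /etc/vsftpd/vsftpd.conf might look like the following sketch (the certificate path assumes the default location noted above):

ssl_enable=YES
allow_anon_ssl=NO
force_local_logins_ssl=YES
force_local_data_ssl=YES
rsa_cert_file=/usr/share/ssl/certs/vsftpd.pem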

a.Explain any five global configuration directives of Apache web server.
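As a sketch, five commonly used global (server-wide) configuration directives in Apache’s httpd.conf are the following (defaults vary by release):

■■ ServerRoot sets the top of the directory tree in which the server’s configuration, error, and log files are kept (for example, /etc/httpd).
■■ Listen sets the IP address and/or port on which httpd accepts connections (for example, Listen 80).
■■ User and Group set the unprivileged identity under which the httpd child processes run (on Red Hat systems, typically apache).
■■ ServerAdmin sets the email address included in error pages returned to clients.
■■ Timeout sets the number of seconds the server waits between protocol events before closing a connection (300 by default).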


b.What are the packages required to configure a secure server with SSL? How can we obtain a digital certificate from a certifying authority?
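On Fedora Core and RHEL, the packages generally required are httpd itself, mod_ssl (the module that adds SSL/TLS support to Apache), and openssl (the underlying SSL toolkit and libraries); exact package names can vary by release. With those installed, you create a private key and a certificate signing request (CSR) and submit the CSR to a certifying authority, as described below.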

For more information about SSL and certificate creation, the following online resources will prove helpful:
■■ Building a Secure RedHat Apache Server HOWTO (www.tldp.org/HOWTO/SSL-RedHat-HOWTO.html)
■■ SSL Certificates HOWTO (www.tldp.org/HOWTO/SSLCertificates-HOWTO/index.html)
■■ OpenSSL Web site (www.openssl.org)
To obtain a digital certificate from a recognized CA, you must create a CSR, as described in the previous section, and submit it to a CA. You also have to pay for the certificate. You can choose from a number of CAs, some of which are shown in the following list. The list is not complete, but should provide a starting point for you:
■■ Cybertrust (betrusted.com/products/ssl/shop/index.asp)
■■ Entrust (entrust.com/certificate_services/)
■■ GeoTrust (geotrust.com/web_security/index.htm)
■■ GlobalSign (globalsign.net/digital_certificate/serversign/index.cfm)
■■ GoDaddy (godaddyssl.com)
■■ Thawte Consulting (.thawte.com/ssl123)
■■ Verisign (verisign.com/products-services/securityservices/ssl/buy-ssl-certificates/index.html)
If you would like to use a free or open source CA, the two best known are:
■■ CAcert (cacert.org)
■■ StartCom Free SSL Certificate Project (http://cert.startcom.org)
Each CA has its own procedures, requirements, and fees for issuing signed certificates, so it isn’t possible to describe them in this space.
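As a rough sketch of the key and CSR generation step common to most of them (the file names are arbitrary), the usual openssl commands are:

# openssl genrsa -out server.key 2048
(generate a private key)
# openssl req -new -key server.key -out server.csr
(generate the CSR to submit to the CA)

The CA returns a signed certificate, which you then reference from the Web server’s SSL configuration.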

c.Explain the working of Apache web server.

To understand Apache, its configuration, and how to fine-tune it for your own environment, you should understand how Web servers work in general. Otherwise, lacking this context, Apache’s behavior and configuration might seem arbitrary. In general terms, the process that takes place when a Web browser requests a page and the Apache Web server responds is as follows. This simplified description disregards the browser cache, content accelerators such as Inktomi and Akamai, and the existence of proxy servers between the user’s browser and the Apache Web server.

The Web client (a browser in this case) first performs a DNS lookup on the server name specified in the URL, obtains the IP address of the server, and then connects to port 80 at that IP address (or to another port if the server is not using the default HTTP port). If you specify an IP address directly, the DNS lookup doesn’t occur. When the connection is established, the client sends an HTTP GET request for the document in the URL, which could be, among other possibilities, a specific HTML document, an image, or a script.

After the server receives the request, it translates the document URL into a filename on the local system. For example, the document URL http://www.example.com/news.html might become /var/www/html/news.html. Next, Apache evaluates whether the requested document is subject to some sort of access control. If no access control is required, Apache satisfies the request directly. If access control is in effect, Apache requests a username and password from the client or rejects the request outright, depending on the type of access control in place.

If the requested URL specifies a directory (that is, the URL ends in /) rather than a specific document, Apache looks for the directory index page, index.html by default, and returns that document to the client. If the directory index page does not exist, Apache might send a directory listing in HTML format back to the client or send an error message, depending on how the server is configured. The document can also be a specially written script, such as a Common Gateway Interface (CGI) script. In this case, Apache executes the script, if permitted to do so, and sends the results back to the client. Finally, after Apache has transmitted the requested document and the client receives it, the client closes the connection and Apache writes an entry in one or more log files describing the request in varying levels of detail.

Depending on how the page is written and what it contains, additional processing takes place during the transfer. For example, embedded scripts or Java applets are transferred to and execute on the client side of the connection; server-side includes, however, are processed on the server side, as are CGI scripts, database access, and so forth.
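To make the GET request concrete, the client’s request for the news page looks roughly like this on the wire (a minimal HTTP/1.1 exchange, with most headers trimmed for brevity):

GET /news.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html

<html>...</html>

The Host header is what allows a single Apache instance to serve multiple name-based virtual hosts at one IP address.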

d.What is the searching and indexing system provided with Linux? Explain its features.

ht://Dig is a complete document searching and indexing system designed for a single domain or an intranet. It is not meant to replace the big global search engines like Google, Yahoo!, or Excite. Rather, it is intended for use on single sites and domains and is especially well suited for intranets, primarily because ht://Dig was initially developed for campus use at San Diego State University.
Although ht://Dig is intended for use on a small scale, the word “small” is relative; it is quite capable of searching sites or domains that comprise multiple servers and thousands of documents. ht://Dig can handle sites or domains that consist of multiple servers because it has a built-in Web spider that can traverse a site and index all the documents it encounters. ht://Dig handles thousands of documents because it uses a static search index that is very fast. Other ht://Dig features include the following:
■■ Character set collation — SGML entities such as &eacute; and ISO-Latin-1 characters can be indexed and searched.
■■ Content exclusion — Support for excluding content from indexing using a standard robots.txt file, which defines files and filename patterns to exclude from searches (see the sample robots.txt after this list).
■■ Depth limiting — Queries can be limited to match only those documents that are a given number of links or clicks away from the initial search document.
■■ Expiration notification — Maintainers of documents can be notified when a document expires by placing special meta-information inside an HTML document (using an HTML <meta> tag) that ht://Dig notices and uses to generate document expiration notices.

■■ Fuzzy searching — ht://Dig can perform searches using a number of well-known search algorithms. Algorithms can be combined. The currently supported search methods include the following:

■■ Accent stripping — Removes diacritical marks from ISO-Latin-1 characters so that, for example, e, è, é, ê, ë, and ē are considered the same letter (e) for search purposes.
■■ Exact match — Returns results containing exact matches for the query term entered.
■■ Metaphones — Searches for terms that sound like the query term, based on an awareness of the rules of English pronunciation.
■■ Prefixes — Searches for terms that have a matching prefix, so, for example, searching for the prefix dia matches diameter, diacritical, dialogue, diabolical, and diadem.
■■ Soundex — Searches for terms that sound like the query term.
■■ Stem searches — Searches for variants of a search term that use the same root word but different stems.
■■ Substrings — Searches for terms that begin with a specified substring, so searching for phon* will match phone, phonetic, and phonics but not telephone.
■■ Synonyms — Searches for words that mean the same thing as the query term, causing ht://Dig to return results that include synonyms.
■■ Keyword optimization — You can add keywords to HTML documents to assist the search engine, using an HTML <meta> tag.
■■ Output customization — Search results can be tailored and customized using HTML templates.
■■ Pattern matching — You can limit a search to specific parts of the search database by creating a query that returns only those documents whose URLs match a given pattern.
■■ Privacy protection — A protected server can be indexed by instructing ht://Dig to use a given username and password when indexing protected servers or protected areas of public servers.
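A minimal robots.txt illustrating the content-exclusion feature mentioned above might look like the following (the paths are hypothetical; ht://Dig’s spider honors the standard robots.txt conventions):

User-agent: *
Disallow: /private/
Disallow: /tmp/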

e.What is an RSS feed? Explain the elements required in an RSS feed.

RSS is an acronym for Really Simple Syndication, Rich Site Summary, or RDF Site Summary, depending on which version of the RSS specification you follow. Regardless of the version you use, RSS defines and implements an XML format for distributing news headlines over the Web, a process known as syndication. To express it more simply and generally, RSS makes it possible to distribute a variety of summary information across the Web in a news-headline-style format. The headline information includes a URL that links to more information. That URL, naturally, brings people to your Web site.
In terms of content, you might include the following types of information:
■■ News and announcements about products, events, press releases, or whitepapers
■■ Listings of new or updated documents (or individual pages), if the Web site you maintain updates documents frequently
■■ Calendars of events, such as company appearances at trade shows, user group meetings, or listings of training sessions
■■ Listings of available jobs
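The required elements themselves are worth sketching. In RSS 2.0, a feed is an <rss> element wrapping a single <channel>; the channel requires <title>, <link>, and <description> elements, and each headline is an <item>. A minimal, hypothetical feed:

<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Company News</title>
    <link>http://www.example.com/</link>
    <description>Product news and announcements</description>
    <item>
      <title>New whitepaper available</title>
      <link>http://www.example.com/whitepaper.html</link>
      <description>A short summary that links back to the site.</description>
    </item>
  </channel>
</rss>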

f.What are the common Mailman administrative tasks? Explain.

On a freshly installed Mailman site or with a newly created list, there are a number of small tasks that most administrators might want or need to perform. Key tasks include the following:
■■ Presubscribing a list of people
■■ Hiding a list from casual browsers of the Mailman interface
■■ Restricting archives access to group members
Mailman’s browser-based interface makes it ridiculously simple to perform all of these tasks.
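Presubscribing can also be done from the command line with Mailman’s add_members script, assuming a standard Mailman 2.x installation layout (the list name and file here are hypothetical):

# /usr/lib/mailman/bin/add_members -r newmembers.txt mylist

where newmembers.txt contains one email address per line.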

a.How can we optimize FTP services on a Linux server? Explain.

Out of the box, vsftpd is pretty darn fast and makes lightweight demands on a system’s memory and CPU resources. If its speed fails to suit you, the following tips, adapted from the vsftpd documentation, might help:
■■ If possible, disable NIS and NIS+ (nis and nisplus) lookups for passwd, shadow, and group in /etc/nsswitch.conf. The idea with this tip is to avoid loading unnecessary runtime libraries into vsftpd’s memory space and to avoid using NIS for lookups that can be resolved more quickly by resorting to file-based lookups.
■■ Break directories with more than a few hundred entries into smaller directories. Many file systems, such as ext2 and ext3, do not handle such cases efficiently at all, and the process of creating listings of large directories (with, for example, the ls or dir commands) causes vsftpd to use moderate amounts of memory and CPU. If you are stuck with large directories, use a file system, such as XFS, JFS, or ReiserFS, designed to work with large directory structures.
■■ Limit the number of simultaneous connections to the FTP server.
■■ More drastically, if the load on your FTP server is bogging down the system, you could disable anonymous FTP altogether or dedicate a machine to providing FTP services.
■■ Take advantage of vsftpd’s bandwidth-throttling features to limit the network bandwidth consumed by any one connection or by classes of connections, as sketched below.
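A sketch of the relevant /etc/vsftpd/vsftpd.conf directives (the limits shown are arbitrary):

max_clients=50          # refuse connections beyond 50 simultaneous sessions
max_per_ip=5            # cap connections from any single client address
local_max_rate=512000   # throttle local users to roughly 500 KB/sec each
anon_max_rate=256000    # throttle anonymous users to roughly 250 KB/sec each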

b.How can we improve the performance of web services on a Linux server? Explain.

The following settings are good starting points for fine-tuning Apache, but they do not exhaust the possibilities:
■■ Increasing the MaxClients setting (to a maximum of 256) increases the maximum number of simultaneous client connections to the httpd server before the server starts refusing additional connections. The default value is 150 (clients). One generally accepted rule-of-thumb formula is:

MaxClients = (Physical RAM – (128 MB + Size of Active Pages)) / Nonshared Memory per httpd Process
The theory is that you should use physical RAM for system resources and caching active pages, and leftover RAM for the httpd processes serving up active pages. If you have more clients, you will end up swapping, which degrades performance; if you have fewer clients, you will not be maximizing the available system resources. In practice, you will have to decide what constitutes an active page. One way to go about this is to use the server logs to evaluate which pages are served more than once every TimeOut period, which defaults to 300 seconds (5 minutes).
■■ The TimeOut directive controls how long the server waits between protocol messages before it closes a connection. The longer the TimeOut directive, the longer a client connection will be tied up and, thus, unavailable to another client. The default value is 300 (seconds).
■■ The MaxRequestsPerChild setting controls how many HTTP requests an httpd child process will service before a new child process starts. The default value is 100, but setting it to 0, for unlimited requests, will work just fine on a Red Hat system.
■■ MaxKeepAliveRequests, 100 by default, sets the upper limit on the total number of requests from the same client on the same connection.

The following tips and suggestions appear in no particular order. Your mileage may vary, and if it breaks, you get to keep both pieces. Some of the following might work better than others; other ideas might fail miserably. If your server is running a lot of CGI scripts or using PHP markup, you should look into resources that discuss Apache tuning in depth. The overhead of PHP and CGI scripts comes from creating new processes rather than merely consuming additional RAM, network, or disk I/O.
■■ Set HostnameLookups to Off. Each resolver call impairs performance. If you need to resolve IP addresses to hostnames, you can use Apache’s logresolve program or one of the resolver programs available in the log reporting and analysis packages.
■■ Similarly, use IP addresses instead of host names in Allow from domain and Deny from domain directives. Each such query, when domain is a name, performs a reverse DNS query followed by a forward query to make sure that the reverse query is not being spoofed. Using IP addresses avoids having to resolve names to IP numbers before performing the reverse and forward queries.
■■ If you do not use Options FollowSymLinks, or if you do use Options SymLinksIfOwnerMatch, Apache performs extra system calls to check symbolic links. For example, suppose that you have the following configuration:

DocumentRoot /var/www/htdocs
<Directory />
    Options SymLinksIfOwnerMatch
</Directory>

If a client then requests /index.html, Apache performs an lstat() system call on /var, /var/www, /var/www/htdocs, and /var/www/htdocs/index.html to check the owner matching of the symbolic link. The overhead of these lstat() system calls occurs for each request, and Apache does not cache the results of the system calls. For the best performance (and, unfortunately, the least security against rogue symlinks), set Options FollowSymLinks for all directories and never set Options SymLinksIfOwnerMatch.
■■ A similar performance problem occurs when you use .htaccess files to override directory settings. In this case, Apache attempts to open .htaccess for each component of a requested filename. For the best performance, use AllowOverride None everywhere in the Web space Apache is serving.
■■ Unless you rely on the MultiViews option, turn it off. It is perhaps the single biggest performance hit you can throw at an Apache server.
■■ Do not use NFS-mounted file systems to store files that Apache serves unless absolutely necessary. Not only is the read performance of NFS slower than that of a local file, but the file being served via NFS might also disappear or change, causing NFS cache-consistency problems. Moreover, if the Apache server is somehow compromised, the NFS mount will be vulnerable.
■■ If you must use NFS-mounted file systems, mount them read-only. Read-only NFS mounts are significantly faster than read/write mounts. Not only will this improve performance, but disabling write access also adds another barrier to bad guys who might compromise the system.
■■ The single most important system resource that Apache uses is RAM. As far as Apache is concerned, more RAM is better because it improves Apache’s ability to store frequently requested pages in its cache. You can also help by limiting the non-Apache processes to the absolute minimum required to boot the system and enable Apache to run — that is, run a dedicated Web server that doesn’t need to share the CPU or memory with other processes. Naturally, a faster CPU, a high-speed Ethernet connection, and SCSI disks are preferable.
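Pulling several of these recommendations together, a performance-oriented fragment of httpd.conf might look like the following sketch (the values are illustrative, not prescriptive for every site):

Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
MaxClients 150
MaxRequestsPerChild 0
HostnameLookups Off

<Directory />
    Options FollowSymLinks
    AllowOverride None
</Directory>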

c.Why should we upgrade or customize the Linux kernel?

The following list summarizes the most common reasons you might want or need to upgrade or customize the kernel on your Fedora Core or RHEL system:
■■ You can recompile the kernel to support your specific CPU, especially features that improve performance. The default Red Hat Linux installation installs a kernel configured to run on the widest possible variety of Intel CPUs. As a result, it does not take advantage of all the features and improvements available in the newest CPUs or motherboard chipsets.
■■ Similarly, the default kernel often includes system features that you do not need or does not include features that you do need or want. Customizing and recompiling the kernel enables you to remove unnecessary or unwanted features and to add needed and desired features.
■■ The default kernel supports an enormous variety of the most common hardware, but no single system needs all of that support. You might want to create a new kernel that includes support for only the hardware actually installed on your system.
■■ If you have a system with hardware not supported when you installed Red Hat Linux or for which only experimental support was available, you can rebuild the kernel to include that support once it becomes available or to improve existing support.
■■ You’re dying to use the latest and greatest bleeding-edge kernel version.

d.Explain useradd command with any five options.

Ans: The general form of the command is useradd [options] LOGIN.


The options which apply to the useradd command are:
-b, --base-dir BASE_DIR
The default base directory for the system if -d HOME_DIR is not specified. BASE_DIR is concatenated with the account name to define the home directory. If the -m option is not used, BASE_DIR must exist. If this option is not specified, useradd will use the base directory specified by the HOME variable in /etc/default/useradd, or /home by default.
-c, --comment COMMENT
Any text string. It is generally a short description of the login, and is currently used as the field for the user's full name.
-d, --home HOME_DIR
The new user will be created using HOME_DIR as the value for the user's login directory. The default is to append the LOGIN name to BASE_DIR and use that as the login directory name. The directory HOME_DIR does not have to exist but will not be created if it is missing.
-D, --defaults
Print or change the default values used by useradd (described in the useradd man page under “Changing the default values”).
-e, --expiredate EXPIRE_DATE
The date on which the user account will be disabled. The date is specified in the format YYYY-MM-DD.
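Putting several of these options together, a sketch of typical usage (the user name, paths, and date are arbitrary):

# useradd -c "Test User" -d /home/testuser -e 2016-12-31 testuser

As noted under -d, the home directory is not created unless you also pass -m.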

e.What is sudo? What are its features?

Considering root’s privileges, you can easily understand why root access on a Linux system is carefully protected and the root password tightly guarded. Nevertheless, it is often desirable to grant a nonroot user (humorously referred to as a merely mortal user) privileges that have traditionally been solely root’s domain, such as printer management, user account administration, system backups, or maintaining a particular Internet service. In many environments, subdividing system administration responsibilities is a necessity because the responsibilities of maintaining multiple servers in a large IT shop or ISP can quickly overwhelm a single individual. The problem in such a situation is clear: how do you grant administrative privileges to merely mortal users without providing unfettered root access?
Sudo, a mnemonic for superuser do, is one solution. Sudo enables you to give specific users or groups of users the ability to run some (or all) commands requiring root privileges. Sudo also logs all commands executed, allowing you to maintain an audit trail of what was executed, by whom, and when. As the README in the source distribution states, Sudo’s “basic philosophy is to give as few privileges as possible but still allow people to get their work done.” Sudo’s features include:
■■ Enabling the ability to restrict the commands a given user may run on a per-host basis.
■■ Maintaining a clear audit trail of who did what. The audit trail can use the system logger or Sudo’s own log file. In fact, you can use Sudo in lieu of a root shell to take advantage of this logging.
■■ Limiting root-equivalent activity to a short period of time using timestamp-based “tickets,” thus avoiding the potential of leaving an active root shell open in environments where others can physically get to your keyboard.
■■ Allowing a single configuration file, /etc/sudoers, to be used on multiple machines, permitting both centralized Sudo administration and the flexibility to define a user’s privileges on a per-host basis.
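A short, hypothetical /etc/sudoers fragment illustrating per-user, per-host command restriction (always edit this file with visudo; the user, group, and host names are placeholders):

# user1 may manage the print queue, but only on host1
user1   host1 = /usr/sbin/lpc, /usr/bin/lprm

# members of the admins group may run anything, anywhere
%admins ALL = (ALL) ALL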

f.State and explain any five rpm command line options.

Ans: The following options can be used in all of rpm's modes:
-?, --help
Print a longer usage message than normal.
--version
Print a single line containing the version number of rpm being used.
--quiet
Print as little as possible - normally only error messages will be displayed.
-v
Print verbose information - normally routine progress messages will be displayed.
-vv
Print lots of ugly debugging information.
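For example (the package file name below is hypothetical):

# rpm --version
# rpm -Uv foo-1.0-1.i386.rpm
(verbose upgrade or install, printing each step)
# rpm -Uvv foo-1.0-1.i386.rpm
(the same operation with debugging output)
# rpm --quiet -U foo-1.0-1.i386.rpm
(the same operation, printing only errors)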