Thursday, September 16, 2010

Configuring software RAID in CentOS Linux


This article explains what RAID is, what the important RAID levels are, and how to install and configure a RAID device on a CentOS or Red Hat Linux system using the mdadm software. This was tested on RHEL 5 and also works with other distributions such as Fedora, CentOS, etc.

What is RAID?
RAID stands for Redundant Array of Independent (or Inexpensive) Disks. It is a storage technology that combines multiple disk drives into a single logical unit. Data is distributed across the drives in one of several ways, called "RAID levels", depending on the level of redundancy and performance required. RAID is mainly used for data protection: it protects stored data from disk failures and data loss. Most enterprise storage systems use RAID technology.

It has the following uses:
1. Data protection
2. Increasing performance

Types of RAID:
There are many RAID levels, but the main ones are:
1. Level 0 or Striping
2. Level 1 or Mirroring
3. Level 5 or Striping + Parity

Level 0:
It is also known as striping. A hard disk is a block device: data is read from and written to it in blocks.
Suppose we have a data block as below:
1 0 1 1
Suppose each bit takes one CPU clock cycle to write to a disk. In total, writing this block will take 4 clock cycles.

With striping:
In striping we use "N" hard disks. RAID divides the data block by "N" and writes each part to a different disk in parallel.

If we have 4 hard disks, the same write takes only about one clock cycle with Level 0 RAID, since the four bits are written in parallel.
RAID 0 is best where raw performance matters more than safety, but it is unsafe: there is no redundancy, so if any one disk fails, all data on the array is lost.

Level 1:
Also known as mirroring. One disk is an exact replica of the other: whatever is written to the primary disk is also written to the mirror disk. Reads can be performed from both disks simultaneously, which improves read performance.
However, only 50% of the total raw capacity is usable.

Level 5:
It is a combination of striping and parity and needs at least three hard disks. Both parity and data are distributed across all of the disks. If one disk fails, its data can be regenerated from the data and parity information on the remaining disks. For example, if one data block is 1011 and another is 0110, the parity block stored is their XOR, 1101; a lost block can be rebuilt by XOR-ing the surviving block with the parity.

Disk or partition requirements
RAID 5: needs at least 3 disks
RAID 0: needs at least 2 disks
RAID 1: needs at least 2 disks

Implementation of RAID level 5
Here we'll show how to create a Level 5 RAID device. We use three devices: /dev/sdb, /dev/sdc and /dev/sdd. Keep in mind that in a real environment these should be three different physical hard disks. A quick sanity check of the devices is sketched below.
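Before creating the array, it is worth confirming that the devices exist and are not already in use. This is only a minimal sketch; if you use partitions instead of whole disks, also set their partition type to "fd" (Linux raid autodetect) in fdisk.
#fdisk -l | grep sd              #list the available disks/partitions
#cat /proc/mdstat                #confirm no existing md array is using them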

The following command will create a RAID device /dev/md0 with level 5:
#mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd{b,c,d}
mdadm: array /dev/md0 started.

If we run the command "watch cat /proc/mdstat" in another terminal while the above command is building the array, we can see the progress of the md RAID device creation. The output will be as given below.

[root@server ~]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat                                                                      Sat Mar  3 19:21:31 2012
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[3] sdc[1] sdb[0]
      10485632 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [======>..............]  recovery = 33.5% (1759128/5242816) finish=1.4min speed=39024K/sec
unused devices: <none>

Status when finished.
[root@server ~]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat                                                                      Sat Mar  3 19:23:27 2012
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[2] sdc[1] sdb[0]
      10485632 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

To see the details of the created RAID device:
[root@server ~]# mdadm --detail /dev/md0
 /dev/md0:
 Version    : 00.90.03
 Creation Time : Sat Mar  3 19:20:43 2012
 Raid Level : raid5
 Array Size : 10485632 (10.00 GiB 10.74 GB)
 Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
 Raid Devices : 3
 Total Devices : 3
 Preferred Minor : 0
 Persistence : Superblock is persistent
 Update Time : Sat Mar  3 19:23:18 2012
 State : clean
 Active Devices : 3
 Working Devices : 3
  Failed Devices : 0
  Spare Devices : 0
  Layout : left-symmetric
  Chunk Size : 64K
  UUID : 4e4f2828:3bbe8227:61435180:8ac962cf
  Events : 0.2
  Number   Major   Minor   RaidDevice State
      0       8       16        0      active sync   /dev/sdb
      1       8       32        1      active sync   /dev/sdc
      2       8       48        2      active sync   /dev/sdd
[root@server ~]#

You have to create a configuration file for the RAID device created by mdadm; otherwise it won't survive a reboot.
[root@server ~]# mdadm --detail --verbose --scan >> /etc/mdadm.conf
[root@server ~]# cat  /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=4e4f2828:3bbe8227:61435180:8ac962cf
   devices=/dev/sdb,/dev/sdc,/dev/sdd
[root@server ~]#

Formatting the RAID device
[root@server ~]# mke2fs -j /dev/md0
or
[root@server ~]# mkfs.ext3  /dev/md0

Creating a mount point
#mkdir /data

Mounting the RAID device on the mount point
#mount /dev/md0 /data

Making the mount permanent by adding it to fstab
#vi /etc/fstab
/dev/md0 /data ext3 defaults 0 0
:wq
#mount -a
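To confirm the array is mounted with the expected size, a quick check using the /data mount point from above:
#df -h /data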

Now we will create a file of roughly 100 MB in /data.
[root@server data]# dd if=/dev/zero of=bf bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 1.85005 seconds, 55.3 MB/s
[root@server data]#

Now we will test the RAID by manually failing one partition/disk.
[root@server data]# mdadm /dev/md0 --fail /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
[root@server data]#

Now the status is 
[root@server ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Mar  3 19:20:43 2012
Raid Level : raid5
Array Size : 10485632 (10.00 GiB 10.74 GB)
Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Mar  3 19:56:08 2012
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 4e4f2828:3bbe8227:61435180:8ac962cf
Events : 0.6
Number   Major   Minor   RaidDevice State
   0       8       16        0      active sync   /dev/sdb
   1       0        0        1      removed
   2       8       48        2      active sync   /dev/sdd
   3       8       32        -      faulty spare   /dev/sdc
[root@server ~]#

Now we can remove the faulty one by
[root@server data]# mdadm /dev/md0 --remove /dev/sdc
mdadm: hot removed /dev/sdc
[root@server data]#

Now the status is 
[root@server ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Mar  3 19:20:43 2012
Raid Level : raid5
Array Size : 10485632 (10.00 GiB 10.74 GB)
Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Mar  3 20:04:16 2012
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 4e4f2828:3bbe8227:61435180:8ac962cf
Events : 0.8
Number   Major   Minor   RaidDevice State
   0       8       16        0      active sync   /dev/sdb
   1       0        0        1      removed
   2       8       48        2      active sync   /dev/sdd
[root@server ~]#

Now we add a new disk as a replacement for the one we removed.
[root@server data]# mdadm /dev/md0 --add /dev/sde
mdadm: added /dev/sde
[root@server data]#

You can see the data being rebuilt using the data and parity information from the other two disks.
[root@server ~]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat                                                                      Sat Mar  3 20:08:19 2012
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde[3] sdd[2] sdb[0]
      10485632 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [===>.................]  recovery = 18.9% (994220/5242816) finish=1.8min speed=38239K/sec
unused devices: <none>

Also check the array details:
[root@server data]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Mar  3 19:20:43 2012
Raid Level : raid5
Array Size : 10485632 (10.00 GiB 10.74 GB)
Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Mar  3 20:04:16 2012
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 77% complete
UUID : 4e4f2828:3bbe8227:61435180:8ac962cf
Events : 0.8
Number   Major   Minor   RaidDevice State
   0       8       16        0      active sync   /dev/sdb
   3       8       64        1      spare rebuilding   /dev/sde
   2       8       48        2      active sync   /dev/sdd
[root@server data]#

This is how we can create a RAID device with level 1:
#mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda{5,6}

This is how we can create a RAID device with level 0:
#mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda{5,6}

Stopping mdadm
*Unmount /dev/md0 before stopping the array.
[root@server ~]# umount /data/
[root@server ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@server ~]#

If you want to create additional devices (i.e., when /dev/md0 already exists), you may need to add the "-a yes" option to the mdadm command so that the device node is created.

For example,
#mdadm --create /dev/md1 -a yes --level=0 --raid-devices=2 /dev/sda{5,6}

Adding a spare disk
We can also specify a spare device at the time of creating the RAID array.

[root@server ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sd{b,c,d}

See
[root@server ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Mar  3 20:36:58 2012
Raid Level : raid1
Array Size : 5242816 (5.00 GiB 5.37 GB)
Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 2
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Mar  3 20:37:25 2012
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
UUID : 17b820a4:61fc941e:267bf6c8:8adff61a
Events : 0.2
Number   Major   Minor   RaidDevice State
   0       8       16        0      active sync   /dev/sdb
   1       8       32        1      active sync   /dev/sdc
   2       8       48        -      spare   /dev/sdd
[root@server ~]#

For configuring LVM in CentOS, kindly check the following post:
Configuring lvm in Linux

How to install and configure a mail server using Postfix + Dovecot + SquirrelMail in Linux


This post helps to install and configure a mail server with Postfix as the MTA (Mail Transfer Agent), Dovecot as the MDA (Mail Delivery Agent) and SquirrelMail as the MUA (Mail User Agent). This is a simple basic configuration without many advanced options. It was tested on Red Hat Linux and will also work on other Red Hat-based distros like Fedora, CentOS, etc.


This assumes you have a configured yum server/repository; otherwise install from the RPMs directly.
#yum install postfix* dovecot* squirrelmail*

Steps

1. Configure the DNS, e.g. for the domain example.com.

2. Select Postfix as the default MTA. On most systems the default will be Sendmail.

#alternatives --config mta
Select postfix.

3. Open the Postfix configuration file and edit the following.

#vi /etc/postfix/main.cf

Edit the following parameters:

 1. mydomain
 2. myhostname
 3. inet_interfaces

and reload the service. A sketch of the edited lines is given below.
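For illustration only, the edited lines in /etc/postfix/main.cf might look like this; example.com and mail.example.com are placeholder values for your own domain and host name:

mydomain = example.com
myhostname = mail.example.com
inet_interfaces = all

Then restart Postfix and enable it at boot:
#service postfix restart
#chkconfig postfix on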

4. Configure SquirrelMail
#cd /usr/share/squirrelmail/config/

Run the Perl configuration script:
#./conf.pl

Give:
 1. Domain name
 2. Host name [FQDN]
 3. Protocol

5. Configure Dovecot
#vi /etc/dovecot.conf

protocols = imap

save it and restart the service.

#service dovecot restart
#chkconfig dovecot on

6. Add the MX entry to the DNS zone. Don't forget to give the priority; an example record is sketched below.
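A minimal sketch of the zone-file records, assuming the mail host is mail.example.com at 192.168.1.10 (both are placeholder values for your own setup):

example.com.        IN  MX  10  mail.example.com.
mail.example.com.   IN  A   192.168.1.10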

7. Make sure the hostname resolves, by adding it to /etc/hosts; see the example below.
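An illustrative /etc/hosts entry, using the same placeholder IP and host name as above:

192.168.1.10   mail.example.com   mail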

8. Start httpd [Apache]; SquirrelMail is served through it.
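For example:
#service httpd start
#chkconfig httpd on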

9. That's it. You can now access the webmail at:

http://example.com/webmail

Tuesday, September 14, 2010

How to install and configure a mail server using Sendmail + Dovecot + SquirrelMail in Linux


This document helps to configure a mail server with Sendmail as the MTA (Mail Transfer Agent), Dovecot as the MDA (Mail Delivery Agent) and SquirrelMail as the MUA (Mail User Agent). This is a simple basic configuration without many advanced options. It was tested on Red Hat Linux and will also work on other Red Hat-based distros like Fedora, CentOS, etc.

Steps:

#yum -y install sendmail* dovecot* squirrelmail* bind*      #Bind for DNS
#yum -y install caching-*                                    #for DNS

Remove any hostname conflicts from /etc/hosts and /etc/sysconfig/network.
You must set a Fully Qualified Domain Name [FQDN]; a sketch is given below.
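An illustrative way to set the FQDN, with mail.example.com as a placeholder host name:

#hostname mail.example.com
#vi /etc/sysconfig/network
HOSTNAME=mail.example.com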

#sysctl -p

#rm -rf /etc/mail             #if necessary
#rm -rf /var/named              #If necessary. If DNS already exists, don't do this.

#service network restart

Now configure the DNS. If it already exists, just add the required entries to it.

Configure the DNS for the domain example.com.
Check that the following command returns the correct IP:

#nslookup example.com

Now configuring the MTA (Sendmail)

#vi /etc/mail/sendmail.mc

Comment out lines 116 and 155, i.e. add 'dnl' to the beginning of those lines (the exact line numbers may vary between releases).
eg:

dnl DAEMON_OPTIONS

Uncomment line 160, i.e. remove 'dnl' from the beginning of the line, and set your domain as the masquerade address.
eg:

MASQUERADE_AS(`example.com')dnl

#cd /etc/mail
#make
#service sendmail restart
#chkconfig sendmail on

Now add your domain to Sendmail's local-host-names file, so that it accepts mail for the domain:
#vi /etc/mail/local-host-names

example.com

save it

Now configuring the MDA (Dovecot):
#vi /etc/dovecot.conf

protocols = imap

save it and restart the service.

#service dovecot restart
#chkconfig dovecot on

Now configure the MUA (SquirrelMail):

#cd /usr/share/squirrelmail/config/

Run the Perl configuration script:
#./conf.pl

Give:
 1. Domain name
 2. Host name
 3. Protocol

Configuring DNS

Add the NS and MX records to the zone (an example is sketched below):
IN NS 
IN MX 
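For illustration, the records in the example.com zone file might look like this, with ns1.example.com and mail.example.com as placeholder host names:

example.com.    IN  NS  ns1.example.com.
example.com.    IN  MX  10  mail.example.com.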

Save it and reload the named service.

Add the name server IP to /etc/resolv.conf; an example is shown below.
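An illustrative entry, assuming the DNS server is at the placeholder address 192.168.1.10:

nameserver 192.168.1.10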

You can access the webmail through
http://example.com/webmail

How to connect, install and configure TATA Photon in Linux


This post explains how to connect your Tata Photon dongle to a Red Hat Linux system. It also works on other Red Hat-based distributions like Fedora and CentOS.

The procedure is explained step by step below.

Steps.
1. Connect / plug your Photon+ into the system and wait till it gets detected.
2. Open a terminal, run the command "dmesg" and check that it shows the modem name as HUAWEI.
3. Run "sudo wvdialconf" (wvdial's configuration tool); it will detect the modem and create a config file at /etc/wvdial.conf, something like the one shown below.
(If you don't have wvdial, you can download it from open.alumnit.ca.)

You can view it by running "cat /etc/wvdial.conf"
or edit it using the command "vi /etc/wvdial.conf".

#
[Dialer Defaults]
Modem = /dev/modem
Baud = 115200
Modem Type = Analog Modem
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
[Dialer info]
Init9 = AT&V
[Dialer photon+]
Modem = /dev/modem
Baud = 115200
Modem Type = Analog Modem
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Phone =
Username =
Password =
Auto DNS = off
#

Some of the fields may already be filled in. A filled-in sketch of the photon+ dialer section is shown below.
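For illustration only, a completed [Dialer photon+] section might look like this; the dial number *777 and the use of your Photon phone number as user name and password come from the note in step 6 below:

[Dialer photon+]
Modem = /dev/modem
Baud = 115200
Modem Type = Analog Modem
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Phone = *777
Username = <your photon number>
Password = <your photon number>
Auto DNS = off

You can then dial this section with "wvdial photon+".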

4. In the terminal, connect by running "wvdial".
5. Using the Network Manager applet is easier, and it connects automatically whenever you plug in the card.
6. In Network Manager, you can create a profile by configuring network -> analog POTS -> setup.
   Fill in the data as you need. Leave the IP & gateways as defaults. Type the user name/pass phrase and dial number
   (usually the user name/pass phrase are your phone number & the dial number is *777).
7. Connect the profile and check that it is working.
8. That's it. Now check your mails!