Tuesday, December 18, 2012

configuring tftp server in Ubuntu


We use a tftp server for network boots, loading images, and so on. In this post we will see how to set up a tftp server in Ubuntu.

Install xinetd:
tftpd doesn't run as a standalone daemon; it relies on xinetd for service control, and xinetd is not installed in Ubuntu by default.
apt-get install xinetd

Now install tftpd:
apt-get install tftpd
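
tftpd is then controlled through a file under /etc/xinetd.d/. As a rough sketch (the server path, options and the /tftpboot directory are assumptions that vary between tftpd packages), the service definition usually looks like this:

# /etc/xinetd.d/tftp
service tftp
{
        socket_type = dgram
        protocol    = udp
        wait        = yes
        user        = root
        server      = /usr/sbin/in.tftpd
        server_args = /tftpboot
        disable     = no
}

Reload xinetd after creating the file:
/etc/init.d/xinetd restart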

Saturday, November 3, 2012

benchmarking with http_load



There are a lot of tools available to benchmark a web server. One of the most useful is http_load, and it is very simple to use. First download and install http_load, then test the web server with a list of URLs and different options. http_load runs multiple HTTP fetches in parallel to test performance and gives you a rough idea of how many bytes a server can serve in a given time period.
Install the http_load tool:
wget http://www.acme.com/software/http_load/http_load-12mar2006.tar.gz
tar xvzf http_load-12mar2006.tar.gz
cd http_load-12mar2006
make
make install #if required
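
A typical test run, as a minimal sketch (urls.txt is an assumed file with one URL per line), looks like this:

echo "http://www.example.com/index.html" > urls.txt
./http_load -parallel 5 -seconds 10 urls.txt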

Monday, October 15, 2012

No space left on device: mod_jk: could not create jk_log_lock


I got this error while working on Apache.
[crit] (28)No space left on device: mod_jk: could not create jk_log_lock
Configuration Failed

It was due to the kernel semaphore limit being reached.
[root@host ~]# ipcs -su
------ Semaphore Status --------
used arrays = 127
allocated semaphores = 127
[root@host~]#
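
As a rough sketch of two possible fixes (the limit values and the apache owner name below are assumptions, check your own system first):

# remove stale semaphores owned by the apache user
ipcs -s | grep apache | awk '{print $2}' | xargs -n 1 ipcrm -s
# or raise the kernel semaphore limits and reload
echo "kernel.sem = 250 32000 32 128" >> /etc/sysctl.conf
sysctl -p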

Thursday, October 11, 2012

Installing SSL certificate on Apache with Tomcat


I have an Apache-Tomcat stack in which Apache is the front proxy and Tomcat serves the content. How do we install SSL in this scenario: on Apache or on Tomcat? In my case Apache and Tomcat are connected using mod_jk, and I installed SSL on Apache. Here are the steps I followed; comment if you know a better method. The operating system used is CentOS 5.4, the Apache version is httpd-2.2.3-65.el5.centos, and it was tested with Tomcat 5 and 6.

Check here for Installing and configuring Apache with tomcat using mod_jk
Generating key and csr:
yum install mod_ssl openssl
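
A typical way to generate the private key and CSR (the file names here are only examples):

openssl genrsa -out yourdomain.key 2048
openssl req -new -key yourdomain.key -out yourdomain.csr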

Sunday, October 7, 2012

url redirection in apache using proxypass


URL redirection in the Apache web server.
Here is a small example of URL redirection in Apache using ProxyPass. I used this when Apache was acting as a proxy to Apache Tomcat via mod_jk.

Example:
You want to forward
www.yourdomain.com/abc to www.yourdomain.com/linux/commands/abc
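
A minimal sketch of the directives (assuming mod_proxy and mod_proxy_http are loaded; place them inside your VirtualHost):

ProxyPass        /abc http://www.yourdomain.com/linux/commands/abc
ProxyPassReverse /abc http://www.yourdomain.com/linux/commands/abc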

Wednesday, September 5, 2012

Configuring multiple domains or sub-domains in tomcat



We all know how to create multiple domains in Apache by adding virtual host entries. But how do we configure multiple domains in Tomcat? We can do this by adding multiple Host tags in server.xml. It's very simple; see the example below (edit server.xml under the conf directory). Suppose you want to set up three domains, domain1.com, domain2.com and domain3.com, all pointing to the same IP address on the server.
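
A minimal sketch of such Host entries inside the Engine element of conf/server.xml (the appBase paths are examples):

<Host name="domain1.com" appBase="webapps/domain1" unpackWARs="true" autoDeploy="true"/>
<Host name="domain2.com" appBase="webapps/domain2" unpackWARs="true" autoDeploy="true"/>
<Host name="domain3.com" appBase="webapps/domain3" unpackWARs="true" autoDeploy="true"/>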

Saturday, August 11, 2012

Creating multiple user login in Amazon ec2



We know we can log in to Amazon EC2 Linux instances with our .ppk/.pem keys, but that is restricted to a single root user. How do we create more normal users and let them log in to the instance as well? Of course they can't use the root user's key, so we have to create new login keys for them. This post also applies to normal systems; we will see how to set up key-based authentication for normal users.
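
As a rough sketch of the idea ("newuser" is an example name), on the instance:

useradd newuser
su - newuser
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa                                  # generate a key pair for the new user
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# hand the private key to the user; they can then log in with: ssh -i newuser_key newuser@host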

password protected directory in tomcat



How do you protect a web directory with a password? With Apache we can do it easily with .htaccess, which prompts the user for credentials when entering the directory. But how do you protect a directory with a password in the Tomcat web server? In this post we will discuss how to do it with Tomcat Realms. This example was tested on Tomcat 7 and Tomcat 6.
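
As a rough sketch of the pieces involved (the user, password, role and URL pattern below are just examples): define a user and role in conf/tomcat-users.xml and a matching security constraint in the application's WEB-INF/web.xml.

<!-- conf/tomcat-users.xml -->
<role rolename="admin"/>
<user username="testuser" password="secret" roles="admin"/>

<!-- WEB-INF/web.xml -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Protected</web-resource-name>
    <url-pattern>/protected/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>admin</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Protected Area</realm-name>
</login-config>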

Wednesday, July 4, 2012

setting nameserver ip addresses on android


Almost all Android devices use DHCP to get an IP address and nameservers. But how do you set custom nameservers on an Android device? Is there a resolv.conf on Android, and if not, what is used instead? In this article we will see how.

From the command prompt:
Run the following commands to set the nameservers. We will use Google's public nameserver IPs 8.8.8.8 and 8.8.4.4 in this example.
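
On older Android versions this was commonly done with setprop from an adb shell or a rooted terminal emulator; treat the property names below as an assumption, since they vary between Android releases:

setprop net.dns1 8.8.8.8
setprop net.dns2 8.8.4.4
getprop net.dns1      # verify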

Saturday, June 30, 2012

Creating and dropping indexes in MySQL table


Indexes increase the speed of select queries. At the same time, they decrease insert performance when a table carries many indexes. How do you add an index to a MySQL table, and how do you drop one? We will discuss both in this article.


To show the indexes of a table we can use the show indexes from command. Here we can see that there is only one index in the table, the primary index, based on the column employee_id.
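
A small sketch of the statements involved ("employees" and "employee_name" are example names):

SHOW INDEXES FROM employees;
CREATE INDEX idx_name ON employees (employee_name);
-- or equivalently: ALTER TABLE employees ADD INDEX idx_name (employee_name);
DROP INDEX idx_name ON employees;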

Benchmarking MySQL with mysqlslap


I have been looking at different tools for benchmarking a MySQL server. After going through a lot of blogs and manuals I decided to use a tool named mysqlslap. In this post we will discuss how to install and use mysqlslap. We are using CentOS Linux 5.4 for this test.

Luckily mysqlslap ships with the mysql-client package itself, for versions 5.1.4 and above. So you can install MySQL either with yum or from the rpms available for download on mysql.com.

I have tested it with MySQL-client-5.5.25-1.rhel5.i386.rpm.
Just install it as 
#rpm -ivh MySQL-client-5.5.25-1.rhel5.i386.rpm
Then you will get the command "mysqlslap"
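
A minimal example run (the concurrency and iteration counts are arbitrary):

mysqlslap --user=root --password --auto-generate-sql --concurrency=50 --iterations=10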

Thursday, June 28, 2012

importing csv file to MySQL database table



Sometimes we need to get MySQL tables into an Excel sheet; that case is covered in our previous post, Dumping MySQL table into CSV file. But how do you import a csv file into a MySQL database table? We will discuss that in this post.



Login to your MySQL server.
[root@database ~]# mysql -p
Enter password:
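
Once logged in, a LOAD DATA statement along these lines can import the file (the path, table name and delimiters are assumptions, adjust them to your csv):

LOAD DATA INFILE '/tmp/people.csv'
INTO TABLE people
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';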

Thursday, June 7, 2012

Things to consider in video streaming


Video streaming is gaining momentum. Nowadays nobody wants to download videos to watch them; why download a video if you can watch it online? But nobody can enjoy a video if its quality is bad or if it periodically pauses for buffering. So every system administrator must be aware of a few aspects of streaming video. In this post we discuss the main points: resolution, encoding, frame rate, bit rate, data rate, aspect ratio and data transfer rate.

Thursday, May 31, 2012

Dumping MySQL table into CSV file


Sometimes we need to get MySQL tables into an Excel sheet. How can this be done? We can dump the MySQL tables into a csv format file. Let's see how to do it.
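
A minimal sketch of the idea (table and output path are examples):

SELECT * FROM people
INTO OUTFILE '/tmp/people.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';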

Monday, May 21, 2012

Configuring daily backup of amazon centos instance


Configuring daily backup of amazon linux centos instance to amazon s3 storage

How do you back up your Amazon EC2 (Elastic Compute Cloud) Linux instance? Did you buy costly backup software for the daily backup task? Here we will discuss how to back up your Amazon EC2 Linux instance to an Amazon S3 bucket using a bash script scheduled with cron on a daily basis.
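
A bare-bones sketch of such a script (the directories, bucket name and schedule are placeholders, and s3cmd must already be configured):

#!/bin/bash
# daily_ec2_backup.sh - archive a few directories and push the archive to S3
DATE=$(date +%F)
tar czf /backup/etc-www-$DATE.tar.gz /etc /var/www
s3cmd put /backup/etc-www-$DATE.tar.gz s3://my-backup-bucket/ec2/
# cron entry (daily at 01:00):  0 1 * * * /root/daily_ec2_backup.sh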

Configuring daily backup of amazon RDS server


Configuring daily backup of amazon RDS server to amazon s3 storage

How do you back up your Amazon RDS (Relational Database Service) MySQL server? Did you buy a costly backup manager for the daily backup task? Here we will discuss how to back up your Amazon RDS MySQL databases to an Amazon S3 bucket using a bash script scheduled with cron on a daily basis.
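
A bare-bones sketch of such a script (the endpoint, credentials and bucket are placeholders, and s3cmd must already be configured):

#!/bin/bash
# daily_rds_backup.sh - dump all databases from RDS and push the dump to S3
DATE=$(date +%F)
mysqldump -h my-rds-endpoint.rds.amazonaws.com -u backupuser -pPASSWORD --all-databases > /backup/rds-$DATE.sql
gzip /backup/rds-$DATE.sql
s3cmd put /backup/rds-$DATE.sql.gz s3://my-backup-bucket/rds/
# cron entry (daily at 02:00):  0 2 * * * /root/daily_rds_backup.sh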

Friday, May 18, 2012

Amazon Elastic Load balancing with autoscaling


Suppose you have a website, linxhelp.in, and sometimes it gets traffic that a single system cannot handle. What do you do? You create multiple instances and configure a load balancer. But then you have to keep both instances running even when there is not much traffic. We will discuss how to eliminate this waste of resources using Amazon Elastic Load Balancing and Amazon Auto Scaling. To use this, your instances should be running in Amazon Elastic Compute Cloud (EC2).

Monday, April 30, 2012

Please login as the ec2-user user


When logging in to your Amazon EC2 instance via PuTTY or the command line, it is possible that you get this error.

Authenticating with public key "imported-openssh-key"
Please login as the ec2-user user rather than root user.

Friday, April 27, 2012

installing s3cmd in ubuntu


s3cmd is a command line tool for uploading, downloading and managing files and directories with Amazon Simple Storage Service (S3). It is very useful when running scripts and scheduling them with cron. First you have to install the s3cmd package, which is available from s3tools.org. In this post we discuss how to install and configure s3cmd on Ubuntu or Debian systems.
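
On recent Ubuntu/Debian releases the short path, as a sketch, is simply:

apt-get install s3cmd
s3cmd --configure        # asks for your access key and secret key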

sms notification using nagios


We have discussed how to install and configure the Nagios monitoring system and how to configure NRPE with Nagios in previous posts. But Nagios only sends email notifications by default. What if we cannot access mail, the internet is down, or we simply forget to check the mailbox? It is always better to have an additional notification channel, and SMS is a good alternative: we can know that a service or host is down even without checking mail. But how do we enable notifications via SMS? What are the requirements, which files need to be modified, and how do we configure the SMS gateway? We discuss all of this in this post.

Thursday, April 26, 2012

checking cpu architecture in linux


You may have to check whether the architecture of your Linux system is 32-bit or 64-bit. One thing to keep in mind is whether you are checking the architecture of the installed OS kernel or of the underlying CPU: it is possible that the operating system is 32-bit while the CPU has 64-bit support. This post explains how to check the architecture of both the Linux operating system and the CPU.
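
A quick sketch of the checks:

uname -m                             # i686 = 32-bit kernel, x86_64 = 64-bit kernel
grep -w lm /proc/cpuinfo | head -1   # the "lm" (long mode) flag means the CPU supports 64 bit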

Monday, April 23, 2012

Dumping mysql database schema only


If you want to dump only the schema of a database, execute the following command.
#mysqldump -u root -pPASSWORD -d -h Host_name_or_Ip_Address database_name > database_name.sql
It will dump the schema to the file database_name.sql

Wednesday, April 18, 2012

replicating an amazon instance to different zones or regions


We may have to replicate an Amazon EC2 instance running in one zone or region to other zones for load balancing and high availability, or we may want to migrate an instance to another region for lower latency. We can do this with the ec2-migrate-bundle command. First we create an image (see this post for creating an image of an Amazon Linux instance). Then we create an S3 bucket in the destination region and migrate the image to that bucket. Finally we register an AMI based on it and create instances.

Tuesday, April 17, 2012

Creating amazon windows ami



We discussed how to create an Amazon Linux AMI in a previous post. Now we will discuss how to create a Windows AMI for an EBS-rooted instance (the Linux instance discussed previously was instance-store backed, not EBS backed). In this post we will cover how to create the image, how to launch an instance based on it, and so on.

Requirements:
Private Key File: pk-PRIVATEKEY.pem
X.509 Certificate File: cert-X509CERT.pem
Administrator password of original windows instance

Preparing the instance:
Clear all log files.
for example, clear Tomcat logs, Apache logs, MySQL logs etc.
Remove all the unnecessary data
Clear temporary files (%temp%)
Clear other temporary backups
Emptying recycle bin
Perform disk cleanup
Defragment the disks
Wipe the free space

Creating the AMI:
Syntax:
ec2-create-image -n image_name instance_id --no-reboot -K pk-PRIVATEKEY.pem -C cert-X509CERT.pem
(Can be run from any Linux terminal)
If we don't give the --no-reboot option, the original Windows instance will reboot while the image is being created; add --no-reboot to avoid that.
The keys pk-PRIVATEKEY.pem and cert-X509CERT.pem should be present in the current directory while running the command.

Example:
[root@hostname ~]# ec2-create-image -n windowstest instance-id --no-reboot -K pk-PRIVATEKEY.pem -C cert-X509CERT.pem
IMAGE ami-1234s5
[root@hostname ~]#
IMAGE ami-1234s5 is the AMI-ID of the created AMI.

Checking the availability:
Creating the image may take some time. We can check the availability of the image using the following command.

Syntax:
ec2-describe-images ami-id -o self -K pk-PRIVATEKEY.pem -C cert-X509CERT.pem

Example:
[root@hostname ~]# ec2-describe-images ami-9122139 -o self -K pk-PRIVATEKEY.pem -C cert-X509CERT.pem
IMAGE ami-1234s5 aws-acc-id/windowstest
aws-acc-id pending private i386 machine windows ebs

Creating new instance based on the AMI we just created:
Syntax:
ec2-run-instances -K pk-PRIVATEKEY.pem -C cert-X509CERT.pem -g Basics -k cdnkey ami-ID
-g is for the security group. We have to specify which security group we are using.
-k is for the key pair name. We have to specify which key pair we are using.
The last field is the AMI ID from which the instance will be created.

Example:
[root@hostname ~]# ec2-run-instances -K pk-PRIVATEKEY.pem -C cert-X509CERT.pem -g Basics -k cdnkey ami-9122139
RESERVATION r-54656 aws-acc-id Basics
INSTANCE i-instance-id ami-9122139 pending cdnkey 0 m1.small 2012-04-16T11:54:53+0000 us-east-1d windows monitoring-disabled ebs
[root@hostname ~]#

i-instance-id is the ID of the new instance. The password of the new instance will be the same as that of the original instance.

Testing the AMI:
After launching the new instance, check that the following are the same on the original and the new instance:
Disk usage
Services running
Accessibility of services such as RDP, HTTP, Tomcat and MySQL
Whether the MySQL database is up to date

Recommended Reading

1. Host Your Web Site In The Cloud: Amazon Web Services Made Easy: Amazon EC2 Made Easy
2. Programming Amazon Web Services: S3, EC2, SQS, FPS, and SimpleDB
3. Middleware and Cloud Computing: Oracle on Amazon Web Services (AWS), Rackspace Cloud and RightScale (Volume 1)

Monday, April 16, 2012

Creating .pem key from .ppk key



You can create a .pem key file from a .ppk (PuTTY ssh key) file. For that you need to download PuTTYgen. Click here to read how to create a ppk key from a pem key.

Download PuttyGen

Run PuTTYgen and click "Load" to load your private key.

Browse for the .ppk file and enter the passphrase if one is set, or leave it blank. Now click Conversions at the top of the window and select "Export OpenSSH key", or click "Save public key" if you only need the public key.
Save the file as key.pem.
That's it.

Best Reads:
1. Linux Bible 
2. The Linux Command Line: A Complete Introduction
3. Amazon Web Services For Dummies 

Getting the password of an amazon windows instance


There are a lot of public Windows AMIs available in Amazon. You can just select a Windows AMI and launch it. You may be wondering how to get the administrator password of an Amazon EC2 Windows instance. We can decrypt the password from the command line of any Linux / Unix system as follows.

Syntax
ec2-get-password instanceId -k key_file -K pk-ABCDEFGHIJKLMN.pem -C cert-DEFGHIJKLMN.pem

instanceId - is the instance id of windows ec2 instance.
pk-PRIVATEKEY.pem is  Private Key File.
cert-X509CERT.pem is X.509 Certificate File

The key file can be cdnkey.pem, k.borah, or a similar key file. Once you run this command it will print the password at the prompt. You can check this link to convert a .ppk key to a .pem key.

Recommended Reading

1. Host Your Web Site In The Cloud: Amazon Web Services Made Easy: Amazon EC2 Made Easy
2. Programming Amazon Web Services: S3, EC2, SQS, FPS, and SimpleDB
3. Middleware and Cloud Computing: Oracle on Amazon Web Services (AWS), Rackspace Cloud and RightScale (Volume 1)

Wednesday, April 11, 2012

Creating Amazon Linux AMI


We cannot assume that an Amazon EC2 instance will never go down or that data will never be lost; it is always better to have backups. But a plain data backup is not an easy-to-restore option, so it is better to make an image of your Amazon EC2 instance and keep it somewhere, for example in Amazon Simple Storage Service (S3). In this post we will discuss how to create an image (a full backup) of an Amazon EC2 instance, how to upload the image (AMI) to an S3 bucket, how to register the AMI with your EC2 account and how to create an instance from the AMI. Most of this is probably possible from a GUI, and some of it with the Mozilla add-on ElasticFox, but we will do everything from the command line.
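
As a very rough outline of the command-line flow (flags abbreviated and the bucket and image names invented here; consult the AMI tools documentation for the exact options on your version):

ec2-bundle-vol -d /mnt -k pk-PRIVATEKEY.pem -c cert-X509CERT.pem -u <aws-account-id>
ec2-upload-bundle -b my-bucket -m /mnt/image.manifest.xml -a <access-key> -s <secret-key>
ec2-register my-bucket/image.manifest.xml -n myimage -K pk-PRIVATEKEY.pem -C cert-X509CERT.pem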

installing s3cmd in amazon ec2-instance


s3cmd is a command line tool for uploading, downloading and managing files and directories with Amazon Simple Storage Service (S3). It is very useful when running scripts and scheduling them with cron. First you have to install the s3cmd package, which is available from s3tools.org. Here we are installing s3cmd on a CentOS 5 instance using yum.

Tuesday, April 10, 2012

Multiple passwordless ssh logins



We discussed passwordless authentication (passwordless logins) in our previous post. But what if you have to allow more than one host to log in to a server without a password? Then you have to add the dsa/rsa keys of each initiating server to the destination server's authorized_keys file.
Suppose we have three systems A, B and C, and we want to log in to system C without a password from both A and B.

All we have to do is

1. Generate a dsa/rsa key on system A and copy it to the authorized_keys file of C.
2. Generate a dsa/rsa key on system B and APPEND that key to the authorized_keys file of C.

Generating the key in system A:
[root@nagios ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
14:13:25:f1:c6:ed:51:c6:08:a4:3f:af:eb:2c:80:97 root@nagios.lap.work

Copying the key to the authorized_keys of system C:
[root@nagios ~]# scp /root/.ssh/id_rsa.pub 192.168.137.85:/root/.ssh/authorized_keys
The authenticity of host '192.168.137.85 (192.168.137.85)' can't be established.
RSA key fingerprint is 63:6d:4a:08:b4:b4:19:3c:d0:58:f3:60:8a:ec:7a:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.137.85' (RSA) to the list of known hosts.
root@192.168.137.85's password:
id_rsa.pub                                                                              100%  402     0.4KB/s   00:00
[root@nagios ~]#

Checking the key from the system C:
[root@test ~]# cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAydSsh5wlG/lvWFeZcI+Rlxr2hTWJ4diU7b1/OsDWE72goA72eIx+tfzg6/aT4vPbWA8GC8arK6XxLOWJbv2Y5tFRGmXwn+Trw3RzWOHFT76NTv6NP+SCvBciwTr55Tt6jIgGrVu6f/pBvU8tIgctu/5efH611w/pToIJbezlooJ/1GGWaydEc3eTJernwzia5UMEsRGIztT6GN8zqkVtKIRhql3y2lQjgg3jA4ceAXwJ8h49xFuo8ZIEo4mWmEwW8Kn2VaTnJVh/YsO7tMRs8KsWXonbTm0vtD2OQv59Lswjs5fMmBv0EGZJvZ3uDypQw/IH33MWKbAotwQ1fewbiw== root@nagios.lap.work
[root@test ~]#

Now creating the key in system B:
[root@server ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5e:7f:e6:bc:3e:bc:9f:65:2f:b3:95:89:d6:0e:9d:5f root@server.lap.work
[root@server ~]#

Now APPENDING the key of system B to the authorized_keys of system C (do not copy it directly, that would overwrite the key of system A):
First we will copy the key to a file abc.txt on system C.
Then we will append the file abc.txt to the authorized_keys of system C.

[root@server ~]# scp /root/.ssh/id_rsa.pub 192.168.137.85:/root/.ssh/abc.txt
The authenticity of host '192.168.137.85 (192.168.137.85)' can't be established.
RSA key fingerprint is 63:6d:4a:08:b4:b4:19:3c:d0:58:f3:60:8a:ec:7a:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.137.85' (RSA) to the list of known hosts.
root@192.168.137.85's password:
id_rsa.pub                                                                              100%  402     0.4KB/s   00:01
[root@server ~]#

Now in system C:
[root@test ~]# cat .ssh/abc.txt
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAys2LlRFyQZay+9QWaCT6mS7gmM6qN0GzCGM7AXAMlEDWHUSmXSC9EPih4uOAGH6IWGqRk7EVerVEMq39vVchDAE5B3nMofQkc2fAlC9Ct/5+TirQaQxmHCN0If6O+RlO4F3hVhqX7d0ZNjJhvWLezRXsXkZY+g0215nd+qeZSz39N8NtkKBuuYW7LFdEU8dmiUaFrUjkBpZYuP5THaGqD/wZr8Pxf7t/MIpRbkuleP7b6S8kEreR9AdDX5DWJOy3qqxZzJVfXgYH6wq/MDuY14X+p1zJjzqQRV8cD7rA2Q8WQy4R7oBAJvZk9Q5gkyt50rDfiMXLPYF1myrfo/kDpQ== root@server.lap.work
[root@test ~]#

Appending the key in the file abc.txt to authorized_keys
[root@test ~]# cat .ssh/abc.txt >> .ssh/authorized_keys

Now checking the authorized_keys:
[root@test ~]# cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAydSsh5wlG/lvWFeZcI+Rlxr2hTWJ4diU7b1/OsDWE72goA72eIx+tfzg6/aT4vPbWA8GC8arK6XxLOWJbv2Y5tFRGmXwn+Trw3RzWOHFT76NTv6NP+SCvBciwTr55Tt6jIgGrVu6f/pBvU8tIgctu/5efH611w/pToIJbezlooJ/1GGWaydEc3eTJernwzia5UMEsRGIztT6GN8zqkVtKIRhql3y2lQjgg3jA4ceAXwJ8h49xFuo8ZIEo4mWmEwW8Kn2VaTnJVh/YsO7tMRs8KsWXonbTm0vtD2OQv59Lswjs5fMmBv0EGZJvZ3uDypQw/IH33MWKbAotwQ1fewbiw== root@nagios.lap.work
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAys2LlRFyQZay+9QWaCT6mS7gmM6qN0GzCGM7AXAMlEDWHUSmXSC9EPih4uOAGH6IWGqRk7EVerVEMq39vVchDAE5B3nMofQkc2fAlC9Ct/5+TirQaQxmHCN0If6O+RlO4F3hVhqX7d0ZNjJhvWLezRXsXkZY+g0215nd+qeZSz39N8NtkKBuuYW7LFdEU8dmiUaFrUjkBpZYuP5THaGqD/wZr8Pxf7t/MIpRbkuleP7b6S8kEreR9AdDX5DWJOy3qqxZzJVfXgYH6wq/MDuY14X+p1zJjzqQRV8cD7rA2Q8WQy4R7oBAJvZk9Q5gkyt50rDfiMXLPYF1myrfo/kDpQ== root@server.lap.work
[root@test ~]#

Now checking the passwordless login from A to C
[root@nagios ~]# ssh 192.168.137.85 ls
anaconda-ks.cfg
Desktop
install.log
install.log.syslog
[root@nagios ~]#

Now checking the passwordless login from B to C
[root@server ~]# ssh 192.168.137.85 ls
anaconda-ks.cfg
Desktop
install.log
install.log.syslog
[root@server ~]#

You should not expose your keys to others. My systems are for testing and the domain is private; that is why I don't mind sharing them.

ssh passwordless login



Configuring passwordless authentication (login) via ssh. This post explains how to enable passwordless authentication between two nodes. The configuration is very simple: generate the dsa public and private keys on the server you want to log in from, and copy the public key to the authorized_keys file of the host you want to log in to without a password. We will generate the keys using the command ssh-keygen.

We have two nodes:
Node1 - hb_test1.lap.work
Node2 - hb_test2.lap.work

On node1:
Generate the key:

[root@hb_test1 ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
9f:5d:47:6b:2a:2e:c8:3e:ee:8a:c2:28:5c:ad:57:79 root@hb_test1.lap.work

Pass the key to node2:
[root@hb_test1 ~]# scp .ssh/id_dsa.pub hb_test2.lap.work:/root/.ssh/authorized_keys

On node2:
Generate the key:

[root@hb_test2 ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
40:66:t8:bd:ac:bf:68:38:22:60:d8:9f:18:7d:94:21 root@hb_test2.lap.work

Pass the key to node1:
[root@hb_test2 ~]# scp .ssh/id_dsa.pub hb_test1.lap.work:/root/.ssh/authorized_keys

Now you will be able to log in from node1 to node2 and vice versa without passwords.

Monday, April 9, 2012

s3cmd example commands



s3cmd is a tool for uploading, downloading and managing files and directories with Amazon Simple Storage Service (S3), the storage service in AWS. Here we will see how to create and remove S3 buckets, how to upload, download and delete files from your Linux system using s3cmd, how to sync directories, and so on.
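
A few example commands (bucket and file names are placeholders):

s3cmd mb s3://mybucket                      # create a bucket
s3cmd ls                                    # list buckets
s3cmd put file.txt s3://mybucket/           # upload a file
s3cmd get s3://mybucket/file.txt            # download a file
s3cmd del s3://mybucket/file.txt            # delete a file
s3cmd sync /local/dir s3://mybucket/dir/    # sync a local directory to the bucket
s3cmd rb s3://mybucket                      # remove a bucket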

/usr/bin/s3cmd: unrecognized option `--configure'



s3cmd is a tool for uploading, downloading and managing files and directories with Amazon Simple Storage Service (S3). But while configuring s3cmd on your Amazon EC2 cloud instance you may get the following error. I got it on my CentOS 5.4 instance on Amazon EC2.
/usr/bin/s3cmd: unrecognized option `--configure'

[root@xxxxxxxx ~]# s3cmd --configure
/usr/bin/s3cmd: unrecognized option `--configure'
s3cmd [options] <command> [arg(s)]              version 1.2.6
  --help    -h        --verbose     -v     --dryrun    -n
  --ssl     -s        --debug       -d     --progress
  --expires-in=( <# of seconds> | [#d|#h|#m|#s] )

Commands:
s3cmd  listbuckets  [headers]
s3cmd  createbucket  <bucket>  [constraint (i.e. EU)]
s3cmd  deletebucket  <bucket>  [headers]
s3cmd  list  <bucket>[:prefix]  [max/page]  [delimiter]  [headers]
s3cmd  location  <bucket> [headers]
s3cmd  delete  <bucket>:key  [headers]
s3cmd  deleteall  <bucket>[:prefix]  [headers]
s3cmd  get|put  <bucket>:key  <file>  [headers]
[root@xxxxxxx ~]#

Solution:
You have to reinstall the s3cmd package as follows.
You can get the repo from here:
http://s3tools.org/repo/RHEL_5/

Save the repo file in /etc/yum.repos.d/ as follows:


[root@xxxxxxx ~]# cat /etc/yum.repos.d/s3cmd.repo
#
# Save this file to /etc/yum.repos.d on your system
# and run "yum install s3cmd"
#
[s3tools]
name=Tools for managing Amazon S3 - Simple Storage Service (RHEL_5)
type=rpm-md
baseurl=http://s3tools.org/repo/RHEL_5/
gpgcheck=1
gpgkey=http://s3tools.org/repo/RHEL_5/repodata/repomd.xml.key
enabled=1
[root@xxxxxxx ~]#


After that

Install it using yum:

yum install s3cmd

Now configure it. It will ask for your access key, secret key and an encryption key (just hit Enter if you don't want one).
s3cmd --configure

Now you will be able to list your buckets in your amazon s3 storage using the following command.
s3cmd ls

Saturday, April 7, 2012

checking for ssl headers... configure error cannot find ssl headers centos



You may get this error while building some packages from source on Linux.
checking for ssl headers... configure: error: cannot find ssl headers

Reason:
configure could not find the OpenSSL header files (the development package is missing).
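
Solution:
On CentOS / Red Hat the usual fix is to install the OpenSSL development package and re-run configure:

yum install openssl-devel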

Thursday, April 5, 2012

HTTP WARNING: HTTP/1.1 403 Forbidden Nagios


You may get this error after installing nagios
HTTP WARNING: HTTP/1.1 403 Forbidden
This is because there is no index.html file in the document root of Apache.

#cd /var/www/html          (if you installed using yum)
#touch index.html
if you want you can write something in it
#echo "Nagios Server" >> /var/www/html/index.html

Now restart the services
#service httpd restart
#service nagios restart

That should solve it within minutes.

Sunday, March 25, 2012

connecting MySQL database using php script


We have seen a lot of PHP scripts accessing MySQL databases, but have you ever wondered how they work? Here we will discuss a small PHP script that connects to a MySQL database (test) and lists some columns of a table (people). After reading this you will know how to connect to MySQL from a PHP script run on the CLI (command line interface). You will need the mysqli PHP module loaded for the script to work; we will discuss this in detail. In this example we have CentOS 5.2 installed on a VMware workstation.

Prerequisites:
You must have the MySQL server installed and running on your system, and the PHP rpms installed.

Checking the mysql status:
[root@server ~]# /etc/init.d/mysqld status
mysqld (pid 5601) is running...
[root@server ~]#

Checking the php rpms:
[root@server ~]# rpm -qa | grep -i php
php-cli-5.1.6-20.el5
php-common-5.1.6-20.el5
php-5.1.6-20.el5
php-mysql-5.1.6-20.el5
php-pdo-5.1.6-20.el5
[root@server ~]#

You must have the mysqli module installed and loaded; only then can the PHP script connect to MySQL.
[root@server ~]# php -m | grep mysql
mysql
mysqli
pdo_mysql
[root@server ~]#

If not loaded, install it using the following command
[root@server ~]# yum install php-mysql

Now in this example we will connect to MySQL and list the first_name and last_name of the entries in the people table of the database test.
This is what we have in MySQL.
[root@server ~]# mysql -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.0.45-log Source distribution
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| test               |
+--------------------+
3 rows in set (0.08 sec)

mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql>

mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| people         |
+----------------+
1 row in set (0.00 sec)

mysql> select * from people;
+---------------------+-----------+------------+
| first_name          | last_name | mob_number |
+---------------------+-----------+------------+
| Randeep Raman 1234  | NULL      | NULL       |
| Nibul Roshan  5678  | NULL      | NULL       |
| Afilaj Hussain 1357 | NULL      | NULL       |
| Renjith             | menon     | 1234       |
+---------------------+-----------+------------+
4 rows in set (0.00 sec)
mysql>

This will be our result for the script we are going to make.
mysql> select first_name, last_name from people;
+---------------------+-----------+
| first_name          | last_name |
+---------------------+-----------+
| Randeep Raman 1234  | NULL      |
| Nibul Roshan  5678  | NULL      |
| Afilaj Hussain 1357 | NULL      |
| Renjith             | menon     |
+---------------------+-----------+
4 rows in set (0.00 sec)
mysql>

The script is as follows.
[root@server ~]# cat test.php
<?php
/* Connection object */
/* now we will define the connection object */
/* syntax is as follows */
/* $conn_object_name = new mysqli("hostname", "user_name", "Password", "Database_name");*/
$conn1 = new mysqli("localhost","root","redhat","test");

/* Defining the Query to be executed */
/* We want to list the first_name and the last_name entries from the table people */
$query1 = "select first_name,last_name from people";

/* Now executing the query and storing the result */
$result1 = $conn1->query($query1);

/* Printing the output */
while($obj1 = $result1->fetch_object())
        {
        printf("%s %s\n",$obj1->first_name, $obj1->last_name);
        }
?>
[root@server ~]#

Now testing the script as follows.
[root@server ~]# php -q test.php
Randeep Raman 1234
Nibul Roshan  5678
Afilaj Hussain 1357
Renjith menon
[root@server ~]#

It works :)

Thursday, March 22, 2012

Integrating apache tomcat with mod_jk


This tutorial explains how to install and configure the Apache (httpd 2) and Tomcat 7 web servers and integrate them with mod_jk (jk_module) on the CentOS operating system. All traffic hitting Apache will be forwarded to Tomcat.

We have one Centos 5.2 32 bit vmware instance
IP : 192.168.137.65
Hostname : modjk.lap.work
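
A condensed sketch of the wiring (file locations depend on how httpd was installed; the worker name and port below are the common defaults):

# /etc/httpd/conf/workers.properties
worker.list=worker1
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009

# in httpd.conf
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkMount /* worker1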

Wednesday, March 21, 2012

MySQL replication in Linux


Database replication is the frequent copying of data from a database on one server to a database on another server so that the data on all servers stays consistent. Usually one database server (the master) holds the master copy of the database and the other servers (slaves) maintain slave copies. Writes go to the master database server and are then replicated by the slave database servers. MySQL replication is asynchronous: slaves do not need to be connected permanently to receive updates from the master. This means that updates can occur over long-distance connections and even over temporary or intermittent connections such as a dial-up service. Depending on the configuration, we can replicate all databases, selected databases, or even selected tables within a database.

There are mainly three types of replication: 
Snapshot replication: Data on one server is simply copied to another server, or to another database on the same server.
Merging replication: Data from two or more databases is combined into a single database.
Transactional replication: Users receive full initial copies of the database and then receive periodic updates as data changes.

Benefits of replication:
Scale-out solutions - spreading the load among multiple slaves to improve performance. In this environment, all writes and updates must take place on the master server. Reads, however, may take place on one or more slaves. This model can improve the performance of writes (since the master is dedicated to updates), while dramatically increasing read speed across an increasing number of slaves.
Data security - because data is replicated to the slave, and the slave can pause the replication process, it is possible to run backup services on the slave without corrupting the corresponding master data.
Analytics - live data can be created on the master, while the analysis of the information can take place on the slave without affecting the performance of the master.
Long-distance data distribution - if a branch office would like to work with a copy of your main data, you can use replication to create a local copy of the data for their use without requiring permanent access to the master.

In this tutorial we are using the following version of mysql
mysql-server-5.0.45-7.el5

We have two systems with Centos 5.2 os
192.168.137.100 server.lap.work server (Master)
192.168.137.55 apache.lap.work apache (Slave)

On both systems install mysql server.
yum install mysql*

On the master system, add the log-bin and server-id entries to the MySQL configuration file.
Master side:
[root@server mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
log-error=/var/log/mysqld.log
log-bin
server-id=1
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@server mysql]#

And restart the mysql service
/etc/init.d/mysqld restart

You can see the status of the master process as
mysql> show master status;
+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| mysqld-bin.000003 |      342 |              |                  |
+-------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
mysql>

And can check the server id and log-bin entries as
mysql> show variables like 'server%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
1 row in set (0.00 sec)
mysql> show variables like 'log%';
+---------------------------------+---------------------+
| Variable_name                   | Value               |
+---------------------------------+---------------------+
| log                             | OFF                 |
| log_bin                         | ON                  |
| log_bin_trust_function_creators | OFF                 |
| log_error                       | /var/log/mysqld.log |
| log_queries_not_using_indexes   | OFF                 |
| log_slave_updates               | OFF                 |
| log_slow_queries                | OFF                 |
| log_warnings                    | 1                   |
+---------------------------------+---------------------+
8 rows in set (0.00 sec)
mysql>

We have to create a user and grant replication permissions. Here we are using the root user.
mysql> grant replication slave on *.* to 'root'@'%' identified by 'redhat';
mysql> grant select,super,reload on *.* to 'root'@'%' identified by 'redhat';

Now checking the grants for root user:
mysql> show grants for root;
+------------------------------------------------------------------------------------------------------+
| Grants for root@%                                                                                             |
+------------------------------------------------------------------------------------------------------+
| GRANT SELECT, RELOAD, SUPER, REPLICATION SLAVE ON *.* TO 'root'@'%' IDENTIFIED BY PASSWORD '27c30f0241a5b69f' |
+------------------------------------------------------------------------------------------------------+
1 row in set (0.04 sec)
mysql>

Now we can take a backup of the databases on the master server and copy it to the slaves. Before copying, lock the tables with a read lock so that no writes happen while we take the backup and transfer it.
mysql> flush tables with read lock;

You can unlock them after the transfer with
mysql> unlock tables;

Checking the log status
mysql> show binary logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| mysqld-bin.000001 |       117 |
| mysqld-bin.000002 |       117 |
| mysqld-bin.000003 |       342 |
+-------------------+-----------+
3 rows in set (0.04 sec)
mysql>

Checking the log events
mysql> show binlog events;
+-------------------+-----+-------------+-----------+-------------+---------------------------------------+
| Log_name          | Pos | Event_type  | Server_id | End_log_pos | Info                                  |
+-------------------+-----+-------------+-----------+-------------+---------------------------------------+
| mysqld-bin.000001 |   4 | Format_desc |         1 |          98 | Server ver: 5.0.45-log, Binlog ver: 4 |
| mysqld-bin.000001 |  98 | Stop        |         1 |         117 |                                       |
+-------------------+-----+-------------+-----------+-------------+---------------------------------------+
2 rows in set (0.00 sec)
mysql>

Client (slave) side:
On the slave server we also have to set a server-id, but one different from the master server's id.
[root@apache mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
server-id=100
log-error=/var/log/mysqld.log
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@apache mysql]#

Restart the mysql server and check the id
mysql> show variables like 'server%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 100   |
+---------------+-------+
1 row in set (0.00 sec)
mysql>

First you have to stop the slave service
mysql> stop slave;
Query OK, 0 rows affected (0.01 sec)

and then set the master details. The values to use here come from running "show master status" on the master; the file name and position are in that output.
mysql> change master to master_host='server', master_user='root',  master_password='redhat', master_log_file='mysqld-bin.000003', master_log_pos=342;
Query OK, 0 rows affected (0.01 sec)

now starting the slave service
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

Checking the slave status
mysql> show slave status;

mysql> show processlist;
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------+------------------+
| Id | User        | Host      | db   | Command | Time | State                                                                  | Info             |
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------+------------------+
| 27 | root        | localhost | NULL | Query   |    0 | NULL                                                                   | show processlist |
| 30 | system user |           | NULL | Connect |   60 | Waiting for master to send event                                       | NULL             |
| 31 | system user |           | NULL | Connect |   60 | Has read all relay log; waiting for the slave I/O thread to update it  | NULL             |
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------+------------------+
3 rows in set (0.00 sec)
mysql>

In this example we have a database named test with a table people in it. The table has three entries.
mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql>

mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| people         |
+----------------+
1 row in set (0.00 sec)
mysql> select * from people;
+---------------------+-----------+------------+
| first_name          | last_name | mob_number |
+---------------------+-----------+------------+
| Randeep Raman 1234  | NULL      | NULL       |
| Nibul Roshan  5678  | NULL      | NULL       |
| Afilaj Hussain 1357 | NULL      | NULL       |
+---------------------+-----------+------------+
3 rows in set (0.00 sec)
mysql>

Now on the master server we update the table by inserting a new row
mysql> INSERT INTO people (first_name,last_name,mob_number) VALUES ('Renjith','menon','1234');
Query OK, 1 row affected (0.00 sec)

mysql> select * from people;
+---------------------+-----------+------------+
| first_name          | last_name | mob_number |
+---------------------+-----------+------------+
| Randeep Raman 1234  | NULL      | NULL       |
| Nibul Roshan  5678  | NULL      | NULL       |
| Afilaj Hussain 1357 | NULL      | NULL       |
| Renjith             | menon     | 1234       |
+---------------------+-----------+------------+
4 rows in set (0.00 sec)
mysql>

It should be reflected in the slave machine
Before
mysql> select * from people;
+---------------------+-----------+------------+
| first_name          | last_name | mob_number |
+---------------------+-----------+------------+
| Randeep Raman 1234  | NULL      | NULL       |
| Nibul Roshan  5678  | NULL      | NULL       |
| Afilaj Hussain 1357 | NULL      | NULL       |
+---------------------+-----------+------------+
3 rows in set (0.00 sec)
mysql>

After
mysql> select * from people;
+---------------------+-----------+------------+
| first_name          | last_name | mob_number |
+---------------------+-----------+------------+
| Randeep Raman 1234  | NULL      | NULL       |
| Nibul Roshan  5678  | NULL      | NULL       |
| Afilaj Hussain 1357 | NULL      | NULL       |
| Renjith             | menon     | 1234       |
+---------------------+-----------+------------+
4 rows in set (0.00 sec)

You can check the logs on the slave machine if there are any errors
mysql>
[root@apache ~]# tail /var/log/mysqld.log
120321 21:05:42 [Note] Slave I/O thread killed while reading event
120321 21:05:42 [Note] Slave I/O thread exiting, read up to log 'mysqld-bin.000003', position 342
120321 21:05:42 [Note] Error reading relay log event: slave SQL thread was killed
120321 21:05:54 [Note] Slave SQL thread initialized, starting replication in log 'mysqld-bin.000003' at position 342, relay log '/var/run/mysqld/mysqld-relay-bin.000002' position: 480
120321 21:05:54 [Note] Slave I/O thread: connected to master 'root@server:3306',  replication started in log 'mysqld-bin.000003' at position 342
120321 21:05:57 [Note] Slave I/O thread killed while reading event
120321 21:05:57 [Note] Slave I/O thread exiting, read up to log 'mysqld-bin.000003', position 342
120321 21:05:57 [Note] Error reading relay log event: slave SQL thread was killed
120321 21:06:05 [Note] Slave SQL thread initialized, starting replication in log 'mysqld-bin.000003' at position 342, relay log '/var/run/mysqld/mysqld-relay-bin.000001' position: 4
120321 21:06:05 [Note] Slave I/O thread: connected to master 'root@server:3306',  replication started in log 'mysqld-bin.000003' at position 342
[root@apache ~]#

On the slave system there are some files that hold information about the replication state
[root@apache ~]# ll /var/lib/mysql/
total 20536
-rw-rw---- 1 mysql mysql 10485760 Mar 21 20:23 ibdata1
-rw-rw---- 1 mysql mysql  5242880 Mar 21 20:23 ib_logfile0
-rw-rw---- 1 mysql mysql  5242880 Mar 20 18:16 ib_logfile1
-rw-rw---- 1 mysql mysql       67 Mar 21 21:21 master.info
drwx------ 2 mysql mysql     4096 Mar 20 18:16 mysql
srwxrwxrwx 1 mysql mysql        0 Mar 21 20:23 mysql.sock
-rw-rw---- 1 mysql mysql       66 Mar 21 21:21 relay-log.info
drwx------ 2 mysql mysql     4096 Mar 21 12:33 test
[root@apache ~]#
[root@apache ~]# cat /var/lib/mysql/master.info
14
mysqld-bin.000003
491
server
root
redhat
3306
60
0
0
[root@apache ~]#
[root@apache ~]# cat /var/lib/mysql/relay-log.info
/var/run/mysqld/mysqld-relay-bin.000002
385
mysqld-bin.000003
491
[root@apache ~]#

Tuesday, March 20, 2012

Configuring samba swat in linux


Samba is Linux software that helps transfer files between a Linux box and a Windows box. Using NFS you can share files between two Linux systems, but not between a Linux system and a Windows system. Using WinSCP you can transfer files between Linux and Windows, but it is slow and time consuming; Samba is fast. Samba SWAT (the Samba Web Administration Tool) is a web interface for Samba. Using SWAT you can configure Samba, define shares, configure printers, edit smb.conf parameters, view the status of the Samba services, stop and restart services, view the current Samba configuration, and even change passwords and add Samba users. This tutorial explains how to install and configure Samba SWAT on CentOS Linux.
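
A short sketch of the install on CentOS (package names as in the base repositories):

yum install samba samba-swat xinetd
# enable swat in /etc/xinetd.d/swat by setting "disable = no", then
service xinetd restart
# SWAT listens on port 901: http://server-ip:901/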

Friday, March 16, 2012

ubuntu default root password


You may not be able to log in as the root user on a newly installed Ubuntu or Debian desktop or server because you don't know the default root password. The thing is, there is no default root password; you can set one as follows.

Log in as the normal user you created during the installation, with the password you specified. Then run

$sudo passwd root
It will prompt for your logged-in user's password. After you enter it, the system will prompt for the new root password; give the password you want to set.

Wednesday, March 14, 2012

Securing tmp in centos linux



Securing /tmp is very important. /tmp is a world-writable directory, so if an intruder gets access to it, it is a potential threat. The main thing we have to do is disable execution of scripts in this directory. Now we will see how to harden or secure /tmp, /var/tmp and /dev/shm in CentOS Linux. This tutorial also includes examples.

First of all, before making any changes, create a backup of the file. Make this a habit.
cp /etc/fstab /etc/fstab.bak

Securing /tmp:
Create a 5Gb file for /tmp partition (you can adjust the size according to your needs)
dd if=/dev/zero of=/var/tempFS bs=1024 count=5000000

Make ext3 filesystem in the file we just created. Because we are going to use this file to store data.
mkfs.ext3 /var/tempFS

Create a backup of the current /tmp directory
cp -Rpf /tmp /tmp.bkp

Now mount the newly created file as /tmp
mount -o loop,noexec,nosuid,rw /var/tempFS /tmp

Because the /tmp directory must stay world-writable while preventing users from deleting files created by others, we set permission 777 plus the sticky bit = 1777
chmod 1777 /tmp

Copy the old data to new /tmp
cp -Rpf /tmp.bkp/* /tmp/
If the old /tmp was empty, it might throw some errors. Don't worry.

Now you can edit fstab and adjust the /tmp entry
vi /etc/fstab
/var/tempFS  /tmp ext3 loop,nosuid,noexec,rw 0 0

Remount /tmp for the change to take effect.
mount -o remount /tmp

Securing /var/tmp:
move the /var/tmp directory to some other name
mv /var/tmp /var/tmp.bkp

Now create a link /var/tmp and point it to /tmp. The command is as follows
ln -s /tmp /var/tmp

cp /var/tmp.bkp/* /tmp/
If the old /var/tmp was empty, it might throw some errors. Don't worry

Securing /dev/shm:
vi /etc/fstab
add nosuid and noexec to mount options
tmpfs     /dev/shm    tmpfs   defaults,nosuid,noexec     0 0
save the file

Remount for the change to take effect
mount -o remount /dev/shm

Monday, March 12, 2012

configuring iptables in linux



iptables is a user space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores.

This article is a tutorial on how to configure a firewall using iptables. It explains and gives examples of the default and user-defined iptables tables, chains, rule syntax, writing, deleting and replacing rules, blocking or allowing hosts, IP addresses and ports, port or IP redirection, logging options, using a Linux box as a router, masquerading, network address translation (NAT), source NAT (SNAT), destination NAT (DNAT) and netmap.

iptables mainly operates at Layers 3 & 4. Layer 3 deals with Source & Destination IP addresses and layer 4 deals with protocols and ports

To Check whether IPTables is enabled or not in the kernel,
#cat /boot/config* | grep CONFIG_NETFILTER
CONFIG_NETFILTER=y

The Main structure of the iptables is as follows.
Tables->Chains->Rules
A table may contain a number of chains, and each chain may contain a number of rules.

Main Tables
There are mainly three tables.

Mangle  -   Allows altering of packet TOS, TTL etc.
NAT     -   Network Address Translation. Allows changing source and destination IP addresses and ports.
Filter  -   Allows IP packet filtering. [INPUT, FORWARD, OUTPUT]

Iptables rule syntax
1. command
2. tables
3. chain
4. protocol
5. source or destination
6. Jump target

eg:
iptables -t filter -I INPUT -p tcp -s 192.168.1.100 -j ACCEPT

Example :
Blocks any communication to OUR machine from source 192.168.1.77.
iptables -A INPUT -s 192.168.1.77 -j DROP

[root@vm1 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
DROP       all  --  192.168.1.77         anywhere

Saving and restoring iptables rules :
Rules will go if we restart without saving it . So we have to save those rules.
To save the IPTables rules
iptables-save > iptables_rules.txt

To restore the IPTables rules
iptables-restore < iptables_rules.txt

Flushing iptables rules
iptables -F

or you can save the rules by just run
service iptables save
or
/etc/init.d/iptables save
It will save the rules to /etc/sysconfig/iptables permanently. If you restart iptables it will read the rules from this file.

Filter table has three chains
1. INPUT
2. OUTPUT
3. FORWARD

Nat table has  three chains
1. PREROUTING
2. POSTROUTING
3. OUTPUT

Mangle table has five chains
1. PREROUTING
2. INPUT
3. FORWARD
4. OUTPUT
5. POSTROUTING
-----------------------------------------------------
[root@vm1 ~]# iptables -L -t filter
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
-----------------------------------------------------
[root@vm1 ~]# iptables -L -t nat
Chain PREROUTING (policy ACCEPT) --before routing occurs -nat
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT) --after routing is determined
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
-----------------------------------------------------
[root@vm1 ~]# iptables -L -t mangle
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
-----------------------------------------------------

-t option is for listing a particular table chains and rules.
filter table is the default one.

[root@vm1 ~]# iptables -L -v
list packet details to and from through a chain

[root@vm1 ~]# iptables -L -v --line-numbers
list the rules with line numbers

[root@vm1 ~]# iptables -L -n
lists the numeric values (IP), Disables the resolutions.[Host and Service]

iptables rule for accepting ssh connections
[root@vm1 ~]# iptables -A INPUT -p tcp --dport 22 -j ACCEPT    

iptables rule for blocking telnet connections
[root@vm1 ~]# iptables -A INPUT -p tcp --dport telnet -j DROP

iptables rule for blocking telnet connections and insert it as rule 1
[root@vm1 ~]# iptables -I  INPUT 1 -p tcp --dport telnet -j DROP

Appending adds the rule at the end, but with inserting you can place a rule anywhere in the list, i.e. at any position (number) in the list.

Deleting an iptables Rule
-D INPUT NUM

[root@vm1 ~]# iptables -D INPUT 3
Deletes rule number 3 from the INPUT chain of the default table.

Or we can delete like this.
iptables -D INPUT -p tcp --dport 22 -j ACCEPT

Replacing an iptables Rule
-R Chain_name NUM

To replace the 1st rule
[root@vm1 ~]# iptables -R INPUT 1 -p tcp --dport telnet -j ACCEPT
iptables rules are dynamic: an ssh/telnet connection will be frozen if a blocking rule is applied in between.

Flushing the rules
iptables -F
Flushing will erase all the existing rules in iptables. If you don't save the rules before flushing all rules will be lost.

[root@vm1 ~]# iptables -L INPUT -v
listing rules only in the INPUT chain with packet counts

iptables -Z INPUT
will  zero all the packet counters

Creating new chains and renaming existing ones
To create User defined chains
-N Chain_name

[root@vm1 ~]# iptables -N ITS
Created a new chain ITS

Rename chains
-E Old_name New_name

[root@vm1 ~]# iptables -E ITS SPARTANZ

Drop Policy of iptables.
Dropping a policy will drop all the traffic through that chain

iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP

Writing rules for only one ethernet device:
To filter all the input through eth0
iptables -A INPUT -i eth0 -j DROP

Negation: (!)
iptables -A INPUT -s ! 192.168.1.55 -j DROP
It drops all input except from 192.168.1.55.

example of TCP:
iptables -A INPUT -i eth+ -p tcp --dport telnet -j DROP
Blocks telnet through all ethernet devices (eth+ matches every eth interface)

example of UDP:
TFTP, SysLog, NTP, DHCP
-p udp, --protocol udp
--sport 123 --dport 123 for NTP

ICMP (Internet Control Messaging Protocol):
Echo request -PING
Echo reply - Pong

-p icmp, --protocol icmp
--icmp-type name/number

iptables -p icmp --help
for getting help about icmp-types

Disabling ping using iptables.
To deny echo-replies from all hosts
iptables -A INPUT -p icmp --icmp-type echo-reply -j DROP

To drop echo-replies from our host
iptables -A OUTPUT -p icmp --icmp-type echo-reply -j DROP

MULTIPORT: (-m multiport)
-p tcp --dport 8080 or --dport web-cache

iptables -A INPUT -p tcp -m multiport --dport 8080,23 -j DROP

MAC ADDRESS FILTERING: ( -m mac --mac-source or --mac-destination )
Better than using IP addresses because ip addresses can be changed but not mac

Denying a host by mac address using iptables
iptables -I INPUT -m mac --mac-source 00:00:00:00:00:00 -j REJECT

Iptables and states :

in INPUT
iptables -I ITS  -m state --state ESTABLISHED -j ACCEPT
Allows communication in already established services

in INPUT
iptables -I ITS  -m state --state NEW,ESTABLISHED -j ACCEPT
Allows new connections and established connections from the system

Jump Targets in iptables :
ACCEPT -> The packet is accepted and passed on to the local process or forwarded
DROP -> Packet will be dropped
REJECT -> Sends a courtesy message back
REDIRECT -> Redirect from one destination to another. must be used with pre-routing in NAT. Local ports only.
LOG -> Allows us to log using SysLog

Logging  :
Creating and enabling iptables log using syslog

iptables LOG messages are written by the kernel, so we have to enable kernel logging in syslog.conf as follows
vi /etc/syslog.conf
kern.* /var/log/firewall

Create the log file.
touch /var/log/firewall

Restart the syslog service.
service syslog restart

and logging can be enabled as
iptables -I ITS 1 -p tcp --dport ssh -j LOG
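The LOG target does not stop rule traversal, so the packet continues to the next rule. A prefix can be added to make the log entries easier to search for (the prefix string below is just an example):
iptables -I INPUT 1 -p tcp --dport ssh -j LOG --log-prefix "SSH: "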

ROUTING
You can use a Linux box as a router with the help of iptables. First enable packet forwarding on the server acting as the router by setting the following sysctl variable:

vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
Save the file

Reload the sysctl.conf
sysctl -p
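To enable forwarding immediately without editing the file, either of the following should also work:
sysctl -w net.ipv4.ip_forward=1
echo 1 > /proc/sys/net/ipv4/ip_forward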

NETWORK ADDRESS TRANSLATION [NAT]

Three types:
Basic NAT - IP address translation only, no port mapping.
PAT (Port Address Translation) - translates both IP addresses and port numbers.
NAPT (Network Address Port Translation) - another name for the same combined address-and-port translation.

SNAT and Masquerading can be done in POSTROUTING chain in nat table.
But DNAT is done in PREROUTING chain in nat table.

SNAT - Source NAT: translation of the source IP address. Used when you have only one static public IP address and many systems in the local network.

DNAT - Destination NAT: translation of the destination IP address. Used when traffic comes from the internet to local systems.

Three default chains exist in the nat table and cannot be deleted.
PREROUTING    - for packets destined to a system behind the local router [DNAT]: internet to local area network
POSTROUTING   - for changing local IPs to something routable [SNAT/MASQUERADE]
OUTPUT        - for locally generated packets

Masquerading:
This is similar to SNAT but is used when the outside address is assigned by DHCP rather than being a static IP address.

iptables -t nat -A POSTROUTING -j MASQUERADE -s 10.0.0.0/8 -d 192.168.1.0/24
Now if 10.0.0.10 pings 192.168.1.100, the traffic appears to come from 192.168.1.37 (the router's IP address on the 192.168.1.0 network).

Note:
MASQUERADE tracks the outgoing interface: if DHCP changes the interface's IP, the translation follows automatically.
MASQUERADE uses the interface's primary address, not secondary (alias) addresses.

iptables -t nat -R POSTROUTING 1 -p tcp -j MASQUERADE --to-ports 1024-10240
Restricts the source ports used for the translated connections to that range.

Some examples of nat
iptables -t nat -R POSTROUTING 1 -p tcp -j SNAT --to-source 192.168.1.37:1024-10240 -s 10.0.0.0/8
Does the same as the previous MASQUERADE rule, but only works with a static IP; it breaks when the IP changes.

iptables -t nat -A POSTROUTING -p tcp -j SNAT --to-source 192.168.1.37 -d 10.0.0.10 -s 192.168.1.100
iptables -t nat -A POSTROUTING -p tcp -j SNAT --to-source 10.0.0.1 -d 192.168.1.100 -s 10.0.0.10

Destination Network Address Translation: INBOUND

DNAT permits connections to unexposed hosts; it is the exact reverse of SNAT.
Rules are written in the PREROUTING chain.
iptables -t nat -A PREROUTING -j DNAT -p tcp --dport 3389 --to-destination 192.168.1.101 -d 192.168.1.37 -s 10.0.0.10
This redirects connections from 10.0.0.10 to port 3389 on 192.168.1.37 to the same port on 192.168.1.101.
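As a fuller, hypothetical example: to forward inbound web traffic arriving on a router's public address 203.0.113.10 to an internal server 192.168.1.20, a DNAT rule plus a matching FORWARD rule would be needed (all addresses here are placeholders):
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.20
iptables -A FORWARD -d 192.168.1.20 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT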

Tuesday, March 6, 2012

make: yacc: Command not found

Advertisements

You may get this error while running make
Error:
make: yacc: Command not found

Solution:
yum install bison
yum install byacc
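On Debian-based systems the equivalent packages should be installable with:
apt-get install bison byacc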

configure: error: C++ preprocessor "/lib/cpp" fails sanity check

Advertisements

You may get this error while running ./configure
Error:
configure: error: C++ preprocessor "/lib/cpp" fails sanity check

Solution:
Red Hat based distributions:
yum install gcc gcc-c++

Debian based distributions:
apt-get install gcc g++
(or simply: apt-get install build-essential)

Monday, March 5, 2012

Nessus Vulnerability Scanner

Advertisements


Nessus is the world's most widely-deployed vulnerability and configuration assessment product. Features include high-speed discovery, configuration auditing and misconfiguration checks (e.g. open mail relays, missing patches), asset profiling, sensitive data discovery, patch management integration, PCI DSS audits and vulnerability analysis. Unlike chkrootkit, rkhunter or LMD, Nessus mainly checks for vulnerabilities rather than rootkits.

You can download the rpm from nessus.org

Install nessus using rpm
[root@server src]# rpm -ivh Nessus-5.0.0-es5.i386.rpm
Preparing...                ########################################### [100%]
   1:Nessus                 ########################################### [100%]
nessusd (Nessus) 5.0.0 [build R23018] for Linux
(C) 1998 - 2012 Tenable Network Security, Inc.
Processing the Nessus plugins...
[##################################################]
All plugins loaded
 - You can start nessusd by typing /sbin/service nessusd start
 - Then go to https://server.lap.work:8834/ to configure your scanner
[root@server src]#

Start the nessus service
[root@server src]# /sbin/service nessusd start
Starting Nessus services:                                  [  OK  ]
[root@server src]#
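To have the scanner start automatically at boot (assuming the SysV init script installed by the RPM, as used above), the usual chkconfig command applies:
chkconfig nessusd on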

By default, Nessus binds to port 8834.
[root@server src]# netstat  -ntpla | grep 8834
tcp        0      0 0.0.0.0:8834                0.0.0.0:*                   LISTEN      5754/nessusd
tcp        0      0 :::8834                          :::*                            LISTEN      5754/nessusd
[root@server src]#

Now you can access Nessus through the web interface at
https://IP_address_of_the_nessus_server:8834

You have to get a free or enterprise license from nessus.org. Then you can create an admin account to run scans and generate reports.

configuring nfs in centos linux

Advertisements

NFS is an abbreviation for Network File System. It is used on Linux and Unix platforms for sharing directories between Linux or Unix machines over a network, much like folder sharing on Windows systems. It was originally developed by Sun Microsystems. We will see how to install and configure NFS, how to mount an NFS share, what processes are associated with NFS, why portmap is needed for NFS, how to list the NFS shares of a system, and so on.

Advantages of NFS are:
Local systems need less disk space because commonly used data can be stored on a single server and accessed by others over the network using NFS.
We can mount removable devices such as DVD, CD-ROM or floppy drives on a single system and make them available to other systems by sharing them via NFS.

The package name is nfs-utils. We can check whether the nfs package is installed using the following command.
[root@server ~]# rpm -qa | grep -i nfs
nfs-utils-1.0.9-33.el5
[root@server ~]#
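If the package were missing, it could be installed with yum; on CentOS 5 the portmap package is also needed for NFS:
yum install nfs-utils portmap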

Checking the status of the nfs service
[root@server ~]# /etc/init.d/nfs status
rpc.mountd is stopped
nfsd is stopped

Starting the nfs service
[root@server ~]# /etc/init.d/nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]

NFS binds to TCP port 2049 by default.
[root@server ~]# netstat -ntpla | grep 2049
tcp        0      0 0.0.0.0:2049                0.0.0.0:*                   LISTEN      -

You can find all the sub-processes and bound ports of NFS with the rpcinfo command. NFS takes the ports assigned by portmap, so portmap needs to be running for NFS to work.
[root@server ~]# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    832  rquotad
    100011    2   udp    832  rquotad
    100011    1   tcp    835  rquotad
    100011    2   tcp    835  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  32773  nlockmgr
    100021    3   udp  32773  nlockmgr
    100021    4   udp  32773  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100021    1   tcp  35223  nlockmgr
    100021    3   tcp  35223  nlockmgr
    100021    4   tcp  35223  nlockmgr
    100005    1   udp    872  mountd
    100005    1   tcp    875  mountd
    100005    2   udp    872  mountd
    100005    2   tcp    875  mountd
    100005    3   udp    872  mountd
    100005    3   tcp    875  mountd
[root@server ~]#

/etc/exports is the main file for nfs. We specify the directories to be shared in this file with the information for whom it is shared and with which permissions it is shared.
* - means the share is available to all hosts (any IP address).
ro - means read only
rw - means read write

[root@server ~]# cat /etc/exports
#Directory_path   IP_address(Permissions)
/media/CentOS *(ro)
/kick *()
[root@server ~]#
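For comparison, a more restrictive export line for a hypothetical /data directory, shared read-write with a single subnet only, could look like this:
/data 192.168.137.0/24(rw,sync,no_root_squash)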

To activate all shares specified in /etc/exports run the following command
[root@server ~]# exportfs -a

If you make any changes in /etc/exports, you can reload it using the following command
[root@server ~]# exportfs -r

You can list the permissions of the shares by running
[root@server ~]# exportfs -v
/media/CentOS   <world>(ro,wdelay,root_squash,no_subtree_check,anonuid=65534,anongid=65534)
/kick           <world>(ro,wdelay,root_squash,no_subtree_check,anonuid=65534,anongid=65534)

For checking the shares in a system with ip address  192.168.137.100
[root@server ~]# showmount -e 192.168.137.100
Export list for 192.168.137.100:
/kick         *
/media/CentOS *
[root@server ~]#

From a remote machine you can mount the share /media/CentOS on the machine 192.168.137.100 to /mnt as
[root@server ~]# mount 192.168.137.100:/media/CentOS /mnt
[root@server ~]# mount
*** OUTPUT TRUNCATED ***
192.168.137.100:/media/CentOS on /mnt type nfs (rw,addr=192.168.137.100)
[root@server ~]#
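To make such a mount persistent across reboots, an entry like the following could be added to /etc/fstab on the client:
192.168.137.100:/media/CentOS  /mnt  nfs  defaults  0 0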

[root@server ~]# cat /var/lib/nfs/etab
/media/CentOS   *(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,mapping=identity,anonuid=65534,anongid=65534)
/kick   *(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,mapping=identity,anonuid=65534,anongid=65534)

Some of the important nfs files are

/var/lib/nfs/etab contains information about what filesystems should be exported to whom at the moment.
/var/lib/nfs/rmtab contains a list of which filesystems actually are mounted by certain clients at the moment.
/proc/fs/nfs/exports contains information about what filesystems are exported to actual client (individual, not subnet or whatever) at the moment.
/var/lib/nfs/xtab is the same information as /proc/fs/nfs/exports but is maintained by nfs-utils instead of directly by the kernel. It is only used if /proc isn't mounted.

[root@server ~]# cat /var/lib/nfs/rmtab
192.168.137.200:/media/CentOS:0x00000002
192.168.137.200:/kick:0x00000002
192.168.137.248:/media/CentOS:0x00000003
192.168.137.20:/media/CentOS:0x00000001
[root@server ~]#

Wednesday, February 29, 2012

Remote installation of centos linux

Advertisements

Remote installation, or installing CentOS Linux from a remote location, can be done in a few ways, mainly using NFS, FTP or HTTP. During a remote installation we can pull the graphical screen to our local system via VNC. Installations can also be categorized as attended and unattended. In an attended installation we sit in front of the system and answer the prompts; in an unattended installation we write the answers into a file and tell the installer to read them from it. In Linux, unattended installation can be done with a kickstart file. We can keep everything on an installation server and configure network installation via PXE so that the entire installation needs just a few clicks. We will discuss these methods in this article.

Monday, February 27, 2012

Configuring dhcp server in linux

Advertisements



DHCP stands for Dynamic Host Configuration Protocol.
The Dynamic Host Configuration Protocol (DHCP) is a network configuration protocol for hosts on Internet Protocol (IP) networks. Computers that are connected to IP networks must be configured before they can communicate with other hosts. The most essential information needed is an IP address, a default route and a routing prefix. DHCP eliminates this manual configuration task for the network administrator. It also provides a central database of devices that are connected to the network and eliminates duplicate resource assignments.
In addition to IP addresses, DHCP also provides other configuration information, particularly the IP addresses of local Domain Name Servers (DNS), network boot servers, or other service hosts. Let's see how to install and configure a DHCP server on a CentOS 5 or Red Hat EL5 system.

Here we will set the dhcp server for the network 192.168.137.0/24

Network 192.168.137.0/24

Client's ip range        192.168.137.150 - 192.168.137.250
Gateway 192.168.137.1
Bcast 192.168.137.255
DNS servers  8.8.8.8 and 8.8.4.4

The package name is dhcp. We will install it using yum.
[root@server ~]# yum install dhcp
[root@server ~]# rpm -q dhcp
dhcp-3.0.5-13.el5
[root@server ~]#

/etc/dhcpd.conf - is the  main configuration file

/var/lib/dhcpd  - Lease directory
/var/lib/dhcpd/dhcpd.leases - IPV4 Leases

The default DHCP configuration file just points to the sample file.
[root@server ~]# cat /etc/dhcpd.conf
#
# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp*/dhcpd.conf.sample
#[root@server ~]#

We will copy the sample file and edit it.
[root@server ~]# cp /usr/share/doc/dhcp*/dhcpd.conf.sample /etc/dhcpd.conf
[root@server ~]# cat /etc/dhcpd.conf
ddns-update-style interim;
ignore client-updates;
subnet 192.168.137.0 netmask 255.255.255.0 {
        option routers                  192.168.137.1;
        option subnet-mask              255.255.255.0;
        option domain-name              "lap.work";
        option domain-name-servers      8.8.8.8, 8.8.4.4;
        range dynamic-bootp 192.168.137.150 192.168.137.250;
        default-lease-time 21600;
        max-lease-time 43200;
}
[root@server ~]#
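If a particular machine should always receive the same address, a host declaration can be added to dhcpd.conf; the hostname, MAC address and IP below are only placeholders:
host printer {
        hardware ethernet 00:0c:29:aa:bb:cc;
        fixed-address 192.168.137.10;
}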

Check the service and start it.
[root@server ~]# /etc/init.d/dhcpd status
dhcpd is stopped
[root@server ~]# /etc/init.d/dhcpd start
Starting dhcpd:                                            [  OK  ]
[root@server ~]# chkconfig dhcpd on

Now from the client machine we can set the network settings on the eth0 device to dhcp and restart the network.

DHCP works in DORA format

Client sends DHCPDISCOVER (D)
Server sends DHCPOFFER (O)
Client sends DHCPREQUEST (R)
Server sends DHCPACK (A)

Now, tailing /var/log/messages on the DHCP server, we can see all of this happening while we restart the network on the client.
[root@server ~]# tail -f /var/log/messages
Feb 27 22:50:09 server dhcpd: DHCPDISCOVER from 00:0c:29:8d:16:93 via eth0
Feb 27 22:50:10 server dhcpd: DHCPOFFER on 192.168.137.250 to 00:0c:29:8d:16:93 via eth0
Feb 27 22:50:10 server dhcpd: DHCPREQUEST for 192.168.137.250 (192.168.137.100) from 00:0c:29:8d:16:93 via eth0
Feb 27 22:50:10 server dhcpd: DHCPACK on 192.168.137.250 to 00:0c:29:8d:16:93 via eth0

The lease file at the server side is stored at
[root@server ~]# cat /var/lib/dhcpd/dhcpd.leases
# All times in this file are in UTC (GMT), not your local timezone.   This is
# not a bug, so please don't ask about it.   There is no portable way to
# store leases in the local timezone, so please don't request this as a
# feature.   If this is inconvenient or confusing to you, we sincerely
# apologize.   Seriously, though - don't ask.
# The format of this file is documented in the dhcpd.leases(5) manual page.
# This lease file was written by isc-dhcp-V3.0.5-RedHat

lease 192.168.137.250 {
  starts 1 2012/02/27 17:04:49;
  ends 1 2012/02/27 23:04:49;
  binding state active;
  next binding state free;
  hardware ethernet 00:0c:29:8d:16:93;
}
[root@server ~]#

If you want, you can make a separate log file for dhcp.
Add this line to dhcpd.conf (syslog only provides facilities local0 through local7, so we use local7 here):
log-facility local7;

so dhcpd.conf becomes
[root@server ~]# cat /etc/dhcpd.conf
ddns-update-style interim;
ignore client-updates;
subnet 192.168.137.0 netmask 255.255.255.0 {
        option routers                  192.168.137.1;
        option subnet-mask              255.255.255.0;
        option domain-name              "lap.work";
        option domain-name-servers      8.8.8.8, 8.8.4.4;
        range dynamic-bootp 192.168.137.150 192.168.137.250;
        default-lease-time 21600;
        max-lease-time 43200;
}
log-facility local7;
[root@server ~]#
Restart the dhcpd service.
Create the log file:
touch /var/log/dhcpd.log
Then add the following line to /etc/syslog.conf
local7.*       /var/log/dhcpd.log
and restart the syslog service.

On the client machine, the interface gets the IP 192.168.137.250, which is in the range we specified.
[root@server ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:8D:16:93
          inet addr:192.168.137.250  Bcast:192.168.137.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe8d:1693/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:361 errors:0 dropped:0 overruns:0 frame:0
          TX packets:544 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:39256 (38.3 KiB)  TX bytes:130376 (127.3 KiB)
          Interrupt:75 Base address:0x2000

The nameserver details are also set:
[root@server ~]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script
search lap.work
nameserver 8.8.8.8
nameserver 8.8.4.4
[root@server ~]#

The lease file at the client is
[root@server ~]# cat /var/lib/dhclient/dhclient-eth0.leases
lease {
  interface "eth0";
  fixed-address 192.168.137.250;
  option subnet-mask 255.255.255.0;
  option routers 192.168.137.1;
  option dhcp-lease-time 21600;
  option dhcp-message-type 5;
  option domain-name-servers 8.8.8.8,8.8.4.4;
  option dhcp-server-identifier 192.168.137.100;
  option domain-name "lap.work";
  renew 1 2012/2/27 19:37:49;
  rebind 1 2012/2/27 22:34:52;
  expire 1 2012/2/27 23:19:52;
}
[root@server ~]#

Verifying signatures using GPG or PGP

Advertisements


GPG - GNU Privacy Guard
GnuPG is the GNU project's complete and free implementation of the OpenPGP standard as defined by RFC 4880. GnuPG allows you to encrypt and sign your data and communication, and features a versatile key management system as well as access modules for all kinds of public key directories.

Importing a GPG key. GPG is compatible with PGP (Pretty Good Privacy), so you can import a PGP key as well.
gpg --import name.gpg

[root@work2 src]# gpg --import sendmail2011.asc
gpg: key A97884B0: public key "Sendmail Signing Key/2011 <sendmail@Sendmail.ORG>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg: no ultimately trusted keys found

Listing the installed gpg keys. This will list all the GPG/PGP keys currently installed on your system.
gpg --list-keys

[root@work2 src]# gpg --list-keys
/root/.gnupg/pubring.gpg
------------------------
pub   2048R/CEEEF43B 2011-12-14
uid                  Sendmail Signing Key/2012 <sendmail@Sendmail.ORG>
sub   2048R/1998F74E 2011-12-14

pub   2048R/A97884B0 2011-01-04
uid                  Sendmail Signing Key/2011 <sendmail@Sendmail.ORG>
sub   2048R/620439A5 2011-01-04

Verifying a package. Now we verify the tarball against the downloaded signature file using the imported key.
gpg --verify name.x.x.x.sig name.x.x.x.tar.gz

[root@work2 src]# gpg --verify sendmail.8.14.5.tar.gz.sig sendmail.8.14.5.tar.gz
gpg: Signature made Mon 16 May 2011 09:40:21 AM IST using RSA key ID A97884B0
gpg: Good signature from "Sendmail Signing Key/2011 <sendmail@Sendmail.ORG>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 5872 6218 A913 400D E660  3601 39A4 C77D A978 84B0
[root@work2 src]#
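The "not certified with a trusted signature" warning only means the signing key itself has not been trust-signed in your keyring. If you have verified the key's fingerprint through another channel, you can locally sign it to silence the warning:
gpg --lsign-key A97884B0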

Saturday, February 25, 2012

Installation of Linux Malware Detect or maldet

Advertisements

Linux Malware Detect (LMD) is a malware scanner for Linux released under the GNU GPLv2 license, designed around the threats faced in shared hosting environments. It uses threat data from network edge intrusion detection systems to extract malware that is actively being used in attacks and generates signatures for detection. In addition, threat data is also derived from user submissions via the LMD checkout feature and from malware community resources. The signatures that LMD uses are MD5 file hashes and HEX pattern matches; they can also easily be exported to any number of detection tools such as ClamAV.

Other related scanners are the rootkit checkers rkhunter and chkrootkit.