Wednesday, November 16, 2011

ip_conntrack problem: error: "net.ipv4.ip_conntrack_max" is an unknown key

To fix this issue do the following:

1. modprobe ip_conntrack
2. lsmod | grep conn -- if you see entries, the module has been loaded correctly
3. sysctl -w net.ipv4.ip_conntrack_max=<value> -- to set the key at runtime (add the same line to /etc/sysctl.conf to make it persistent)
4. sysctl -p -- to reload /etc/sysctl.conf and confirm the change was actually applied.

Thursday, November 3, 2011

Command to check CronJobs

There are times when you need to immediately check whether there are runaway cron entries hidden somewhere. This terminal command will save you time doing just that.

for user in $(cut -f1 -d: /etc/passwd); do crontab -u $user -l; done

Now, if for example you need to run this remotely, you can do it like this:

ssh someusername@some_remote_host "sudo sh -c 'for user in \$(cut -f1 -d: /etc/passwd); do crontab -u \$user -l; done'"


Friday, October 14, 2011

Error writing message: File too large

A Postfix error that occurs when a user's mailbox is full.
Regularly caused by cronjob output, particularly when there are systematic errors in the cron job that lead to long and frequent email reports.

mailbox_size_limit and virtual_mailbox_limit may be relevant.
You can use
postconf | egrep '(mailbox_size_limit|virtual_mailbox_limit)'
to see what they are currently set to, which is probably around 50 MB.

Set these to a higher limit (in your Postfix .cf files), or even to 0 to remove the limits entirely.
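As an illustration, assuming the limits live in the stock main.cf (the 100 MB figure is arbitrary), the raised values might look like this:

```
# /etc/postfix/main.cf -- illustrative values (100 MB each)
mailbox_size_limit = 104857600
virtual_mailbox_limit = 104857600
```

Run "postfix reload" afterwards so the new limits take effect.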

Sunday, September 25, 2011

Fix issue with "@" on a Perl Script

There are times when you encounter problems executing Perl scripts; they fail with warnings like this one:

Possible unintended interpolation of @23mxxxx in string at
/usr/local/scripts/ line 212 (#1)
(W ambiguous) You said something like `@foo' in a double-quoted string
but there was no array @foo in scope at the time. If you wanted a
literal @foo, then write it as \@foo; otherwise find out what happened
to the array you apparently lost track of.

Name "main::23mxxxx" used only once: possible typo at
/usr/local/scripts/ line 212 (#2)
(W once) Typographical errors often show up as unique variable names.
If you had a good reason for having a unique name, then just mention it
again somehow to suppress the message. The our declaration is
provided for this purpose.

NOTE: This warning detects symbols that have been used only once so $c, @c,
%c, *c, &c, sub c{}, c(), and c (the filehandle or format) are considered
the same; if a program uses $c only once but also uses any of the others it
will not trigger this warning.


Escape the "@" with a "\" (i.e. write \@); this should fix it.

Thursday, September 22, 2011

Advice to employees on the proper use of the System Administrator's valuable time

(In following examples, we will substitute the name "Ted" as the System Administrator)

* Make sure to save all your MP3 files on your network drive. No sense in wasting valuable space on your local drive! Plus, Ted loves browsing through 100+ GB of music files while he backs up the servers.
* Play with all the wires you can find. If you can't find enough, open something up to expose them. After you have finished, and nothing works anymore, put it all back together and call Ted. Deny that you touched anything and that it was working perfectly only five minutes ago. Ted just loves a good mystery. For added effect you can keep looking over his shoulder and ask what each wire is for.
* Never write down error messages. Just click OK, or restart your computer. Ted likes to guess what the error message was.
* When talking about your computer, use terms like "Thingy" and "Big Connector."
* If you get an EXE file in an email attachment, open it immediately. Ted likes to make sure the anti-virus software is working properly.
* When Ted says he's coming right over, log out and go for coffee. It's no problem for him to remember your password.
* When you call Ted to have your computer moved, be sure to leave it buried under a year-old pile of postcards, baby pictures, stuffed animals, dried flowers, unpaid bills, bowling trophies and Popsicle sticks. Ted doesn't have a life, and he finds it deeply moving to catch a glimpse of yours.
* When Ted sends you an email marked as "Highly Important" or "Action Required", delete it at once. He's probably just testing some new-fangled email software.
* When Ted's eating lunch at his desk or in the lunchroom, walk right in, grab a few of his fries, then spill your guts and expect him to respond immediately. Ted lives to serve, and he's always ready to think about fixing computers, especially yours.
* When Ted's at the water cooler or outside taking a breath of fresh air, find him and ask him a computer question. The only reason he takes breaks at all is to ferret out all those employees who don't have email or a telephone.
* Send urgent email ALL IN UPPERCASE. The mail server picks it up and flags it as a rush delivery.
* When the photocopier doesn't work, call Ted. There's electronics in it, so it should be right up his alley.
* When you're getting a NO DIAL TONE message at your home computer, call Ted. He enjoys fixing telephone problems from remote locations. Especially on weekends.
* When something goes wrong with your home PC, dump it on Ted's chair the next morning with no name, no phone number, and no description of the problem. Ted just loves a good mystery.
* When you have Ted on the phone walking you through changing a setting on your PC, read the newspaper. Ted doesn't actually mean for you to DO anything. He just loves to hear himself talk.
* When your company offers training on an upcoming OS upgrade, don't bother to sign up. Ted will be there to hold your hand when the time comes.
* When the printer won't print, re-send the job 20 times in rapid succession. That should do the trick.
* When the printer still won't print after 20 tries, send the job to all the printers in the office. One of them is bound to work.
* Don't use online help. Online help is for wimps.
* Don't read the operator's manual. Manuals are for wussies.
* If you're taking night classes in computer science, feel free to demonstrate your fledgling expertise by updating the network drivers for you and all your co-workers. Ted will be grateful for the overtime when he has to stay until 2:30am fixing all of them.
* When Ted's fixing your computer at a quarter past one, eat your Whopper with cheese in his face. He functions better when he's slightly dizzy from hunger.
* When Ted asks you whether you've installed any new software on your computer, LIE. It's no one else's business what you've got on your computer.
* If the mouse cable keeps knocking down the framed picture of your dog, lift the monitor and stuff the cable under it. Those skinny mouse cables were designed to have 55 lbs. of computer monitor crushing them.
* If the space bar on your keyboard doesn't work, blame Ted for not upgrading it sooner. Hell, it's not your fault there's a half pound of pizza crust crumbs, nail clippings, and big sticky drops of Mountain Dew under the keys.
* When you get the message saying "Are you sure?", click the "Yes" button as fast as you can. Hell, if you weren't sure, you wouldn't be doing it, would you?
* Feel perfectly free to say things like "I don't know nothing about that boneheaded computer crap." It never bothers Ted to hear his area of professional expertise referred to as boneheaded crap.
* Don't even think of breaking large print jobs down into smaller chunks. God forbid somebody else should sneak a one-page job in between your 500-page Word document.
* When you send that 500-page document to the printer, don't bother to check if the printer has enough paper. That's Ted's job.
* When Ted calls you 30 minutes later and tells you that the printer printed 24 pages of your 500-page document before it ran out of paper, and there are now nine other jobs in the queue behind yours, ask him why he didn't bother to add more paper.
* When you receive a 130 MB movie file, send it to everyone as a high-priority mail attachment. Ted's provided plenty of disk space and processor capacity on the new mail server for just those kinds of important things.
* When you bump into Ted in the grocery store on a Sunday afternoon, ask him computer questions. He works 24/7, and is always thinking about computers, even when he's at the supermarket buying toilet paper and doggie treats.
* If your son is a student in computer science, have him come in on the weekends and do his projects on your office computer. Ted will be there for you when your son's illegal copy of Visual Basic 6.0 makes the Access database keel over and die.
* When you bring Ted your own "no-name" brand PC to repair for free at the office, tell him how urgently he needs to fix it so you can get back to playing EverQuest. He'll get on it right away, because everyone knows he doesn't do anything all day except surf the Internet.
* Don't ever thank Ted. He loves fixing everything AND getting paid for it!

Now, this is all assuming that you are like Ted! But I am not Ted, and most of the stuff here is just for fun anyway! :P


Friday, September 9, 2011

EPEL Repositories for your CentOS

To enable EPEL (Extra Packages for Enterprise Linux) for CentOS 5 or 6, x86 or x64, log in to your server over SSH and execute the command matching your OS (unsure which version of CentOS you are running? Check with "cat /etc/redhat-release"):

CentOS 6.x 32-bit (x86/i386):

rpm -Uvh <URL of the epel-release RPM for EL6, i386>

CentOS 6.x 64-bit (x64):

rpm -Uvh <URL of the epel-release RPM for EL6, x86_64>

CentOS 5.x 32-bit (x86/i386):

rpm -Uvh <URL of the epel-release RPM for EL5, i386>

CentOS 5.x 64-bit (x64):

rpm -Uvh <URL of the epel-release RPM for EL5, x86_64>

Disable EPEL Repo:

If you want to disable the EPEL repo on your server, set "enabled=0" in "/etc/yum.repos.d/epel.repo":

vim /etc/yum.repos.d/epel.repo
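After the edit, the [epel] section might look like this (only the enabled flag matters here; the other fields are illustrative):

```
[epel]
name=Extra Packages for Enterprise Linux
enabled=0
gpgcheck=1
```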

Thursday, August 25, 2011

Transfer Files from Linux to a Samsung Tab 10.1v

There are instances where the Samsung tab won't mount automatically. There are a couple of fixes out there, but this one works fine with my setup; I am using Linux Mint.

This works on 32-bit and 64-bit Linux. It is just a quick guide on how to configure Linux for file transfers using MTP. The instructions and config are intended for Ubuntu Natty 32-bit and 64-bit, though the same process will work on other platforms – the only real difference being the package manager commands and possibly the config file locations.

The attached files are for the Galaxy Tab 10.1v but should work for the 10.1g / 10.1 as well. See the end of the post to learn how to modify this config to work for other devices.

How to configure for gMTP and other Media Sync tools

1) Install aptitude

sudo apt-get install aptitude

2) Install mtp-tools and mtpfs

sudo aptitude install mtpfs mtp-tools

3) Download the rules archive to your desktop (there is one file for 32-bit Linux and one for 64-bit Linux).

4) Extract the 98-gtab.rules file to your desktop

5) Copy the rules file to /etc/udev/rules.d/

sudo cp ~/Desktop/98-gtab.rules /etc/udev/rules.d

6) Reboot

7) Connect your Tab

8) Run the following command to confirm it is working:

ls /dev | grep gtab

This command should return “gtab” if successful. If not, follow the “Modifying” guide below.

9) Download / install gMTP

sudo apt-get install gmtp

10) Open gMTP and select “connect” from the menu

Setting up for Automount (Optional, but recommended)

Before following these instructions, you must have completed Steps 1-8 above.

1) Edit your fstab file to add your gtab:

sudo gedit /etc/fstab

2) Add this to the end of the file:


mtpfs /media/gtab fuse user,noauto,allow_other 0 0

3) Save and exit

4) Open fuse.conf for editing:

sudo gedit /etc/fuse.conf

5) Find the following line and remove the "#" (uncommenting user_allow_other is required by the allow_other option used in fstab above):

#user_allow_other
6) Save and exit

7) Open and edit the groups file:

sudo gedit /etc/group

8) Find the line for the group 'fuse' and append your username to the end, e.g. (the GID and username here are illustrative):

fuse:x:104:yourusername
9) Save and exit

10) Create the folder to mount your Tab:

sudo mkdir /media/gtab

11) Take ownership of the folder:

sudo chown :users /media/gtab

12) Reboot

13) Plug in your Tab.

14) Click on the Places menu and click gtab.

15) You’re in business!

Modifying for other devices

If the above doesn’t work immediately on the 10.1g / 10.1 (I have only tested on the 10.1v), you can easily edit the rules file to support your device.

1) Install lsusb

sudo apt-get install lsusb

2) Run lsusb


3) Check the output of this command to find your device. The 10.1v is shown like this:

Bus 001 Device 010: ID 04e8:6860 Samsung Electronics Co., Ltd

4) Make a note of the Vendor and Product IDs. In the example above, the vendor ID is 04e8 and the product ID is 6860 (note 04e8:6860 in the output).

5) Open the rules file for editing (if it’s not already in /etc/udev/rules.d, copy it there now)

sudo gedit /etc/udev/rules.d/98-gtab.rules

6) Find this line

ATTRS{idVendor}=="04e8", ATTRS{idProduct}=="6860", MODE="0666", SYMLINK+="gtab"

7) Replace the Vendor ID (04e8) and Product ID (6860) with the ones that you got from step 3 above.

8) Save and exit

9) Reboot

10) Follow step 7 onward in the first guide above


ACTION!="add", GOTO="gtab_rules_end"
SUBSYSTEM!="usb|usb_device", GOTO="gtab_rules_end"

ATTRS{idVendor}=="04e8", ATTRS{idProduct}=="6860", MODE="0666", SYMLINK+="gtab"

LABEL="gtab_rules_end"


ACTION!="add", GOTO="gtab_rules_end"
SUBSYSTEM!="usb|usb_device", GOTO="gtab_rules_end"

ATTRS{idVendor}=="04e8", ATTRS{idProduct}=="6860", MODE="0777", SYMLINK+="gtab"

LABEL="gtab_rules_end"



Tuesday, August 23, 2011

Additional Datastores for your ESXi 4.1 server via NFS running on CentOS 6

I recently ran out of storage for creating new server instances in our test environment, and the IBM 300X series has maxed out. What I did was introduce an NFS drive and mount it on ESXi using the vSphere client running on Windows. There are a couple of chores to do before you can perform this. In my case, I have a spare server running CentOS 6 with enough SATA ports to get the job done. So basically, the idea was to prep a new hard drive and configure it to be used as an NFS drive. Let's get down to the details.

First make a directory to place the NFS export mount and assign permissions. Also open up write permissions on this directory if you'd like anyone to be able to write to it; be careful with this, as there are security implications: anyone who mounts the share will be able to write to it:

# mkdir /nfs
# chmod a+w /nfs

Now we need to install the NFS server packages. We will include a package named "rpcbind", which is a renamed implementation of the old "portmap" service. Note that "rpcbind" may not need to be running if you are going to use NFSv4 only, but it is a dependency of the "nfs-utils" package.

# yum -y install nfs-utils rpcbind

Verify that the required services are configured to start, “rpcbind” and “nfslock” should be on by default anyhow:

# chkconfig nfs on
# chkconfig rpcbind on
# chkconfig nfslock on

Configure APF Firewall for NFS

Rather than disabling the firewall, it is a good idea to configure NFS to work with APF (a front end for iptables). For NFSv3/v4 we need to lock several daemons related to rpcbind/portmap to statically assigned ports. We will then specify these ports to be made available in the INPUT chain for inbound traffic. Fortunately for NFSv4 this is greatly simplified, and in a basic configuration TCP 2049 should be the only inbound port required.

First edit the “/etc/sysconfig/nfs” file and uncomment these directives. You can customize the ports if you wish but I will stick with the defaults:

# vi /etc/sysconfig/nfs
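On a stock CentOS 6 file these directives ship commented out; uncommented with their default values (matching the ports opened in the firewall below), they look roughly like this:

```
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
```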


We now need to modify the APF firewall configuration to allow access to the NFS ports. For simplicity I did not use the "iptables" command to insert the appropriate rules; in my case I am using APF to get the iptables work done.

# vi /etc/apf/conf.apf

Look for the IG_TCP_CPORTS section and add these ports; also add the UDP ports to the IG_UDP_CPORTS section:

IG_TCP_CPORTS="111,662,875,892,2049,32803"
IG_UDP_CPORTS="111,662,875,892,2049,32769"

Now save the APF configuration file and restart APF so the new rules apply:

# service apf restart

Now we need to edit “/etc/exports” and add the path to publish in NFS. In this example I will make the NFS export available to clients on the subnet. I will also allow read/write access, specify synchronous writing, and allow root access. Asynchronous writes are supposed to be safe in NFSv3 and would allow for higher performance if you desire. The root access is potentially a security risk but AFAIK it is necessary with VMware ESXi.

# vi /etc/exports
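A sample export line matching the description above (the subnet is illustrative; swap in your own). "sync" gives synchronous writes and "no_root_squash" grants the root access that ESXi needs:

```
/nfs 192.168.10.0/24(rw,sync,no_root_squash)
```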


Configure SELinux for NFS Export

Rather than disabling SELinux, it is a good idea to configure it to allow remote clients to access files exported via NFS. This is fairly simple and involves setting an SELinux boolean value using the "setsebool" utility. In this example we'll use the read/write boolean "nfs_export_all_rw", but there are also "nfs_export_all_ro" to allow read-only NFS exports and "use_nfs_home_dirs" to allow home directories to be exported.

# setsebool -P nfs_export_all_rw 1

Now we will start the NFS services:

# service rpcbind start
# service nfs start
# service nfslock start

If at any point you add or remove directory exports with NFS in the “/etc/exports” file, run “exportfs” to change the export table:

# exportfs -a

Implement TCP Wrappers for Greater Security

TCP Wrappers can allow us greater scrutiny in allowing hosts to access certain listening daemons on the NFS server other than using iptables alone. Keep in mind TCP Wrappers will parse first through “hosts.allow” then “hosts.deny” and the first match will be used to determine access. If there is no match in either file, access will be permitted.

Append a rule with a subnet or domain name appropriate for your environment to restrict allowable access. Domain names are written with a preceding period; the subnet can likewise be specified like "192.168.10." if desired, instead of including the netmask.

vi /etc/hosts.allow


Append these directives to the “hosts.deny” file to deny access from all other domains or networks:

vi /etc/hosts.deny
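As an illustration (the subnet and daemon list are assumed; adjust for your environment), the pair of files might contain:

```
# /etc/hosts.allow
rpcbind: 192.168.10.
mountd: 192.168.10.

# /etc/hosts.deny
rpcbind: ALL
mountd: ALL
```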


And that should just about do it. No restarts should be necessary to apply the TCP Wrappers configuration. I was able to connect with both my Ubuntu NFSv4 and VMware ESXi NFSv3 clients without issues. If you’d like to check activity and see the different NFS versions running simply type:

# watch -n 1 "nfsstat" 

If you encounter errors while attaching/adding the new NFS resource to your ESXi host via vCenter do the following steps:

1. Run the "setup" command, untick the "Enable firewall" option, and save.
2. Edit the /etc/selinux/config file, change the line SELINUX=enforcing to SELINUX=disabled, then save and close.
3. Restart the rpcbind, nfslock and nfs services.

Try again and cheers!!!

Tuesday, August 16, 2011

Force Apache2 to Redirect All Inbound Traffic to SSL

In this post I will share another good write-up from the net: configuring Apache2 to force redirection of HTTP traffic to HTTPS.

Apache2: Forcing All Inbound Traffic to SSL

So, you have an Apache 2 web server and you have decided that you want to force all inbound traffic to be encrypted via HTTPS (port 443) instead of HTTP (port 80). This method actually “dumbs down” the connection so the average user can’t inadvertently negotiate your web site without encrypting their traffic.

My web server of choice is Apache2, running on a Linux operating system, preferably Debian, but we'll discuss an option for Red Hat Enterprise Linux 4 (RHEL-4). That being said, you need Apache installed and running on Linux. You also need the Apache mod_ssl module installed and an encryption key generated for your server.

In the following snippet of .conf file we will first load mod_rewrite and then redirect all inbound port 80 traffic to port 443.

Add the following code section to your httpd.conf down around line #220, right after the big “load modules” section.

Be aware that “#’s” indicate a comment line in the .conf file and are ignored by Apache2.

#### This is intended to force HTTPS ####
#### for all inbound HTTP requests ####

# This module (mod_rewrite) simply tells Apache2 that all connections to
# port 80 need to go to port 443 – SSL – No exceptions

LoadModule rewrite_module modules/mod_rewrite.so

RewriteEngine on

# The line below sets the rewrite condition:
# if the server port does not equal 443, then this condition is true

RewriteCond %{SERVER_PORT} !^443$

# The line below is the rule, it states that if above condition is true,
# and the request can be any url, then redirect everything to https:// plus
# the original url that was requested.

RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]


Add the code to httpd.conf and restart Apache2; check your logs for errors to ensure a clean startup, then connect to your server on port 80. You should be instantly redirected to 443.
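For comparison, here is a simpler sketch that achieves the same redirect with the Redirect directive of mod_alias instead of mod_rewrite (the ServerName is illustrative):

```
<VirtualHost *:80>
    ServerName www.example.com
    # Send every request on port 80 to the HTTPS site
    Redirect permanent / https://www.example.com/
</VirtualHost>
```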

Alternatively, on RHEL4, you can add the code above into a file (you create) called mod_rewrite.conf in the /etc/httpd/conf.d directory (/etc/httpd/conf.d/mod_rewrite.conf).

I make a habit of "tagging" any configuration file I edit on a Linux server with a comment mark (like "XXX"), so when I come back to it later I can find my edits easily. Your initials work well for this and help identify which admin made the change.


Thursday, August 11, 2011

Convert Matroska file formats to Avi

In this post we will dig into a simple Linux command-line tool which can help you convert your Matroska (.mkv) files into .avi files:

ffmpeg -i harry_potter_sorcerer_stone.mkv -target vcd harry_potter_sorcerrer_stone.avi

Depending on the size of the file, it usually takes an hour to an hour and a half to convert one video into an .avi file. Cheers!!!
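If you have a folder of .mkv files, a small loop saves retyping the command (a sketch; the filenames and the -target flag follow the example above):

```shell
# Derive the output name by swapping the extension.
avi_name() { printf '%s\n' "${1%.mkv}.avi"; }

for f in *.mkv; do
  [ -e "$f" ] || continue            # skip when the glob matches nothing
  ffmpeg -i "$f" -target vcd "$(avi_name "$f")"
done
```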


Saturday, July 23, 2011

The Perils of TRUST - An IS Auditors Nightmare

Having worked for many years as a systems/network administrator, I have learned to live with and adapt to the changing patterns of IT, which we have grown to know well. I have made enemies doing the right thing. I have made friends of those enemies who have done the wrong thing. In the end, we are rational beings tending along the stages of life. There are some things that just cannot be corrected due to their mortal beginnings, and one of these truths is "TRUST".

To TRUST is to believe that a thing, undertaking, and/or idea will work according to an individual's plans and desires. It is the belief system we choose to adopt that provides us the necessary cognitive apprehension of what "IS" and what "WAS". As auditors, we are taught the high values of TRUTH and HONESTY. When we learn to TRUST in something which we believe will generate a fair, TRUTHFUL and HONEST outcome, we tend to relax a bit and put down our guard. This should not be the case.

TRUST is the most prized trait of an individual who PRIDES himself/herself on accomplishment and work. Auditors are not an exception. We are continually targeted by MALICIOUS, self-centered agents of the trade. These agents DISGUISE themselves as CO-WORKERS, CLIENTS, FRIENDS and CORRUPTED POLICIES designed to harbor all the LIES and DECEIT man can think of. Therefore, it is our sworn duty to JUSTLY identify these agents and remove them from the SYSTEM. The SYSTEM is what we serve, and through the SYSTEM we grow. Treat it with respect and it will reward you with peace of mind. Treat it with a twisted intent and you are doomed to have sleepless nights.

The reason why I find this so compelling is that I worked as a systems engineer for a good deal of time, and I have learned what is necessary to understand what the DARK desires of an admin are and what they can do to a fellow admin.


A systems administrator builds a new server, performs the necessary hardening, and then performs the necessary ..... [to be continued ...]

Error during kernel upgrade: gzip: stdout: No space left on device

There are times when you will be surprised that package managers do not automatically remove older versions of installed software. This happened to me for the first time when one of the systems I was managing suddenly returned an exit status 1.

Removing the offending application to free up much-needed space is sure to fail, especially if the application in question is a kernel.

Consider this line:

Setting up libcups2 (1.4.6-5ubuntu1.3) ...
dpkg: dependency problems prevent configuration of linux-image-generic:
linux-image-generic depends on linux-image-2.6.38-10-generic; however:
Package linux-image-2.6.38-10-generic is not configured yet.
dpkg: error processing linux-image-generic (--configure):
dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
dpkg: dependency problems prevent configuration of linux-generic:
linux-generic depends on linux-image-generic (=; however:
Package linux-image-generic is not configured yet.
dpkg: error processing linux-generic (--configure):
dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
Setting up linux-headers-2.6.38-10 (2.6.38-10.46) ...
Setting up linux-headers-2.6.38-10-generic (2.6.38-10.46) ...
Setting up linux-headers-generic ( ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.38-8-generic

gzip: stdout: No space left on device
E: mkinitramfs failure cpio 141 gzip 1
update-initramfs: failed for /boot/initrd.img-2.6.38-8-generic
dpkg: error processing initramfs-tools (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Errors were encountered while processing:
E: Sub-process /usr/bin/dpkg returned an error code (1)

If you are guessing that /boot is in deep trouble, you are correct. Now, the tricky part: issuing a purge or autoremove won't do the trick, because the drive no longer has enough space. What do you do next?

1. If it is an old kernel that needs to be removed, look closely at your grub.conf or grub.cfg configuration and identify the kernels you no longer need. Check the currently loaded kernel by issuing uname -r.

2. Take note of the files that need to be moved; in our example it's a Debian-based system, so you will be looking at files like abi-*, config-*, initrd.img-*, vmcoreinfo-*, vmlinuz-* under /boot. Just remove/move the versions you don't need.

3. Once done, issue the command updatedb to update the slocate database of the filesystem.

4. Now you can re-issue the upgrade command, and it will install the new kernel correctly.
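Steps 1-2 can be sketched like this (assuming a Debian-style /boot layout; nothing is deleted, it only prints candidates):

```shell
current=$(uname -r)                          # the kernel you must keep

# True when a file name contains the running kernel's version string.
is_current() { case "$1" in *"$current"*) return 0 ;; *) return 1 ;; esac; }

for f in /boot/vmlinuz-* /boot/initrd.img-*; do
  [ -e "$f" ] || continue                    # glob may match nothing
  is_current "$f" || echo "candidate for removal: $f"
done
```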

Time Snatchers

For the not-so-conventional, always-forward-looking admin, I have a few home-brewed time killers to get you looking back at how it used to be when things weren't so serious.


Films/Documentaries:

1. Pirates of Silicon Valley -- the Apple and Microsoft love affair.
2. Revolution OS -- a documentary highlighting the advocacies of Open Source and the Free Software Foundation.
3. VODO.NET -- if you are a true advocate of Creative Commons, find the time to download their huge array of mini-series, documentaries and short films. This is how TV should be: the viewer chooses if-when-where-it-ends!


Books:

1. Probably not everybody's choice but worth the read: Harry Potter (Books 1-7). You're never too old to spend time discussing things with the kids. It brings out the ideas in them and sponsors confidence, trust, and good learning through reading; a much-needed exercise that is not too common these days, especially for the youth of today.

2. The Art of Unix Programming: a de facto standard! Understand why Unix is still around today and what universal "chi" it has spawned through its 40 years of existence.

3. The Cathedral and the Bazaar: everyone offering "X as a service" should have read this book. Truly worthwhile.

Sports/Leisure -- Family:

Going to church, malling, dining with the family. It gives you a sense of purpose: why you have been working so hard, and where it is all being poured.

Friday, July 22, 2011

Why "curl" is way better than "wget"

I am an OLD SCHOOL admin taught in the old-school class of using wget; I guess it's time to move on. In this section I will highlight some key features showing why curl is far more robust than wget.

Curl is better than wget for the following reasons:
1. Uses libcurl, a cross-platform library
2. curl sends more stuff to stdout and reads more from stdin
3. curl supports ftp, ftps, http, https, scp, sftp, tftp, telnet, dict, ldap and ldaps, while wget supports only http, https and ftp
4. curl has SSL support
5. libcurl supports more HTTP authentication methods
6. curl is bidirectional, while wget offers plain HTTP POST support only
7. curl has more development activity


curl -O <URL>

The one advantage I see in using wget is its ability to download recursively.
In short, curl is better and more powerful. I usually don't even need to install it, as curl is already available by default on most UNIX servers.


Wednesday, July 20, 2011

12 Reasons Why Every Linux System Administrator Should be Lazy

Lazy sysadmin is the best sysadmin –Anonymous

A system administrator's job is not visible to other IT groups or end users. Mostly they look at administrators and wonder why sysadmins don't seem to have any work.

If you see a sysadmin who is always running around trying to put out fires and constantly dealing with production issues, you might think he is working very hard and really doing his job. But in reality, he is not really doing his job.

If you see a sysadmin (a UNIX/Linux sysadmin, DBA, or network administrator) who doesn't seem to be doing much around the office, who always seems relaxed and doesn't appear to have any visible work, you can be assured that he is doing his job.

The following are the 12 reasons why a lazy sysadmin is the best sysadmin.

Who is the boss? The main reason the lazy sysadmin is the best sysadmin is his attitude. Lazy sysadmins look at the machines a little differently than other IT departments do. There is a difference between developers and sysadmins. Developers think they are here to serve the machines by developing code. There is nothing wrong with this approach, as developers have a lot of fun developing code. But sysadmins think the other way around. They think the machines are there to serve them. All they have to do is feed the machine and keep it happy, and let the machine do all the heavy-duty work while they relax and just be lazy. The first step in being a lazy sysadmin is a slight change in attitude: letting the machine know that you are the boss.

Write scripts for repeated jobs. Being lazy means being smart. A smart sysadmin is a master of the scripting languages (bash, awk, sed, etc.). Anytime he is forced to do some work, and there is a remote possibility that the work might be needed again in the future, he writes a script to complete the job. This way, when he is asked to do the same job in the future, he doesn't have to think; he just has to execute the script and get back to being lazy.

Backup everything. Being lazy means taking backups. A lazy sysadmin knows that he has to put in a little work to create a backup process and write backup scripts for all critical systems and applications. When disk space is not an issue, he schedules the backup job for every application, even those that are not critical. This way, when something goes wrong, he doesn't have to break a sweat; he just has to restore from the backup and get back to whatever lazy stuff he was doing before. This is also rule #1 of the three sysadmin rules that you shouldn't break.
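As a hypothetical sketch of both habits (the function name and paths are made up), a date-stamped tarball function turns the repeated backup chore into one command:

```shell
# Create dest/<name>-YYYYMMDD.tar.gz from the directory src, print the path.
backup_dir() {
  src=$1 dest=$2
  out="$dest/$(basename "$src")-$(date +%Y%m%d).tar.gz"
  tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")" && echo "$out"
}

# Demo on a throwaway directory:
tmp=$(mktemp -d)
mkdir -p "$tmp/data" && echo hello > "$tmp/data/file.txt"
backup_dir "$tmp/data" "$tmp"
```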

Create a DR plan. Sysadmins don't like to run around when things go wrong. When things are running smoothly, they take some time to create a DR plan. This way, when things go wrong, they can follow the DR plan, quickly get things back to normal, and get back to being lazy again.

Configure highly redundant systems. Lazy sysadmins don't like to get calls in the middle of the night because of some silly hardware failure. So they make sure all components are highly redundant, both hardware and software. They have dual network cards configured, dual power, dual hard drives, dual everything. This way, when one component fails, the system still keeps running, and the lazy sysadmin can work on fixing the broken component after he wakes up in the morning.

Head room for unexpected growth. A lazy sysadmin never allows his systems to run at full capacity. He always keeps enough head room for unexpected growth, making sure the system has plenty of CPU, RAM, and hard disk available. When the business unit decides to dump tons of data overnight, he doesn't have to scramble to handle that unexpected growth.

Be proactive. Being lazy doesn't mean you just sit and do nothing all the time. Being lazy means being proactive. Lazy sysadmins hate being reactive; they are always anticipating issues and growth. When they have free time on their hands, they work on proactive projects that help them avoid unexpected future issues and handle future growth.

Loves keyboard shortcuts. The lazy sysadmin knows the keyboard shortcuts for all his favorite applications. If he spends significant time every day in an application, the first thing he does is master its keyboard shortcuts. He likes to spend less time in the application to get his work done, and get back to being lazy.

Command line master. Every lazy sysadmin is a command line master. This applies to the Linux sysadmin, DBA, network administrator, and so on. If you see an administrator launching a GUI when the same task can be done from the command line, you know he is not a lazy sysadmin. There are two reasons the lazy sysadmin loves the command line. For one, he can get things done quickly there. For another, it makes him feel that he is the boss, not the system. When you use the command line, you are in control; you know exactly what you want to do. When you use a GUI, you are at the mercy of the GUI workflow, and you are not in control.

Learns from mistakes. The lazy sysadmin never likes to make the same mistake twice, and he hates working on unexpected issues. But when an unexpected issue does happen, he works on fixing it, thinks about why it happened, and immediately puts the necessary things in place so the same issue doesn't happen again. Working on the same problem twice is a sin according to the lazy sysadmin. He likes to work on a problem only once, do what it takes to prevent the same mistake in the future, and get back to being lazy.

Learns new technology. There is nothing wrong with learning new technology to get a better job or just to keep up with the industry, but that is not why the lazy sysadmin does it. He learns new technology because he likes to be in control of his systems at all times; he knows he is the boss, not the machine. So when a new technology arrives, he takes the time to study it. Now he has new tools to keep the system busy while he continues to be lazy. He learns new technology for purely selfish, lazy reasons.

Document everything. Not every lazy sysadmin does this; only the best ones do. You see, the lazy sysadmin never likes to be disturbed when he is on the beach enjoying his vacation. So what does he do? He documents everything, so that when he is not around, the junior sysadmins can handle the routine work and keep things moving without disturbing his vacation. There is another reason the lazy sysadmin documents everything: he forgets things. Since he is lazy, he tends to forget what he did a month ago, and since he never likes to think through and research the same topic twice, he documents everything; when he needs to do the same thing in the future, he goes back to his documentation to see what he did earlier.

By now you are probably convinced that being a lazy sysadmin is not that easy; it is a lot of hard work. If you are not a sysadmin, you can now appreciate a lazy sysadmin when you see one. If you are a sysadmin and always running around, now you know what you need to do to be lazy.

ATTRIBUTION: TechRepublic, from one of their 2011 articles (copyright theirs).

Tuesday, July 12, 2011

CentOS 6.0 is here

CentOS 6.0 has arrived, and we are now waiting for the branch release updates of the 6.1 version. It took the CentOS community some time to put this release together. So, what is in store for us? Plenty of new features, and something to look forward to if you plan to move your apps and projects to this platform. The links provided below will give you a wealth of information on what to expect.

Tuesday, June 7, 2011

Everybody, Somebody, Anybody, and Nobody

This is a little story about four people named Everybody, Somebody, Anybody, and Nobody.

There was an important job to be done and Everybody was sure that Somebody would do it.

Anybody could have done it, but Nobody did it.

Somebody got angry about that because it was Everybody's job.

Everybody thought that Anybody could do it, but Nobody realized that Everybody wouldn't do it.

It ended up that Everybody blamed Somebody when Nobody did what Anybody could have done.

original link:

Thursday, May 19, 2011

How Sitting Can Actually Kill Systems Admins

Just came across this link that details the truth about sitting. Most of the time, systems admins sit down, wired in on their computers at work. Think again... I guess I am one of them. :D

Sitting is Killing You
Via: Medical Billing And Coding

Thursday, April 7, 2011

Rescue Linux using Live CD via chroot

There are times when you mess something up on a system and need to rescue it using safe boot, etc. This can be a really daunting task when you need a fast cure for the most common problems, for example when you need to remount and extend partitions.

Solution:  Boot using Live-CD!

1. Boot into your Live-CD environment
2. Create a directory to serve as a mount point for the partition
3. Issue the command:  # mount -t fs.type -o "options" /path/partition /path/directory
4. Issue the command:  # chroot /path/directory

This drops you into the working directory of your installed system, where you can fix things.  Note that you may have to mount all partitions under the same directory to get all the libraries needed to run the system correctly.
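The steps above can be sketched as a shell session. The device name, filesystem type, and mount point below are assumptions; substitute your own:

```shell
# Boot the Live-CD, then (as root) mount the broken system and chroot into it.
# /dev/sda1 and ext4 are placeholders for your actual root partition and fs type.
mkdir -p /mnt/rescue
mount -t ext4 /dev/sda1 /mnt/rescue       # the installed system's root partition
mount --bind /dev  /mnt/rescue/dev        # expose devices inside the chroot
mount --bind /proc /mnt/rescue/proc
mount --bind /sys  /mnt/rescue/sys
chroot /mnt/rescue /bin/bash              # you are now "inside" the installed system
```

The bind mounts are optional for simple file edits, but tools like bootloader installers and package managers expect /dev, /proc, and /sys to be present inside the chroot.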

Wednesday, April 6, 2011

Create files with a leading dash "-" in Linux/Unix

If you happen to need a file created with a leading dash "-", all you have to do is add a double dash "--" before the dashed file name:

Example:  touch -- -thisfile  

The concept is pretty much the same as for deleting, moving, or editing files with a leading dash.

Example:  rm -- -thisfile
                 mv -- -thisfile
                 vi -- -thisfile
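As a side note, the "--" end-of-options marker is understood by most utilities, but an alternative that works everywhere is to prefix the name with "./" so it no longer begins with a dash:

```shell
# Same effect as "touch -- -thisfile" and "rm -- -thisfile",
# using a path prefix instead of the end-of-options marker:
touch ./-thisfile
ls ./-thisfile
rm ./-thisfile
```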

This is handy if, for example, you have sites that need ID verification files from site rating providers.

It's one of those days when you really need to get things done immediately but you have to browse the Internet for some answers.  Cheers!

Thursday, March 31, 2011

Removing old keys in your known_hosts file

If you happen to have changed a server's IP address, you may see this message while making a remote connection:

ssh ants@
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
Please contact your system administrator.
Add correct host key in /home/ants/.ssh/known_hosts to get rid of this message.
Offending key in /home/ants/.ssh/known_hosts:13
RSA host key for has changed and you have requested strict checking.
Host key verification failed.

You will have to remove the old entry by editing your known_hosts file.  In this case it's on line 13.  Remove it, save the file, try connecting again, and accept the new key when prompted.
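If you would rather not open an editor, the same cleanup can be done from the command line. The host name below is a placeholder for the host in the error message:

```shell
# Remove every key recorded for a host (or IP) from known_hosts;
# "" is a placeholder for your actual host:
ssh-keygen -R
# Or delete the offending line by number (13, per the error above):
sed -i '13d' ~/.ssh/known_hosts
```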


Wednesday, March 30, 2011

SMS triggered service executions on Unix/Linux Systems

There isn't enough material out there on the net teaching systems administrators how to integrate SMS monitoring into a complete real-time system that lets you execute commands via SMS.

In this post we will do just that.  The requirement is that you have an SMS server running and functional, and plenty of time to brush up on your proof-of-concept skills.  It may not be the complete solution to what you have in mind, but for me it works fine.  The goal is simply to respond, in real time, to the issues you deal with on a day-to-day basis.


Map out the logical flow behind your project.  We all know that SMS messages are triggered events: either automatic events or human-response events.  A working SMS server will have both.  By the time you receive a message from your monitoring system, an automatic event has been triggered which, based on the configuration you made, sent out the notification informing you that the event has taken place.  Now, if a computer is within reach, our first reflex is to log in and work on the issue.  That is the reality for the first days, weeks, and months; it ceases once you encounter too many false positives.

Working out a Real-time solution

To address this one-way communication, you have to be able to manage the systems you administer directly from your mobile phone.  It is a cheap solution to very urgent events, which could mean a lot to the company whose systems you manage.
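As a taste of where this is heading, here is a minimal hypothetical sketch built on the eventhandler hook of SMS Server Tools 3. The script name, phone number, and command whitelist are all assumptions, and you should verify the incoming-message file format against your smsd version:

```shell
#!/bin/sh
# sms_exec -- hypothetical eventhandler sketch for SMS Server Tools 3.
# Wire it up in /etc/smsd.conf with:  eventhandler = /usr/local/bin/sms_exec
# smsd then calls it as:  sms_exec RECEIVED /var/spool/sms/incoming/<file>
EVENT="$1"; MSGFILE="$2"
TRUSTED="091781000102"              # assumption: the admin's phone number
[ "$EVENT" = "RECEIVED" ] || exit 0
FROM=$(grep '^From:' "$MSGFILE" | awk '{print $2}')
BODY=$(sed '1,/^$/d' "$MSGFILE")    # message text follows the header block
[ "$FROM" = "$TRUSTED" ] || exit 0  # ignore messages from anyone else
case "$BODY" in                     # whitelist of allowed remote commands
  "restart httpd") service httpd restart ;;
  "uptime")        sendsms "$TRUSTED" "$(uptime)" ;;
  *)               : ;;             # silently drop anything not whitelisted
esac
```

Keeping a strict whitelist (rather than executing the message body directly) is what makes this safe enough to leave running unattended.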

.... to be continued ....

Tuesday, March 22, 2011

OpenKM: ERROR [MainDeployer] Could not create deployment: file:/opt/jboss-4.2.3.GA/server/default/conf/jboss-service.xml

If you happen to have a problem with this:  ERROR [MainDeployer] Could not create deployment: file:/opt/jboss-4.2.3.GA/server/default/conf/jboss-service.xml

This means that there is a problem with your hosts file.  Make sure you have configured your hostname to resolve to your local IP address.
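A quick sanity check; the host name "myhost" and the address in the comment are placeholders for your own values:

```shell
# Verify the configured hostname resolves via /etc/hosts:
hostname                         # note the configured name, e.g. "myhost"
grep "$(hostname)" /etc/hosts    # it should appear on a line with your local IP
# If it is missing, add an entry such as (address and names are placeholders):
#   myhost.example.com myhost
```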

Friday, February 25, 2011

Install OCS Inventory Agents minus the Headaches

One of the major hurdles with OCS Inventory is understanding how the wiki pages address agent installation methods.  In my opinion, the best documentation they have so far is written in French!  So what I am going to do is make life easier by posting a condensed version that will let you install agents with very little trouble.  Enough talk; let's get down to business.

Before installing the agent, it is presumed that you have the OCS Inventory server running correctly.  The goal of this mini-cookbook is to guide you without the second-guessing that the OCS pages force on you.

[For those using Linux/MacOS]

1) Download the package if you want a source install.  On the other hand, if you are using newer versions of Debian/CentOS, you can simply run apt-get install "package name" or yum install "package name", though you may have to edit your repo list to update your channel streams.

2) The installer will check for the availability of the "perl" modules its features need to work correctly.  Please satisfy those dependencies before continuing.

3) During installation you will have to point your agent to the server, e.g. <-- server.  It is expected that you will have a problem connecting at this point; this is fine.  Once the agent is installed, proceed to item 4.

4) Edit /etc/ocsinventory-agent/ocsinventory-agent.cfg

     <-- change this value to point to the OCS server.

5) Initialize a check, issue the command:

              ocsinventory-agent --debug --info --scan-homedirs
    This will send an agent report to the server.  If this is successful, proceed to item 6.

6) Daemonize the agent

              ocsinventory-agent --daemon --debug --info --scan-homedirs

     This will put the agent to work in the background.

7) To have your system send updates to the inventory server automatically, bootstrap the command from item 6 at startup.
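One way to do that bootstrap, sketched here with rc.local; the agent path is an assumption and may differ between source and packaged installs:

```shell
# Start the agent daemon at boot by appending to rc.local:
cat >> /etc/rc.d/rc.local <<'EOF'
/usr/bin/ocsinventory-agent --daemon --debug --info --scan-homedirs
EOF
# Alternatively, skip the daemon and run a periodic inventory from cron:
echo '0 * * * * root /usr/bin/ocsinventory-agent' >> /etc/crontab
```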

[For those using Windows] 

1) Download the installer package, then extract and install it.
Look for the file "OcsAgentSetup.exe" to run the setup.

2) On the Server Address use this IP:; Server Port: 80; Enable log file [check it]; Immediately launch inventory [check it].

3) Check that the service is running.  Open the services management console, double-click the entry, and check that OCS INVENTORY is "started" with the Startup type set to "Automatic".  You're done.

The process should not take more than 30 minutes of your time.  Log in to the web console of the OCS Inventory server and verify that the agent communicated with it and your machine has been registered.

Thursday, February 24, 2011

Configuring Xymon for SMS Server Tools 3

I have done SMS configurations for a variety of monitoring tools in the past.  Nagios is one of them; however, in this post we won't be dealing with Nagios or anything related to it.  What we will be configuring is xymon/hobbit monitor to work with SMS Server Tools 3.  If you notice, the documentation for these two technologies is rather terse, and you have to do more than just read the man pages to get things done.

The fun side of this tutorial is that you will be re-using old, existing equipment to get the job done.  In my case, I used an HSDPA modem as my SMS modem.  After that, you will need SMS gateway software to glue the whole project together.

Let's dig down to the details:

1. Configure your xymon server for mail notification alerts.  If you get alerts after a good configuration, you are ready to move forward with the next steps.

2. Download and install SMS Server Tools 3

3. Configure your modem to be detected by your system; it doesn't matter what *NIX variant you are using.  In my case, I used CentOS 5.5 to install and configure my Huawei Technologies Co., Ltd. E220 HSDPA / E270 HSDPA/HSUPA modem.  It usually comes with a virtual CD-ROM and storage device that the system detects.  The goal here is to remove the parts you don't need so you can use it purely as a communication device.

4. Once you have installed SMS Server Tools 3, edit the config file /etc/smsd.conf and change this line:
device=/dev/ttyS0 to device=/dev/ttyUSB1, then save the file.

5. If you can see lines like these when tailing the log, you are doing fine:

$sudo /usr/bin/tail -f /var/log/smsd.log
2011-02-24 21:16:13,6, GSM1: Checking device for incoming SMS
2011-02-24 21:16:13,6, GSM1: Checking if modem is ready
2011-02-24 21:16:13,7, GSM1: -> AT
2011-02-24 21:16:13,7, GSM1: Command is sent, waiting for the answer
2011-02-24 21:16:13,7, GSM1: <- OK
2011-02-24 21:16:13,6, GSM1: Pre-initializing modem
2011-02-24 21:16:14,7, GSM1: -> ATE0+CMEE=1;+CREG=2
2011-02-24 21:16:14,7, GSM1: Command is sent, waiting for the answer
2011-02-24 21:16:14,7, GSM1: <- OK
2011-02-24 21:16:14,7, GSM1: -> AT+CSQ
2011-02-24 21:16:14,7, GSM1: Command is sent, waiting for the answer
2011-02-24 21:16:14,7, GSM1: <- +CSQ: 15,99 OK
2011-02-24 21:16:14,6, GSM1: Signal Strength Indicator: (15,99) -83 dBm (Good), Bit Error Rate: not known or not detectable
2011-02-24 21:16:14,6, GSM1: Checking if Modem is registered to the network

6. Test the sendsms binary by sending a text message:  /usr/local/bin/sendsms 091781000102 "test".  You should receive an SMS from your server.

7. Now define the checks you need; these will normally be equivalent to the ones you set for your email notification alerts.  In my case, here's how it looks:

SCRIPT /usr/local/bin/textalert FORMAT=sms
The "textalert" command is a simple bash script (any scripting language you prefer would suffice) that wraps the "sendsms" binary, which actually sends out the messages.
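For reference, a minimal hypothetical sketch of such a "textalert" script. Xymon/hobbit exports alert details in environment variables such as BBHOSTNAME, BBSVCNAME, BBCOLORLEVEL, and RCPT; check your version's alert documentation before relying on these, and note the fallback number is a made-up example:

```shell
#!/bin/sh
# textalert -- glue between xymon's SCRIPT alert entry and sendsms.
# RCPT is supplied by the alert rule; the fallback number is a placeholder.
RECIPIENT="${RCPT:-091781000102}"
MSG="$BBHOSTNAME $BBSVCNAME is $BBCOLORLEVEL"
/usr/local/bin/sendsms "$RECIPIENT" "$MSG"
```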

Done.  In the next post I will discuss escalations and other modifications necessary to make xymon work the way you expect.

This has been reposted to:

Monday, January 10, 2011

Apache Upgrade Woes affecting Compress::Zlib perl libraries

In this post I will dig deep into a problem I had when I upgraded my CentOS 5.3 to the latest 5.5 release.  The problem surfaced when I started working on some Apache changes.  Here is the haunting log of that problem.

[Mon Jan 10 17:34:29 2011] [error] Can't load Perl module Compress::Zlib for server, exiting...
[Mon Jan 10 17:34:46 2011] [error] dualvar is only available with the XS version of Scalar::Util at /usr/lib/perl5/site_perl/5.8.8/Compress/ line 8\nBEGIN failed--compilation aborted at /usr/lib/perl5/site_perl/5.8.8/Compress/ line 8.\nCompilation failed in require at (eval 7) line 3.\n

For the untrained eye this is a major disaster.  Issuing $sudo package-cleanup --problems; $rpm -Va --nofiles --nodigest doesn't solve the problem.  Worse, you start thinking of rolling back to the original configuration, which is bad since you lose the ability to move your box to a patched and security-fixed version.

To solve the issue I did a "Hack" to the actual library that is causing the problem.

$sudo vi /usr/lib/perl5/site_perl/5.8.8/Compress/  and check the problematic line (8)

1. Remove the qw(dualvar) entry on that line, which looks like this:  use Scalar::Util qw(dualvar);

2. Save the file

3. Re-start apache:  service httpd restart
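Before (or after) applying the hack, you can check directly whether the installed Scalar::Util is the XS build that provides dualvar; the one-liner below exits non-zero on the broken pure-perl fallback:

```shell
# Succeeds and prints a message only if the XS Scalar::Util is present:
perl -e 'use Scalar::Util qw(dualvar); print "XS Scalar::Util OK\n"'
# On a broken install, reinstalling the module (e.g. via CPAN or your package
# manager) is a cleaner long-term fix than editing the library by hand.
```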

Voila!  Problem fixed.

Hardening Apache on CentOS 5 with mod_security

Apache can be configured to use "mod_security".  Installing it can be quite a daunting task if you are new to Apache, or if you have been working with Apache but have only used firewalls to secure it.  What better way than to have it work with mod_security as an added layer of defense.

By default, especially on modern CentOS 5 systems, mod_security is not included in the base repos; you will have to enable EPEL (Extra Packages for Enterprise Linux) to install it.


1.  Install the EPEL repos base
 # rpm -Uvh
2. Install the package
# yum install mod_security

3. Open /etc/httpd/modsecurity.d/modsecurity_crs_10_config.conf file, enter:
# vi /etc/httpd/modsecurity.d/modsecurity_crs_10_config.conf

4. Make sure SecRuleEngine is set to "On" to protect the web server from attacks:
SecRuleEngine On
5. Turn on other required options and policies as per your requirements. Finally, restart httpd:
# service httpd restart

6. Make sure everything is working:
# tail -f /var/log/httpd/error_log

[Thu Mar 31 03:27:07 2011] [notice] Digest: done
[Thu Mar 31 03:27:08 2011] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads.
[Thu Mar 31 03:27:08 2011] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations
[Thu Mar 31 04:10:17 2011] [notice] caught SIGTERM, shutting down
[Thu Mar 31 04:10:18 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Thu Mar 31 04:10:19 2011] [notice] ModSecurity for Apache/2.5.12 ( configured.
[Thu Mar 31 04:10:19 2011] [notice] Digest: generating secret for digest authentication ...
[Thu Mar 31 04:10:19 2011] [notice] Digest: done
[Thu Mar 31 04:10:20 2011] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads.
[Thu Mar 31 04:10:20 2011] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations
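To see mod_security actually doing something, you can add a custom rule and poke at it; the rule and file name below are hypothetical examples for illustration, not part of the stock rule set:

```shell
# Add a hypothetical rule blocking a classic local-file-inclusion probe:
cat > /etc/httpd/modsecurity.d/local_rules.conf <<'EOF'
SecRule ARGS "/etc/passwd" "phase:2,deny,status:403,log,msg:'LFI probe blocked'"
EOF
service httpd restart
# Smoke test: expect a 403 status instead of 200
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost/?file=/etc/passwd'
```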

This tutorial is not limited to CentOS distributions; on Debian systems you can use apt-get to install mod_security, or check the site documentation for the procedure.