Monday, May 31, 2010

BFD -- Brute Force Detection

Have you ever wondered what the best way is to block or lock attackers out of your system?  Luckily somebody already thought of that and created an open source project out of it.  I came across a site which does exactly that, provided you have a working firewall system of course.  In this re-post, BFD was installed to co-exist side-by-side with APF.

Brute Force Detection


BFD is a modular shell script for parsing application logs and checking for authentication failures. It does this using a rules system where application specific options are stored including regular expressions for each unique auth format.

The regular expressions are parsed against logs using the 'sed' tool (stream editor), which allows for excellent performance in all environments. In addition to the benefits of parsing logs in a single stream with sed, BFD also uses a log tracking system so logs are only parsed from the point at which they were last read. This greatly extends the performance of BFD even further, as we are not constantly reading the same log data. The log tracking system is compatible with syslog/logrotate style log rotations, which allows it to detect when rotations have happened and grab log tails from both the new log file and the rotated log file.
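The log tracking idea can be illustrated in a few lines of shell. This is a minimal sketch of the technique, not BFD's actual code; the file names and the parse_new function are my own:

```shell
# Sketch of byte-offset log tracking: each run parses only the bytes
# appended to the log since the previous run.
LOG=/tmp/demo.log
OFFSET_FILE=/tmp/demo.offset
rm -f "$OFFSET_FILE"

parse_new() {
    # Where we stopped last time (0 if this is the first run)
    offset=$(cat "$OFFSET_FILE" 2>/dev/null || echo 0)
    # Emit only the bytes added since then (tail -c +N is 1-indexed)
    tail -c +"$((offset + 1))" "$LOG"
    # Remember how far we have read for the next run
    wc -c < "$LOG" | tr -d ' ' > "$OFFSET_FILE"
}

printf 'fail: root from 10.0.0.1\n' > "$LOG"
parse_new        # first run sees the whole log
printf 'fail: root from 10.0.0.2\n' >> "$LOG"
parse_new        # second run sees only the newly appended line
```

BFD layers rotation detection on top of this so a rotated log's tail is not lost, but the offset file is the core of why repeated runs stay cheap.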

You can leverage BFD to block attackers using any number of tools such as APF, Shorewall, raw iptables, ip route or execute any custom command. There is also a fully customizable e-mail alerting system with an e-mail template that is well suited for every day use or you can open it up and modify it. The attacker tracking in BFD is handled using simple flat text files that are size-controlled to prevent space constraints over time, ideal for diskless devices. There is also an attack pool where trending data is stored on all hosts that have been blocked including which rule the block was triggered by.

In the execution process, there is simply a cron job that executes BFD once every 3 minutes by default. The cron job can be run more frequently for those who desire it (though no more often than once a minute), and doing so will not cause any performance issues. Although cron execution does not permit BFD to act in real time, the log tracking system ensures it never misses a beat in authentication failures. Further, using cron provides a reliable framework for consistent execution of BFD in a very simplified fashion across all *nix platforms.
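For reference, the default schedule looks like the cron entry below. This is a sketch using the project's default install paths; verify the path and flags against your own installation:

```
# /etc/cron.d/bfd -- run BFD quietly every 3 minutes
*/3 * * * * root /usr/local/sbin/bfd -q
```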

Sunday, May 30, 2010

Removing Unwanted Services on Debian systems

In this post, I am going to detail the steps required to remove unwanted services from your Debian system.  These services may have unwanted security implications, especially if you decide to freeze the system you are working on.  Freezing means you will have to lock down all updates to the system due to a very critical application requirement.

Under Debian Linux, startup files are stored in the /etc/init.d/ directory, and symbolic links to them exist in the /etc/rcX.d/ directories. Debian Linux uses System V initialization scripts to start services at boot time from the /etc/rcX.d/ directory, and it comes with several utilities to remove unwanted startup files.

Using rcconf
This tool configures system services in connection with system runlevels. It turns services on and off using the scripts in /etc/init.d/. Rcconf works with System-V style runlevel configuration and is a TUI (Text User Interface) frontend to the update-rc.d command.

Install rcconf in Debian
#apt-get install rcconf
To start rcconf, login as the root user and type rcconf
# rcconf
Select the service you would like to enable or disable.

Using sysv-rc-conf

sysv-rc-conf provides a terminal GUI for managing “/etc/rc{runlevel}.d/” symlinks. The interface comes in two different flavors, one that simply allows turning services on or off and another that allows for more fine tuned management of the symlinks. Unlike most runlevel config programs, you can edit startup scripts for any runlevel, not just your current one.

Install sysv-rc-conf in debian

#apt-get install sysv-rc-conf
This will install sysv-rc-conf. Now you need to run the following command:
# sysv-rc-conf
Select the service you would like to enable or disable.
Both sysv-rc-conf and rcconf are the best tools to use on a remote Debian Linux system or when a GUI is not available.
You can also use update-rc.d script as follows (update-rc.d removes any links in the /etc/rcX.d directories to the script /etc/init.d/service):
# update-rc.d -f {SERVICE-NAME} remove
For example to stop xinetd service you can type command as follows:
# update-rc.d -f xinetd remove
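If you later want the service back at boot, update-rc.d can recreate the default links. A quick sketch (xinetd here is just the example service from above):

```
# Remove the boot-time links for xinetd
update-rc.d -f xinetd remove
# Restore the default start/stop links later
update-rc.d xinetd defaults
```

Note that `remove` only deletes the symlinks; the script itself stays in /etc/init.d/, which is why `defaults` can rebuild them.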

Wednesday, May 19, 2010

Preparing for Nginx+PHP+lighttpd on CentOS 5.5 with ext4 filesystem

Preparing a lightweight image server starts with looking for the right packages to host the project.  In this article I am going to detail a few simple steps for setting up a lightweight image server running nginx, PHP, and lighttpd on CentOS 5.5.

Preparing the Server

This one will be using the x86_64 version of the system.  One of the keys here is being able to use the ext4 filesystem on one of the partitions; as of this writing, the default filesystem is still the ext3 workhorse for all of RHEL.

Do a basic installation: install only the base and disable all ticked server instances.  After the partitioning and the installation proper, update your system using yum and accept all updates.  Afterwards, issue this command.

rpm -Uhv

This command adds the rpmforge repository, the latest of which as of this writing was from March 2010.  Then update your system using yum again.

Download non-repo based php packages

Next, download the necessary packages for the PHP version you will use in this installation.  Go to the site and download the packages.  In our case we just need to download the 5.2.11 Jason packages to a specified directory.

Install additional packages using yum

Install additional packages via yum: gcc, binutils, make, autoconf, perl-Net-SSLeay, gd, gd-devel, gmp, gmp-devel, pcre, pcre-devel, openssl, openssl-devel, zlib-devel and lastly e4fsprogs (for ext4 kernel support).
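Put together, that can be done in a single command. A sketch, assuming the usual repo naming (in particular, the SSLeay package is normally named perl-Net-SSLeay):

```
yum -y install gcc binutils make autoconf perl-Net-SSLeay gd gd-devel \
    gmp gmp-devel pcre pcre-devel openssl openssl-devel zlib-devel e4fsprogs
```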

Install the PHP packages

Go to the directory where you downloaded the PHP packages and issue the command:

rpm -ihv *.rpm

Install lighttpd

Issue the command:  yum install lighttpd

Download nginx and compile it.

Download the latest nginx source package so we can do a manual install.  Unpack the source package and do the following:

tar xvf nginx-0.8.32.tar.gz

cd nginx-0.8.32/

./configure --sbin-path=/usr/local/sbin --with-http_ssl_module


checking for OS
 + Linux 2.6.18-194.3.1.el5 x86_64
checking for C compiler ... found
 + using GNU C compiler
 + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-48)
checking for gcc -pipe switch ... found
checking for gcc variadic macros ... found
checking for C99 variadic macros ... found
checking for unistd.h ... found
checking for inttypes.h ... found
checking for limits.h ... found
checking for sys/filio.h ... not found
checking for sys/param.h ... found
checking for sys/mount.h ... found
checking for sys/statvfs.h ... found
checking for crypt.h ... found
checking for Linux specific features
checking for epoll ... found
checking for sendfile() ... found
checking for sendfile64() ... found
checking for sys/prctl.h ... found
checking for prctl(PR_SET_DUMPABLE) ... found
checking for sched_setaffinity() ... found
checking for crypt_r() ... found
checking for sys/vfs.h ... found
checking for nobody group ... found
checking for poll() ... found
checking for /dev/poll ... not found
checking for kqueue ... not found
checking for crypt() ... not found
checking for crypt() in libcrypt ... found
checking for O_DIRECT ... found
checking for F_NOCACHE ... not found
checking for directio() ... not found
checking for statfs() ... found
checking for statvfs() ... found
checking for dlopen() ... not found
checking for dlopen() in libdl ... found
checking for sched_yield() ... found
checking for PCRE library ... found
checking for OpenSSL library ... found
checking for zlib library ... found
creating objs/Makefile
checking for int size ... 4 bytes
checking for long size ... 8 bytes
checking for long long size ... 8 bytes
checking for void * size ... 8 bytes
checking for uint64_t ... found
checking for sig_atomic_t ... found
checking for sig_atomic_t size ... 4 bytes
checking for socklen_t ... found
checking for in_addr_t ... found
checking for in_port_t ... found
checking for rlim_t ... found
checking for uintptr_t ... uintptr_t found
checking for system endianess ... little endianess
checking for size_t size ... 8 bytes
checking for off_t size ... 8 bytes
checking for time_t size ... 8 bytes
checking for setproctitle() ... not found
checking for pread() ... found
checking for pwrite() ... found
checking for strerror_r() ... found but is not working
checking for gnu style strerror_r() ... found
checking for localtime_r() ... found
checking for posix_memalign() ... found
checking for memalign() ... found
checking for mmap(MAP_ANON|MAP_SHARED) ... found
checking for mmap("/dev/zero", MAP_SHARED) ... found
checking for System V shared memory ... found
checking for struct msghdr.msg_control ... found
checking for ioctl(FIONBIO) ... found
checking for struct tm.tm_gmtoff ... found
checking for struct dirent.d_namlen ... not found
checking for struct dirent.d_type ... found

Configuration summary
  + using system PCRE library
  + using system OpenSSL library
  + md5: using OpenSSL library
  + sha1 library is not used
  + using system zlib library

  nginx path prefix: "/usr/local/nginx"
  nginx binary file: "/usr/local/sbin"
  nginx configuration prefix: "/usr/local/nginx/conf"
  nginx configuration file: "/usr/local/nginx/conf/nginx.conf"
  nginx pid file: "/usr/local/nginx/logs/"
  nginx error log file: "/usr/local/nginx/logs/error.log"
  nginx http access log file: "/usr/local/nginx/logs/access.log"
  nginx http client request body temporary files: "client_body_temp"
  nginx http proxy temporary files: "proxy_temp"
  nginx http fastcgi temporary files: "fastcgi_temp"

next ...

Do a make && make install to finish the job.

Configure nginx.conf to listen on the port that you desire.  By default it will listen on 80, but in this case you already have lighttpd installed and listening on that port.
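A minimal sketch of the relevant nginx.conf change (8080 is an arbitrary choice; any free port works, and server_name/root here are just placeholder values):

```
# /usr/local/nginx/conf/nginx.conf
server {
    listen       8080;          # lighttpd already owns port 80
    server_name  localhost;
    location / {
        root   html;
        index  index.html;
    }
}
```

Start nginx with /usr/local/sbin/nginx (the sbin path chosen at configure time) after editing.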

Preparing your spare drive for ext4 filesystem

Issue the command: 

fdisk -lu /dev/sdb

(use whichever device you have; in my case it is sitting on /dev/sdb)

Create a new partition on this device using fdisk.

Format this new partition with the new ext4 filesystem:

mkfs.ext4 /dev/sdb1 

Introduce this filesystem to your system:

vi /etc/fstab

Add the following line at the bottom of fstab:

/dev/sdb1        /opt      ext4     defaults    0 0

(in my case I mounted it on /opt)

Mount the filesystem:

mount -a
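You can then confirm the filesystem came up as ext4. A quick check, using the /opt mount point from my example:

```
df -T /opt
mount | grep sdb1
```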

Done!  You now have nginx+php+lighttpd plus an ext4 filesystem installed on your system.  I hope these basics covered much of the information you need to immediately set up a working lightweight webserver for hosting images.

Monday, May 17, 2010

NetInstall CentOS 5.5 with Mirrors

A CentOS NetInstall is basically installing from a very small downloaded ISO image, which fetches the files needed to complete the full operating system installation on-the-fly. This documentation explains the process of installing CentOS 5.5 using the HTTP NetInstall method. This method is much faster for basic systems since you don't have to download 4-6 ISO files or one huge DVD-based ISO just to get started. If you are installing many systems you may want to look into the stand-alone DVD, as it will save time in the end.

Downloading NetInstall ISO

First you must download the NetInstall ISO from your favorite mirror based on your specific system requirements. Depending on those requirements you will have to choose the proper architecture type: for example, i386 is used on most standard computers, whereas x86_64 is used if your hardware supports 64-bit. You can get a list of mirrors for your location by checking out the CentOS mirrors list. Many times you will have to find a mirror that offers ISO images. For this example, I will be using CentOS 5.5 with an i386 architecture.

Download i386:

Mirrors List:


Burning the ISO Image to Prepare for Installation Process

After you burn the ISO to a CD you can boot from it to begin the setup. You will have to get through a few steps using the graphical install process. For example, you will need to choose your language and setup your network so the box can access the Internet. I have seen issues with the Manual IP Configuration so if you have DHCP for this step choose Dynamic IP Configuration to make your life easier. Choose HTTP as the setup option when asked for an installation method.
Configure TCP/IP


Choose Installation Method


During the HTTP Setup you will need to enter in the web site name and CentOS directory information. Make sure you make the proper changes for your platform and other requirements. You may need to change the directory based on your OS version and architecture type.

Website Name:
CentOS Directory: centos/5.5/os/x86_64


Installation Process

In conclusion, the whole process took less than 30 minutes. That includes downloading and burning the NetInstall ISO image as well as running a full LAMP install. The step that took the most time was downloading the stage image after the process started. The CentOS NetInstall continues to save much time. If your machine is not equipped to read DVD discs, this is a great option.

Thursday, May 6, 2010

Mounting NTFS Partitions on CentOS 5 via yum

I have updated this post to reflect recent changes in the CentOS base repos for the 5.5 release.

CentOS 5 NTFS Mount (fuse, ntfs, yum update)

Mounting a Windows drive using ntfs-3g

1. Install the yum-priorities package (needed before adding rpmforge to yum)
[root@localhost ~]# yum install yum-priorities -y

2. Enable the priorities plugin in /etc/yum/pluginconf.d/priorities.conf

[root@localhost ~]# vi /etc/yum/pluginconf.d/priorities.conf
enabled = 1
check_obsoletes = 1

Then add a "priority=N" line to each repository section in your /etc/yum.repos.d/*.repo files (a lower N means a higher priority).

3. Install rpmforge

[root@localhost ~]# rpm -ivh

4. Update yum

[root@localhost ~]# yum check-update

5. Install "fuse", "fuse-ntfs-3g", "dkms", and "dkms-fuse"

[root@localhost ~]# yum install fuse fuse-ntfs-3g dkms dkms-fuse -y

6. Make a "windows" directory as the NTFS mount point

[root@localhost ~]# mkdir /windows

7. Mount the NTFS filesystem on "/windows" with type ntfs-3g

[root@localhost ~]# mount -t ntfs-3g /dev/sda1 /windows
[root@localhost ~]# ls -al /windows/
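To make the mount persist across reboots, an fstab entry can be added. A sketch only; /dev/sda1 is the example device from above, so match it to your own partition:

```
# /etc/fstab
/dev/sda1   /windows   ntfs-3g   defaults   0 0
```

After adding the line, `mount -a` will mount it immediately and verify the entry is valid.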

Sunday, May 2, 2010

The Law of Multiplying Yourself

One of the greatest challenges for a senior administrator is teachability: teaching junior members of the team to adapt immediately to the brutal environments of enterprise systems.  It is often not the technical skill that matters most, but rather the management skills, which allow a more technically capable member of the team to manifest the skill-sets that let him/her push forward in making decisive plans and effectively duplicating himself/herself in new and more junior members of the team.

In this post I am going to detail some very important management strategies that will equip seniors to perform these tasks with ease of mind.


Duplication is very important; it allows you to immediately and responsibly prepare your would-be successors for the arena that is to come.  Your goal is simple: to make yourself dispensable by bringing those under your team up to your level.  The more your subordinates know how you think and make decisions, the easier it is for you to delegate tasks, which can then be performed in almost seamless perfection.

You should be able to make your subordinates masters of what your team does: the ability to move in swiftly and address issues that require an almost pre-emptive solution.  Imagine this scenario: you are on vacation and suddenly an entire cluster of servers goes down due to problems incurred by a patching session done in utter wanton.  Can you see how the boomerang will hit?

These disasters can be avoided provided you have a mitigation strategy that can immediately address an issue once it becomes evident.  These check-points can be packaged into your DR strategies and help guide and equip your team with the necessary tools for responding to such events without you having to scramble to find an Internet connection.


Attacking the problem requires a lot more than skill-sets; it requires the complete array of experience and knowledge of the problem.  Therefore it is important that you are able to understand where the problem is actually taking place.  If you have a complete mapping of your IT infrastructure, then you will be prepared.  A lot of systems administrators fail to see this at the very beginning, in things such as DR initiatives and alterations in systems engineering.  Those who try to re-invent everything are doomed to fail.


Probably the most important aspect of management is understanding your ground and the ground of all the people you are working with.  Emotions play a critical part of the puzzle.  "Working harmoniously with your co-leads and subordinates"... there are aspects of work that just won't be dealt with by pure technical details alone.  Interpersonal strength, and understanding when to use it in a situation, will help you prepare for the worst.  There are tons of books out there that outline strategies for working out the group, "as a well oiled machinery" that will move forward as the challenge arises.  But all of them primarily deal with one truth: working out the difficulty between emotional and cultural differences.


There is a big difference between someone who is willing to share his knowledge and thoughts and somebody who excels without reproach but displays an unwillingness to delegate knowledge transfer!  This is very important: knowing your systems engineering team will spell the big difference in understanding the underlying framework of your current technical details.  There are people who have the inborn gift of teaching... there are those who can't.  But I believe teaching is inborn to all of us.  What keeps that guy/gal from doing his part to teach junior members has something to do with personal traits and character.  Understanding and strengthening these traits into energies that expound more than what they can offer will help your team in the long run.  You will see, later on, members of the team moving forward to prepare development areas for you and the team.


I remember the first time I worked as a clerk encoder for some government office about 16 years ago.  I had very little technical knowledge in managing and using systems.  I was eagerly and patiently working my way out of the problems I had created.  An employee of the agency approached me and gave me one hell of a beating!  To quote: "It is a shame that the agency would hire someone whose incapacities have done more damage than good!!!"  It was a turning point in my career.  I was not a graduate of engineering or any related course dealing with technology; I was a business guy who dabbled with the internals of the systems of my time.  But it was one event that shook the foundations of the pride in me.  It was clear that whatever I wanted to do with my life in taking this career shift would be uncompromising.  This event led me to take on post-graduate study in technology, dealing primarily with information technology.  It was a hard-earned degree: of the original 28 students in the class, only 2 came out with a degree, and fortunately I was one of the lucky two who made it.

I always tell my subordinates and juniors that career moves are necessary; it is the only way you and the members of the team can have a common shared vision.  You must be willing to bravely tell them the importance of continuing education in the field, and likewise the importance of certifications.  I used to be agnostic when it came to certifications; however, in today's landscape it means a lot to be certified.  It has its uses, but eventually the person who decides how your career will take off is you.