Tuesday, December 28, 2010

SUDO - A friend with a Nasty Make-over

One of the strengths of *NIX systems is the ability to let non-escalated users perform administrative tasks within a limited, controlled set of domains (reminiscent of the Windows "RUN-AS" command).  This is where sudo comes in!  For a seasoned engineer, sudo is a great help in controlling which users can issue which commands.

However, if done incorrectly you will have one hell of a nightmare finding out who on the team performed what, and where.   Ideally, you should only grant access to members of your systems engineering team.  But there are cases when members of the development team need escalated access to perform a task.

At times a systems engineer on vacation has to rely on the next in line colleague to do things right.  But what if that person is not around either?  The next protocol would be to allow members of the technical staff the same level of privilege as you or any member of your team.

The problem arises when someone misuses this privilege to play with something.  Recently, I experienced the worst possible example of a mismanaged sudo implementation (which I authored)!  It started on a regular working day when I needed to log in to a production system (note: a PRODUCTION system).  I issued the command $ sudo su - (to drop to root, because we had virtually erased the root password from memory) and voila!  I got this instead:

.... You are not in the sudoers file.  This incident will be reported ....

Not only on one system, but on two (2) different servers!

What are the lessons learned here?

1.  Implement a sudo policy with levels of escalation.  It will help you identify the culprit immediately.

2.  Identify known vulnerabilities of key programs such as vi.  By now you should know that a user who can run vi with escalated privileges can easily drop to a root shell!

3.  Always be prepared to suspend all users with sudo capability!  This signals to your users that you are serious about controlling which users get escalated privileges, to which programs, and to what extent.
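To make lesson 1 concrete, here is a sketch of a sudoers policy with escalation levels (user names and command paths are hypothetical; always edit with visudo):

```
# Level 1: developers may only bounce the web service and read its logs
Cmnd_Alias SVC_CMDS = /sbin/service httpd *, /usr/bin/tail /var/log/httpd/*
User_Alias DEVS     = alice, bob

# Level 2: systems engineers may also manage packages
Cmnd_Alias PKG_CMDS = /usr/bin/yum
User_Alias SYSENG   = carol

DEVS    ALL = SVC_CMDS
SYSENG  ALL = SVC_CMDS, PKG_CMDS
```

Because each level has its own alias, the sudo log immediately tells you which tier ran which command.  And per lesson 2, never grant plain vi: from sudo vi a user can type :!sh and land in a root shell.  Grant sudoedit instead, which edits a copy of the file without handing out a privileged editor.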

sudo is good, but only if it serves you right and is used with the right mindset.  If you think it is a solution for trigger-happy tech guys who fire at will, you are mistaken. Better to have a PASSWORD VAULT!  I think you know the rest of the story from here on.

What I learned from DDoS attacks recently

It's a real pain to have your system constantly under increased levels of attack from all points of entry.  But that is the challenge!  To be able to address issues the moment they rear their ugly head.

In my recent dealings as a systems engineer for an e-commerce site, I had to re-learn everything there is to know about the limits of what you can do within the domains you control, and within those you don't.

In this post I am going to give you my insights on how things can go terribly wrong, and the things we failed to realise as early as possible.

I.  Patching Session

The true worth and importance of patch management is common knowledge to all systems admins.  Patches are released to address known vulnerabilities in a software package installed on the system; it could be a port exploit or a known bug that allows an attacker to take control of a feature or functionality that a package is expected to perform.  In my seat, patches were never released, for lack of understanding and of a clean management handle for admins to do their job.

If a patch is known to break the application sitting on the system, then the patch is not worth it!  That is a sure guarantee of the troubles that lie ahead.
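As a sketch of what a sane patching routine can look like on RHEL/CentOS boxes like the ones discussed later in this post (assuming the yum security plugin is available; the package name is only an example):

```shell
# List pending security errata without touching the system
yum --security check-update

# Apply security fixes only, leaving feature updates for a planned window
yum update --security

# Before patching a production box, review what a package's updates changed
rpm -q --changelog httpd | head -20
```

Reviewing the changelog first is what lets you judge whether a patch is likely to break the application sitting on top of it.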

II.  Ensuring Budget to Cover the Holes

As part of the core management team, my goal is to have my department secure the budget needed to devote serious resources to protecting those information technology investments.  Hardware architecture and platform design go hand in hand, and it is best to do it right early on rather than regret it afterwards.  In recent memory, this is my biggest disappointment.   If the income generated online is of a magnitude comparable to all the offline income pouring into the company basket, then you should consider acquiring the best-known DDoS mitigation provider around to help you combat attacks that would otherwise leave your entire technical team with their hands tied.

III.  Human Divestment (Incident Response Teams)

A core team of engineers, composed of your systems/network engineering team and members of your development core team, should be tapped to organise a quick action force ready to deploy whenever a problem arises.  Though I will caution that even the most experienced systems engineers need training to counter new threats coming from the net.  There are areas in data and information security which help organise the chaotic deluge of skill-sets and shape the team to perform duties that require the following:  1) risk management, 2) threat assessment and 3) exploit forensics.    I must warn that it takes a while to fine-tune this team, but it is worth every penny you invest in it.

IV.  The Ethical Hack

Ethical hacking may be a name that spawned from the nasty reality of hackers doing more damage than good.  For me the idea is simply to identify and exploit vulnerabilities so you can fix them before others do.  You can do a lot with tools that aid in performing such activities.  However, it is likely that part I of this document is the only thing missing in your whole security plan.  In my experience working with enterprise systems, patch management has played a considerable role in a systems admin's life, aiding in the process of hardening your box.

In my opinion, the whole TCP/IP stack which holds up the world wide web is the problem!  The standard on which the OSI layers were built dates back several decades, and IPv4 is likewise a problem.  Countless papers have suggested a need to research the next logical successors to TCP/IP.  This would mean a whole new generation of routers and switches to handle how data, voice and image traffic moves.  I believe Nikola Tesla envisioned something similar with wireless communication, without the need for cables.  However, this would prompt a new generation of exploiters to test whether the new standard holds up as the defining technology.

For the time being we are limited to the tools that our current technology can accommodate.  The sad thing is that any ordinary guy with some technical knowledge can do a lot of damage with the tools freely available on the net.  Therefore hacking into your own system (ethical hacking) is a must, to see whether your security placements will hold.   Give it your best shot!

To be continued ...

Hacking into Linux packages and fixing problems when the hammer slams!

In the years I have been working on *NIX systems (Unix, Linux) there have been a couple of instances when you need an almost "nirvanic" sense to fix problems when things go wrong.

A couple of days ago I had a ghost revisiting my team.  It had something to do with a package that had been installed using rpm.  The problem started when we upgraded the box's memory and voila!  The magic happened when we tried to fire up Apache!

It failed! Why? Because the modules Apache needed were missing.  We had very little time to react and fix the issue; we could not wait another 30 or so minutes to get things done.  Obviously, the hardware engineers did a wonderful job of messing something up, or it could just be one of those days when things go bad and there is no way of explaining what is happening.

I surveyed the situation and it immediately came to me: HACK!  It's what we do best, and a simple understanding of how things work can mean saving your company millions in precious income and/or saving your job!

Ever wonder why your test and development environment servers are almost identical to your production servers, if not clones?  Aside from the fact that code should behave as expected once tested and deployed across those environments?

The reality is simple!  Be ready to cannibalise those boxes when the time calls for it.  In my case, we did not really cannibalise anything, at least not at the hardware level.  What we did was simply fix the issue with Apache.   If you are a seasoned systems engineer, you may have already figured out what I am talking about here.
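For the curious, here is a sketch of the kind of rescue I mean (the host name and module path are hypothetical; commands are RHEL/CentOS style): verify which files the rpm database thinks are missing, then pull the same files from an identically built dev clone.

```shell
# On the broken production box: list files that differ from the rpm database
rpm -V httpd                  # 'missing' lines point at the lost modules

# Copy the missing module files over from an identical dev clone
scp dev-clone:/etc/httpd/modules/mod_ssl.so /etc/httpd/modules/

# Confirm the configuration loads, then start Apache
apachectl configtest && service httpd start
```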

Sunday, December 12, 2010

Fixing SVN commit failure - Cannot write to the prototype revision file of transaction

There are times when svn misbehaves, and when the going gets rough you need a basic understanding of how to fix the problem.


Transmitting file data ....svn: Commit failed (details follow):
svn: Cannot write to the prototype revision file of transaction '4615-1' because a previous representation is currently being written by this process

1. What you can do is try to reload/restart Apache - usually this refreshes the entire tree and removes any lock handles.

2. Use svnadmin to fix the problem - in many cases, if the first method doesn't fix it, the svnadmin tool can.

3. You may want to get a fresh copy from your main branch.  This is due in time, since your local copy may no longer be consistent with the one found on the server.  Again, svnadmin plays a critical role in fixing problems like this.

svnadmin lstxns /path/to/repository

But most of the time method 1 is sufficient.
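For method 2, the stuck transaction can be listed and removed with svnadmin. A sketch (the repository path is an example; make sure no commit is actually in flight before removing transactions):

```shell
REPO=/path/to/repository   # example path

# List transactions left behind by failed commits
svnadmin lstxns "$REPO"

# Remove each stale transaction (e.g. the '4615-1' from the error above)
svnadmin lstxns "$REPO" | xargs -r -n1 svnadmin rmtxns "$REPO"

# Sanity-check the repository afterwards
svnadmin verify "$REPO"
```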

Thursday, December 9, 2010

ntop Install Fix for CentOS 5.3

If you happen to install ntop and get this error:

Default RPM install (from rpmforge)
CentOS 5.3
ntop 3.3.8-2.el5.rf
It works and runs fine when executed from the command line; however, the following happens when service ntop start is run.
Starting ntop:    Processing file /etc/ntop.conf for parameters...
Mon Aug  3 19:49:38 2009  NOTE: Interface merge enabled by default
Mon Aug  3 19:49:38 2009  Initializing gdbm databases
FATAL ERROR: Unrecognized/unprocessed ntop options...
 --user=ntop, --db-file-path=/var/ntop, --use-syslog=local3, --daemon,

run ntop --help for usage information

    Common problems:
        -B "filter expressions" (quotes are required)
        --use-syslog=facilty (the = is required)


Here is the fix for it:

The fix was originally posted here; all credit goes to them. I'm reposting it here for my own convenience.
The Fix
Edit /etc/init.d/ntop
start () {
        echo -n $"Starting $prog: "
        # daemon $prog -d -L @/etc/ntop.conf
        daemon $prog @/etc/ntop.conf -d -L -M
}

In addition to this, /etc/ntop.conf needs to be edited and any spaces in the options should be replaced with =.

### Sets the user that ntop runs as.
### NOTE: This should not be root unless you really understand the security risks.
--user=ntop
### Sets the directory that ntop runs from.
--db-file-path=/var/ntop

[root@Neptune ~]# service ntop start
Starting ntop: Processing file /etc/ntop.conf for parameters...
Fri Dec 10 11:09:11 2010  NOTE: Interface merge enabled by default
Fri Dec 10 11:09:11 2010  Initializing gdbm databases [ OK ]

Note:  There is a pesky error (**ERROR** RRD: Disabled - unable to create directory (err 13, /var/ntop/rrd/graphics)) when trying to view the network load page. To fix this problem you have to do the following:

#cd /var/ntop
#chown -R ntop.nobody rrd

Wednesday, December 8, 2010

RHEL 6 ditches System V init for Upstart: What Linux admins need to know

With the release of RHEL 6 there is a mantra of new ways of doing things, ways that generally change how a system administrator does his or her job.  In this post I will give you a brief backgrounder on how things sound.

With the release of Red Hat Enterprise Linux (RHEL) 6, Red Hat uses the new Upstart boot service as a replacement for the old init. In this article you'll learn about the changes to this essential Linux process, and what they mean for your work as an administrator.

The disadvantage of the old System V init boot procedure is that it was based on runlevel directories that contained massive amounts of scripts that all had to be started. Upstart is event driven, so it contains scripts that are only activated when they are needed, making the boot procedure a lot faster. A well-tuned Linux server that uses Upstart boots significantly faster than an old system using System V init.

To make the transition easier, the Upstart service still works with an init process. So you'll still have /sbin/init, the mother of all services. But if you have a look at the /etc/inittab file, you'll see that everything has changed.

Understanding the changes from init to Upstart
The good news: the changes to the boot procedure on RHEL 6 are minimal. You still work with services that have service scripts in /etc/init.d, and there is still a concept of runlevels. So after adding a service with yum, you can still enable it the way you are used to, with the chkconfig command. Also, you can still start it with the service command.
But if you are looking for settings that you used to apply from /etc/inittab, you'll see that many things have changed. The only thing that didn't change is the line that tells your server what runlevel to use by default:
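On a stock RHEL 6 system that one surviving line is the classic initdefault entry; the digit is 3 or 5 depending on whether the machine boots to a graphical desktop (shown here as an illustration):

```
id:5:initdefault:
```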
All other items that were previously handled by /etc/inittab are now in individual files in the /etc/init directory (not to be confused with /etc/init.d, which contains the service scripts). Below is a short list of the files that are used:

/etc/init/rcS.conf handles system initialization by starting the most fundamental services
/etc/init/rc.conf handles starting the individual runlevels
/etc/init/control-alt-delete.conf defines what should happen when “control-alt-delete” is pressed
/etc/init/start-ttys.conf specifies how terminals are to be handled
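To give you a feel for the new format, an Upstart job file is just a few lines of event-driven configuration. The control-alt-delete job, for instance, looks roughly like this (treat the exact contents as an approximation of the stock RHEL 6 file):

```
# /etc/init/control-alt-delete.conf
start on control-alt-delete
exec /sbin/shutdown -r now "Control-Alt-Delete pressed"
```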

Apart from these generic files, some additional configuration is in the /etc/sysconfig/init file. Here, some parameters are defined that determine the way that startup messages are formatted. Apart from these not so important settings, there are three lines that are of more interest (shown with their stock RHEL 6 values):

AUTOSWAP=no
ACTIVE_CONSOLES=/dev/tty[1-6]
SINGLE=/sbin/sushell

Of these, you can give the first line the value yes to have your system detect swap devices automatically. Using this option means that you don't have to mount swap devices from /etc/fstab anymore. The ACTIVE_CONSOLES line determines which virtual consoles are created. In most situations tty[1-6] works fine, but this option allows you to allocate more or fewer virtual consoles. Be aware that you should never use tty[1-8], because tty7 is reserved for the graphical interface.
Last but not least, there is the SINGLE=/sbin/sushell line. This line can have one of two values: /sbin/sushell (the default), which drops you into a root shell after starting single-user mode, or /sbin/sulogin, which launches a login prompt where you have to enter the root password before single-user mode starts.
With Upstart, RHEL 6 has adopted a new and much faster alternative to the old System V boot procedure. With the adoption of this new service, Red Hat has still managed to keep the old management routines in place, meaning that as an administrator you can still manage services the way you are used to (well, almost that way), with some changes to the settings that used to live in the /etc/inittab file.

Sunday, November 28, 2010

Compile Linux Kernel 2.6

I have never blogged about a kernel compile in the past, and with the growing ease of using Linux nowadays one would ask: who needs to compile it?  Apparently it has its uses, for those experimenting with Linux or perhaps doing a review prior to taking an exam.  I came across this wonderfully crafted, easy to use kernel compile tutorial, and when I sifted through it I realised how simple compiling the Linux kernel is.

I am taking this opportunity to re-post the article for good archiving, as with all my other posts.  I looked for a candidate machine I could equally enjoy and have the luxury of messing up; if something went wrong it would be no big deal.  The target machine was a Debian Lenny 5.0 which I had just downloaded (11-28-2010).  I installed the base system and configured networking to set the stage. Read on.

How to: Compile Linux kernel 2.6

Compiling a custom kernel has its own advantages and disadvantages. However, new Linux users / admins find it difficult to compile the Linux kernel. Compiling the kernel requires you to understand a few things and then just type a couple of commands. This step by step howto covers compiling Linux kernel version 2.6.xx under Debian GNU/Linux. However, the instructions remain the same for any other distribution, except for the apt-get command.

Step # 1 Get Latest Linux kernel code

Visit http://kernel.org/ and download the latest source code. The file name will be linux-x.y.z.tar.bz2, where x.y.z is the actual version number. For example, linux-x.y.z.tar.bz2 represents kernel version x.y.z. Use the wget command to download the kernel source code:
$ cd /tmp
$ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-x.y.z.tar.bz2

Note: Replace x.y.z with the actual version number (in my case, the current kernel version on the site).

Step # 2 Extract tar (.tar.bz2) file

Type the following command:
# tar -xjvf linux-x.y.z.tar.bz2 -C /usr/src
# cd /usr/src/linux-x.y.z

Step # 3 Configure kernel

Before you configure the kernel, make sure you have the development tools (gcc compiler and related tools) installed on your system. If the gcc compiler and tools are not installed, use the apt-get command under Debian Linux to install the development tools.
# apt-get install gcc
Now you can start the kernel configuration by typing any one of these commands:
  • $ make menuconfig - Text based color menus, radiolists & dialogs. This option is also useful on a remote server if you want to compile the kernel remotely.
  • $ make xconfig - X Windows (Qt) based configuration tool, works best under the KDE desktop.
  • $ make gconfig - X Windows (Gtk) based configuration tool, works best under the GNOME desktop.

Step # 4 Compile kernel

For example, the make menuconfig command launches the following screen:
$ make menuconfig
You have to select different options as per your needs. Each configuration option has a HELP button associated with it, so select the help button to get help.

Start compiling to create a compressed kernel image; enter:
$ make
Start compiling the kernel modules:
$ make modules
Install the kernel modules (become the root user, using the su command):
$ su -
# make modules_install

Step # 5 Install kernel

So far we have compiled the kernel and installed the kernel modules. It is time to install the kernel itself.
# make install
It will install three files into the /boot directory, as well as modifying your GRUB configuration file:
  • System.map-x.y.z
  • config-x.y.z
  • vmlinuz-x.y.z

Step # 6: Create an initrd image

Type the following command at a shell prompt:
# cd /boot
# mkinitrd -o initrd.img-x.y.z x.y.z

* In my case I used mkinitramfs -o initrd.img-2.6.25 (use whichever version number of kernel you downloaded)

initrd images contain the device drivers needed to load the rest of the operating system later on. Not all computers require an initrd, but it is safe to create one.

Step # 7 Modify Grub configuration file - /boot/grub/menu.lst

Open file using vi:
# vi /boot/grub/menu.lst
title           Debian GNU/Linux, kernel x.y.z
root            (hd0,0)
kernel          /boot/vmlinuz-x.y.z root=/dev/hdb1 ro
initrd          /boot/initrd.img-x.y.z
Remember to set up the correct root=/dev/hdXX device. Save and close the file. If you think editing and writing all the lines by hand is too much for you, try the update-grub command to update the lines for each kernel in the /boot/grub/menu.lst file. Just type the command:
# update-grub 
... Searching for GRUB installation directory ... found: /boot/grub
Searching for default file ... found: /boot/grub/default
Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
Searching for splash image ... none found, skipping ...
Found kernel: /vmlinuz-
Found kernel: /vmlinuz-2.6.26-2-686
Updating /boot/grub/menu.lst ... done

Neat. Huh?

Step # 8 : Reboot computer and boot into your new kernel

Just issue reboot command:
# reboot

How you set up your menuconfig in the first part dictates how long the compile and the installation of the modules for the new kernel will take.  In my case it was almost one day.  Good luck.

  1. If, for example, you experience an error during the initial "make", chances are you have several missing packages; one such package is libncurses5-dev (if you are using a Debian based system).
  2. Make sure you have enough disk space in the directory where you will build the sources; in my recent experience I needed at least 6 GB of free space under /usr/src to get it done.

Friday, November 12, 2010

Understanding Priority Issues Impacting Business Operations

Huge enterprises have a network of complex policies that tightly knit together the business models and the support structure made of people managing technology tools.  Decades of problem management experience have led to the creation of policies with a domino effect inherent in the support structure chain, responding immediately to certain events that could lead to a loss or something similar.  Thus change management came into the light, along with risk management and systems audit management.  One of the most hotly debated support structures of these new areas in systems management is the creation of "priority levels".

What are priority levels?  Priority levels are part of the change management chain and are widely used by management and stakeholders to address issues on systems.

In this post I am going to detail a very important ingredient in the life of systems personnel (admins and developers alike): addressing change management using priority levels.  My intention is to draw a clear picture of the events where priority levels come into play.

The P1 (Priority Level 1)  -- Response Time 1 hour (CRITICAL)

- Any major failure affecting an entire site/business or more than one device/server
- Business is losing millions
- Impacting huge number of users

Priority level 1 is the most "CRITICAL"; a P1 is when you drop everything and focus on the problem.  However, this has been misused and abused by senior management paranoid about losing something that isn't there.   I have seen these scenarios happen in real life; I myself was a victim of one!  To battle your way out, the criteria above must be present for a real P1 concern.  I hate it when P1 levels are invoked for the test environment.  It's a complete mockery of the policy and has virtually nothing to do with the real issues mentioned in the definition.

- Immediate notification to Engineers
- Escalation direct to 3rd level Engineers
- Escalation direct to Incident Manager / Senior Management

The P2 (Priority Level 2)  -- Response Time 8 hours  (HIGH)

- Incident affecting single, critical device/server
- Site/service functioning but performance is degraded.
- Affects only a small number of users.
- Incident during normal/critical period.

Priority level 2 (P2) is considered "HIGH": the issue is significant but should be addressed within the business day, not later than EOB (End of Business).  Normally a P2 will succeed a P1 in the order of tasks to be undertaken.  A good example is when a key functionality in a system is hampering expedient recourse to a given output or desired performance level.  In my experience P2 issues in systems administration are the most "VAGUE": a P2 is raised due to phenomenal behaviour in the system which defies even the most prudent investigation procedure.  Often a fix is found by making the wrong decision for the right course of action, which eventually leads to a key vulnerability issue or a kernel bug not yet known.

- Immediate notification to Help Desk Supervisor.
- Incident Manager also informed.
- Notification to Team leader/Manager if response SLA not met.

The P3 (Priority Level 3) -- Response Time 2 days (NORMAL)

- Normal service requests and incidents affecting non-critical device/server
- Site/service functioning but performance is degraded.
- Affects only a small number of end users.
- Incident during quiet period.

This level is the normal day to day affair that systems administrators face, e.g. unlocking locked-out users, and creating or removing accounts.  It also covers investigation of key services that the system registers as behaving normally while unexpected results appear from time to time due to a bug in the code somewhere.  Tickets pertaining to maintenance related work on the system also take the course of this level, meaning a well defined procedure is in place to execute a task which is about to happen.

- Notification to Team leader/Manager if response SLA not met.
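Typical P3 account chores like the ones above can be sketched as follows (user names are hypothetical; exact commands vary by distribution):

```shell
# Reset a failed-login counter so a locked-out user can try again
faillog -r -u jdoe

# Unlock an account whose password was locked
passwd -u jdoe

# Routine account creation and removal
useradd -m newdev
userdel -r olddev
```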

There is a P4 and a P5 in some cases; these levels are considered low and not prioritized, respectively.  Sysadmins are always at the forefront of making sure that the infrastructure supporting a sizeable application with huge monetary value sits on a very robust system.  The levels presented above may or may not hold true for your organization, but then again, management has the last say on where to put priorities, especially if management has a good grasp of the technical workings involved (assuming your management team are former IT people).  Jacks-of-all-trades are the bane of most companies; again I will refer you to my previous post: http://myopensourcestints.blogspot.com/2010/05/law-of-multiplying-yourself.html.

If that is the case, then better throw in the towel and start looking elsewhere for an organization that respects these standards.


Tuesday, November 9, 2010

The Google 80/20 Innovation Model

What makes Google a very attractive place to work is the company's ability to bring out the best in its people.  This post was found on the web and I am reiterating it here for good archiving.  Read on!

Management Friday: Google’s 80/20 Innovation Model

This week I visited the “Googleplex,” Google’s 12,000 person Palo Alto campus. The trip gave me a lot of food for thought on innovation, work/life balance, recruiting, and employee retention. I’ll be posting on the work/life balance questions raised by my visit over the weekend; today I’m writing about Google’s 80/20 “innovation time off” (ITO) model, and how those of us in non-tech industries might apply the concept.

The ITO policy encourages Google employees to spend 80% of their time on core projects, and roughly 20% (or one day per week) on “innovation” activities that speak to their personal interests and passions. These activities may benefit the company’s bottom line – as in the case of Gmail, Google News, AdSense and Orkut. But more importantly they keep employees challenged and engaged in ways that aid retention and keep staff learning and growing.

A side note about why I think the 80/20 rule is especially interesting for working mothers. One of the reasons so many working mothers are unhappy with their jobs is the seemingly meaningless, frustrating, rote work that takes us away from our kids with little greater benefit.

Imagine a scenario where you could spend 20% of your time on projects that you think could benefit your company or world, and that you “own.” That could stimulate you to think differently and passionately about the other 80% of your work, leading to a more fulfilling professional experience. More fulfilling is good – it keeps mothers in the workplace. (I’ve written more about the importance of innovation in tough times here.)

Of course, this model works well for developers, engineers and other creative types. What about for the rest of us? Is there an 80/20 innovation model that could help your administrative assistant do his or her job better? Help middle managers make the leap more effectively to senior staff? Energize senior staff by offering mentoring and stewardship opportunities around such projects?

I say yes. I’ve talked before about how innovation is the key for companies surviving this economic downturn. I believe that more formal innovation policies and pipelines are critical not just for the high-tech and creative industries, but also for those of us in more traditional financial, non-profit, and management settings. Here are some thoughts on implementing an innovation policy in your workplace:

1. Create a formal process for project selection, monitoring, and evaluation. At Google they track innovation time and know exactly which projects are being pursued. Employees who want to take advantage of innovation time off should submit a brief proposal and timeline, and be able to articulate how they will measure success.

2. Don’t worry about failure. In some ways innovation, like so many other things, is a numbers game. You throw up 50 projects, and maybe one or two stick. Most will fail, but you can’t know which will work unless you try. Failure is a critical part of true innovation.

3. Start small. Successful pilot projects help to leverage support and build awareness. Encourage your employees to create scalable projects that can be launched with relatively little investment.

4. Let your staff shine. Champion good ideas by facilitating and advocating, but let your employee present directly to senior management. Managers benefit when CEOs see that they have recruited intelligent and insightful staff.

5. Manage expectations. Not every project can be seen to fruition – in fact 95% of projects generated by your innovation policy won’t go anywhere. You don’t want disappointed, disillusioned employees, so manage their expectations.

Interesting links on the 80/20 Innovation Time Off model:

* Scott Berkun talks about how and why the 80/20 model works.
* Scott Belsky talks about why the model is a good idea, even if 95% of Google revenues come from non-innovation business.
* Ron Wilson at Electronics Strategy, Design, News talks about why innovation has to go beyond the technical now more than ever.
* The HR Capitalist talks about recent developments at Google that are squeezing the innovation model.

Tuesday, September 14, 2010

SSH Hardening

Top 20 OpenSSH Server Best Security Practices

OpenSSH is the implementation of the SSH protocol. OpenSSH is recommended for remote login, making backups, remote file transfer via scp or sftp, and much more. SSH is perfect for maintaining confidentiality and integrity of data exchanged between two networks and systems. However, the main advantage is server authentication, through the use of public key cryptography. From time to time there are rumors about an OpenSSH zero-day exploit. Here are a few things you need to tweak in order to improve OpenSSH server security.

Default Config Files and SSH Port

  • /etc/ssh/sshd_config - OpenSSH server configuration file.
  • /etc/ssh/ssh_config - OpenSSH client configuration file.
  • ~/.ssh/ - The user's ssh configuration directory.
  • ~/.ssh/authorized_keys or ~/.ssh/authorized_keys2 - Lists the public keys (RSA or DSA) that can be used to log into the user’s account
  • /etc/nologin - If this file exists, sshd refuses to let anyone except root log in.
  • /etc/hosts.allow and /etc/hosts.deny : Access controls lists that should be enforced by tcp-wrappers are defined here.
  • SSH default port : TCP 22
SSH Session in Action
SSH Session in Action

#1: Disable OpenSSH Server

Workstations and laptops can work without an OpenSSH server. If you do not need to provide the remote login and file transfer capabilities of SSH, disable and remove the sshd server. CentOS / RHEL / Fedora Linux users can disable and remove openssh-server with the yum command:

# chkconfig sshd off
# yum erase openssh-server

Debian / Ubuntu Linux users can disable and remove the same with the apt-get command:
# apt-get remove openssh-server

You may need to update your iptables script to remove the ssh exception rule. Under CentOS / RHEL / Fedora edit the files /etc/sysconfig/iptables and /etc/sysconfig/ip6tables. Once done, restart the iptables service:

# service iptables restart
# service ip6tables restart

#2: Only Use SSH Protocol 2

SSH protocol version 1 (SSH-1) has man-in-the-middle attack problems and security vulnerabilities. SSH-1 is obsolete and should be avoided at all costs. Open the sshd_config file and make sure the following line exists:
Protocol 2

#3: Limit Users' SSH Access

By default all system users can log in via SSH using their password or public key. Sometimes you create a UNIX / Linux user account for ftp or email purposes. However, those users can log in to the system using ssh. They will have full access to system tools, including compilers and scripting languages such as Perl and Python, which can open network ports and do many other fancy things. One of my clients had a really outdated php script, and an attacker was able to create a new account on the system via that php script. However, the attacker failed to get into the box via ssh, because the new account wasn't in AllowUsers.
To allow only the root, vivek and jerry users to use the system via SSH, add the following to sshd_config:
AllowUsers root vivek jerry
Alternatively, you can allow all users to login via SSH but deny only a few users, with the following line:
DenyUsers saroj anjali foo
You can also configure Linux PAM to allow or deny login via the sshd server, and you can allow or deny access to ssh by group name.
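As a sketch of the PAM route, the pam_listfile module can reject users listed in a file (the path /etc/ssh/deniedusers is an example name; any root-owned, mode 0600 file works). Add to /etc/pam.d/sshd:

```
# /etc/pam.d/sshd (fragment) - deny ssh logins for users listed in the file.
# onerr=succeed means a missing or unreadable list does not lock everyone out.
auth    required    pam_listfile.so item=user sense=deny file=/etc/ssh/deniedusers onerr=succeed
```

One user name per line in the deny file; sshd will refuse those accounts even if their passwords or keys are valid.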

#4: Configure Idle Log Out Timeout Interval

Users can log in to the server via ssh, and you can set an idle timeout interval to avoid unattended ssh sessions. Open sshd_config and make sure the following values are configured:
ClientAliveInterval 300
ClientAliveCountMax 0
You are setting an idle timeout interval in seconds (300 secs = 5 minutes). After this interval has passed, the idle user will be automatically kicked out (read as logged out). See how to automatically log BASH / TCSH / SSH users out after a period of inactivity for more details.
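For the shell-level auto-logout mentioned above, a minimal sketch (bash/ksh only; the 300-second value mirrors ClientAliveInterval) is a read-only TMOUT set in a system-wide profile script:

```shell
# /etc/profile.d/autologout.sh (fragment, bash/ksh)
# The shell exits after 300 seconds of inactivity at the prompt;
# readonly stops users from simply unsetting it.
TMOUT=300
readonly TMOUT
export TMOUT
```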

#5: Disable .rhosts Files

Don't read the user's ~/.rhosts and ~/.shosts files. Update sshd_config with the following settings:
IgnoreRhosts yes
SSH can emulate the behavior of the obsolete rsh command; just disable insecure access via RSH.

#6: Disable Host-Based Authentication

To disable host-based authentication, update sshd_config with the following option:
HostbasedAuthentication no

#7: Disable root Login via SSH

There is no need to log in as root via ssh over a network. Normal users can use su or sudo (recommended) to gain root-level access. This also makes sure you get full auditing information about who ran privileged commands on the system via sudo. To disable root login via SSH, update sshd_config with the following line:
PermitRootLogin no
However, bob made an excellent counterpoint:
Saying "don't login as root" is h******t. It stems from the days when people sniffed the first packets of sessions, so logging in as yourself and su-ing decreased the chance an attacker would see the root pw, and decreased the chance you got spoofed as to your telnet host target. You'd get your password spoofed, but not root's pw. Gimme a break. this is 2005 - We have ssh; used properly it's secure. Used improperly, none of this 1989 advice will make a damn bit of difference. -Bob

#8: Enable a Warning Banner

Set a warning banner by updating sshd_config with the following line:
Banner /etc/issue
Sample /etc/issue file:
You are accessing a XYZ Government (XYZG) Information System (IS) that is provided for authorized use only.
By using this IS (which includes any device attached to this IS), you consent to the following conditions:
+ The XYZG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.
+ At any time, the XYZG may inspect and seize data stored on this IS.
+ Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any XYZG authorized purpose.
+ This IS includes security measures (e.g., authentication and access controls) to protect XYZG interests--not for your personal benefit or privacy.
+ Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details.
The above is a standard sample; consult your legal team for exact user agreement and legal notice details.

#8: Firewall SSH Port # 22

You need to firewall ssh port # 22 by updating your iptables or pf firewall configuration. Usually, the OpenSSH server should only accept connections from your LAN or other trusted remote WAN sites.

Netfilter (Iptables) Configuration

Update /etc/sysconfig/iptables (Redhat and friends specific file) to accept connections only from your trusted networks (the subnets below are examples; substitute your own), and enter:
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -s 202.54.1.5/29 -m state --state NEW -p tcp --dport 22 -j ACCEPT
If you've dual stacked sshd with IPv6, edit /etc/sysconfig/ip6tables (Redhat and friends specific file), enter:
-A RH-Firewall-1-INPUT -s ipv6network::/ipv6mask -m tcp -p tcp --dport 22 -j ACCEPT
Replace ipv6network::/ipv6mask with actual IPv6 ranges.

*BSD PF Firewall Configuration

If you are using the PF firewall, update /etc/pf.conf as follows (the subnets are examples; substitute your own):
pass in on $ext_if inet proto tcp from { 192.168.1.0/24, 202.54.1.5/29 } to $ssh_server_ip port ssh flags S/SA synproxy state

#9: Change SSH Port and Limit IP Binding

By default SSH listens on all available interfaces and IP addresses on the system. Limit the ssh port binding and change the ssh port (by default, brute-forcing scripts only try to connect to port # 22). To bind sshd to specific IPs (the addresses below are examples; substitute your own) and to port 300, add or correct the following lines:
Port 300
ListenAddress 192.168.1.5
ListenAddress 202.54.1.5
A better approach is to use proactive scripts such as fail2ban or denyhosts (see below).

#10: Use Strong SSH Passwords and Passphrase

It cannot be stressed enough how important it is to use strong user passwords and passphrases for your keys. Brute force attacks work because users choose dictionary-based passwords. You can force users to avoid passwords vulnerable to dictionary attacks, and use the john the ripper tool to find existing weak passwords. Here is a sample random password generator (put it in your ~/.bashrc):
genpasswd() {
local l=$1
[ "$l" == "" ] && l=20
tr -dc A-Za-z0-9_ < /dev/urandom | head -c ${l} | xargs
}
Run it:
genpasswd 16


#11: Use Public Key Based Authentication

Use a public/private key pair with password protection for the private key. See how to use RSA and DSA key based authentication. Never ever use a passphrase-free (passwordless) key for login.

#12: Use Keychain Based Authentication

keychain is a special bash script designed to make key-based authentication incredibly convenient and flexible. It offers various security benefits over passphrase-free keys. See how to setup and use keychain software.
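A typical keychain invocation (the key file name id_rsa is an example) goes in ~/.bash_profile, so one passphrase entry per reboot serves all of your sessions:

```
# ~/.bash_profile (fragment) - start or reuse an ssh-agent and load the key once
eval $(keychain --eval --agents ssh id_rsa)
```

On first login after boot, keychain prompts for the passphrase; subsequent logins silently reuse the running agent.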

#13: Chroot SSHD (Lock Down Users To Their Home Directories)

By default users are allowed to browse server directories such as /etc/, /bin and so on. You can protect ssh users with an OS-based chroot or with special tools such as rssh. With the release of OpenSSH 4.8p1 or 4.9p1, you no longer have to rely on third-party hacks such as rssh or complicated chroot(1) setups to lock users to their home directories. See this blog post about the new ChrootDirectory directive to lock down users to their home directories.
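A minimal sketch of the ChrootDirectory approach (the sftponly group name is an example; note that the chroot target must be root-owned and not writable by the user):

```
# sshd_config fragment (OpenSSH 4.9+)
Match Group sftponly
    ChrootDirectory /home/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Members of the group get sftp only, jailed to their home directory, with forwarding disabled.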

#14: Use TCP Wrappers

TCP Wrapper is a host-based networking ACL system, used to filter network access to Internet services. OpenSSH does support TCP wrappers. Just update your /etc/hosts.allow file as follows to allow SSH only from trusted hosts (the addresses below are examples; substitute your own):
sshd : 192.168.1.2 172.16.23.12
See this FAQ about setting and using TCP wrappers under Linux / Mac OS X and UNIX like operating systems.

#15: Disable Empty Passwords

You need to explicitly disallow remote login from accounts with empty passwords, update sshd_config with the following line:
PermitEmptyPasswords no

#16: Thwart SSH Crackers (Brute Force Attack)

Brute force is a method of defeating a cryptographic scheme by trying a large number of possibilities using a single computer or a distributed network. To prevent brute force attacks against SSH, use the following software:
  • DenyHosts is a Python based security tool for SSH servers. It is intended to prevent brute force attacks on SSH servers by monitoring invalid login attempts in the authentication log and blocking the originating IP addresses.
  • Explains how to setup DenyHosts under RHEL / Fedora and CentOS Linux.
  • Fail2ban is a similar program that prevents brute force attacks against SSH.
  • security/sshguard-pf protects hosts from brute force attacks against ssh and other services using pf.
  • security/sshguard-ipfw protects hosts from brute force attacks against ssh and other services using ipfw.
  • security/sshguard-ipfilter protects hosts from brute force attacks against ssh and other services using ipfilter.
  • security/sshblock blocks abusive SSH login attempts.
  • security/sshit checks for SSH/FTP bruteforce attempts and blocks the offending IPs.
  • BlockHosts Automatic blocking of abusive IP hosts.
  • Blacklist Get rid of those bruteforce attempts.
  • Brute Force Detection A modular shell script for parsing application logs and checking for authentication failures. It does this using a rules system where application specific options are stored including regular expressions for each unique auth format.
  • IPQ BDB filter May be considered as a fail2ban lite.
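As a sketch of what the fail2ban option looks like in practice (all values are examples; the log path is Debian-style, use /var/log/secure on RHEL, and older fail2ban releases name the jail [ssh] rather than [sshd]):

```
# /etc/fail2ban/jail.local (fragment)
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 600
```

Five failed attempts earn the source IP a 600-second firewall ban.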

#17: Rate-limit Incoming Port # 22 Connections

Both netfilter and pf provide a rate-limit option to perform simple throttling of incoming connections on port # 22.

Iptables Example

The following example will drop incoming connections from hosts which make more than 5 connection attempts on port 22 within 60 seconds:
$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --set
$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --update --seconds 60 --hitcount 5 -j DROP
Call the above rules from your iptables script. Another config option:
$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state NEW -m limit --limit 3/min --limit-burst 3 -j ACCEPT
$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT
$IPT -A OUTPUT -o ${inet_if} -p tcp --sport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT
# another one line example
# $IPT -A INPUT -i ${inet_if} -m state --state NEW,ESTABLISHED,RELATED -p tcp --dport 22 -m limit --limit 5/minute --limit-burst 5 -j ACCEPT
See iptables man page for more details.

*BSD PF Example

The following will limit the maximum number of connections per source to 20 and rate limit the number of connections to 15 in a 5 second span. If anyone breaks the rules, they are added to the abusive_ips table and blocked from making any further connections. Finally, the flush keyword kills all states created by the matching rule which originate from the host which exceeds these limits.
table <abusive_ips> persist
block in quick from <abusive_ips>
pass in on $ext_if proto tcp to $sshd_server_ip port ssh flags S/SA keep state (max-src-conn 20, max-src-conn-rate 15/5, overload <abusive_ips> flush)

#18: Use Port Knocking

Port knocking is a method of externally opening ports on a firewall by generating a connection attempt on a set of prespecified closed ports. Once a correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect over specific port(s). A sample port knocking setup for ssh using iptables:
$IPT -N stage1
$IPT -A stage1 -m recent --remove --name knock
$IPT -A stage1 -p tcp --dport 3456 -m recent --set --name knock2
$IPT -N stage2
$IPT -A stage2 -m recent --remove --name knock2
$IPT -A stage2 -p tcp --dport 2345 -m recent --set --name heaven
$IPT -N door
$IPT -A door -m recent --rcheck --seconds 5 --name knock2 -j stage2
$IPT -A door -m recent --rcheck --seconds 5 --name knock -j stage1
$IPT -A door -p tcp --dport 1234 -m recent --set --name knock
$IPT -A INPUT -p tcp --dport 22 -m recent --rcheck --seconds 5 --name heaven -j ACCEPT
$IPT -A INPUT -p tcp --syn -j door
  • fwknop is an implementation that combines port knocking and passive OS fingerprinting.
  • Multiple-port knocking Netfilter/IPtables only implementation.
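On the client side, the knock sequence for the rules above (1234, then 3456, then 2345, each within 5 seconds of the last) can be sent with a short helper. knock is a hypothetical function name and 127.0.0.1 is an example target; the refused connections are expected, since the knocked ports are closed and the firewall only needs to see the SYNs:

```shell
# Send one TCP SYN per knock port; connection failures are ignored.
knock() {
    host=$1; shift
    for port in "$@"; do
        timeout 1 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null
        sleep 1
    done
}
# Knock the example sequence, then ssh in within 5 seconds:
knock 127.0.0.1 1234 3456 2345 && echo "knock sequence sent"
# ssh user@127.0.0.1
```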

#19: Use Log Analyzer

Read your logs using logwatch or logcheck. These tools make your log reading life easier. They will go through your logs for a given period of time and make a report in the areas that you wish, with the detail that you wish. Make sure LogLevel is set to INFO or DEBUG in sshd_config:
LogLevel INFO
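Even without logwatch, a quick tally of failed logins per source IP takes one grep/awk pipeline. The sample log below is written inline purely for illustration; on a real system, read /var/log/auth.log (Debian/Ubuntu) or /var/log/secure (RHEL/CentOS) instead:

```shell
# Build a small sample auth log (illustration only).
cat > /tmp/auth.sample <<'EOF'
Dec 28 10:00:01 host sshd[111]: Failed password for root from 10.0.0.5 port 4000 ssh2
Dec 28 10:00:02 host sshd[112]: Failed password for invalid user test from 10.0.0.5 port 4001 ssh2
Dec 28 10:00:03 host sshd[113]: Failed password for root from 10.0.0.9 port 4002 ssh2
EOF
# Count failed attempts per source IP, busiest source first.
# The IP is always the 4th field from the end of an sshd "Failed password" line.
grep 'Failed password' /tmp/auth.sample | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn
```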

#20: Patch OpenSSH and Operating Systems

It is recommended that you use tools such as yum, apt-get, freebsd-update and others to keep your systems up to date with the latest security patches.

Other Options

To hide the openssh version, you need to update the source code and compile openssh again. Also make sure the following options are enabled in sshd_config:
# Turn on privilege separation
UsePrivilegeSeparation yes
# Prevent the use of insecure home directory and key file permissions
StrictModes yes
# Turn on reverse name checking
VerifyReverseMapping yes
# Do you need port forwarding?
AllowTcpForwarding no
X11Forwarding no
# Specifies whether password authentication is allowed. The default is yes.
PasswordAuthentication no
Verify your sshd_config file before restarting / reloading changes:
# /usr/sbin/sshd -t
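Beyond sshd -t, a quick sanity check can grep the config for the settings covered in this post. check_sshd is a hypothetical helper, and the sample config is written inline for illustration; point CFG at /etc/ssh/sshd_config on a real system:

```shell
# Hypothetical audit: verify the recommended sshd_config settings are present.
CFG=/tmp/sshd_config.sample
cat > "$CFG" <<'EOF'
Protocol 2
PermitRootLogin no
PermitEmptyPasswords no
IgnoreRhosts yes
HostbasedAuthentication no
EOF
check_sshd() {
    for want in "Protocol 2" "PermitRootLogin no" "PermitEmptyPasswords no" \
                "IgnoreRhosts yes" "HostbasedAuthentication no"; do
        # -x matches the whole line, -i ignores case (sshd keywords are case-insensitive)
        grep -qix "$want" "$CFG" || { echo "MISSING: $want"; return 1; }
    done
    echo "OK: all checked settings present"
}
check_sshd
```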
Tighter SSH security with two-factor or three-factor (or more) authentication.


  1. The official OpenSSH project.
  2. Forum thread: Failed SSH login attempts and how to avoid brute ssh attacks
  3. man pages sshd_config, ssh_config, tcpd, yum, and apt-get.
If you have a technique or handy software not mentioned here, please share it in the comments below to help your fellow readers keep their OpenSSH-based servers secure.