Tuesday, December 28, 2010

SUDO - A friend with a Nasty Make-over

One of the strengths of *NIX systems is the ability to let non-escalated users perform administrative tasks within a limited, controlled set of domains on a given system (reminiscent of the Windows "RUN-AS" command).  This is where SUDO comes in!  If you are a seasoned engineer, sudo can help you a lot in defining which users get access, which commands they may issue, and what they may do.

However, if done incorrectly you will have one hell of a nightmare figuring out who on the team performed what, and where.   Ideally, you should only give access to certain users who are members of your systems engineering team.  But there are cases when members of the development team need escalated access to perform a task.

At times a systems engineer who is on vacation will have to rely on his next-in-line colleague to do things right.  But what if that person is not around either?  The next protocol would be to allow members of the technical staff the same level of privilege as yours or any member of your team.

The problem arises when someone misuses this privilege to play with something.  Recently, I experienced the worst possible example of a mismanaged SUDO implementation (one which I authored)!  It started on a regular working day when I needed to log in to a production system (note: a PRODUCTION system).  I issued the command $ sudo su - (to drop me to root, because we had virtually erased the root password from memory) and voila!  I got this instead:

.... You are not in the sudoers file.  This incident will be reported ....

Not just on one system, but on two (2) different servers!

What are the lessons learned here?

1.  Implement a sudo list with levels of escalation.  It will help you identify the culprit immediately.

2.  Identify known vulnerabilities of key programs such as vi.  By now you should know that a user who can run vi with escalated privileges can easily drop to a root shell (a shell escape such as :!/bin/bash from inside a sudo'ed vi is all it takes)!

3.  Always be prepared to suspend all users with sudo capability!  This will show your users that you are serious about controlling which users get escalated privileges, to which programs, and to what extent.
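Lesson 1 can be expressed directly in the sudoers file.  A minimal sketch with hypothetical group and command names (nothing here is from the actual incident; always edit with visudo, never with a plain editor):

```
# Hypothetical groups: developers get a narrow command list, systems
# engineers get full escalation, and every sudo command is logged.
Cmnd_Alias DEPLOY = /sbin/service httpd restart, /usr/bin/tail

%devs    ALL = (root) DEPLOY
%syseng  ALL = (root) ALL

Defaults logfile=/var/log/sudo.log
```

With a per-group command alias and a log file, the question of "who performed what on where" answers itself from the audit trail.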

SUDO is good, but only if it serves you right and is used with the right mindset.  If you think it is a solution to rein in fire-at-will, power-hungry tech guys, then you are mistaken.  Better to have a PASSWORD VAULT!  I think you know how the rest of the story goes from here.

What I learned from DDoS attacks recently

It's a real pain to have your system constantly under increased levels of attack from all points of entry.  But that is the challenge!  To be able to address issues as soon as they rear their ugly head.

In my recent dealings as a systems engineer for an e-commerce site, I had to re-learn everything there is to know about the limits of what you can do in the domains you control, and in those you don't.

In this post I am going to give you my insights on how things can go terribly wrong, and on the things that we failed to realise soon enough.

I.  Patching Session

It is common knowledge to all systems admins what the true worth and importance of patch management is.  Patches are released to address known vulnerabilities in a software package installed on the system; it could be a port exploit, or a known bug that allows an attacker to take control of a feature or function that the package is expected to perform.  In my seat, patches were never released, owing to a lack of understanding and of a clean management handle for admins to do their job.

If a patch is known to break the application sitting on the system, then the patch is not worth it!  That is a sure guarantee of the troubles that lie ahead.
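One cheap safeguard before any patching session: checksum the application tree in staging before and after the patch, so you can see exactly what it touched before it ever reaches production.  A minimal sketch, with a throwaway directory and file names standing in for a real application install:

```shell
# A scratch tree stands in for the application install on a staging box.
before=$(mktemp); after=$(mktemp)
stage=$(mktemp -d)
printf 'listen 80\n' > "$stage/app.conf"

# Snapshot checksums of every file before the patch.
( cd "$stage" && find . -type f -exec md5sum {} \; | sort ) > "$before"

# The "patch" lands and changes a config file.
printf 'listen 8080\n' > "$stage/app.conf"
( cd "$stage" && find . -type f -exec md5sum {} \; | sort ) > "$after"

# A non-empty diff is the exact list of files the patch modified.
diff "$before" "$after" || true
```

On a real box you would point the find at the package's install prefix; the diff becomes your evidence when deciding whether the patch is safe to promote.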

II.  Ensuring Budget to Cover the Holes

As part of the core management team, my goal is to have my department levy the budgetary requirements needed to incorporate a great deal of resources focused on securing those information technology investments.  Hardware architecture and platform design go hand in hand, and it is best to do them right early on rather than regret it afterwards.  In recent memory, this is my biggest disappointment.   If the income generated online is comparable in magnitude to all the offline income pouring into the company basket, then you should consider acquiring the best-known DDoS mitigation provider around to help you combat the unwanted attacks that would otherwise leave your entire technical team with their hands tied.

III.  Human Investment (Incident Response Teams)

A core team of engineers, drawn from your systems/network engineering team and from your development core team, should be tapped to organise a quick action force ready to deploy whenever a problem arises.  I will caution, though, that even the most experienced systems engineers need training to counter new threats coming from the net.  There are areas of data and information security which help organise the chaotic deluge of skill-sets and shape the team to perform duties that require the following:  1) risk management, 2) threat assessment and 3) exploit forensics.    I must also warn that it takes a while to fine-tune this team, but it is worth every penny you invest in it.

IV.  The Ethical Hack

Ethical hacking may be a name spawned from the nasty reality of hackers doing more damage than good.  For me the idea is simply to identify and exploit vulnerabilities so that you can fix them before others do.  You can do so much with tools that aid in performing such activities.  However, it is likely that part I of this document is the only thing missing from your whole security plan.  In my experience working with enterprise systems, patch management has played a considerable role in a systems admin's life, aiding the process of hardening your box.

In my opinion, the whole TCP/IP stack which holds up the world wide web is the problem!  The standards on which the OSI layers were built date back several decades, and IPv4 is likewise a problem.  Countless papers have suggested the need to research the next logical successors to TCP/IP.  That would mean a whole new generation of routers and switches to handle how data, voice and image traffic will move.  I believe Nikola Tesla envisioned as much with wireless communication, without the need for cables.  However, any such shift would prompt a new generation of exploiters to test whether the new standard holds up as the defining technology.

For the time being we are limited to the tools that our current technology can accommodate.  The sad thing is that any ordinary guy with some technical knowledge can do enormous damage with the tools freely available on the net.  Therefore hacking into your own system (ethical hacking) is a must, in order to see whether those security placements will hold.   Give it your best shot!

To be continued ...

Hacking into Linux packages and fixing problems when the hammer slams!

In the years I have been working on *NIX systems (Unix, Linux), there have been a couple of instances when you need an almost "nirvanic" sense of calm to fix problems when things go wrong.

A couple of days ago a ghost revisited my team.  It had something to do with a package that had been installed using rpm.  The problem started when we upgraded the box's memory and voila!  The magic happened when we tried to fire up Apache!

It failed! Why? Because the modules Apache needed were missing.  We had very little time to react and fix the issue; we could not wait another 30 or so minutes to get things done.  Obviously the hardware engineers did a wonderful job of messing something up, or it could just have been one of those days when things go bad and there is no way of explaining what is happening.

I surveyed the situation and it immediately came to me: HACK!  It's what we do best, and a simple understanding of how things work can mean saving your company millions in precious income and/or saving your job!

Ever wonder why your test and development servers are almost identical to your production servers, if not outright clones?  Aside from the fact that code should behave as expected once tested and deployed across those environments?

The reality is simple!  Be ready to cannibalise those boxes when the time calls for it.  In my case, we did not really cannibalise anything, at least not at the hardware level.  What we did was simply fix the issue with Apache.   If you are a seasoned systems engineer, you may have already figured out what I am talking about here.
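The trick can be sketched like this, with throwaway directories standing in for the httpd modules directory on production and on an identical dev/test clone (the file names are illustrative, not from the actual incident; on RPM systems, rpm -V httpd will also report files that have gone missing from a package):

```shell
# Stand-ins for the modules dir on prod and on an identical dev/test clone.
prod=$(mktemp -d)
clone=$(mktemp -d)
touch "$clone/mod_rewrite.so" "$clone/mod_ssl.so" "$prod/mod_rewrite.so"

# List each side (ls output is already sorted, as comm requires).
ls "$prod" > "$prod.list"
ls "$clone" > "$clone.list"

# Files the clone has that prod has lost: candidates to copy back (e.g. via scp).
missing=$(comm -13 "$prod.list" "$clone.list")
echo "$missing"   # -> mod_ssl.so
```

Once you know exactly which files vanished, restoring them from the clone is a one-line copy rather than a 30-minute reinstall.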

Sunday, December 12, 2010

Fixing SVN commit failure - Cannot write to the prototype revision file of transaction

There are times when svn misbehaves, and when the going gets rough you need some initial understanding of how to fix the problem.

Example:

Transmitting file data ....svn: Commit failed (details follow):
svn: Cannot write to the prototype revision file of transaction '4615-1' because a previous representation is currently being written by this process


1. What you can do is try to reload/restart Apache - this usually refreshes the entire tree and releases any stale lock handles.

2. Use svnadmin to fix the problem - in many cases, if the first method doesn't fix it, the svnadmin tool can.

3. You may want to get a fresh copy from your main branch.  This is due in time anyway, since your local copy may no longer be consistent with the one found on the server.  Again, svnadmin plays a critical role in fixing problems like this.


To see what is stuck, list the in-flight transactions with svnadmin; a stale one (such as '4615-1' from the error above) can then be removed:

svnadmin lstxns /path/to/repository
svnadmin rmtxns /path/to/repository 4615-1

But most of the time method "1" is sufficient.

Thursday, December 9, 2010

ntop Install Fix for CentOS 5.3

If you happen to install ntop and get this error:

Default RPM install (from rpmforge)
CentOS 5.3
ntop 3.3.8-2.el5.rf

It works and runs fine when executed from the command line; however, the following happens when service ntop start is run:
Starting ntop:    Processing file /etc/ntop.conf for parameters...
Mon Aug  3 19:49:38 2009  NOTE: Interface merge enabled by default
Mon Aug  3 19:49:38 2009  Initializing gdbm databases
FATAL ERROR: Unrecognized/unprocessed ntop options...
 --user=ntop, --db-file-path=/var/ntop, --use-syslog=local3, --daemon,

run ntop --help for usage information

    Common problems:
        -B "filter expressions" (quotes are required)
        --use-syslog=facilty (the = is required)

[FAILED]


Here is the fix for it:

The fix was originally posted here; all credit goes to them.  I'm reposting it for my own convenience.
The Fix

Edit /etc/init.d/ntop and change the start() function so that the daemon is invoked with the config file first:

start () {
        echo -n $"Starting $prog: "
        # daemon $prog -d -L @/etc/ntop.conf
        daemon $prog @/etc/ntop.conf -d -L -M
        ...

In addition to this, /etc/ntop.conf needs to be edited and any spaces between an option and its value replaced with =:

### Sets the user that ntop runs as.
### NOTE: This should not be root unless you really understand the security risks.
--user=ntop
### Sets the directory that ntop runs from.
--db-file-path=/var/ntop
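The space-to-= edit can also be scripted.  A minimal sketch, assuming one --option value pair per line; it works on a scratch copy here, but on a real box you would point sed at /etc/ntop.conf (keeping a backup first):

```shell
# Build a scratch copy of the conf; a real run would target /etc/ntop.conf.
conf=$(mktemp)
printf -- '--user ntop\n--db-file-path /var/ntop\n' > "$conf"

# Replace the first run of spaces after each --option with '='.
sed 's/^\(--[a-z][a-z-]*\)[[:space:]]\{1,\}/\1=/' "$conf" > "$conf.new"
mv "$conf.new" "$conf"

cat "$conf"
# -> --user=ntop
# -> --db-file-path=/var/ntop
```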
 
and

[root@Neptune ~]# service ntop start
Starting ntop: Processing file /etc/ntop.conf for parameters...
Fri Dec 10 11:09:11 2010  NOTE: Interface merge enabled by default
Fri Dec 10 11:09:11 2010  Initializing gdbm databases [ OK ]

 
Note:  There is a pesky error (**ERROR** RRD: Disabled - unable to create directory (err 13, /var/ntop/rrd/graphics)) when trying to view the network load page. To fix this problem you have to do the following:

#cd /var/ntop
#chown -R ntop.nobody rrd

Wednesday, December 8, 2010

RHEL 6 ditches System V init for Upstart: What Linux admins need to know

With the release of RHEL 6 comes a mantra of new ways of doing things, ones that generally change the way a system administrator does his or her job.  In this post I will give you a brief backgrounder on the changes.

With Red Hat Enterprise Linux (RHEL) 6, Red Hat uses the new Upstart boot service as a replacement for the old init. In this article you'll learn about the changes to this essential Linux process, and what they mean for your work as an administrator.

The disadvantage of the old System V init boot procedure is that it was based on runlevel directories containing massive numbers of scripts that all had to be started. Upstart is event driven, so its scripts are only activated when they are needed, making the boot procedure a lot faster. A well-tuned Linux server that uses Upstart boots significantly faster than an old system using System V init.

To ease the transition, the Upstart service still works with an init process. So you'll still have /sbin/init, the mother of all services. But if you have a look at the /etc/inittab file, you'll see that everything has changed.

Understanding the changes from init to Upstart
The good news: the changes to the boot procedure in RHEL 6 are minimal. You still work with services that have service scripts in /etc/init.d, and there is still a concept of runlevels. So after adding a service with yum, you can still enable it as you are used to with the chkconfig command. Also, you can still start it with the service command.
But if you are looking for settings that you used to apply from /etc/inittab, you'll see that many things have changed. The only thing that didn't change is the line that tells your server which runlevel to use by default:
id:5:initdefault:
All other items that were previously handled by /etc/inittab are now in individual files in the /etc/init directory (not to be confused with /etc/init.d, which contains the service scripts). Below you can find a short list of the scripts that are used:

/etc/init/rcS.conf handles system initialization by starting the most fundamental services
/etc/init/rc.conf handles starting the individual runlevels
/etc/init/control-alt-delete.conf defines what should happen when “control-alt-delete” is pressed
/etc/init/tty.conf and /etc/init/serial.conf specify how terminals are to be handled
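For a feel of the format, here is the shape of the stock control-alt-delete.conf job file on RHEL 6 (check your own /etc/init for the exact contents): an event to start on, and the command to run when it fires.

```
# /etc/init/control-alt-delete.conf
start on control-alt-delete
exec /sbin/shutdown -r now "Control-Alt-Delete pressed"
```

Unlike a System V init script, there is no start/stop boilerplate; Upstart itself supervises the process and reacts to events.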

Apart from these generic files, some additional configuration is in the /etc/sysconfig/init file. Here, some parameters are defined that determine the way that startup messages are formatted. Apart from these not so important settings, there are three lines that are of more interest:

AUTOSWAP=no
ACTIVE_CONSOLES=/dev/tty[1-6]
SINGLE=/sbin/sushell

Of these, you can give the first line the value yes to have your system detect swap devices automatically. Using this option means that you don't have to mount swap devices from /etc/fstab anymore. The ACTIVE_CONSOLES line determines which virtual consoles are created. In most situations tty[1-6] works fine, but this option allows you to allocate more or fewer virtual consoles. Be aware that you should never use tty[1-8], because tty7 is reserved for the graphical interface.
Last but not least, there is the SINGLE=/sbin/sushell line. This line can have two values: /sbin/sushell (the default), which drops you into a root shell after starting single-user mode, or /sbin/sulogin, which launches a login prompt where you have to enter the root password before single-user mode starts.
With Upstart, RHEL 6 has adopted a new and much faster alternative to the old System V boot procedure. With this new service, Red Hat has still managed to keep the old management routines in place, meaning that as an administrator you can manage services almost the way you are used to, with some changes to the settings that used to live in the /etc/inittab file.