Posted by shannonclark on May 7, 2003
On why I monitor
or how hard it can be to explain just what I know how to do and how I do it, the tale of an ongoing hacker attack
This evening, I began to notice that my network access was degrading: checking my email from one of my servers (the local one) was not working well, and I appeared to be losing some services. For a time, I thought something was wrong with my machine (it is a Windows 98 box, after all, and does periodically get into an odd state). However, when other people in my office began to have the same types of problems, and when my developer mentioned that he too was having “sluggish” network access, I began to investigate.
First check: was our website down? We’ve had our DSL service fail in the past; usually a quick reboot of our DSL router and we are back up and running. But in this case our website was still up, though we seemed to be losing performance by the minute.
Then I tried to connect to my server and telneted in (yeah, if I were a real serious geek it would be ssh; sorry, haven’t gotten that to work well on my Windows machine, will do it one of these days, but everything internal to my network is hiding behind layers of firewalls in any case). I was able to log in as myself, but upon attempting to actually run any commands I began to see the problem – the machine was running out of processes to fork into.
I did a quick ps -auxwww (okay, okay, I learned Unix years ago, over a decade ago; I still like the Berkeley formatting and flags) – lo and behold, I think I see our problem… the same process (Junkbuster) had been spawned an uncountable number of times.
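For the curious, a quick way to put a number on that kind of runaway is to count the copies by name. A minimal sketch (the `count_procs` name is mine, and `junkbuster` is just the example target):

```shell
# Count running copies of a process by name. A runaway like our
# Junkbuster incident shows up as an absurdly large count.
# "grep -v grep" drops grep's own entry from the process listing.
count_procs() {
  ps auxwww | grep -v grep | grep -c "$1"
}

# e.g. count_procs junkbuster
```

The classic `[j]unkbuster` bracket trick in the pattern would do the same self-filtering job as the `grep -v grep`.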
I tried to su so I could kill the errant processes, but no such luck – no process slots available to spawn my root shell. I also could not log in on the console (I opened up our network room and attempted to log in at the console directly).
So, drastic measures time: I power cycled my server (before you geeks get too concerned, it is a server appliance, not many moving parts, very robust – and yes, it did take a while doing a good fsck on boot). While it was rebooting, I also took it offline from our network, and we reconnected one of our Macs to the server’s outbound connection so we could research Junkbuster and the attack while the server rebooted (nothing like multitasking when you have to, and the risk was minimal – very few attacks on a Unix system would also penetrate, or even bother, a Mac OS 9.x system with no server processes running).
However, here we met with our first hurdle – no clear signs of a similar attack, that is, of something exploiting Junkbuster for nefarious ends. So I took over the searching and began to piece together the puzzle somewhat. It appears likely that the version of Junkbuster packaged with our OS (it is designed to filter cookies and advertising and runs as a proxy server, though we do not in fact use it) had a known potential vulnerability; that said, it was claimed, at least, that no attacks on it were known to exist in the wild (yeah, right – I could see one happening at that very moment).
But at least it began to point the way forward. On rebooting I went looking through my server, following my typical “troubleshooting” methodology. I won’t give away all of my secrets here 🙂 but here are a few of the steps I took.
1. Looked immediately at the major log files on my server, checking the messages logged just prior to the reboot for any signs of the attack.
2. Confirmed, as best as possible, that nothing major had been modified as a side effect of the attack (i.e. no processes were running that I did not mean to have running, my installation/boot config scripts were unmodified, etc.).
3. Located the Junkbuster configuration to look over what it was doing (and checked the Junkbuster logs in case they held any information – nothing).
4. Found all the places that might spawn the Junkbuster process, and set them up so that it would not be started on reboot (in any boot configuration).
5. Stopped the one Junkbuster process that was still running.
6. While monitoring the server for suspicious behavior, carefully reconnected it to the outside world and watched what happened to my network traffic loads and the processes on my server. With the receiving process halted, I did not experience any problems.
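Steps 4 and 5 above amounted to something along these lines – a sketch only, assuming a SysV-style rc layout (the `neutralize_boot_links` name and the paths are illustrative, not the exact commands I ran on this appliance):

```shell
# Rename every S##<name> start link to K##<name> under a SysV-style
# rc directory tree, so the service is not respawned in any runlevel
# on the next boot. The rc layout is an assumption; appliances differ.
neutralize_boot_links() {
  rcroot=$1
  name=$2
  for link in "$rcroot"/rc*.d/S*"$name"; do
    [ -e "$link" ] || continue
    mv "$link" "$(dirname "$link")/K$(basename "$link" | cut -c2-)"
  done
}

# e.g. neutralize_boot_links /etc/rc.d junkbuster
# then stop the one still running, e.g.:
#   /etc/rc.d/init.d/junkbuster stop
```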
Then I took more serious steps: I sniffed the packets on my server and looked for signs of the attack (which, correctly as it turns out, I assumed would be ongoing). It was (and indeed is as I write this), and I was able to detect the two IP addresses the attack appears to be coming from.
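The sniffing step boils down to ranking source addresses by packet count. A sketch of that triage (the `top_talkers` name, the interface, and the port are illustrative assumptions; the tcpdump capture itself needs root):

```shell
# Rank source addresses by how often they appear in tcpdump's
# one-line output (read on stdin). We grab the field just before
# ">", which is the source "ip.port", then strip the trailing
# port field to leave the bare IPv4 address.
top_talkers() {
  awk '{ for (i = 2; i <= NF; i++) if ($i == ">") print $(i-1) }' \
    | cut -d. -f1-4 | sort | uniq -c | sort -rn | head
}

# Capture 200 packets aimed at the proxy port and see who is
# hammering it (interface eth0 and port 8000 are assumptions):
#   tcpdump -n -i eth0 -c 200 port 8000 | top_talkers
```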
Looking these IP addresses up on the Internet, I was able to obtain the 800 number for the ISP that manages these addresses. On calling that number and getting through to an operator (the most technical phone operator I have ever spoken to – it speaks well for MCI, as it turns out), I was able to obtain the email address for reporting security-related problems.
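The lookup itself is basically a whois query. Something like this pulls out the owner and abuse-contact lines (ARIN-style field names are an assumption, and the address shown is an RFC 5737 documentation example, not the actual attacker):

```shell
# Filter whois output (fed on stdin) down to the network owner and
# abuse-contact lines. Field names follow ARIN's format; RIPE and
# the other registries label these fields differently.
abuse_contacts() {
  grep -i -E 'OrgName|NetName|Abuse'
}

# e.g. (needs network access; example address only):
#   whois 198.51.100.23 | abuse_contacts
```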
Then it was simply a matter of generating a log of the attack, which I did and sent in the body of an email to the address provided.
They have now responded to me with an automated response and with a ticket number, and that is where things stand for the moment.
Geeky enough for you?
As I told this to my mom this evening, she reminded me that I have been managing a server on the Internet for well over a decade. Usually I forget this about myself; it is so much second nature for me that I forget just how technical I can be when I need to be, and how much I know how to do in a very short time (did I mention that I checked a list of known attacks for this attack’s signature, to see if I could determine what it is that is attacking me?).
Anyway, now it is almost 11pm, and I have not yet eaten dinner, so it will be a very late night for me indeed – not sure where I will end up, probably at Tempo, a local 24hr diner which I seem to be going to frequently these days.
What’s more, I could be even more technogeeky about this than I have been in this entry. Scary.