Share Your IT Horror Story

We’ve all been there… submit your story below!

If you work in IT, chances are you’ve lived through an IT horror story or two…or three. Whether it was *cough* your fault *cough* or you witnessed it, tell us what happened!

The top horror stories will be featured in our October newsletter.

Submit your nightmare in the form below by October 1.

Now that you’ve submitted your nightmare, read this IT horror story:

“In 2011, the company I worked for purchased another company that dealt in power generation. They had about fifteen electrical engineers on staff who all “needed” admin access, and the rest of the staff (about forty in total, including machinists and admin staff) had admin access as well. For the two years prior to the acquisition, they had been running with no IT, only a single guy who would come in for a day once or twice a month to handle emergency issues that had been written down. Patches hadn’t been applied in over nine months on some stations, there was no inventory, and at least two PCs weren’t running licensed software. Some systems had been home-built by the same IT guy, and one of them had an SLI setup so he could run CAD faster (laughable). And this is just the PCs!

The server room was four or five systems, all dedicated to a Windows Server 2003 AD box, a file/print server, and a couple of application boxes (Quicken was one). They hadn’t been patched. Ever. The patch panel was split between the phones (a VoIP system using PoE) and Cat5 data. Nothing was routed, there was no cable management, and nothing was labeled. The company had bought into a three-year T1 contract that marginalized their data in favor of the phones, resulting in performance worse than DSL at a cost of more than $1,200/month, and the firewall was a simple Juniper box with very complex rules, right up until the end, where it did an all/all/all. The UPS was amazing! The engineers had taken three batteries from solar installations (easily three feet tall and over 200 lbs each), chained them together, added a 1000W converter (the kind you might find in a car), and chained that to a Minuteman UPS and two small APC units (the converter wasn’t fast enough to switch the power over during an outage). All of this combined for a total runtime of three days.

No racks for the servers, just tables. One of the servers, their “most important” one, sat on an old desk. No cooling, either, and it was all stored in a closet that was accessed every day by everyone because the paper boxes were in there, too.

My task? Fix it all. In two weeks. I worked until 11 p.m. or later on this project, getting stuck inside the building once because of the alarm system, and at the halfway point my manager took the project away from me and declared that someone else, 2,500 miles away (Denver to Alaska), would take it over. He arrived a week later, and I was released for being unable to perform quickly enough.” ~Grey H.


Join the Conversation

2 comments

  1. joe elsaesser

    A place I worked once had a wise Unix admin. When he was told that certain non-IT managers needed the root
    password, the Unix admin informed these conceited clowns that they would be put on the on-call rotation. They
    backed down after that.

    1. Zane Schweer

      Thanks, Joe!