Every company suffers from IT problems, but it’s not every day that your server crashes and brings down the entire business with it.
Franchise operation Appliance Tagging Services, which tests and tags electrical systems for other companies, suffered a near-death experience when a system designed to produce reports and invoices for clients failed, leaving the company with crippled cashflow.
Co-founder Sarah Allen says the incident serves as a lesson to make sure you have a contingency plan for everything – especially for critical IT services.
How’s the business been travelling?
Appliance Tagging is going well and our franchising arm is doing well too. We’ve picked up a large number of national clients. It’s not without its day-to-day challenges, but it looks like we’ll have grown 20-30% in each of the past three to four years. The nature of our business is that when we pick up a new client it adds to our top-line revenue, and we’re recruiting new franchisees regularly as well.
What happened to cause the server problems?
We picked up a rather large client and one of their requirements was a solid reporting system. They wanted to develop summary reports, and a whole bunch of compliance reporting needed to be done. Our systems at the time couldn’t do it, so we had to sit down and create a new platform.
At the same time we wanted to create something that could benefit our clients. So we started asking questions about how we do that and from an IT perspective what sort of infrastructure we needed.
We started from scratch and designed a system that we felt could last us for years.
Can you describe the system you wanted to build?
Our users are able to go online to see their reports because we run all those systems through our CRM. It sounds quite bizarre but it’s part of our database. If they’ve had testing over the past four years they can go online and check the report.
Did you start seeing problems when it launched?
Even getting something to launch was a problem. We had a horrible server from HP and it simply didn’t work. We tried to load stuff up, it kept crashing, and everyone was pointing fingers.
Something that should have been a simple process became a nightmare, and instead of a nice streamlined system it was just manual and horrible.
How long was the server down for?
It was down for about four months. It would crash four or five times a day, which stopped everything. You have 10 staff standing around saying they can’t do anything, and of course we couldn’t do invoices.
What did HP say?
They were saying that it wasn’t their problem, it was the installer’s problem, and then of course the installer said it was HP’s problem. We just weren’t getting a resolution from anyone.
This would have caused problems for your cashflow.
We went for about a month without doing a single invoice. The rest we did manually – we had staff here at 1am doing invoices.
So the invoices were part of the system as well?
Our entire system was geared up to be automated: we service a client, and the invoice and so on are generated automatically. But as soon as you have errors, everything grinds to a halt.
We were doing manual reports, which took 15 times as long. It was horrible.
How bad did it get?
Cashflow was definitely affected. It got quite bad, even to the stage of wondering how we were going to pay wages. It was a real struggle and we were relying on the next invoice to get us through.
But eventually we got a new server and got out of it. We definitely had some tough times though.
What’s the main lesson you think you gained from this whole experience?
Always have a plan B or a contingency plan.
There are so many unknowns when you’re looking at upgrading your IT system; so many little things can go wrong, and one or two mistakes can take down your entire system. Because so many IT systems depend on each other, there’s a chain of events that occurs when something goes wrong.
Have a contingency plan, and test everything before you assume it’s just going to work.