
emsgDmpDeviceDisconnected

emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected emsgDmpDeviceDisconnected

The above spam is what my nights at work have been like for the last couple of weeks: tons of this one IPCC alert, “emsgDmpDeviceDisconnected”. It almost always ends up being a trash event; the device comes back with full connectivity pretty much every time we check up on it. Yet I get to build a 24h action ticket every time one of these stupid events rolls in, and I get to wake someone up over it. Why I can’t just check up on these myself and avoid that whole shitstorm is beyond me. Sucks for the on-call engineer too. Last week we had to call the poor guy 3-5 times a night, every night of the week.
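If it were up to me, the first pass would be something like the sketch below: recheck the device a few times before anyone gets paged. This is just a minimal sketch assuming a plain ICMP ping is a good-enough “is it back?” test; the hostname, retry counts, and the escalate() step are all made up for illustration.

```python
import subprocess
import time


def is_reachable(host: str, attempts: int = 3, wait: int = 30) -> bool:
    """Ping the device a few times, pausing between tries, before
    declaring it actually down. (Unix-style ping; -c sets the count.)"""
    for _ in range(attempts):
        result = subprocess.run(
            ["ping", "-c", "3", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            return True
        time.sleep(wait)
    return False


def escalate(host: str) -> None:
    # Hypothetical stand-in for the real step: build the 24h action
    # ticket and wake up the on-call engineer.
    print(f"{host}: still down after rechecks -- opening 24h ticket")


def handle_disconnect_alert(host: str) -> None:
    # Only escalate if the device is still unreachable after rechecking;
    # most of these alerts clear on their own.
    if is_reachable(host):
        print(f"{host}: back up, closing alert as transient")
    else:
        escalate(host)


if __name__ == "__main__":
    handle_disconnect_alert("remote-device.example.net")  # hypothetical host
```

Even a dumb recheck like that would clear most of these trash events before the 3 AM phone call.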

In other news, a certain company has been getting on my nerves. We’ll call them “Badwich” for the sake of anonymity. The text behind the cut was originally a separate post, but I took pity on my friends list.

So last night the UK/Europe sites for a major corporation we monitor went down. Like… all of them. The main ATM circuit that routes traffic from all the remote sites in the UK/Europe region went down, and we lost connectivity with several dozen locations. So no internet/email/telephones for them. After going through the normal song and dance of chasing telco and such, we get a call back from them. They dispatched an engineer to work on the device. There’s this cable that more or less connects this company’s remote offices to the outside world. After the engineer unplugged the cable and plugged it back in, all the sites came back up. Magic! It surprised me that the company wouldn’t at least jiggle all the cables when we asked them if they had checked power and equipment. But whatever, the problem is solved… not a big deal anymore.

So today a handful of the devices that went down last night crashed again, but not all of them. This time the main ATM circuit registered as up/up, but a few offices in the UK and a big chunk of France were down. We do the telco song and dance again, and this time we get a call from the company’s main “IT-stuffs” contact… we’ll just call him “Dan”. Dan tells us that he’s got service back to the remote locations, and if this happens again we should “hammer the circuit until they come back up”. O_o. Are you serious? For the laypeople, this basically means flipping the power switch for all of the UK and Europe intermittently (including sites that are up and working fine) until we’re satisfied that everything is up. This is a horrible fix! There’s obviously a problem with the device or its cabling, and it really needs to be looked into, not band-aid fixed with internet duct tape.
