Well, it's pretty easy to blame your database vendor and assume that errors *just* occur on the system. But when you say the cluster crashed, something DID go wrong!
The root cause of the intermittent access was a complex error in the database cluster that was not previously known to us or our database vendor. It surprised us because we had not made any changes to our system. The error just appeared.
And what steps are the folks taking?
As we previously announced to you, we are halfway through the rollout of Mirrorforce, our new data center architecture. Contrary to some reports, the full deployment of Mirrorforce, which will happen in Q1, would not have prevented this problem. Mirrorforce is a standby, mirrored, replicated data center. If we lost the West Coast data center because of a major hardware failure, a natural or man-made disaster, or a terrorist attack, the new data center would automatically take over.
We are confident that once fully implemented, Mirrorforce will represent long-term value to our customers. But an extremely rare, undocumented software issue is not something that even the most robust systems can prevent 100% of the time. No system has 100% performance, and no software is bug-free.
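For readers wondering what "automatically take over" usually means in practice, here's a rough sketch of how a standby data center gets promoted after the primary stops answering health checks. This is my own illustration, not Salesforce's actual Mirrorforce implementation; the URLs, the failure threshold, and the promote call are all hypothetical placeholders.

```python
# Minimal sketch of automatic failover to a standby site.
# All endpoints and thresholds below are hypothetical, for illustration only.
import time
import urllib.request

PRIMARY_HEALTH_URL = "http://primary.example.com/health"    # hypothetical
STANDBY_PROMOTE_URL = "http://standby.example.com/promote"  # hypothetical
FAILURE_THRESHOLD = 3           # consecutive failed checks before failing over
CHECK_INTERVAL_SECONDS = 10


def is_healthy(url: str) -> bool:
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


def promote_standby() -> None:
    """Tell the standby data center to take over (placeholder request)."""
    urllib.request.urlopen(STANDBY_PROMOTE_URL, data=b"", timeout=5)


def monitor() -> None:
    """Poll the primary; promote the standby after repeated failures."""
    failures = 0
    while True:
        if is_healthy(PRIMARY_HEALTH_URL):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                promote_standby()
                break
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    monitor()
```

Note that this kind of failover protects against a site going dark, not against a software bug that replicates to both sites, which is exactly the point the statement is making.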
Anyway, I know the DBAs are busy, but I'm sure they're also glad the issue finally got heard. Read the rest of the statement here.