It’s Day Two of the Chase.com mega crash, and still it is unclear why the site is in meltdown.
The New York Times reports that the site is suffering from some sort of “computer flaw.” Here’s the explanation (if you can call it that):
Bank officials have ruled out an online attack. Instead, they said they believed that a software error caused by a “third-party database company” had corrupted its computer systems, rendering them unable to process customer log-ins at Chase.com. “This resulted in a long recovery process,” [a bank spokesman] said.
So there’s a “third-party database company” that has some sort of protocol within the login procedure? And the database company had “corrupted” JPM’s “computer systems”? I can’t make sense of it. Help.
Not surprisingly, the outage has affected millions, and it puts what I would consider a real dent in online banking. Online banking is great — until it doesn’t work. And when it doesn’t, it becomes Microsoft Windows with your money, and we all know how reliable Windows is. To wit, again courtesy of the Times:
“A system outage of this length communicates to me that they really don’t have a handle on their systems,” said Vic Caterina, a Chase customer in Chicago who does all of his banking online. “My relationship with Chase is now under reconsideration.”
But I digress. Remember, there’s a Dr Brown’s Diet Cream Soda (the single greatest soda ever created, other than Fanta in Europe) for anyone who can nail down why Chase.com is broken.
Chase is blaming Oracle (which is, in turn, blaming someone else):
http://www.theregister.co.uk/2010/09/20/chase_oracle/
“According to Curt Monash, a database industry commentator, Chase said a third party supplier’s database software corrupted systems information and this prevented customers logging in to Chase.com.
“Monash said JP Morgan Chase runs its user profile Oracle database on a cluster of eight Solaris T4520 servers, each with 64GB of RAM, with the data held on EMC storage. El Reg is told that Oracle support staff pointed the finger of blame at an EMC SAN controller but that was given the all-clear on Monday night.
“Monash subsequently posted that the outage was caused by corruption in an Oracle database which stored user profiles. Four files in the database were awry and this corruption was replicated in the hot backup.
“Recovery was accomplished by restoring the database from a Saturday night backup, and then by reapplying 874,000 transactions during the Tuesday.”
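If Monash has it right, that last step is textbook point-in-time recovery: restore the last known-good full backup, then roll the database forward by replaying the logged transactions, stopping short of the corruption. It also explains why the hot backup was no help: a hot backup mirrors writes as they happen, so the four corrupt files were faithfully copied into it. Here’s a toy sketch of the roll-forward idea in Python. Everything in it is hypothetical — the file names, the log format, the account/balance schema, all of it — and the real recovery at Chase would have been done with Oracle’s own tooling (RMAN), not anything hand-rolled like this:

    # Toy point-in-time recovery: restore a full backup, then re-apply
    # logged transactions up to a cutoff. All names and formats here are
    # hypothetical; this shows the concept, not Oracle's mechanism.
    import json
    import shutil
    from pathlib import Path

    def restore_backup(backup_file: Path, live_db: Path) -> None:
        # Overwrite the corrupted live database with the last good full backup.
        shutil.copyfile(backup_file, live_db)

    def roll_forward(live_db: Path, tx_log: Path, cutoff: float) -> int:
        # Re-apply every logged transaction stamped before the cutoff
        # (i.e., before the corruption crept in). Returns the count replayed.
        db = json.loads(live_db.read_text())
        replayed = 0
        with tx_log.open() as log:
            for line in log:
                tx = json.loads(line)            # one JSON transaction per line
                if tx["timestamp"] >= cutoff:    # stop short of the bad writes
                    break
                db[tx["account"]] = tx["new_balance"]
                replayed += 1
        live_db.write_text(json.dumps(db))
        return replayed

    # Saturday night's cold backup, rolled forward transaction by transaction:
    restore_backup(Path("saturday_full.bak"), Path("profiles.db"))
    print(roll_forward(Path("profiles.db"), Path("tx.log"), cutoff=1285021800.0))

The moral, such as it is: hot backups and replication protect you from hardware dying, not from bad data, because they copy the bad data just as diligently as the good. For that you need a cold backup and a transaction log long enough to replay — which, per El Reg, meant grinding through 874,000 transactions and a recovery that ran into Tuesday.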