At Oracle Open World, after Larry Ellison's keynote, I had the opportunity to see the machines everybody was talking about. Marco (the XMLDB guy) found out where they were, so Marco and I went to Moscone North and found the machines in their full glory… (a bit of an exaggeration, but to be honest, as a former IBM employee it still gives me a good feeling to see this kind of technology…).
The moment we got really close to the machines, a security officer signaled that we were not allowed to cross a line I hadn't noticed before. So we kept looking at the machines from a small distance and took plenty of pictures…
After a short while, the security guy had a change of heart and let us get nearer to these holy machines. We made it: we were there. I took a nice picture of the opened machines.
While standing so close to the machines, I had a nice chat with Doug Cackett, Oracle Director of BI&W Architecture. He told me more about the technical details of the machine and how the machines were designed for big data warehouses. The main advantage of the machine is its high scan speed through tremendous amounts of data (read: terabytes). I also had a small discussion with him about the fact that Oracle was working with HP, and whether Oracle was now tied to HP on this point. I don't want to make any official statements here, but I would like to give my own opinion.
Doug Cackett and me having a nice conversation
Personally, I can't believe that Oracle will work only with HP; I think that eventually other vendors like IBM and Sun, and storage vendors like EMC and NetApp, will get the opportunity to enter the Oracle Database Machine market. In principle, the controllers of the so-called "cells" just have to understand the "smart scan" functionality that was introduced with the 11.1.0.7 patchset. But for now, only HP has been certified for this…
I spoke to a lot of people after the keynote. A lot of people, including myself, didn't have the right feeling about the Oracle Database Machine at first. After giving it some thought, I started to realize that the Oracle Database Machine, which internally works with "EXADATA" storage servers, has been built for a very specific goal. So it is probably not a solution for 90% of current customers, because of the high cost and the fact that this machine only comes to its full value when you are in a situation (like really big data warehouses) where you have to search through very big datasets and retrieve only a relatively small amount of data from them.
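To make that concrete, here is a minimal sketch of the kind of query where that design pays off (the table and column names are hypothetical, just for illustration): the storage cells can evaluate the WHERE predicates and project only the needed columns via smart scan, so only a small fraction of the table's bytes has to travel to the database servers.

-- Hypothetical data warehouse query: scan a multi-TB fact table,
-- but return only a small, filtered aggregate. This is the pattern
-- where smart scan shines: filtering and column projection happen
-- in the storage cells instead of on the database servers.
SELECT   product_id,
         SUM(amount_sold) AS total_sold
FROM     sales_fact                          -- assume: billions of rows
WHERE    sale_date >= DATE '2008-09-01'
AND      sale_date <  DATE '2008-10-01'
AND      region     = 'EMEA'
GROUP BY product_id;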
Personally, I know from practice that a lot of I/O bottlenecks are caused by very badly written SQL, or are simply a matter of better indexing or executing a certain query less frequently. Furthermore, a big database buffer cache can solve the I/O bottleneck in a lot of cases. I think that only when a system cannot be tuned in these ways, which is very rarely the case, do we have to look at solutions like EXADATA.
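A classic example of such badly written SQL (again with hypothetical table and column names): wrapping an indexed column in a function prevents Oracle from using a normal index on that column, forcing a full table scan on every execution. Rewriting the predicate solves the I/O problem without any special hardware.

-- Badly written: the function on the indexed column blocks index use,
-- so Oracle full-scans the orders table on every execution.
SELECT *
FROM   orders
WHERE  TO_CHAR(order_date, 'YYYY-MM-DD') = '2008-09-24';

-- Better: compare against the column directly, so a plain index
-- on order_date can be used and the I/O drops dramatically.
SELECT *
FROM   orders
WHERE  order_date >= DATE '2008-09-24'
AND    order_date <  DATE '2008-09-25';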
Don't get me wrong here: I am sure EXADATA can be a solution, and I have been to customers where it would be. So the specific situation matters a lot, and simply saying it is good or bad, or rather that you love it or hate it, is not that easy. As we were about to leave, we were pleasantly surprised with a nice goody: the EXADATA cap, which seemed to be a real collector's item…
As a final touch, I would like to add that we all know the amount of data in companies keeps increasing, so technologies like EXADATA will become more important in the end. The fact that a lot of data is stored unstructured will also make it harder to access that data in the future. Possibly EXADATA can be of help in these situations. What will happen, we will see…