Hotsos 2013: A Personal Touch 2/4 – Second Day

Due to a cold, or “the bug” as some at the symposium called it, I had a very bad night's sleep. In the morning I was not able to follow the sessions, so I ended up having a good breakfast and releasing the new white paper and latest presentation to Hotsos for distribution.

At 13:00 I attended the presentation by Dr. N.J.G. Gunther titled “Superlinear Scalability: The Perpetual Motion of Parallel Performance”. His subjects always draw me to his presentations, although on the other track Gwen Shapira gave her first presentation, “Visualizing Database Performance Using R”, which was a subject I would also have loved to see. Neil's presentation discussed an important topic: the effect where adding servers yields better throughput than linear scaling would predict, a phenomenon Neil has dubbed “Superlinear Scalability”.

During the past couple of years he struggled to fit his USL to this phenomenon. At first he simply ignored it, but after seeing it more often he had to admit that it really exists and that his USL should be able to cope with it. After a long process he concluded that the USL still applies if he loosens the restriction and accepts negative values for the alpha (contention) parameter. In essence, an increased number of servers gives a temporary hybrid effect: throughput increases by a factor greater than the number of added threads would suggest under linear scalability. At some point you still have to face the music, and throughput degradation sets in due to coherency (the beta parameter in the USL formula). Based on the evidence he gathered from different data sets, he concluded that the USL remains valid even in situations where the “Superlinear Scalability” phenomenon occurs. As usual, Neil showed in a very scientific way that his claims were accurate, and as he always says, “Models come from God and data comes from the devil!”. If you would like to read more, you can check out his blog.
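The USL itself is compact enough to sketch in a few lines; with a negative alpha the predicted capacity briefly exceeds linear scaling before the coherency term takes over. The formula below is the standard USL, but the parameter values are purely illustrative:

```python
# Universal Scalability Law: relative capacity C(N) for N servers.
#   C(N) = N / (1 + alpha*(N - 1) + beta*N*(N - 1))
# alpha models contention, beta models coherency. A negative alpha
# produces the superlinear region; these parameter values are made up.

def usl_capacity(n, alpha, beta):
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

for n in (1, 2, 4, 8, 16, 32):
    c = usl_capacity(n, alpha=-0.05, beta=0.002)
    print(f"N={n:2d}  C(N)={c:6.2f}  ratio C(N)/N={c / n:.2f}")
```

With these toy values the ratio C(N)/N stays above 1 up to N=16 (superlinear), and by N=32 the beta term dominates and the ratio falls below 1: the point where, as Neil put it, you have to face the music.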

Dr. N.J.G. Gunther at Hotsos 2013

After Neil's presentation it was my turn to give my own presentation, as I mentioned earlier, titled “‘Method GAPP’ Used to Mine OEM 12c Repository and AWR Data”. In my presentation I showed how Method-GAPP can use data from the Oracle Enterprise Manager 12c repository to pinpoint bottlenecks in your end-user process response times. OEM usually sits at a very central place in the system, so the repository contains almost all the input the method needs. In the method it is important to have your data synchronized on timestamps, and using the data from the OEM repository basically does this job for you. In most of the Method-GAPP projects I have been involved in, the data collection and synchronization steps took the most time, so having OEM do this can save a lot of valuable time in these kinds of projects. Besides the use of OEM, I gave examples of the power of the method with factorial analyses of AWR SQL-statement response-time data and AWR event data. The graph below shows, in a very simplified way, what Method-GAPP can do in its basic form with the primary components as defined in the method.

Simplified Capabilities of Method-GAPP

Next to this, Method-GAPP can use secondary components, as defined in the method, to find, for example, which SQL causes the biggest amount of variance in the end-user response time (as Cary said, the variance is much more important than the mean). The picture below compares the factorial analysis explaining the response time of the “Warehouse Query”, using data from a test in which different transactions were running, with the Top Activity SQL from a test in which only the “Warehouse Query” was running.

Comparison between the factorial analysis and the OEM 12c most-used SQL

The picture clearly shows that although the factorial analysis was done on data with a mixed workload of transactions, it can still find the SQL involved in one specific transaction. This shows how powerful the data mining actually is.
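The idea behind this kind of analysis — ranking components by how strongly they explain the variance in end-user response time — can be illustrated with a small sketch. This is not Method-GAPP itself, just a plain correlation ranking on simulated data in which one of three hypothetical SQL components dominates the response time:

```python
# Illustrative sketch (not Method-GAPP): rank hypothetical SQL
# components by how strongly they correlate with the end-user
# response time. All data below is simulated.
import random

random.seed(42)

# Per-sample times (seconds) for three hypothetical SQL components.
samples = [tuple(random.uniform(0, 1) for _ in range(3))
           for _ in range(200)]
# Response time dominated by component 1, plus a little noise.
response = [0.2 + 3.0 * a + 0.1 * b + 0.05 * c + random.gauss(0, 0.02)
            for a, b, c in samples]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

for i in range(3):
    r = pearson([s[i] for s in samples], response)
    print(f"SQL component {i + 1}: correlation with response time = {r:.3f}")
```

Even in this mixed “workload” the dominant component stands out immediately, which is the same effect the comparison above demonstrates on real AWR data.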

At the end of the presentation I got several questions, both from the audience present locally and from the audience attending remotely. A question I got many times was which tool to use to measure end-user response time. One of the tools I have seen to be very helpful here is Oracle Real User Experience Insight (RUEI). The newest version integrates very well with OEM 12c, which makes it even more suitable for use with Method-GAPP.

Based on the reactions from the audience and the people I spoke with afterwards, like Neil Gunther and Cary Millsap, and of course my own feeling, I think the presentation simply contained too much information to explain the whole method at this level of detail in one hour. In the future I will break it down into smaller presentations that each cover a part of the method, so I can explain the steps in more detail. The presentation can be found here, and the new white paper can be found here.

After my presentation I attended the presentation by Kellyn Pot’Vin titled “EM12c: Metric Extensions — Designing, Deploying and Dynamic Success”. In her presentation she showed how the new OEM 12c feature “Metric Extensions” can be used. Basically, Metric Extensions replace the good old “User Defined Metrics”. The big advantages of Metric Extensions over User Defined Metrics are:

  • A library to store and view them.
  • Support for a development cycle (dev, test, deploy).
  • Versioning.

    Kellyn Pot'Vin presenting at Hotsos 2013

For more information on Metric Extensions, see also the Oracle documentation:

After Kellyn's presentation I attended Kyle Hailey's presentation titled “Database Virtualization Comes of Age”. Kyle talked about database virtualization, a technology that reduces the amount of storage needed for (virtual) copies of a database: it basically stores only the original database plus the blocks that have changed compared to that original. Many organizations keep a lot of copies of the production database for test purposes, and the database virtualization concept is aimed at exactly these situations. Kyle showed that CloneDB is a very cool tool to use; you can read more about it at:
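The storage saving comes from copy-on-write at the block level. A toy sketch of the concept (this is not CloneDB itself; the class and names are made up for illustration):

```python
# Toy copy-on-write clone: a "virtual copy" stores only the blocks
# that differ from the shared base image. Not CloneDB -- just an
# illustration of the idea behind database virtualization.

class VirtualClone:
    def __init__(self, base_blocks):
        self.base = base_blocks      # shared, read-only base image
        self.delta = {}              # block number -> changed contents

    def read(self, block_no):
        # Serve changed blocks from the delta, the rest from the base.
        return self.delta.get(block_no, self.base[block_no])

    def write(self, block_no, data):
        # Only the modified block is stored privately in the clone.
        self.delta[block_no] = data

base = ["block-%d" % i for i in range(1000)]   # one shared base image
clone = VirtualClone(base)
clone.write(7, "changed")
print(clone.read(7))        # "changed"  (served from the delta)
print(clone.read(8))        # "block-8"  (shared with the base)
print(len(clone.delta))     # 1 -- the clone's private storage cost
```

A test copy that touches only a handful of blocks therefore costs only those blocks, no matter how large the production database is.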

Kyle Hailey presenting at Hotsos 2013

After this long second day I had a nice talk with Kyle Hailey, Neil Gunther and Karl Arao. Karl told us about his extensive testing on some big systems. It was great to see that Karl had gathered so much data from different systems, including Exadata systems. You can read more about it in the blog post “cores vs threads, v2 vs x2” at

Dr. Neil Gunther, Karl Arao and me

For dinner that evening Neil invited me to join him for Japanese food. The dinner was great and we had a good chance to catch up again as old friends (thanks again, Neil, for the invitation!).

After dinner we peeked into the ongoing Hotsos party. This time Hotsos had organized a kind of retro evening with all kinds of games. After the party we had a drink with several other speakers. It is always fun to be able to have fun and great discussions in the field with so many of you…
