After a good night's sleep I attended the first session of the morning, given by George Djerdj Srdanov, titled “How to Get the Most Out of Your I/O Subsystem?”. George dived deep into the I/O subsystem, covering the following important subjects:
Different RAID configurations (Read and Write penalties)
Block alignment and recommendation
George Djerdj Srdanov presenting at Hotsos 2013
The presentation was good and prompted me to check a few things at a customer I am currently working for. In this customer's case, the block alignment of the online redo log files might be a point of investigation. Continue reading →
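The RAID read/write-penalty arithmetic from the session can be sketched in a few lines. The classic per-write penalties (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6) are textbook values; the disk counts and workload mix below are illustrative assumptions, not figures from the presentation:

```python
# Classic back-end I/Os consumed per front-end write, per RAID level.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def effective_iops(raw_iops, write_fraction, raid_level):
    """Front-end IOPS a RAID set can sustain, given the combined raw
    back-end IOPS of all its disks.

    Each front-end read costs 1 back-end I/O, each write costs `penalty`:
        raw = front * (1 - w) + front * w * penalty
    """
    penalty = WRITE_PENALTY[raid_level]
    return raw_iops / ((1 - write_fraction) + write_fraction * penalty)

# Example: 8 disks of 150 IOPS each, 30% writes (made-up numbers).
raw = 8 * 150
print(round(effective_iops(raw, 0.3, "raid5")))   # RAID 5 pays 4x per write
print(round(effective_iops(raw, 0.3, "raid10")))  # RAID 10 pays only 2x
```

The same raw spindle capacity ends up noticeably lower on RAID 5 than on RAID 10 once the write fraction grows, which is exactly the kind of trade-off the session covered.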
Due to a cold, or “the bug” as some at the symposium called it, I had a very bad night's sleep. In the morning I was not able to follow the sessions, so I ended up having a good breakfast and releasing the new white paper and latest presentation to Hotsos for distribution.
At 13:00 I attended the presentation of Dr. N.J.G. Gunther titled “Superlinear Scalability: The Perpetual Motion of Parallel Performance”. Because of his subjects I like to attend his presentations, although on the other track Gwen Shapira gave her first presentation, titled “Visualizing Database Performance Using R”, a subject I would also have loved to see. Neil's presentation discussed an important topic: the effect where increasing the number of servers gives better throughput than linearity would predict, a phenomenon Neil has baptized “Superlinear Scalability”. During the past couple of years he struggled to make his USL fit this phenomenon; at first he simply ignored it, but after seeing it more often he had to admit that it really exists and that his USL should be able to cope with it. After a long process he came to the conclusion that the USL still applies if he loosens its limitation and accepts negative values for the alpha parameter (contention). It basically means that by increasing the number of servers you temporarily get a kind of hybrid effect: the throughput increases by a factor greater than expected from the number of added servers under linear scalability. At a certain moment you still have to face the music, and throughput degradation starts to appear due to coherency (the beta parameter in the USL formula). Based on proof gathered from different data sets, he concluded that his USL is still valid, also in situations where the “Superlinear Scalability” phenomenon occurs. As usual Neil showed in a very good scientific way that his claims were accurate, and as he always says: “Models come from God and data comes from the devil!”. If you would like to read more, check out his blog at: http://perfdynamics.blogspot.nl/2012/11/hotsos-2013-superlinear-scalability.html
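The USL Neil relaxes here is, in its usual form, C(N) = N / (1 + α(N−1) + βN(N−1)). A quick sketch (the α and β values below are made up for illustration, not taken from the talk) shows how a negative α produces a temporary superlinear region before the coherency term takes over:

```python
def usl_capacity(n, alpha, beta):
    """Gunther's Universal Scalability Law: relative capacity at n servers.
    alpha models contention, beta models coherency; allowing alpha < 0
    gives the superlinear case discussed in the talk."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

# Illustrative parameters only (not from the presentation):
alpha, beta = -0.05, 0.001

# Superlinear region: capacity exceeds the linear expectation of 10...
print(usl_capacity(10, alpha, beta))

# ...until coherency "faces the music" at larger n, far below linear 100:
print(usl_capacity(100, alpha, beta))
```

With these numbers, 10 servers deliver more than 10x the single-server throughput, while at 100 servers the βN(N−1) coherency term dominates the denominator and throughput degradation sets in, exactly the hybrid behaviour described above.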
Dr. N.J.G. Gunther at Hotsos 2013
After Neil's presentation it was my turn to give my own presentation, as I mentioned earlier, titled “‘Method GAPP’ Used to Mine OEM 12c Repository and AWR Data”. Continue reading →
It is the first of March 2013, finally… I will travel to Dallas for (one of) the best Oracle performance symposia in the world, Hotsos 2013. The flight will take me from Amsterdam to Philadelphia and from Philadelphia to Dallas. Against all odds I will not be travelling alone: an old AMIS colleague and friend, Marco Gralike, and his colleague will be on the same flight (even on the way back). After departure from Schiphol at 13:00 and a stop in Philadelphia, we arrive at the Omni Mandalay Hotel in Las Colinas (Irving / Dallas) at 22:30 local time… This is the real start of an awesome time at this great symposium…
Last week I got the great opportunity to present Method-GAPP again, at UKOUG 2011 (see the UKOUG 2011 presentation). This time the focus was partly on the multiple linear regression and, for the other part, especially on AWR data. The multiple linear regression makes it possible to obtain a linear equation for the end-user response time, which gives a complete breakdown of all components involved in the end-user response time, as shown in the graph below. The graph shows the test and modelling from the white paper:
Breakdown of all the involved components for the end-user response time
In the breakdown, UTILR80 is the utilization of the I/O and UTILRAU is the utilization of the CPU. The breakdown shows that REST is basically time that is always there, but it might be split into more components if the model is enhanced, so that more of the found variance of the end-user response time (R) is explained. Continue reading →
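The kind of breakdown shown above falls out of an ordinary multiple linear regression of the measured end-user response time against the component utilizations. A minimal sketch with synthetic data (the column names UTILR80 and UTILRAU follow the post; the coefficients and everything else are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic utilization samples for the two components from the post:
# UTILR80 (I/O utilization) and UTILRAU (CPU utilization).
utilr80 = rng.uniform(0.1, 0.8, n)
utilrau = rng.uniform(0.1, 0.9, n)

# Pretend the "true" end-user response time R is built from these plus a
# constant REST term and some measurement noise (illustrative model only).
rest = 0.5
r = rest + 2.0 * utilr80 + 1.2 * utilrau + rng.normal(0, 0.05, n)

# Fit R ~ b0 + b1*UTILR80 + b2*UTILRAU with ordinary least squares.
X = np.column_stack([np.ones(n), utilr80, utilrau])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)

# coef[0] approximates the REST term; coef[1] and coef[2] give each
# component's contribution per unit of utilization -> the breakdown.
print(coef)
```

Multiplying each fitted coefficient by the component's utilization in a given period yields that component's share of the end-user response time, which is what the breakdown graph visualizes.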
In a lot of cases you would like to know which SQL, wait events, metrics, etc. in AWR are important for your specific end-user process response time. It could very well be that the SQL, wait events and metrics showing up in the “Top Activity” of your OEM Grid Control and in your AWR reports are actually not the most important ones for your end-user process response time.
Once you know which share of your end-user process time is taken by the database server (the Method-GAPP primary components), you can actually use all the AWR (and ASH) information as secondary components as input to Method-GAPP (see the white paper). Basically we can simply use the “Data Mining – Explain” step in the method and create a factorial analysis as shown below (see the white paper).
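One way to picture that “Explain” step is as ranking candidate secondary components by how much of the response-time variance each one explains on its own. The sketch below is a toy version of that idea, not the actual GAPP implementation, and the metric names are invented rather than real AWR columns:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Synthetic end-user response time driven mostly by one hidden factor.
driver = rng.normal(0, 1, n)
resp = 3.0 * driver + rng.normal(0, 0.5, n)

# Candidate secondary components (names invented for illustration):
metrics = {
    "db_file_sequential_read": driver + rng.normal(0, 0.3, n),   # strongly related
    "cpu_used_per_sec":        0.5 * driver + rng.normal(0, 1, n),  # weakly related
    "log_file_sync":           rng.normal(0, 1, n),              # unrelated
}

# Rank by squared correlation with the response time (share of
# variance explained, i.e. a per-metric R^2).
ranked = sorted(
    ((np.corrcoef(v, resp)[0, 1] ** 2, name) for name, v in metrics.items()),
    reverse=True,
)
for r2, name in ranked:
    print(f"{name}: R^2 = {r2:.2f}")
```

The metric that tops such a ranking for *your* end-user process may well differ from whatever dominates the generic “Top Activity” view, which is the point of the paragraph above.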
After a long time of not being able to finish my white paper, I have finally finished it. Time constraints simply made it hard to get my whole method on paper. I really wanted to have it finished before presenting the new improvements to the method at the Hotsos Symposium 2011. In a couple of hours, at 13:00 Dallas time, I will give my talk based on the white paper, and I really hope for a packed room.
Of course I hope the audience will see its potential and that I will be able to get the message across as well as possible. I am just nervous about the demo I will try to give… As some people may recall, at Hotsos 2009 I had a big issue with my laptop and in the end started 10 minutes late, without a demo. So I really hope everything goes smoothly this time.
The presentation will also become available on the blog, but for now you can download the official Method-GAPP white paper in the download section. As a last note I would like to thank Cary Millsap and Dr. Neil Gunther for their inspiration and support.
Since I have been working on Method-GAPP (see the Method-GAPP overview presentation), I have been challenged with the task of modelling a real system rather than a lab system with a programmed load profile. The big issue with a real system is that the load profile changes all the time, and the only thing we can recognize are periods of time with a reasonably stable workload profile. For example, an OLTP system will do comparable things during production hours, from 9:30 till 11:30 in the morning and from 14:00 till 16:00 in the afternoon, but will do something totally different from 01:00 till 06:00 at night. This example may match some OLTP systems, but could be totally different for your OLTP production system. Continue reading →
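Recognizing those stable periods can be automated, for instance by flagging time windows whose workload varies little relative to its mean. A rough sketch of that idea, with invented sample numbers (this is my illustration, not part of Method-GAPP itself):

```python
import statistics

# Transactions-per-second samples taken within selected hours of the day
# (illustrative numbers only).
hourly_samples = {
     2: [40, 300, 15, 220],      # nightly batch window: very unstable
     9: [950, 980, 1010, 990],   # morning OLTP: steady
    15: [1000, 1020, 985, 1005], # afternoon OLTP: steady
}

def is_stable(samples, max_cv=0.10):
    """A window counts as 'stable' when its coefficient of variation
    (stdev / mean) stays below max_cv."""
    return statistics.stdev(samples) / statistics.mean(samples) < max_cv

for hour, samples in sorted(hourly_samples.items()):
    print(f"{hour:02d}:00", "stable" if is_stable(samples) else "unstable")
```

Only the windows flagged stable would then be candidates for modelling, since a fit across a shifting workload profile mixes incomparable behaviour into one data set.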
Over the last couple of months I have worked very hard on Method-GAPP and have finally made a very big improvement to it. In the past GAPP was only able to pinpoint where in the architecture the biggest variance in response time was caused. The improvement now also makes it possible to find, within a certain error, the service time per measured component in the architecture. The point is that the component causing the biggest variance in end-user response time is not always the component responsible for the largest share of the total service time.
The second version of GAPP has an extra step inside the method, “data modelling”: the data is first modelled using normalized response times for different numbers of servers by means of the Erlang C formula. Next, data mining is applied with a generalized linear model and ridge regression, to solve near-collinearities in the data. With this extra step in place, the prediction of service time and wait time per measured component became possible. When I first verified it against real system data, I was really happy to find that it works very well. More information will follow soon in blog posts and, hopefully before the end of this year, in a white paper.
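The two building blocks mentioned, Erlang C for normalizing response times across server counts and ridge regression for near-collinear data, can each be sketched in a few lines. This is a generic textbook sketch under standard M/M/c assumptions, not the actual GAPP code:

```python
import math
import numpy as np

def erlang_c(servers, traffic):
    """Erlang C: probability that an arriving request has to queue, for
    `servers` servers and offered traffic `traffic` = lambda/mu
    (requires traffic < servers for a stable queue)."""
    a = traffic ** servers / math.factorial(servers)
    a *= servers / (servers - traffic)
    b = sum(traffic ** k / math.factorial(k) for k in range(servers))
    return a / (a + b)

def normalized_response(servers, utilization):
    """Normalized response time R/S for an M/M/c queue:
    R = S + S * ErlangC / (c * (1 - rho))."""
    traffic = servers * utilization
    return 1 + erlang_c(servers, traffic) / (servers * (1 - utilization))

def ridge(X, y, lam=1.0):
    """Closed-form ridge regression: (X'X + lam*I)^-1 X'y. The lam*I term
    keeps the solution stable when columns of X are nearly collinear."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Sanity check: for one server Erlang C reduces to the M/M/1 case,
# where the probability of queueing equals the utilization.
print(erlang_c(1, 0.5), normalized_response(1, 0.5))
```

Normalizing measured response times with a formula like `normalized_response` puts observations taken at different server counts on a comparable footing before the regression step is applied.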
I am very happy that Hotsos has given me the opportunity to present it next year, in March 2011. I would also like to thank everybody who inspired me and made this possible, especially Cary Millsap and Dr. Neil Gunther.
As always, Hotsos started off with a nice keynote, this time by Tom Kyte, who was introduced by Hotsos president Gary Goodman after the Hotsos 2010 opening. Tom's keynote theme was “Should we be less smart sometimes”. Tom told about his own experiences: in the past he sometimes gave an answer too fast. It is very important to think about an answer before giving it… Why? Well, some things applied in the past, or to a specific version, and now they don't anymore… this can be a problem, a real issue. Always make sure you talk about the same definitions, and agree on them. Make sure you are talking about the same version and, of course, about similar circumstances. When you give answers in general, be sure to work with facts and not with assumptions that might be wrong. So you should always think about the information, the circumstances and the assumptions you make; it means “Continuous Thinking”.
Some time ago I encountered an issue with an outer-join query. Although the execution plan was not that bad, the response time was really bad. I found out that the outer join in the query was causing the biggest problem. After some quick research I checked the performance of a query with a direct join (a.col = b.col) and one without the join (rows where a.col has no match). Even executed separately, the two were much faster than the outer join. This brought me to the idea of using these two queries to retrieve the same data as the outer join: by taking the union of the two queries, which also gets rid of the duplicate records, I would have the same result as with the outer join. This is what I did; a colleague of mine, Jorrit Nijssen, converted the code to an emp/dept example (thanks Jorrit). The base case looks like: Continue reading →