Some time ago I encountered an issue with an outer join query. Although the execution plan was not that bad, the response time was really poor. I found out that the outer join in the query was causing the biggest problem. After some quick research I checked the performance of a query with a direct join (a.col = b.col) and one without a join (on a.col). Even if you execute them separately, you are still much faster than with the outer join. This brought me to the idea of using these two queries to retrieve the same data as the outer join. By doing a union between the two queries, and thereby getting rid of the double records, I would have the same result as with the outer join. This is what I did; a colleague of mine, Jorrit Nijssen, changed the code to an emp/dept example (thanks Jorrit). The base case looks like: Continue reading
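The emp/dept base case from the post is behind the cut; as a rough sketch of the idea (my own reconstruction, not Jorrit's actual code), the rewrite looks something like this:

-- original outer join (dept is the optional side)
SELECT e.ename, d.dname
  FROM emp e, dept d
 WHERE e.deptno = d.deptno (+);

-- rewritten as the direct join plus the unmatched rows, combined into one result
SELECT e.ename, d.dname
  FROM emp e, dept d
 WHERE e.deptno = d.deptno
UNION ALL
SELECT e.ename, NULL AS dname
  FROM emp e
 WHERE NOT EXISTS (SELECT 1 FROM dept d WHERE d.deptno = e.deptno);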
ORA-01422 while drop table (even DUAL looks fine)
Today on the OTN forum a question came up about an ORA-01422 in combination with a table drop.
SQL> drop table xxx
2 /
drop table xxx
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01422: exact fetch returns more than requested number of rows
Because this looked a bit weird, I did a small test on my 11.1.0.7 database. This is the test case I used: Continue reading
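The actual test case is behind the cut. One way such an error can be provoked during a drop (a hypothetical reproduction on a scratch schema, not necessarily the scenario from the forum thread) is a DDL trigger whose SELECT INTO returns more than one row:

-- a BEFORE DROP trigger doing an exact fetch against a two-row table
-- raises ORA-01422, which surfaces wrapped in ORA-00604
CREATE TABLE drop_log (tab_name VARCHAR2(30));

INSERT INTO drop_log VALUES ('A');
INSERT INTO drop_log VALUES ('B');
COMMIT;

CREATE OR REPLACE TRIGGER trg_before_drop
BEFORE DROP ON SCHEMA
DECLARE
  l_name drop_log.tab_name%TYPE;
BEGIN
  SELECT tab_name INTO l_name FROM drop_log;   -- returns two rows
END;
/

CREATE TABLE xxx (id NUMBER);
DROP TABLE xxx;   -- fails with ORA-00604 wrapping the ORA-01422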
Enhancing fast queries using “Tapio-Indexes”
Over the years I have often encountered performance problems that turned out to be caused by fast queries (less than 10 ms) that are executed very frequently. In such situations the execution plan of the query can look like:
OPERATION            OPTIONS              OBJECT#    NAME
-------------------- -------------------- ---------- -----------------------------------
SELECT STATEMENT                                     .
SORT                 ORDER BY                        .
TABLE ACCESS         BY INDEX ROWID       120713     XXX.TABLE_WITH_MANY_COLUMNS
INDEX                RANGE SCAN           121558     XXX.INDEX_WITH_FEW_COLUMNS
In a lot of cases we are dealing with a query that returns only a few of the columns that exist in the involved table, say four columns out of a table having twenty. Although the execution plan already looks pretty “OK”, the query still ends up among the top three most resource-consuming queries.
To enhance the query we can use “Fat Indexes”, or “Tapio Indexes” as I call them nowadays. What are these kinds of indexes? In principle nothing really new, but for me a couple of years ago a (re)eye-opener by Tapio Lahdenmaki. This Finnish independent database performance consultant and instructor gave a presentation at Miracle Open World 2007 in Denmark, which I attended. Tapio presents this topic in a very special way, and in my opinion “his message” should be known by all developers and DBAs.
The most important part of “his message” is that we should create an index that is highly selective and contains all the columns asked for in the query. So in principle we create an index with, for example, two selective columns, the most selective one in front, and behind these two columns the other selected columns. With such an index in place, the execution plan would look like:
OPERATION            OPTIONS              OBJECT#    NAME
-------------------- -------------------- ---------- --------------------------------
SELECT STATEMENT                                     .
SORT                 ORDER BY                        .
INDEX                RANGE SCAN           128140     XXX.IND_WITH_ALL_SELECT_COLS
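To make this concrete, a hypothetical example of creating such an index and a query it can serve entirely from the index (table and column names are mine, purely illustrative):

-- covering index: the selective columns in front, the other selected columns behind them
CREATE INDEX ind_with_all_select_cols
  ON orders (customer_id, order_date, status, order_total);

-- all selected columns are present in the index, so no table access by rowid is needed
SELECT order_date, status, order_total
  FROM orders
 WHERE customer_id = :cust_id
   AND order_date >= :from_date
 ORDER BY order_date;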
The big advantage is that only the index is accessed and not the table anymore, which can reduce the elapsed time of the query by a factor of four or even more. Why is this so much better? Let’s go back to how indexes and tables are stored on physical storage.
An index is stored in an ordered way on the storage, so an index range scan means that data is retrieved in a sequential read from a storage point of view, and the read head of a spindle can stay in almost the same place. This eliminates seek times and latency on the physical storage spindles.
A table is stored in a non-ordered way on the storage, so accessing table rows will almost always result in scattered reads from a storage point of view. This implies that a lot of seek and read time is wasted.
Of course the existence of caches makes it a bit different, but the basic principles remain valid; the same goes for striping.
So when only an index is accessed instead of index and table, we don’t waste read time on columns we don’t need (the table rows), and we eliminate a lot of the seek time for retrieving the individual table rows.
If you want to know more about this kind of index design, you can read the book “Relational Database Index Design and the Optimizers” by Tapio Lahdenmaki.
SQL Tracing in Self Service Applications (CRM) within Oracle Applications 11i
Some years ago I posted a blog on the AMIS technology blog describing a method to trace Self Service applications within Oracle EBS 11i. The main difference in this post is that the insert into the table fnd_profile_option_values now has one extra field. This change was introduced somewhere in 11.5.9. The scripts below can be used in the higher versions; the old post on the AMIS technology blog can be used for the older versions.
In a lot of cases it is very interesting to be able to trace Self Service Applications (CRM) in Oracle Applications 11i. Most of the time it is very hard to understand what is going on in such a session, and in that case it can be very handy to get hold of the SQL behind it. For this there is a very nice but not widely known feature: the profile option “Initialization SQL Statement – Custom”. With this profile option it is possible to trace any session of a given Apps user at any level. To make sure you don’t end up with a hanging Apps login (the quotes have to be placed in the right spots), you can use the following scripts directly at the SQL*Plus prompt to activate and stop the tracing. Continue reading
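The actual activation and deactivation scripts are behind the cut. In essence the profile option makes the Self Service session run a trace command when it initializes; a minimal sketch (my assumption, not the exact profile value from the scripts) of the kind of statements that end up being executed in the session:

-- tag the trace file so it is easy to find in user_dump_dest
ALTER SESSION SET tracefile_identifier = 'selfservice_trace';

-- extended SQL trace including bind values and wait events
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- and to stop the tracing again
ALTER SESSION SET EVENTS '10046 trace name context off';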
ORA-25330 is encountered when running DBMS_PREDICTIVE_ANALYTICS.PREDICT
I recently discovered that under certain circumstances (for now it looks to be dependent on environment settings) an ORA-25330 is encountered when running the dbms_predictive_analytics.predict procedure. I found out about this problem while preparing a demo; notably, the error came up in TOAD (I am not a TOAD fan, but it is handy for searching through the very large number of factor columns in a GAPP analysis). When the same command shown below was executed directly on the server via SQL*Plus, the error was not encountered.
To investigate I tried to create an errorstack on the error, but without luck. So I did a SQL trace and found out that the ORA-25330 actually gets translated on the server to an ORA-40206. After that I created the following small test in TOAD: Continue reading
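The actual test is behind the cut; for reference, a call to the procedure looks roughly like the block below (the input view and columns are the standard Oracle Data Mining demo objects, used here as an assumption, not the data from my demo):

SET SERVEROUTPUT ON

DECLARE
  v_accuracy NUMBER;
BEGIN
  DBMS_PREDICTIVE_ANALYTICS.PREDICT(
    v_accuracy,             -- OUT: predictive confidence of the result
    'MINING_DATA_BUILD_V',  -- input table/view (assumed demo data)
    'CUST_ID',              -- case id column
    'AFFINITY_CARD',        -- target column to predict
    'PA_PREDICT_RESULT');   -- result table that will be created
  DBMS_OUTPUT.PUT_LINE('Predictive confidence: ' || v_accuracy);
END;
/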
GAPP presentation at HOTSOS 2009
GAPP Performance Method at HOTSOS 2009
Some months ago (October 2008) I wrote an abstract for a paper to present at the HOTSOS 2009 symposium. Although I hoped I would be selected to give the presentation, I was not selected by Hotsos. This was a pity, but looking at the people and the presentations that made it onto the agenda of the HOTSOS 2009 symposium, it would have been very special to be there as well. Because of the high quality of the symposium, and the fact that it is the place to be for every performance geek out there, I planned a trip to the symposium this year together with my colleague Marco Gralike, just as an attendee.
Last week I unexpectedly received an email from Cary Millsap with a very special opportunity in it. Cary asked me if I was still interested in presenting at the coming symposium and, although there was very little time, I of course said YES. So, although late, I was added to the HOTSOS 2009 symposium agenda to give a presentation called “Practical Use of Method Gapp to Find Performance Bottlenecks for Business Processes in Complex Architectures”.
I personally hope that I will give a presentation which will be equally or even more interesting than the one I gave at the HOTSOS Symposium 2008. As a last word I really want to thank Cary Millsap for this opportunity.
Regards, Gerwin
Dr. N.J. Gunther Receives A.A. Michelson Award
Dr. N.J. Gunther received the prestigious A.A. Michelson Award at the annual CMG International Conference in Las Vegas last week; it is the industry’s highest honor for computer performance analysis and capacity planning. As he said in his acceptance speech, it was the fulfillment of a dream he had entertained since his first CMG Conference in 1993.
I really want to congratulate Dr. Neil Gunther on this very big achievement. For further reading you can check out the blog post by Dr. Neil Gunther on his website. Don’t forget to check out the speech presentation, with at the end a nice picture of Dr. Neil Gunther and his father.
The Oracle Database Buffer Cache: should it be bigger…
Over the years I have heard lots of stories about sizing the Oracle Database Buffer Cache. In Oracle 8i (8.1.7) Oracle introduced the view V$DB_CACHE_ADVICE. According to the documentation this view shows the number of physical reads that would be performed at various cache sizes, from 10% to 200% of the current size. For example (GV$DB_CACHE_ADVICE):
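A query along these lines (a sketch of my own; the post shows its own output here) can be used to look at the advisor figures:

SELECT size_for_estimate,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND advice_status = 'ON'
 ORDER BY size_for_estimate;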
The question that keeps crossing my mind is: will a system really react the way the advisor predicts? This is a very obvious question, of course, and in this post I would like to start a nice little discussion about it.
Continue reading
How to easily get all hidden init.ora parameters in 11G
As you know I have just started my own blog, and I realized that sometimes pretty simple things can be really handy to know. Some time ago I found out by accident that the Oracle 11G database has a nice feature to find out the underscore parameters available in the pfile/spfile (there are more underscore parameters, but those are really dangerous).
=================== DISCLAIMER ===================
The usage of underscore parameters is officially prohibited. Underscore parameters should only be used after instructions from Oracle Support. Using underscore parameters without such instructions can lead to an unsupported Oracle environment. Instability, crashes or even corruption can also be caused by the usage of these parameters. So please use the information in this post with care.
================================================
These parameters can easily be obtained with the following command at the SQL prompt, as shown here:
Continue reading
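The actual command is behind the cut. As an assumption on my side (it is the 11G feature that fits this description best), the FROM MEMORY clause of CREATE PFILE writes out the parameters, including the underscore ones, that the instance is currently running with:

SQL> CREATE PFILE = '/tmp/init_from_memory.ora' FROM MEMORY;

The generated file can then simply be searched for the lines starting with an underscore.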