In the previous post I showed how to create SQL Tuning Sets (STS) by capturing SQL statements from AWR and the cursor cache. Once the STS exist, they can be used to create SQL Plan Baselines. Ideally, you would export them and load them into the database where you will be testing your upgrade. Again, my scripts are based on the thorough work that Carlos has provided.
The first thing to do is to stage and pack your SQL Tuning Sets. By "tagging" their names with a common string (see sqlset_tag in params.sql), the scripts will stage everything that shares the tag.
Run pack_sql_tuning_sets.sql as a user with the ADMINISTER ANY SQL TUNING SET privilege. This script stages your STS into a table called SPM_STGTAB_STS. Export this table with exp (or expdp) and scp the dump file to your test database host.
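Under the hood this boils down to two DBMS_SQLTUNE calls. Here is a minimal sketch of the idea (not the script verbatim), assuming the STS live under an STS_OWNER schema and share an STS_11G tag — both placeholders for your params.sql values:

-- Create the staging table in the STS owner's schema
BEGIN
  DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(
    table_name  => 'SPM_STGTAB_STS',
    schema_name => 'STS_OWNER');
END;
/

-- Pack every STS whose name matches the tag into the staging table
BEGIN
  FOR s IN (SELECT name, owner FROM dba_sqlset WHERE name LIKE 'STS_11G%') LOOP
    DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
      sqlset_name          => s.name,
      sqlset_owner         => s.owner,
      staging_table_name   => 'SPM_STGTAB_STS',
      staging_schema_owner => 'STS_OWNER');
  END LOOP;
END;
/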
Now, run unpack_sql_tuning_sets.sql on your test database. I put a pause in the script after it creates the staging table; at that point you can import your dump file and then hit Enter to continue. As I type this, it occurs to me that the staging-table step may not really be required, but we had to do it this way for our operations DBAs. Feel free to modify 🙂
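The unpack side is the mirror image. A minimal sketch, again assuming the imported staging table SPM_STGTAB_STS sits in the STS_OWNER schema:

-- Unpack every STS found in the imported staging table
BEGIN
  DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(
    sqlset_name          => '%',
    sqlset_owner         => '%',
    replace              => TRUE,
    staging_table_name   => 'SPM_STGTAB_STS',
    staging_schema_owner => 'STS_OWNER');
END;
/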
Once the STS are unpacked, they are available for use. Run spm.sql to verify that your STS match production. After that, the next step is to upgrade your test database and start running some load!
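If you don't have spm.sql handy, a quick sanity check against the dictionary works too; something like this, assuming the STS_11G tag:

-- Confirm the unpacked STS and their statement counts match production
SELECT owner, name, statement_count, created
FROM   dba_sqlset
WHERE  name LIKE 'STS_11G%'
ORDER  BY name;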
So, now what? How do you know if SQLs have regressed? Carlos has a script that mines AWR and shows SQL that has either improved or gotten worse. That is useful if you have enough AWR history to span the period before the upgrade. I did something similar that compares the elapsed time recorded in the STS with what is currently in the cursor cache.
SQL> @sqlset_regress_cur.sql

SQL_ID          V12C_PHV EXECUTIONS    AVG_ETIME   V11G_PHV EXECUTIONS    AVG_ETIME D DIFF_ETIME PERFORMAN
------------- ---------- ---------- ------------ ---------- ---------- ------------ - ---------- ---------
6zjxqxdb9t6vd  890145704          3      318.897 2409362170     634767        1.630 * -317.26637 Regressed
376fjswqu4cnt 1920954746          3      140.638 2259879957       3218         .328 * -140.31039 Regressed
0wf3hdbcmfn9d 1566315087          3      161.397 1566315087        174       65.093   -96.304033 Regressed
f61rxd361b8g2 1566315087          3      113.604 1566315087        163       69.922   -43.682398 Regressed
gp1p36b8yskbv 3302961321          7       27.064 1218582728       8981         .682 * -26.381721 Regressed
b2kxd8xwd3ymy  659280062          1       26.402  659280062        101        7.523   -18.879445 Regressed
ghfdrs7hrw252 2155093733         33       23.465 2603635391      85319        5.554 * -17.911551 Regressed
0znx18d043bg3 2743575134          1       15.805 3971049314        107         .004 * -15.801179 Regressed
d29prd53mzuyc 2371823249          8       14.128 3772892946      17002         .084 * -14.044119 Regressed
a6yadgjwu1km1 4258360765         17        9.351 1959399513      81643         .015 * -9.3363534 Regressed
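The script is essentially a join between the statistics captured in the STS and the live statistics in the cursor cache. A simplified sketch of the idea (not the script verbatim; the STS name pattern is a placeholder):

-- Compare average elapsed time per execution: cursor cache (12c) vs. STS (11g)
SELECT s.sql_id,
       c.plan_hash_value                                v12c_phv,
       c.executions,
       c.elapsed_time / NULLIF(c.executions, 0) / 1e6   avg_etime_12c,
       s.plan_hash_value                                v11g_phv,
       s.executions                                     sts_execs,
       s.elapsed_time / NULLIF(s.executions, 0) / 1e6   avg_etime_11g
FROM   dba_sqlset_statements s
       JOIN v$sql c ON c.sql_id = s.sql_id
WHERE  s.sqlset_name LIKE 'STS_11G%'
ORDER  BY c.elapsed_time / NULLIF(c.executions, 0)
        - s.elapsed_time / NULLIF(s.executions, 0) DESC;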
Kerry Osborne’s unstable_plans.sql is another useful script; it identifies SQL that changes PHV and shows statements with a wide spread between elapsed times. You can also use AWR’s Top SQL reports or ashtop to identify SQL_IDs that weren’t present before but are showing up now.
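If you don't have that script at hand, the core of the unstable-plans idea can be approximated straight from AWR. A hedged sketch (my approximation, not Kerry's script itself):

-- Find SQL with more than one plan and a wide spread in average elapsed time
SELECT sql_id,
       COUNT(DISTINCT plan_hash_value)                               plans,
       MIN(elapsed_time_delta / NULLIF(executions_delta, 0)) / 1e6   min_etime,
       MAX(elapsed_time_delta / NULLIF(executions_delta, 0)) / 1e6   max_etime
FROM   dba_hist_sqlstat
WHERE  executions_delta > 0
GROUP  BY sql_id
HAVING COUNT(DISTINCT plan_hash_value) > 1
ORDER  BY MAX(elapsed_time_delta / NULLIF(executions_delta, 0))
        - MIN(elapsed_time_delta / NULLIF(executions_delta, 0)) DESC;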
Let’s say you’ve identified a SQL that you suspect is taking longer; let’s prove it. I use awr_plan_change_all.sql, which is based on another one of Kerry’s scripts, but mine includes some additional columns showing where the time is being spent (I/O, CPU, cluster waits, etc.).
SQL> @awr_plan_change_all 2
Enter value for sql_id: f3dmvyhb80a3c

   SNAP_ID   NODE BEGIN_INTERVAL_TIME            SQL_ID        PLAN_HASH_VALUE        EXECS    AVG_ETIME  AVG_CPUTIME   AVG_IOTIME AVG_CLUSTERTIME        AVG_LIO AVG_PLSQL_TIME
---------- ------ ------------------------------ ------------- --------------- ------------ ------------ ------------ ------------ --------------- -------------- --------------
     37365      1 10-AUG-16 10.00.42.154 AM      f3dmvyhb80a3c      3596954981          558        6.360        5.989         .001            .061      412,116.3              0
     37366      1 10-AUG-16 11.00.02.498 AM      f3dmvyhb80a3c                          597        6.038        5.738         .001            .044      414,712.6              0
     37367      1 10-AUG-16 12.00.15.310 PM      f3dmvyhb80a3c                          601        5.962        5.713         .000            .029      416,235.8              0
     37368      1 10-AUG-16 01.00.03.469 PM      f3dmvyhb80a3c                          596        6.030        5.776         .000            .017      417,754.2              0
     37369      1 10-AUG-16 02.00.09.605 PM      f3dmvyhb80a3c                          577        6.240        5.954         .000            .018      419,133.1              0
     37370      1 10-AUG-16 03.00.17.792 PM      f3dmvyhb80a3c                          577        6.209        5.921         .000            .014      419,306.3              0
     37371      1 10-AUG-16 04.00.08.722 PM      f3dmvyhb80a3c                          578        6.238        5.958         .000            .010      420,520.5              0
     37372      1 10-AUG-16 05.00.20.405 PM      f3dmvyhb80a3c                          568        6.348        6.062         .000            .011      421,907.1              0
     37373      1 10-AUG-16 06.00.32.517 PM      f3dmvyhb80a3c                          543        6.629        6.331         .000            .012      423,088.8              0
     37379      1 11-AUG-16 01.00.01.148 AM      f3dmvyhb80a3c      2260335173            1    2,215.397    2,131.329        3.398            .988  178,247,532.0              0
     37379      1 11-AUG-16 01.00.01.148 AM      f3dmvyhb80a3c      3249808128            1    1,387.308    1,338.448         .978            .476  186,764,546.0              0
     37383      1 11-AUG-16 05.00.04.139 AM      f3dmvyhb80a3c      2260335173            1    3,620.944    3,510.651        4.073           1.250  483,276,428.0              0

12 rows selected.
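The heart of a script like this is a join of DBA_HIST_SQLSTAT to DBA_HIST_SNAPSHOT, with the time components broken out per execution. A simplified sketch of the approach (my reconstruction, not Kerry's original):

-- Per-snapshot plan history for one SQL_ID, with time broken out by component
SELECT ss.snap_id,
       ss.instance_number                                            node,
       sn.begin_interval_time,
       ss.plan_hash_value,
       ss.executions_delta                                           execs,
       ss.elapsed_time_delta / NULLIF(ss.executions_delta, 0) / 1e6  avg_etime,
       ss.cpu_time_delta     / NULLIF(ss.executions_delta, 0) / 1e6  avg_cputime,
       ss.iowait_delta       / NULLIF(ss.executions_delta, 0) / 1e6  avg_iotime,
       ss.clwait_delta       / NULLIF(ss.executions_delta, 0) / 1e6  avg_clustertime,
       ss.buffer_gets_delta  / NULLIF(ss.executions_delta, 0)        avg_lio
FROM   dba_hist_sqlstat ss
       JOIN dba_hist_snapshot sn
         ON  sn.snap_id         = ss.snap_id
         AND sn.instance_number = ss.instance_number
WHERE  ss.sql_id = '&sql_id'
ORDER  BY ss.snap_id, ss.instance_number;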
Before the upgrade, the SQL above ran in about 6 seconds; on 12c it switched between two different plans that each took more than 1,300 seconds. Next, check whether the good plan is available in one of the STS (see find_sts_sql.sql):
SQL> @find_sts_sql
Enter SQL_ID: f3dmvyhb80a3c

SQLSET_NAME                    SQLSET_OWNER                   PLAN_HASH_VALUE
------------------------------ ------------------------------ ---------------
STS_11G_AWR                    STS_OWNER                           3596954981
STS_11G_CURCACHE_1             STS_OWNER                           3596954981
STS_11G_CURCACHE_2             STS_OWNER                           3596954981
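That lookup is just a query against DBA_SQLSET_STATEMENTS; roughly:

-- Which SQL Tuning Sets contain this SQL_ID, and with which plan?
SELECT sqlset_name, sqlset_owner, plan_hash_value
FROM   dba_sqlset_statements
WHERE  sql_id = '&sql_id';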
Now it’s just a matter of creating the baseline. See create_spb_from_sts.sql and baselines.sql:
SQL> @create_spb_from_sts.sql
Enter SQL_ID: f3dmvyhb80a3c
Enter SQL Set Name: STS_11G_CURCACHE_1
Enter SQL Set Owner: STS_OWNER
Enter optional Plan Hash Value: 3596954981

SQL> @baselines

SQL_HANDLE                     SIGNATURE                   PLAN_NAME                      CREATED                        LAST_EXECUTED                  ACC FIX ENA
------------------------------ --------------------------- ------------------------------ ------------------------------ ------------------------------ --- --- ---
...
SQL_c8415cc67a8e1de7           14429916688601193959        SQL_PLAN_chhawstx8w7g7c24dff08 11-AUG-16 09.27.54.000000 AM   11-AUG-16 09.34.43.000000 AM   YES YES YES
...

16 rows selected.
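The baseline load itself is a call to DBMS_SPM.LOAD_PLANS_FROM_SQLSET, using the basic_filter argument to restrict it to one SQL_ID and, optionally, one plan. A minimal sketch with the values from the run above:

-- Load the 11g plan for one SQL_ID from the STS as a SQL Plan Baseline
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
         sqlset_name  => 'STS_11G_CURCACHE_1',
         sqlset_owner => 'STS_OWNER',
         basic_filter => q'[sql_id = 'f3dmvyhb80a3c' and plan_hash_value = 3596954981]');
  DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
END;
/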
Another extra step we have been taking is flushing the cursor(s) to force a new hard parse (and, hopefully, immediate use of the baseline). See flush_cursor.sql and sq.sql.
SQL> @flush_cursor
Enter value for sql_id: f3dmvyhb80a3c

'EXECDBMS_SHARED_POOL.PURGE('''||ADDRESS||','||HASH_VALUE||''',''C'');--RUNFROMINSTANCE'||INST_ID
--------------------------------------------------------------------------------------------------
exec DBMS_SHARED_POOL.PURGE ('0000013F89F95BB8, 377497708', 'C'); -- run from instance 1

SQL> @sq f3dmvyhb80a3c

   INST_ID CHILD_NUMBER EXECUTIONS PLAN_HASH_VALUE SQL_PLAN_BASELINE              SQL_PROFILE LAST_ACTIVE_TIME  AVG_ELAPSED
---------- ------------ ---------- --------------- ------------------------------ ----------- ----------------- -----------
         1            0         24      3596954981 SQL_PLAN_chhawstx8w7g7c24dff08             20160811 09:43:56  6.39465699
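As the column header above suggests, flush_cursor.sql just generates the DBMS_SHARED_POOL.PURGE command from the cursor's address and hash value; roughly:

-- Generate a purge command for each instance where the cursor is cached
SELECT 'exec DBMS_SHARED_POOL.PURGE(''' || address || ', ' || hash_value ||
       ''', ''C''); -- run from instance ' || inst_id
FROM   gv$sqlarea
WHERE  sql_id = '&sql_id';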
There you go. Now it’s just a matter of repeating these steps until database performance stabilizes, while continuing to test and noting which baselines might be required in production. Then you are set for production.