Upgrade Stability (Part 2 Electric Boogaloo)

September 07, 2016 ❖ Tags: oracle, performance, baselines, upgrade

In the previous post, I described the method we used to mitigate the performance risks of upgrading from 11.2.0.3 to 12.1. The plan was to capture the SQL cursors from AWR, and from the cursor cache of each of our RAC instances, into SQL Tuning Sets (STS).

So, what to capture? First, it depends on how large your shared pool is and how often your SQL gets aged out. In our case this wasn't a problem, since our shared pool is huge (225G). Second, what is too much? As stated before, we have over 200K cursors in GV$SQL. Do we really need every one? This is how I determined what to grab:

-- Totals for everything run in the last 5 days, by APP schema (schemas renamed to protect the innocent)
SQL>  select parsing_schema_name,count(distinct sql_id) from gv$sql where parsing_schema_name in ('APP1','APP2','APP3','APP4') and last_active_time > sysdate-5 group by parsing_schema_name order by 2,1;

PARSING_SCHEMA_NAME            COUNT(DISTINCTSQL_ID)
------------------------------ ---------------------
APP1                                             105
APP2                                            2715
APP3                                           79425
APP4                                          226309

-- Totals for everything run in the last day
SQL>  select parsing_schema_name,count(distinct sql_id) from gv$sql where parsing_schema_name in ('APP1','APP2','APP3','APP4') and last_active_time > sysdate-1 group by parsing_schema_name order by 2,1;

PARSING_SCHEMA_NAME            COUNT(DISTINCTSQL_ID)
------------------------------ ---------------------
APP1                                             100
APP2                                             913
APP3                                            4446
APP4                                           65223

-- Totals for everything executed over 50 times
SQL> select parsing_schema_name, count(distinct sql_id) from gv$sql where parsing_schema_name in ('APP1','APP2','APP3','APP4') and executions > 50 group by parsing_schema_name;

PARSING_SCHEMA_NAME            COUNT(DISTINCTSQL_ID)
------------------------------ ---------------------
APP1                                              14
APP2                                              33
APP3                                             207
APP4                                            2063

-- Totals for everything executed over 1000 times
SQL>  select parsing_schema_name,count(distinct sql_id) from gv$sql where parsing_schema_name in ('APP1','APP2','APP3','APP4') and executions > 1000 group by parsing_schema_name order by 2,1;

PARSING_SCHEMA_NAME            COUNT(DISTINCTSQL_ID)
------------------------------ ---------------------
APP1                                               1
APP2                                               2
APP3                                              46
APP4                                             591

-- Totals for everything that made it into AWR
SQL>  select parsing_schema_name,count(distinct sql_id) from dba_hist_sqlstat where parsing_schema_name in ('APP1','APP2','APP3','APP4') and loaded_versions > 0 group by parsing_schema_name order by 2,1;

PARSING_SCHEMA_NAME            COUNT(DISTINCTSQL_ID)
------------------------------ ---------------------
APP1                                               1
APP2                                              38
APP4                                            1342

We opted to capture anything that was executed more than 50 times. Note that these numbers were captured before we decided to load a separate STS for each instance. We also captured everything that was loaded into AWR.

The two scripts used to load all of the above were create_sql_tuning_set_from_cursor_cache.sql and create_sql_tuning_set_from_awr.sql.
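
If you want a feel for what these scripts boil down to, here is a minimal sketch of a cursor-cache capture, assuming the standard DBMS_SQLTUNE API; the STS and owner names and the executions > 50 filter mirror the choices above, but the actual scripts on GitHub are the source of truth:

DECLARE
  l_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  -- Create an empty STS to hold what we pull from this instance's cursor cache
  DBMS_SQLTUNE.CREATE_SQLSET(
    sqlset_name  => 'STS_11G_CURCACHE_1',
    sqlset_owner => 'STS_OWNER');

  -- Select the qualifying statements from the cursor cache
  OPEN l_cursor FOR
    SELECT VALUE(p)
      FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
             basic_filter => q'[parsing_schema_name IN ('APP1','APP2','APP3','APP4')
                                AND executions > 50]')) p;

  -- Load them into the STS; INSERT is the default LOAD_OPTION (see item 2 below)
  DBMS_SQLTUNE.LOAD_SQLSET(
    sqlset_name     => 'STS_11G_CURCACHE_1',
    populate_cursor => l_cursor,
    sqlset_owner    => 'STS_OWNER');
END;
/

The AWR sibling works the same way, with DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY (fed a begin and end snapshot ID) in place of SELECT_CURSOR_CACHE.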

If you want to deviate from the choices I made (and you probably will), you'll have to modify these scripts accordingly. I did create params.sql, which is used to define the following (sketched after the list):

SQLSET_OWNER <<< who owns your STS (needs to have the ADMINISTER ANY SQL TUNING SET privilege)
SQLSET_TAG <<< used as the prefix of your STS names. I used STS_11G for our upgrades
PARSING_SCHEMAS <<< comma-delimited list of the schemas you care about. For example, APP1,APP2,APP3,APP4
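
As a rough sketch of what params.sql might contain (assuming plain SQL*Plus substitution variables; the real script in the repo may differ):

-- params.sql (hypothetical sketch)
DEFINE SQLSET_OWNER = STS_OWNER
DEFINE SQLSET_TAG = STS_11G
DEFINE PARSING_SCHEMAS = "'APP1','APP2','APP3','APP4'"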

A couple of other things you could do differently:

  1. Use DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET, which periodically polls the shared pool to find statements to add (see the sketch after this list). This would be helpful if SQL ages out of the shared pool before you can load it.
  2. Use a different LOAD_OPTION, since you can either INSERT (the default), UPDATE, or MERGE the statements that are discovered. In our case, once our application has been running for a week, no new SQL is introduced, so a one-time capture was sufficient. Also, rather than dealing with Plan Hash Value (PHV) differences between instances, we opted to just store one STS per instance. This was okay for us since, on our largest database, all the statements captured across our six nodes collectively required only 100 MB to store. Less headache, and the duplicates could be considered a backup if an STS wasn't able to create the baseline. This last tidbit ended up proving useful.
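
For option 1, a sketch of the polling capture; the window, interval, and STS name here are illustrative, not something we actually ran:

BEGIN
  -- Poll the shared pool every 5 minutes for an hour, MERGE-ing any newly
  -- seen statements from our application schemas into a pre-created STS
  DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
    sqlset_name     => 'STS_11G_POLLED',   -- hypothetical STS name
    sqlset_owner    => 'STS_OWNER',
    time_limit      => 3600,               -- total polling window (seconds)
    repeat_interval => 300,                -- sampling interval (seconds)
    capture_option  => 'MERGE',            -- insert new statements, update existing ones
    capture_mode    => DBMS_SQLTUNE.MODE_ACCUMULATE_STATS,
    basic_filter    => q'[parsing_schema_name IN ('APP1','APP2','APP3','APP4')]');
END;
/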

After running the create_sql_tuning_set_from* scripts, mine looked like this (from spm.sql):

SQL> select SQLSET_NAME, SQLSET_OWNER, count(*) from dba_sqlset_statements group by SQLSET_NAME, SQLSET_OWNER order by 2,1;
SQLSET_NAME                    SQLSET_OWNER                     COUNT(*)
------------------------------ ------------------------------ ----------
STS_11G_AWR                    STS_OWNER                            1430
STS_11G_CURCACHE_1             STS_OWNER                            2522
STS_11G_CURCACHE_2             STS_OWNER                            2457
STS_11G_CURCACHE_3             STS_OWNER                            2344
STS_11G_CURCACHE_4             STS_OWNER                            2025
STS_11G_CURCACHE_5             STS_OWNER                            2020
STS_11G_CURCACHE_6             STS_OWNER                            1772

Note that the latest version on GitHub tacks a unique identifier onto the SQLSET_NAME, so the same scripts can be run more than once without a name collision.

Now, if you do this on your production database, you should be all set for your upgrade. Well, that's only if you trust me. And you shouldn't. In the next post we will cover staging these STS and exporting them so you can set them up in your test database.