In the first part of this little series I explained how a missing (default) tablespace in a PDB can have an impact when opening it. In this instalment I’ll add another common cause for PDBs not opening in read-write mode: different patch levels between the PDB to be plugged in and the CDB.
Assume for a moment that you upgraded Oracle binaries from 12.1.0.2.5 to 12.1.0.2.160419. In other words, you applied the most current PSU + OJVM patch at the time of writing to your database. Once complete, you would see output similar to this for a single instance Oracle 12c installation without Oracle Restart:
[oracle@server2 ~]$ opatch lspatches
Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
Database Patch Set Update : 12.1.0.2.160419 (22291127)
OPatch succeeded.
I am using database CDB1 in this blog post to indicate the patched CDB:
SYS@CDB$ROOT-CDB1> select patch_id,action,status,description
  2  from dba_registry_sqlpatch;

  PATCH_ID ACTION          STATUS
---------- --------------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)

Database Patch Set Update : 12.1.0.2.5 ()

Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)

Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)

Database Patch Set Update : 12.1.0.2.160419 (22291127)
NB: I double-checked and the above output seems correct: there is no rollback of the 12.1.0.2.5 DB PSU prior to the installation of the new one.
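If you want to see the full apply and rollback history for yourself, the standard DBA_REGISTRY_SQLPATCH view has everything needed; a query along these lines does the trick:

SYS@CDB$ROOT-CDB1> select action_time, patch_id, action, status, description
  2  from dba_registry_sqlpatch
  3  order by action_time;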
Let’s also assume that you would like to plug a PDB into the newly patched CDB. The PDB to be unplugged from the other CDB (named CDB2) has a lower patch level:
SYS@CDB$ROOT-CDB2> select patch_id,action,status,description
  2  from dba_registry_sqlpatch;

  PATCH_ID ACTION          STATUS
---------- --------------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)

Database Patch Set Update : 12.1.0.2.5 ()
Plugging in
The steps needed to plug the PDB into its new home have been discussed so many times that I have decided against showing them again. Please refer to the official documentation set (Database Administrator’s Guide) for all the detail. The PDB I would like to plug in to CDB1 is named PLUGINTEST.
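For the sake of completeness, the unplug on the source CDB would have looked roughly like this (a sketch showing only the essential commands; the XML file name matches the one used below):

SYS@CDB$ROOT-CDB2> alter pluggable database plugintest close immediate;
SYS@CDB$ROOT-CDB2> alter pluggable database plugintest unplug into '/home/oracle/plugintest.xml';
SYS@CDB$ROOT-CDB2> drop pluggable database plugintest keep datafiles;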
SYS@CDB$ROOT-CDB1> create pluggable database plugintest
  2  using '/home/oracle/plugintest.xml';
Pluggable database created.
SYS@CDB$ROOT-CDB1> alter pluggable database plugintest open;
Warning: PDB altered with errors.
With the information I provided previously you can probably guess what’s wrong, but if you are a production DBA who has been tasked to “plug in a PDB” you might not have the background knowledge about patch levels of all the databases in the estate. How can you find out what went wrong? First stop is the alert.log:
alter pluggable database plugintest open
Thu Jul 07 11:02:30 2016
Pluggable database PLUGINTEST dictionary check beginning
Pluggable Database PLUGINTEST Dictionary check complete
Database Characterset for PLUGINTEST is WE8MSWIN1252
***************************************************************
WARNING: Pluggable Database PLUGINTEST with pdb id - 5 is
altered with errors or warnings. Please look into
PDB_PLUG_IN_VIOLATIONS view for more details.
***************************************************************
Opening pdb PLUGINTEST (5) with no Resource Manager plan active
Pluggable database PLUGINTEST opened read write
Completed: alter pluggable database plugintest open
This sounds a bit tragic, but not all is lost. The primary issue is that the PDB is opened in restricted mode, which is not terribly useful for ordinary users:
SYS@CDB$ROOT-CDB1> select con_id, name, open_mode, restricted
  2* from v$pdbs where name = 'PLUGINTEST'

    CON_ID NAME                           OPEN_MODE  RES
---------- ------------------------------ ---------- ---
         5 PLUGINTEST                     READ WRITE YES

SYS@CDB$ROOT-CDB1> conn user1/user1@localhost/plugintest
ORA-01035: ORACLE only available to users with RESTRICTED SESSION privilege
Warning: You are no longer connected to ORACLE.
The detail is – again – in PDB_PLUG_IN_VIOLATIONS:
SYS@CDB$ROOT-CDB1> select cause, type, message
  2  from PDB_PLUG_IN_VIOLATIONS
  3  where name = 'PLUGINTEST';

CAUSE                                                            TYPE
---------------------------------------------------------------- ---------
MESSAGE
--------------------------------------------------------------------------------
PSU bundle patch 160419 (Database Patch Set Update : 12.1.0.2.160419 (22291127)):
Installed in the CDB but not in the PDB.

PSU bundle patch 5 (Database Patch Set Update : 12.1.0.2.5 ()): Installed
in the PDB but not in the CDB.

SQL patch ID/UID 61790 (Database PSU 12.1.0.2.5, Oracle JavaVM Component
(Oct2015)): Installed in the PDB but not in the CDB.

SQL patch ID/UID 57886 (Database PSU 12.1.0.2.160419, Oracle JavaVM
Component (Apr2016)): Installed in the CDB but not in the PDB.
Although the formatting is a bit of a pain to the eye, it should become clear that different patch levels are preventing the PDB from opening read-write. If the number of options differed between the systems, a similar error would be thrown. But how does Oracle know about all that? It’s encoded in the XML file that you create when you unplug the PDB. In my case, here are the options used:
<option>APS=12.1.0.2.0</option>
<option>CATALOG=12.1.0.2.0</option>
<option>CATJAVA=12.1.0.2.0</option>
<option>CATPROC=12.1.0.2.0</option>
<option>CONTEXT=12.1.0.2.0</option>
<option>DV=12.1.0.2.0</option>
<option>JAVAVM=12.1.0.2.0</option>
<option>OLS=12.1.0.2.0</option>
<option>ORDIM=12.1.0.2.0</option>
<option>OWM=12.1.0.2.0</option>
<option>SDO=12.1.0.2.0</option>
<option>XDB=12.1.0.2.0</option>
<option>XML=12.1.0.2.0</option>
<option>XOQ=12.1.0.2.0</option>
</options>
Likewise, patches are recorded in the XML file, too:
<sqlpatches>
<sqlpatch>PSU bundle patch 5 (Database Patch Set Update : 12.1.0.2.5 ()):
APPLY SUCCESS</sqlpatch>
<sqlpatch>SQL patch ID/UID 61790 (Database PSU 12.1.0.2.5, Oracle JavaVM
Component (Oct2015)): APPLY SUCCESS</sqlpatch>
</sqlpatches>
Corrective Action
The suggested action is conveniently presented as well.
SYS@CDB$ROOT-CDB1> select cause, type, action from pdb_plug_in_violations where name = 'PLUGINTEST';

CAUSE      TYPE      ACTION
---------- --------- ------------------------------------------------------------
                     Call datapatch to install in the PDB or the CDB
                     Call datapatch to install in the PDB or the CDB
                     Call datapatch to install in the PDB or the CDB
                     Call datapatch to install in the PDB or the CDB
But you don’t blindly follow what you read, so I headed over to the official documentation set and My Oracle Support. There I found a MOS note that explains a situation similar to the one I am seeing (the PDB was closed while datapatch ran). So I decided to try running datapatch again.
SYS@CDB$ROOT-CDB1> alter pluggable database plugintest close;

Pluggable database altered.

SYS@CDB$ROOT-CDB1> alter pluggable database plugintest open;

Pluggable database altered.
SYS@CDB$ROOT-CDB1>
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@server2 ~]$ cd $ORACLE_HOME/OPatch
[oracle@server2 OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Thu Jul  7 11:20:00 2016
Copyright (c) 2015, Oracle.  All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch__07_07_11_20_00/sqlpatch_invocation.log
Connecting to database...OK
Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of SQL patches:
(Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)):
Installed in PLUGINTEST only
(Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)):
Installed in binary and CDB$ROOT PDB$SEED SWINGBENCH1
Bundle series PSU:
ID 160419 in the binary registry and ID 160419 in PDB CDB$ROOT, ID 160419 in PDB PDB$SEED,
ID 160419 in PDB SWINGBENCH1, ID 5 in PDB PLUGINTEST
Adding patches to installation queue and performing prereq checks...
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED SWINGBENCH1
Nothing to roll back
Nothing to apply
For the following PDBs: PLUGINTEST
The following patches will be rolled back:
(Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015))
The following patches will be applied:
(Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016))
(Database Patch Set Update : 12.1.0.2.160419 (22291127))
Installing patches...
Patch installation complete.
Total patches installed: 3
Validating logfiles...
rollback (pdb PLUGINTEST): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/60_rollback_CDB1_PLUGINTEST_2016Jul07_11_20_35.log (no errors)
apply (pdb PLUGINTEST): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/09_apply_CDB1_PLUGINTEST_2016Jul07_11_21_03.log (no errors)
apply (pdb PLUGINTEST): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27_apply_CDB1_PLUGINTEST_2016Jul07_11_21_04.log (no errors)
SQL Patching tool complete on Thu Jul  7 11:21:14 2016
This fixed my problem:
SYS@CDB$ROOT-CDB1> alter pluggable database plugintest close;

Pluggable database altered.

SYS@CDB$ROOT-CDB1> alter pluggable database plugintest open;

Pluggable database altered.
SYS@CDB$ROOT-CDB1> select con_id, name, open_mode, restricted
  2  from v$pdbs where name = 'PLUGINTEST';

    CON_ID NAME                           OPEN_MODE  RES
---------- ------------------------------ ---------- ---
         5 PLUGINTEST                     READ WRITE NO
Interestingly the entries in PDB_PLUG_IN_VIOLATIONS do not disappear, but there is a timestamp in the view that should help you find out whether a message is current or not.
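Something along these lines shows when each violation was recorded and whether it has been resolved (a sketch; TIME and STATUS are regular columns of the view):

SYS@CDB$ROOT-CDB1> select time, name, status, cause, message
  2  from pdb_plug_in_violations
  3  where name = 'PLUGINTEST'
  4  order by time;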
Container Databases have been an area that I have researched intensively over the past years. With this post (and hopefully some others that follow) I would like to demonstrate some of the new situations the DBA might be confronted with. Please don’t use this post to give the new 12c architecture a hard time: standardised deployments (which I love) help you a lot. Not only do your DBA scripts work reliably everywhere, but the error condition I am showing in this post should be a lot less likely.
At the end of the post I’ll show an alternative approach using a standardised way of creating PDBs.
Environment
Setting the scene, my environment is as follows:
Oracle Linux 7.2 with UEK4 (4.1.12-37.5.1.el7uek.x86_64 GNU/Linux)
Single Instance databases CDB1 and CDB3
Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
Database Patch Set Update : 12.1.0.2.160419 (22291127)
Oracle Managed Files (OMF) but no use of ASM
These aren’t in VMs for a change.
The following steps lead me to the discovery of the plug-in violation. In CDB1 I created a common user, let’s call it c##martin for the sake of argument.
SQL> select sys_context('userenv','con_name') as con_name, name, cdb
  2  from v$database;

CON_NAME                       NAME      CDB
------------------------------ --------- ---

SQL> create user c##martin identified by xxxxxxxxxx
  2  default tablespace tools
  3  temporary tablespace temp
  4  quota unlimited on tools account unlock
  5  container = ALL;

User created.
So far nothing too exciting. Grant privileges as needed.
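The grants are not shown here; a minimal set could look like this (an assumption on my part, adjust to what the account is actually for):

SQL> grant create session, create table to c##martin container = all;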
The Problem
Some time later I tried to create a PDB from the seed, and it failed:
SQL> create pluggable database pdb10
  2  admin user pdbadmin identified by xxxxxxxxxx;

Pluggable database created.

SQL> alter pluggable database pdb10 open;

Warning: PDB altered with errors.
Altered with errors? The command I just typed is probably the simplest and most basic way to create a PDB: what could possibly go wrong with it? First stop is the alert.log, and this is what has been recorded:
09:48:38
create pluggable database pdb10
admin user pdbadmin identified by *
APEX_040200.WWV_FLOW_ADVISOR_CHECKS (CHECK_STATEMENT) - CLOB populated
09:48:44
****************************************************************
Pluggable Database PDB10 with pdb id - 3 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Database Characterset for PDB10 is WE8MSWIN1252
Deleting old file#5 from file$
Deleting old file#7 from file$
Adding new file#19 to file$(old file#5)
Adding new file#20 to file$(old file#7)
Successfully created internal service pdb10 at open
09:48:45
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
****************************************************************
Post plug operations are now complete.
Pluggable database PDB10 with pdb id - 3 is now marked as NEW.
****************************************************************
Completed: create pluggable database pdb10
admin user pdbadmin identified by *
09:53:48
alter pluggable database pdb10 open
Pluggable database PDB10 dictionary check beginning
Pluggable Database PDB10 Dictionary check complete
Database Characterset for PDB10 is WE8MSWIN1252
09:53:51
Opening pdb PDB10 (3) with no Resource Manager plan active
Pluggable database PDB10 opened read write
Completed: alter pluggable database pdb10 open
OK, so there’s nothing in there. But wait: am I not trying to create a new PDB and plug it into the CDB? And aren’t problems with plug-in operations recorded in a view? So let’s give that a try:
SQL> select cause, message from pdb_plug_in_violations where name = 'PDB10';

CAUSE
----------------------------------------------------------------
MESSAGE
--------------------------------------------------------------------------------
Sync Failure
Sync PDB failed with ORA-959 during 'create user c##martin identified by *
default tablespace tools
temporary tablespace temp
quota unlimited on tools account unlock
container = ALL'
Got you! A sync failure: the common user I created earlier (container = ALL) can’t be created on the PDB because of a missing tablespace. Is the PDB in an unrecoverable state? No, but it is opened in restricted mode. Connections to it won’t be possible:
SQL> select name,open_mode,restricted from v$pdbs
  2  where name = 'PDB10';

NAME                           OPEN_MODE  RES
------------------------------ ---------- ---
PDB10                          READ WRITE YES

SQL> conn pdbadmin/xxxxxxxxxx@localhost/PDB10
ORA-01035: ORACLE only available to users with RESTRICTED SESSION privilege
Warning: You are no longer connected to ORACLE.
NB: the same can happen when you create a common user in the CDB while a PDB that doesn’t meet the criteria for the user to be created is closed. For example, if all PDBs are open and you try to create c##martin while there is a PDB without the tools tablespace, the create user command fails immediately. It will not fail if a PDB doesn’t have a tools tablespace but is closed at the time the user is created; in that case you will get the same error as shown above when you later try to open that PDB.
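To illustrate the NB above: with a PDB open that lacks the TOOLS tablespace, a command like the following fails straight away (a sketch using a hypothetical user c##test; the exact error stack may vary, but ORA-959 is the core complaint):

SQL> create user c##test identified by xxxxxxxxxx
  2  default tablespace tools
  3  container = ALL;
create user c##test identified by xxxxxxxxxx
*
ERROR at line 1:
ORA-65048: error encountered when processing the current DDL statement in
pluggable database PDB10
ORA-00959: tablespace 'TOOLS' does not exist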
Dead easy: a common user cannot be created due to a missing tablespace. So let’s create it and see if we can open the PDB:
SQL> conn / as sysdba
Connected.
SQL> alter session set container = PDB10;

Session altered.

SQL> create tablespace tools datafile size 10m;

Tablespace created.

SQL> conn / as sysdba
Connected.
SQL> alter pluggable database pdb10 close;

Pluggable database altered.

SQL> alter pluggable database pdb10 open;

Pluggable database altered.

SQL> select name,open_mode,restricted from v$pdbs
  2  where name = 'PDB10';

NAME                           OPEN_MODE  RES
------------------------------ ---------- ---
PDB10                          READ WRITE NO
Fixed! But I would still argue that a different approach, a “gold image PDB”, might be better suited for production work than a clone from the seed.
A potentially better Approach
I personally think that creating a PDB from the seed is not something that should be done in production, but that is just a personal opinion…
The problem you just read about could be avoided with a Golden Image CDB/PDB. I wrote about a way to create such a golden database image in my last post. Following that procedure I created CDB3 using dbca, feeding it my template. It contains the goldenImage PDB, which has been created as follows:
SQL> select sys_context('userenv','con_name') as con_name, name, cdb
  2  from v$database;

CON_NAME                       NAME      CDB
------------------------------ --------- ---

SQL> create pluggable database goldenImage
  2  admin user pdbadmin identified by xxxxxxxx
  3  default tablespace users datafile size 10m
  4  storage (maxsize 20G);

Pluggable database created.

SQL> alter pluggable database goldenImage open;

Pluggable database altered.

SQL> alter session set container = goldenImage;

Session altered.

SQL> create tablespace tools datafile size 10m;

Tablespace created.
CDB3 also features a user C##MARTIN which has been created exactly as in CDB1:
SQL> select u.username, u.account_status, nvl(p.name, 'CDB$ROOT') con_name
  2  from cdb_users u, v$pdbs p
  3  where u.username = 'C##MARTIN'
  4  and u.con_id = p.con_id(+);

USERNAME             ACCOUNT_STATUS       CON_NAME
-------------------- -------------------- ------------------------------
C##MARTIN                                 GOLDENIMAGE
With everything in place I can go about cloning goldenImage and avoid the problem I wrote about in the first part of this post altogether.
SQL> create pluggable database pdb10 from goldenImage;

Pluggable database created.

SQL> alter pluggable database pdb10 open;

Pluggable database altered.

SQL> select cause,message from pdb_plug_in_violations where name = 'PDB10';

no rows selected

SQL> conn c##martin/xxxxxxxxxx@localhost/pdb10
Connected.
SQL> show con_name

CON_NAME
------------------------------
PDB10
Standards win again.
Prompted by comments made by readers about my posts describing how to add a second SCAN in 12c Release 1 and problems regarding the listener_networks parameter I thought it was time to revisit the situation. I’m still running Oracle Linux 7.1/UEK3 (although that should not matter) but upgraded to 12.1.0.2.5. This is the release that is reportedly showing the odd behaviour. I don’t recall my exact version when I wrote the original posts back in April 2014, but by looking at them I think it all worked ok at the time. Here is my current patchlevel after the upgrade to the troublesome PSU.
SQL> select patch_id,description,status from DBA_REGISTRY_SQLPATCH;

  PATCH_ID DESCRIPTION                                                                      STATUS
---------- -------------------------------------------------------------------------------- ---------------
           Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
           Database Patch Set Update : 12.1.0.2.5 ()
This is not the latest patch level! After having seen a presentation by Ludovico Caldara (@ludodba) just this week I would think that the April 2016 patch which was current at the time of writing is the one you want to be on :) I have an upgrade to the April 2016 PSU planned but first wanted to test against 12.1.0.2.5 to see if I could reproduce the issue.
Problem Statement
The parameter listener_networks should be populated automatically by CRS when the database comes up, and indeed you can see evidence of that in the respective node’s crsd_oraagent_oracle.trc file. However, with 2 SCANs present, there is something not quite right. You can see this in the usual locations:
– SCAN listeners
– listener_networks parameter
– CRSD trace
The most obvious clue is that you cannot connect to the database any more using one of the SCANs. To avoid you having to flick back and forth between my posts, here is the network configuration again:
[oracle@ron12cprinode2 ~]$ srvctl config scan -netnum 1
SCAN name: ron12cpri-scan, Network: 1
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 192.168.100.65
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: 192.168.100.66
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 192.168.100.64
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
[oracle@ron12cprinode2 ~]$ srvctl config scan -netnum 2
SCAN name: ron12cpri-dg-scan, Network: 2
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 192.168.102.66
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: 192.168.102.64
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 192.168.102.65
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
It’s essentially using the 192.168.100/24 network for the “public” traffic and 192.168.102/24 for Data Guard. I still use my RAC One Node database RON, which is currently active on node 2. All of my SCAN listeners should know about its services, RON_SRV and RON_DG_SRV. However, that’s not the case, as a quick check reveals:
[oracle@ron12cprinode1 ~]$ lsnrctl status LISTENER_SCAN1

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016
Uptime                    0 days 0 hr. 34 min. 35 sec
Trace Level               off
Security                  ON: Local OS Authentication
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.65)(PORT=1521)))
Services Summary...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "RON" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RONXDB" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_DG_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "ron12cpri" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@ron12cprinode1 ~]$ lsnrctl status DGSCANLSNR_SCAN2_NET2

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=DGSCANLSNR_SCAN2_NET2)))
STATUS of the LISTENER
------------------------
Alias                     DGSCANLSNR_SCAN2_NET2
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016
Uptime                    0 days 0 hr. 34 min. 41 sec
Trace Level               off
Security                  ON: Local OS Authentication
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/dgscanlsnr_scan2_net2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=DGSCANLSNR_SCAN2_NET2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.102.64)(PORT=1521)))
The listener supports no services
The command completed successfully
[oracle@ron12cprinode1 ~]$
Before generating the above output I specifically “reset” the listener_networks settings on both instances, and ensured that they were created dynamically. After the database restart I couldn’t make out ANY entry for listener_networks:
SQL> select inst_id,name,value from gv$parameter where name = 'listener_networks';

   INST_ID NAME                           VALUE
---------- ------------------------------ ------------------------------
         1 listener_networks

SQL> select inst_id,name,value from gv$spparameter where name = 'listener_networks';

   INST_ID NAME                           VALUE
---------- ------------------------------ ------------------------------
         1 listener_networks
Nothing at all. I manage to reach the system using the RON_SRV service that’s known to the first (public) network’s SCAN:
[oracle@ron12cprinode1 ~]$ sqlplus a/b@ron12cpri-scan/RON_SRV

SQL*Plus: Release 12.1.0.2.0 Production on Sat May 14 20:18:02 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

Enter user-name:
[oracle@ron12cprinode1 ~]$
The same test failed for connections against ron12cpri-dg-scan, stating that the listener didn’t know about the service. Checking the CRSD trace (on the node the instance runs!) I could see the reason:
19:47:49.637611 : USRTHRD:: {1:} Endp=ron12cpri-dg-scan:1521
19:47:49.637638 : USRTHRD:: {1:} Final Endp=(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521)), remoteEndp= ron12cpri-scan:1521','ron12cpri-scan:1521','ron12cpri-scan:1521
19:47:49.637662 : USRTHRD:: {1:} sqlStmt = ALTER SYSTEM SET REMOTE_LISTENER=' ron12cpri-scan:1521','ron12cpri-scan:1521','ron12cpri-scan:1521' SCOPE=MEMORY SID='RON_1' /* db agent *//* {1:} */
19:47:49.645739 : USRTHRD:: {1:} sqlStmt = ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1' /* db agent *//* {1:} */
19:47:49.655035 : USRTHRD:: {1:} ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00119: invalid specification for system parameter LISTENER_NETWORKS
ORA-00137: invalid LISTENER_NETWORKS specification with NAME=net2
ORA-00138: all addresses specified for attribute REMOTE_LISTENER are invalid
ORA-00132: syntax error or unresolved network name 'ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521'
19:47:49.655191 : USRTHRD:: {1:} DbAgent::DedicatedThread::run setRemoteListener Exception OCIException
19:47:49.655207 : USRTHRD:: {1:} ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00119: invalid specification for system parameter LISTENER_NETWORKS
ORA-00137: invalid LISTENER_NETWORKS specification with NAME=net2
ORA-00138: all addresses specified for attribute REMOTE_LISTENER are invalid
ORA-00132: syntax error or unresolved network name 'ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521'
Looking at this output it appears that setting the remote_listener worked, although I thought we’d only set the host once and not three times. This looks fishy. It appears to work though, as confirmed by v$parameter and the fact that I can connect to the system.
Interestingly, setting listener_networks fails with an ORA-02097: parameter cannot be modified because specified value is invalid. This makes sense: whitespace is missing in the alter system command, and even with the spaces in place the command would still fail. Trying manually confirms that thought:
SQL> ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1';
ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1'
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00119: invalid specification for system parameter LISTENER_NETWORKS
ORA-00137: invalid LISTENER_NETWORKS specification with NAME=net2
ORA-00138: all addresses specified for attribute REMOTE_LISTENER are invalid
ORA-00132: syntax error or unresolved network name 'ron12cpri-dg-scan:1521
ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521'
OK – that’s all I need to know. When changing the command to look like what I thought it should look like in the first place (the remote listener specifies only 1 host:port) it works:
SQL> ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1';
System altered.
System altered.
What does that mean? Back to my SCAN listeners again:
[oracle@ron12cprinode1 ~]$ lsnrctl status LISTENER_SCAN1

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016
Uptime                    0 days 0 hr. 45 min. 40 sec
Trace Level               off
Security                  ON: Local OS Authentication
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.65)(PORT=1521)))
Services Summary...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "RON" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RONXDB" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_DG_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "ron12cpri" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
No negative change there, but would the DG_SCAN listener also pick it up?
[oracle@ron12cprinode1 ~]$ lsnrctl status DGSCANLSNR_SCAN2_NET2

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=DGSCANLSNR_SCAN2_NET2)))
STATUS of the LISTENER
------------------------
Alias                     DGSCANLSNR_SCAN2_NET2
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016
Uptime                    0 days 0 hr. 45 min. 39 sec
Trace Level               off
Security                  ON: Local OS Authentication
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/dgscanlsnr_scan2_net2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=DGSCANLSNR_SCAN2_NET2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.102.64)(PORT=1521)))
Services Summary...
Service "RON" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RONXDB" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_DG_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
The command completed successfully
Well it seems it did. Now the question is: can I connect?
[oracle@ron12cprinode1 ~]$ sqlplus system/secretPWD@ron12cpri-scan/RON_SRV
SQL*Plus: Release 12.1.0.2.0 Production on Sat May 14 20:08:45 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
ORA-28002: the password will expire within 7 days
Last Successful login time: Sat May 14 :28 +01:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
[oracle@ron12cprinode1 ~]$ sqlplus system/secretPWD@ron12cpri-dg-scan/RON_DG_SRV
SQL*Plus: Release 12.1.0.2.0 Production on Sat May 14 20:08:54 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
ORA-28002: the password will expire within 7 days
Last Successful login time: Sat May 14 :45 +01:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
Well, I can! So that should solve the problem for the active instance; however, problems are bound to happen when the instance restarts. Since I don’t really have control over the instance name in RAC One Node (RON_1 can be started on node 1 and node 2) I can’t hard-code the value for listener_networks into the spfile. As an end result I’d lock myself out just like CRS did. This is likely a similar issue for multi-node RAC using policy-managed databases.
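Should you ever end up with a stray value persisted in the spfile, getting rid of it again is simple enough (a sketch; run as SYSDBA and let CRS manage the in-memory value afterwards):

SQL> alter system reset listener_networks scope=spfile sid='*';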
I have repeated the test with the latest version of the stack (upgraded in place), and got the same result. Here are the version numbers:
[oracle@ron12cprinode2 ~]$ /u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch/opatch lspatches
Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
OCW Patch Set Update : 12.1.0.2.160419 (22502555)
Database Patch Set Update : 12.1.0.2.160419 (22291127)

OPatch succeeded.

[oracle@ron12cprinode2 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch lspatches
Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
OCW Patch Set Update : 12.1.0.2.160419 (22502555)
Database Patch Set Update : 12.1.0.2.160419 (22291127)

OPatch succeeded.
SQL> select action_time, patch_id, description, status from DBA_REGISTRY_SQLPATCH order by action_time;

ACTION_TIME                                PATCH_ID DESCRIPTION                                                              STATUS
---------------------------------------- ---------- ------------------------------------------------------------------------ ---------------
29-OCT-15 12.46.55.763581                           Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
29-OCT-15 12.46.55.939750                           Database Patch Set Update : 12.1.0.2.5 ()
14-MAY-16 21.32.15.211167                           Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)
14-MAY-16 21.32.15.233105                           Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
14-MAY-16 21.32.15.591460                           Database Patch Set Update : 12.1.0.2.160419 (22291127)
The same values for remote_listener and listener_networks as with 12.1.0.2.5 have been observed, and the error about setting listener_networks in the CRSD log was identical to the previous release. I guess that needs to be raised with Oracle …
While preparing to teach a class this week I have had the time to look into the In Memory (cost) option a bit closer. I noticed a few interesting things along the way and would like to share one of these here.
Background
One of the questions I was asking myself was:
“What happens if I scan segments that are within the IM area, and some are not?”
I was primarily thinking of joins in a DWH environment, but in order to keep the test case simple enough and reproducible I decided to go with a partitioned table where the current partition is assigned to the IMCS and the rest is not. For this test I’m relying on the Swingbench SH schema. All of this runs on my laptop in a VM, so I had to be selective when it comes to indexes. I also chose to NOT partition the tables at this stage because I wanted to choose my own partitioning scheme. For reference, here is the command that created the SH schema:
[oracle@oraclelinux7 bin]$ ./shwizard -cl -create -cs //localhost/sh_pdb \
> -dba system -dbap somePassword -nopart -pkindexes -scale 1 -tc 2 -u sh -p sh
============================================
Datagenerator Run Stats
============================================
Connection Time                        0:00:00.005
Data Generation Time                   0:04:35.890
DDL Creation Time                      0:00:14.063
Total Run Time                         0:04:49.962
Rows Inserted per sec
Data Generated (MB) per sec
Actual Rows Generated                  17,848,007
I wanted to create a range partitioned version of the sales table. After the SH wizard completed, the data distribution was as shown:
SQL> select count(*), trunc(time_id,'yy') year from sales group by trunc(time_id,'yy');

  COUNT(*) YEAR
---------- ---------

19 rows selected.
So based on this information I can create a table that has data from 2013 in the IMCS and everything else will be excluded from it. For the sake of the following discussion 2013 is considered the “current” partition. Here is the partitioned sales table’s DDL with my changes to enable my partitioning scheme.
CREATE TABLE SALES_PART
(
  PROD_ID            NUMBER        NOT NULL ENABLE,
  CUST_ID            NUMBER        NOT NULL ENABLE,
  TIME_ID            DATE          NOT NULL ENABLE,
  CHANNEL_ID         NUMBER        NOT NULL ENABLE,
  PROMO_ID           NUMBER        NOT NULL ENABLE,
  QUANTITY_SOLD      NUMBER(10,2)  NOT NULL ENABLE,
  SELLER             NUMBER(6,0)   NOT NULL ENABLE,
  FULFILLMENT_CENTER NUMBER(6,0)   NOT NULL ENABLE,
  COURIER_ORG        NUMBER(6,0)   NOT NULL ENABLE,
  TAX_COUNTRY        VARCHAR2(3)   NOT NULL ENABLE,
  TAX_REGION         VARCHAR2(3),
  AMOUNT_SOLD        NUMBER(10,2)  NOT NULL ENABLE
)
partition BY range (time_id)
(
  partition p_old  VALUES less than (DATE ''),
  partition p_     VALUES less than (DATE ''),
  partition p_2011 VALUES less than (DATE ''),
  partition p_2012 VALUES less than (DATE ''),
  partition p_2013 VALUES less than (DATE '') inmemory
)
TABLESPACE SH;
SQL> alter session enable parallel dml;

Session altered.

SQL> insert /*+ parallel append */ into sales_part select /*+ parallel */ * from sales;

rows created.

SQL> commit;

Commit complete.
SQL> exec dbms_stats.gather_table_stats(user,'SALES_PART')
PL/SQL procedure successfully completed.
SQL> select partition_name, num_rows, inmemory
  2  from user_tab_partitions
  3  where table_name = 'SALES_PART'
  4  order by partition_position;

PARTITION_NAME                   NUM_ROWS INMEMORY
------------------------------ ---------- --------
                                  1612443 DISABLED
                                  7369918 DISABLED
                                  1690302 DISABLED
                                  2028048 DISABLED
                                  2028048 ENABLED
So that should give me what I needed. The IMCS was now populated with the segment as well:
SQL> select segment_name, partition_name, bytes_not_populated, populate_status from v$im_user_segments;

SEGMENT_NAME         PARTITION_NAME                 BYTES_NOT_POPULATED POPULATE_
-------------------- ------------------------------ ------------------- ---------
SALES_PART           P_2013                                           0 COMPLETED
With the setup work complete I am ready to test. First of all, a simple SQL trace should show me what is happening. Note that I am specifically targeting the “current” partition.
SQL> sho user
USER is "SH"
SQL> alter session set events 'sql_trace level 8';

Session altered.

SQL> select /* current_partition */ count(*) from sales_part
  2  where time_id > DATE ''
  3  and promo_id = 316;

  COUNT(*)
----------

SQL> alter session set events 'sql_trace off';

Session altered.

SQL> select value from v$diag_info where name like 'Def%';

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_14034.trc
The tkprof’d trace shows this result:
********************************************************************************
select /* current_partition */ count(*) from sales_part
where time_id > DATE ''
and promo_id = 316

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 117
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
                                  SORT AGGREGATE (cr=3 pr=0 pw=0 time=8441 us)
                                  PARTITION RANGE SINGLE PARTITION: 5 5 (cr=3 pr=0 pw=0 time=7583 us cost=214 size=48252 card=4021)
                                  TABLE ACCESS INMEMORY FULL SALES_PART PARTITION: 5 5 (cr=3 pr=0 pw=0 time=6972 us cost=214 size=48252 card=4021)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client
  SQL*Net message from client
********************************************************************************
As you can see, my partitioning strategy paid off: only the 5th partition is accessed (that’s p_2013, or the “current” partition based on the data). This partition is the one entirely found in the IMCS.
A result like the one above is what I’d expect and the access path “TABLE ACCESS INMEMORY FULL” does it justice. But now the actual reason for the blog post: mixing scans against segments in memory and on disk. Logging on again I ran the same query but without the restriction to 2013.
SQL> alter session set events 'sql_trace level 8';

Session altered.

SQL> select /* every_partition */ count(*) from sales_part
  2  where promo_id = 316;

  COUNT(*)
----------

SQL> alter session set events 'sql_trace off';

Session altered.

SQL> select value from v$diag_info where name like 'Def%';

VALUE
----------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_14437.trc
The tkprof’d result is shown here:
select /* every_partition */ count(*) from sales_part
where promo_id = 316

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 117
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
                                  SORT AGGREGATE (cr=94901 pr=94878 pw=0 time=3848392 us)
                                  PARTITION RANGE ALL PARTITION: 1 5 (cr=94901 pr=94878 pw=0 time=4503583 us cost=26217 size=117128 card=29282)
                                  TABLE ACCESS INMEMORY FULL SALES_PART PARTITION: 1 5 (cr=94901 pr=94878 pw=0 time=4254867 us cost=26217 size=117128 card=29282)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client
  Disk file operations I/O
  db file sequential read
  direct path read
  SQL*Net message from client
********************************************************************************
Again the access path is clearly indicated as “TABLE ACCESS INMEMORY FULL SALES_PART”. This time it references all 5 partitions (which is expected since I didn’t have a date in my predicate list). And I can see direct path reads plus some other I/O related information! Direct Path Reads are quite likely going to be Smart Scans on Exadata by the way…
Of course “TABLE ACCESS INMEMORY FULL” is correct (because one partition is accessed that way), but I guess you cannot see the disk-based I/O against the other segments in the regular execution plan as shown in SQL*Plus, for example.
Is there any other instrumentation I could use?
The tkprof output shows that data retrieval is possible for segments that are not (entirely) found in the IMCS. Where else could I learn about this fact? Session statistics (V$STATNAME, V$SESSTAT, etc.) are another useful source of information. I am a great fan of session snapper (who isn’t?). Snapper can be used to display the change in the session counters while the session you are monitoring is running. Another option is Adrian Billington’s mystats script, which can help you capture the changes in session statistics during the execution of a SQL command. I executed the same query again, sandwiched into calls to mystats. Note that the statistics shown next do not match those of the execution of the query above. I also tried to limit the output to the relevant counters; the output is sorted by statistic name.
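For reference, the invocation pattern for mystats is roughly the following (from memory, so check the script header for the exact options it supports):

SQL> @mystats start
SQL> select /* every_partition */ count(*) from sales_part where promo_id = 316;
SQL> @mystats stop t=1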
------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Statistic Name                                                               Value
---------------------------------------------------------------- ----------------
IM scan CUs columns accessed
IM scan CUs columns theoretical max
IM scan CUs memcompress for query low
IM scan CUs predicates applied
IM scan CUs predicates received
IM scan CUs split pieces
IM scan bytes in-memory                                                 47,683,308
IM scan bytes uncompressed                                              99,118,124
IM scan rows
IM scan rows projected
IM scan rows valid
IM scan segments minmax eligible
consistent gets
consistent gets direct
consistent gets examination
consistent gets examination (fastpath)
consistent gets from cache
consistent gets pin
consistent gets pin (fastpath)
logical read bytes from cache
no work - consistent read gets
physical read IO requests
physical read bytes                                                    777,191,424
physical read total IO requests
physical read total bytes                                              777,191,424
physical read total multi block requests
physical reads
physical reads direct
session logical reads
session logical reads - IM
table scan blocks gotten
table scan disk non-IMC rows gotten                                     12,700,711
table scan rows gotten                                                  14,728,759
table scans (IM)
table scans (direct read)
table scans (long tables)
temp space allocated (bytes)
workarea executions - optimal
workarea memory allocated

------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================
The highlights are:
You can see the compression ratio of the data in the IMCS (IM scan bytes in-memory and IM scan bytes uncompressed)
All rows in the partition were valid (the number of IM Scan rows equals the number of rows in the partition)
One segment was scanned using IM, and 4 were scanned as direct (path) reads. A total of 5 segments were scanned
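If you don’t have mystats or snapper at hand, the underlying counters are easy enough to query for your own session; for example:

SQL> select sn.name, st.value
  2  from v$statname sn, v$mystat st
  3  where sn.statistic# = st.statistic#
  4  and (sn.name like 'IM scan%' or sn.name like 'table scans%')
  5  and st.value > 0;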
Hope this helps!
In part I of this article I demonstrated an example of how to use a PL/SQL package to monitor a workload in order to identify correlated columns. Such columns are good candidates for the creation of extended statistics since the optimiser does not assume that information stored in different columns may be related. Before starting my tests I invoked DBMS_STATS.SEED_COL_USAGE, which in turn increased Oracle’s attention level when trying to find correlated columns. Eventually these were identified (as shown in the output of DBMS_STATS.REPORT_COL_USAGE) and a subsequent call to DBMS_STATS.GATHER_TABLE_STATS caused extended statistics to be created, including histograms. This is one way you can get extended statistics automatically, but it requires you to enable monitoring of a workload by invoking a PL/SQL API. In other words, it is a conscious effort.
More Ways to Extended Statistics
The other way of ending up with extended statistics is more subtle as it does not require user intervention. As described in the documentation (Database SQL Tuning Guide, chapter 13 “Managing Optimizer Statistics: Advanced Topics”) there is another way to gather extended statistics automatically. It is based on statistics feedback and SQL Plan Directives, both new to 12c (actually statistics feedback is not quite so new, it seems to be an evolution of cardinality feedback).
To start with a clean sheet I dropped the table I had used previously, which got rid of all the extended stats and SQL Plan Directives (SPD) from the previous example. To be really thorough I also flushed the shared pool.
SQL> drop table martin.customers;

Table dropped.

SQL> select * from dba_stat_extensions where owner = 'MARTIN';

no rows selected

SQL> select count(*) from DBA_SQL_PLAN_DIR_OBJECTS where owner = 'MARTIN';

  COUNT(*)
----------

SQL> alter system flush shared_pool;

System altered.
BTW I have switched environments to an Oracle Restart 12.1.0.2 database with the JUL 2015 PSU applied. The second patch you see here is the OJVM patch.
SQL> select patch_id, version, action from dba_registry_sqlpatch;

  PATCH_ID VERSION              ACTION
---------- -------------------- ---------------
Back to creating the table… I am using the same technique as before, but this time without the call to DBMS_STATS.SEED_COL_USAGE:
SQL> create table martin.customers as select * from sh.customers;

Table created.

SQL> select num_rows, last_analyzed from user_tables where table_name = 'CUSTOMERS';

  NUM_ROWS LAST_ANALYZED
---------- -------------------
I love the 12c feature where stats are gathered during a CTAS operation …
And now a few queries to tickle the optimiser. Apologies for the wide output, but that makes it so much easier to use RLWRAP and command line history. Credit again goes to the original source of the query examples.
SQL> select /*+ gather_plan_statistics */ count(*) from customers where cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id = 52790;

  COUNT(*)
----------

SQL> select * from table(dbms_xplan.display_cursor(format => 'allstats last'));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
SQL_ID  34zmr3acgz06g, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers where
cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id
= 52790

Plan hash value:

---------------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |        |        |      1 |00:00:00.03 |         |        |
|   1 |  SORT AGGREGATE    |           |        |        |      1 |00:00:00.03 |         |        |
|   2 |   TABLE ACCESS FULL| CUSTOMERS |        |        |    932 |00:00:00.04 |         |        |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

22 rows selected.
There is nothing too new here: the optimiser’s cardinality estimate is not great. Following the example from the SQL Tuning Guide, chapter 13 (btw country_id is a number, not a varchar2 field), we can now check whether anything fancy is going to happen next. And sure enough there is:
SQL> select sql_id, child_number, is_obsolete, is_reoptimizable, is_resolved_adaptive_plan from v$sql where sql_id = '34zmr3acgz06g';

SQL_ID        CHILD_NUMBER I I I
------------- ------------ - - -
34zmr3acgz06g
The second flag, is_reoptimizable, is interesting. It indicates the optimiser’s intention to apply information found at run time to the next execution of the cursor. Let’s run the original query again.
SQL> select /*+ gather_plan_statistics */ count(*) from customers where cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id = 52790;

  COUNT(*)
----------
So what did that do? Let’s have a look at the diagnostics:
SQL> select sql_id, child_number, is_obsolete, is_reoptimizable, is_resolved_adaptive_plan from v$sql where sql_id = '34zmr3acgz06g';

SQL_ID        CHILD_NUMBER I I I
------------- ------------ - - -
34zmr3acgz06g
34zmr3acgz06g
SQL> select * from table(dbms_xplan.display_cursor(sql_id=>'34zmr3acgz06g',cursor_child_no=>1,format => 'allstats last'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  34zmr3acgz06g, child number 1
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers where
cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id
= 52790

Plan hash value:

------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |        |        |      1 |00:00:00.01 |         |
|   1 |  SORT AGGREGATE    |           |        |        |      1 |00:00:00.01 |         |
|   2 |   TABLE ACCESS FULL| CUSTOMERS |        |        |    932 |00:00:00.01 |         |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - statistics feedback used for this statement

26 rows selected.
As you can see another child cursor has been created. Why?
SQL> select xmltype(reason) from v$sql_shared_cursor where dbms_lob.getlength(reason) > 1 and sql_id = '34zmr3acgz06g';

XMLTYPE(REASON)
-----------------------------------------------------------------------------------------------------------------------------
<ChildNode>
  <ChildNumber>0</ChildNumber>
  <ID>49</ID>
  <reason>Auto Reoptimization Mismatch(1)</reason>
  <size>3x4</size>
  <kxscflg>32</kxscflg>
  <kxscfl4>4194560</kxscfl4>
  <dnum_kksfcxe>262144</dnum_kksfcxe>
</ChildNode>
Ah, the cursor was created because of a reoptimisation mismatch. Checking against v$sql you can see that the optimiser is now happy with the cardinality estimate (it is not anticipating further reoptimisation for this statement). Note that depending on the query’s complexity many more child cursors can be created as part of the reoptimisation. Also note that for child cursor 1 the A-Rows and E-Rows are perfect matches. Our work here is done. But wait: what about that column usage?
SQL> select dbms_stats.report_col_usage(user, 'CUSTOMERS') from dual;

DBMS_STATS.REPORT_COL_USAGE(USER,'CUSTOMERS')
--------------------------------------------------------------------------------
LEGEND:
.......

EQ         : Used in single table EQuality predicate
RANGE      : Used in single table RANGE predicate
LIKE       : Used in single table LIKE predicate
NULL       : Used in single table is (not) NULL predicate
EQ_JOIN    : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER     : Used in single table FILTER predicate
JOIN       : Used in JOIN predicate
GROUP_BY   : Used in GROUP BY expression
...............................................................................

###############################################################################

COLUMN USAGE REPORT FOR MARTIN.CUSTOMERS
........................................

1. COUNTRY_ID                          : EQ
2. CUST_CITY                           : EQ
3. CUST_STATE_PROVINCE                 : EQ
###############################################################################

SQL> select * from dba_stat_extensions where owner = 'MARTIN';

no rows selected
Nothing here except some equality predicates! This is the big difference to part I of this article where the filter on the 3 columns was detected thanks to the call to DBMS_STATS.SEED_COL_USAGE.
It took me a while to connect the dots and understand what needed to be done next. In the meantime, hidden from view, and in the background, Oracle created a SQL Plan Directive for that table which I failed to notice for a while. SQL Plan Directives are not persisted immediately, the documentation states they are written to SYSAUX every 15 minutes. I must have thought about this for more than 15 minutes, as you will see shortly. If you are a bit more impatient then force the write of the SPD to disk using DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE now.
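The call itself could not be simpler:

SQL> exec dbms_spd.flush_sql_plan_directive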
To verify if there were any SPDs I used a query taken more or less literally from the documentation, and yes, there are:
SQL> SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OBJECT_NAME,
  2         o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON
  3  FROM   DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o
  4  WHERE  d.DIRECTIVE_ID=o.DIRECTIVE_ID
  5  AND    o.OWNER = user
  6  ORDER BY 1,2,3,4,5;

DIR_ID                         OBJECT_NAME          COL_NAME             OBJECT TYPE             STATE    REASON
------------------------------ -------------------- -------------------- ------ ---------------- -------- ------------------------------------
                               CUSTOMERS            COUNTRY_ID           COLUMN DYNAMIC_SAMPLING USABLE   SINGLE TABLE CARDINALITY MISESTIMATE
                               CUSTOMERS            CUST_CITY            COLUMN DYNAMIC_SAMPLING USABLE   SINGLE TABLE CARDINALITY MISESTIMATE
                               CUSTOMERS            CUST_STATE_PROVINCE  COLUMN DYNAMIC_SAMPLING USABLE   SINGLE TABLE CARDINALITY MISESTIMATE
                               CUSTOMERS                                 TABLE  DYNAMIC_SAMPLING USABLE   SINGLE TABLE CARDINALITY MISESTIMATE
Once you can see the SPD in the dictionary, you can also see them in action when you run a statement with the same predicates but different SQL_ID, as in this example.
SQL> select /*+ gather_plan_statistics new_parse_please */ count(*) from customers where cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id = 52790;

  COUNT(*)
----------
SQL> select * from table(dbms_xplan.display_cursor(format => 'allstats last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
SQL_ID  85qvryzgzj57q, child number 0
-------------------------------------
select /*+ gather_plan_statistics new_parse_please */ count(*) from
customers where cust_city = 'Los Angeles' and cust_state_province =
'CA' and country_id = 52790

Plan hash value:

------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |        |        |      1 |00:00:00.01 |         |
|   1 |  SORT AGGREGATE    |           |        |        |      1 |00:00:00.01 |         |
|   2 |   TABLE ACCESS FULL| CUSTOMERS |        |        |    932 |00:00:00.01 |         |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

27 rows selected.
The SQL Plan directive stays even if you flush the cursor with SQL ID 34zmr3acgz06g from the cursor cache or supply a different set of predicates. I used
SQL> @flush_sql
Enter value for sql_id: 34zmr3acgz06g
where sql_id like '&sql_id';
where sql_id like '34zmr3acgz06g';
PL/SQL procedure successfully completed.
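The flush_sql script itself is not shown in full here; a minimal version could look like this (a sketch based on DBMS_SHARED_POOL.PURGE, which requires execute privileges on that package):

declare
  l_name varchar2(200);
begin
  -- address and hash_value identify the cursor to purge
  select address || ',' || hash_value
    into l_name
    from v$sqlarea
   where sql_id like '&sql_id';
  dbms_shared_pool.purge(l_name, 'C');  -- 'C' purges a cursor
end;
/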
SQL> select sql_id, child_number, is_obsolete, is_reoptimizable, is_resolved_adaptive_plan from v$sql where sql_id = '34zmr3acgz06g';

no rows selected

SQL> select /*+ gather_plan_statistics */ count(*) from customers where cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id = 52790;

  COUNT(*)
----------
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
SQL_ID  34zmr3acgz06g, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers where
cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id
= 52790

Plan hash value:

------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |        |        |      1 |00:00:00.01 |         |
|   1 |  SORT AGGREGATE    |           |        |        |      1 |00:00:00.01 |         |
|   2 |   TABLE ACCESS FULL| CUSTOMERS |        |        |    932 |00:00:00.01 |         |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

27 rows selected.
SQL> select /*+ gather_plan_statistics */ count(*) from customers where cust_city = 'Palmdale' and cust_state_province = 'FL' and country_id = 52790;

  COUNT(*)
----------
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
SQL_ID  dbkfpchpfwap3, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers where
cust_city = 'Palmdale' and cust_state_province = 'FL' and country_id =
52790

Plan hash value:

------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |        |        |      1 |00:00:00.01 |         |
|   1 |  SORT AGGREGATE    |           |        |        |      1 |00:00:00.01 |         |
|   2 |   TABLE ACCESS FULL| CUSTOMERS |        |        |    250 |00:00:00.02 |         |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("CUST_CITY"='Palmdale' AND "CUST_STATE_PROVINCE"='FL' AND
              "COUNTRY_ID"=52790))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

27 rows selected.
And NOW you get the update to REPORT_COL_USAGE:
SQL> select dbms_stats.report_col_usage(user, 'CUSTOMERS') from dual;

DBMS_STATS.REPORT_COL_USAGE(USER,'CUSTOMERS')
--------------------------------------------------------------------------------
LEGEND:
.......

EQ         : Used in single table EQuality predicate
RANGE      : Used in single table RANGE predicate
LIKE       : Used in single table LIKE predicate
NULL       : Used in single table is (not) NULL predicate
EQ_JOIN    : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER     : Used in single table FILTER predicate
JOIN       : Used in JOIN predicate
GROUP_BY   : Used in GROUP BY expression
...............................................................................

###############################################################################

COLUMN USAGE REPORT FOR MARTIN.CUSTOMERS
........................................

1. COUNTRY_ID                          : EQ
2. CUST_CITY                           : EQ
3. CUST_STATE_PROVINCE                 : EQ
4. (CUST_CITY, CUST_STATE_PROVINCE,
    COUNTRY_ID)                        : FILTER
###############################################################################
From here on it’s the same as in part I of this article. You gather stats, either manually like me or automatically like Oracle would, and the end result is extended statistics on the “filter” shown in line 4.
SQL> exec dbms_stats.gather_table_stats(user, 'CUSTOMERS')

PL/SQL procedure successfully completed.

SQL> select table_name, extension_name, extension from dba_stat_extensions where owner = 'MARTIN';

TABLE_NAME                     EXTENSION_NAME                           EXTENSION
------------------------------ ---------------------------------------- --------------------------------------------------------------------------------
CUSTOMERS                      SYS_STSMZ$C3AIHLPBROI#SKA58H_N           ("CUST_CITY","CUST_STATE_PROVINCE","COUNTRY_ID")
Voila! Extended statistics.
There are many more interesting implications to this whole concept, which is something I’ll write about in another post. It’s incredibly interesting; I can’t believe I’m writing optimiser posts …
PS: thanks to @Mautro for casting an eye over this article!