Michael Dinh

Michael T. Dinh, Oracle DBA

Create / Delete RAC DB Using dbca silent

Tue, 2021-06-08 17:18

Tested version.

[oracle@ol7-19-lax1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31281355;Database Release Update : 19.8.0.0.200714 (31281355)

OPatch succeeded.
[oracle@ol7-19-lax1 ~]$

Create database using dbca silent.

--- Set environment variables to be used by dbca.
export ORACLE_UNQNAME=owl
export PDB_NAME=mice
export NODE1=ol7-19-lax1
export NODE2=ol7-19-lax2
export SYS_PASSWORD=Oracle_4U
export PDB_PASSWORD=Oracle_4U

Note: -gdbName global_database_name (-gdbname oradb.example.com)

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname ${ORACLE_UNQNAME} -responseFile NO_VALUE \
  -characterSet AL32UTF8 \
  -sysPassword ${SYS_PASSWORD} \
  -systemPassword ${SYS_PASSWORD} \
  -createAsContainerDatabase true \
  -numberOfPDBs 1 \
  -pdbName ${PDB_NAME} \
  -pdbAdminPassword ${PDB_PASSWORD} \
  -databaseType MULTIPURPOSE \
  -automaticMemoryManagement false \
  -totalMemory 1024 \
  -redoLogFileSize 50 \
  -emConfiguration NONE \
  -ignorePreReqs \
  -nodelist ${NODE1},${NODE2} \
  -storageType ASM \
  -diskGroupName +DATA \
  -recoveryGroupName +RECO \
  -useOMF true \
  -asmsnmpPassword ${SYS_PASSWORD}
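Because an unset variable would silently expand to an empty string in the dbca command line, a quick guard can be run first. This is a minimal sketch; the variable list simply mirrors the export block above.

```shell
# Sketch: check that every variable the dbca command expands is set before running it;
# an unset variable would silently become an empty string on the command line.
missing=""
for v in ORACLE_UNQNAME PDB_NAME NODE1 NODE2 SYS_PASSWORD PDB_PASSWORD; do
  eval val=\"\${$v:-}\"
  if [ -z "$val" ]; then missing="$missing $v"; fi
done
if [ -n "$missing" ]; then
  echo "Unset variables:$missing" >&2
else
  echo "All dbca variables are set"
fi
```

Run it in the same shell before invoking dbca; any name it reports must be exported first.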

[oracle@ol7-19-lax1 ~]$ dbca -silent -createDatabase \
>   -templateName General_Purpose.dbc \
>   -gdbname ${ORACLE_UNQNAME} -responseFile NO_VALUE \
>   -characterSet AL32UTF8 \
>   -sysPassword ${SYS_PASSWORD} \
>   -systemPassword ${SYS_PASSWORD} \
>   -createAsContainerDatabase true \
>   -numberOfPDBs 1 \
>   -pdbName ${PDB_NAME} \
>   -pdbAdminPassword ${PDB_PASSWORD} \
>   -databaseType MULTIPURPOSE \
>   -automaticMemoryManagement false \
>   -totalMemory 1024 \
>   -redoLogFileSize 50 \
>   -emConfiguration NONE \
>   -ignorePreReqs \
>   -nodelist ${NODE1},${NODE2} \
>   -storageType ASM \
>   -diskGroupName +DATA \
>   -recoveryGroupName +RECO \
>   -useOMF true \
>   -asmsnmpPassword ${SYS_PASSWORD}
Prepare for db operation
7% complete
Copying database files
27% complete
Creating and starting Oracle instance
28% complete
31% complete
35% complete
37% complete
40% complete
Creating cluster database views
41% complete
53% complete
Completing Database Creation
57% complete
59% complete
60% complete
Creating Pluggable Databases
64% complete
80% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/owl.
Database Information:
Global Database Name:owl
System Identifier(SID) Prefix:owl
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/owl/owl.log" for further details.
[oracle@ol7-19-lax1 ~]$

Log files are located at $ORACLE_BASE/cfgtoollogs/dbca/${ORACLE_UNQNAME}

[oracle@ol7-19-lax1 ~]$ cd $ORACLE_BASE/cfgtoollogs/dbca/${ORACLE_UNQNAME}
[oracle@ol7-19-lax1 owl]$ pwd
/u01/app/oracle/cfgtoollogs/dbca/owl

[oracle@ol7-19-lax1 owl]$ ls -l
total 23576
-rw-r-----. 1 oracle oinstall    10422 Jun  8 21:20 catclust0.log
-rw-------. 1 oracle oinstall   201621 Jun  8 21:20 catclust_catcon_31776.lst
-rw-r-----. 1 oracle oinstall     2450 Jun  8 21:14 cloneDBCreation.log
-rw-r-----. 1 oracle oinstall      380 Jun  8 20:57 CloneRmanRestore.log
-rw-r-----. 1 oracle oinstall    44272 Jun  8 21:20 CreateClustDBViews.log
-rw-r-----. 1 oracle oinstall     1711 Jun  8 21:49 DBDetails.log
-rw-r-----. 1 oracle oinstall     9948 Jun  8 21:19 execemx0.log
-rw-------. 1 oracle oinstall   200759 Jun  8 21:19 execemx_catcon_31544.lst
-rw-r-----. 1 oracle oinstall      910 Jun  8 21:20 lockAccount.log
-rw-r-----. 1 oracle oinstall     9560 Jun  8 21:18 ordlib0.log
-rw-------. 1 oracle oinstall   200561 Jun  8 21:18 ordlib_catcon_31269.lst
-rw-r-----. 1 oracle oinstall      796 Jun  8 21:51 owl0.log
-rw-r-----. 1 oracle oinstall      952 Jun  8 21:34 owl.log
-rw-r-----. 1 oracle oinstall        0 Jun  8 21:33 PDBCreation.log
-rw-r-----. 1 oracle oinstall       28 Jun  8 21:34 plugDatabase1R.log
-rw-r-----. 1 oracle oinstall     4105 Jun  8 21:18 plugDatabase.log
-rw-r-----. 1 oracle oinstall    46082 Jun  8 21:33 postDBCreation.log
-rw-r-----. 1 oracle oinstall       24 Jun  8 21:34 postPDBCreation.log
-rw-r-----. 1 oracle oinstall    88296 Jun  8 21:19 postScripts.log
-rw-r-----. 1 oracle oinstall        0 Jun  8 21:50 rmanUtil
-rw-r-----. 1 oracle oinstall     1479 Jun  8 21:49 ShutdownInst.log
-rw-r-----. 1 oracle oinstall 18726912 Jun  8 20:57 tempControl.ctl
-rw-r-----. 1 oracle oinstall  2670547 Jun  8 21:34 trace.log_2021-06-08_08-55-55PM
-rw-r-----. 1 oracle oinstall  1649779 Jun  8 21:51 trace.log_2021-06-08_09-49-34PM
-rw-r-----. 1 oracle oinstall    15420 Jun  8 21:32 utlrp0.log
-rw-------. 1 oracle oinstall   200463 Jun  8 21:32 utlrp_catcon_3004.lst
[oracle@ol7-19-lax1 owl]$

Here is the created database.

[oracle@ol7-19-lax1 owl]$ srvctl config database -d owl
Database unique name: owl
Database name: owl
Oracle home: /u01/app/oracle/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/OWL/PARAMETERFILE/spfile.311.1074720717
Password file: +DATA/OWL/PASSWORD/pwdowl.298.1074718605
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: RECO,DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: owl1,owl2
Configured nodes: ol7-19-lax1,ol7-19-lax2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

[oracle@ol7-19-lax1 owl]$ srvctl status database -d owl
Instance owl1 is running on node ol7-19-lax1
Instance owl2 is running on node ol7-19-lax2

--- Newly created database was not added to oratab.
[oracle@ol7-19-lax1 owl]$ cat /etc/oratab
#Backup file is  /u01/app/oracle/product/19.0.0/dbhome_1/srvm/admin/oratab.bak.ol7-19-lax1.oracle line added by Agent
+ASM1:/u01/app/19.0.0/grid:N
hawk1:/u01/app/oracle/product/19.0.0/dbhome_1:N
[oracle@ol7-19-lax1 owl]$ 
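As a manual fix, the missing entry can be appended. This is only a sketch: the instance name owl1 and Oracle home come from the srvctl config output above, whether you register the instance name or the db_unique_name is a site convention, and /etc/oratab requires write permission (ORATAB is parameterized here so the logic can be tried against a scratch file first).

```shell
# Sketch: append the local instance to oratab if it is not already there.
ORATAB=${ORATAB:-/etc/oratab}
ENTRY="owl1:/u01/app/oracle/product/19.0.0/dbhome_1:N"
if [ -w "$ORATAB" ] && ! grep -q "^owl1:" "$ORATAB"; then
  echo "$ENTRY" >> "$ORATAB"
fi
```

The grep guard makes the append idempotent, so re-running it will not duplicate the entry.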

Delete database using dbca silent.

dbca -silent -deleteDatabase -sourceDB ${ORACLE_UNQNAME} -sysDBAUserName sys -sysDBAPassword ${SYS_PASSWORD}

[oracle@ol7-19-lax1 ~]$ dbca -silent -deleteDatabase -sourceDB ${ORACLE_UNQNAME} -sysDBAUserName sys -sysDBAPassword ${SYS_PASSWORD}

[WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
Prepare for db operation
32% complete
Connecting to database
39% complete
42% complete
45% complete
48% complete
52% complete
55% complete
58% complete
65% complete
Updating network configuration files
68% complete
Deleting instances and datafiles
77% complete
87% complete
97% complete
100% complete
Database deletion completed.
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/owl/owl0.log" for further details.
[oracle@ol7-19-lax1 ~]$

Data Pump Compatible Version

Fri, 2021-06-04 19:51

Import failed as shown below:

Import: Release 18.0.0.0.0 - Production on Fri Jun 4 13:07:19 2021
Version 18.6.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
ORA-39002: invalid operation
ORA-39358: Export dump file version 18.0.0.0 not compatible with target version 12.1.0.2.0

Here are the compatible settings for source and target.

Source is 18.6.0.0.0 with compatible=18.0.0.0
Target is 18.6.0.0.0 with compatible=12.1.0.2

Run the export with the version parameter.

expdp version=12.1.0.2
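A fuller form of that export might look like the sketch below; the directory, dumpfile, log file, and schema names are hypothetical placeholders, and the connect string is elided. VERSION is set to the target's compatible value so the dump can be imported into the 12.1.0.2-compatible database.

```shell
# Sketch: export on the source with VERSION matching the target's compatible setting.
# DATA_PUMP_DIR, scott.dmp, scott_exp.log, and SCOTT are placeholder names.
VERSION=12.1.0.2
CMD="expdp directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log schemas=SCOTT version=${VERSION}"
echo "$CMD"
```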

That’s all folks.


emcli dg_verify_config create_srls

Wed, 2021-06-02 17:36

Convention.

DB_NAME=ORACLE_SID=XXXXXXX

Validate Data Guard Configuration.

$ ./dgmgrl_validate_srl.sh
======================== XXXXXXX1 ========================
XXXXXXX_PHOENIX

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (XXXXXXX_SYDNEY)        (XXXXXXX_PHOENIX)
    1         4                       4                       Insufficient SRLs
    2         4                       4                       Insufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (XXXXXXX_PHOENIX)       (XXXXXXX_SYDNEY)
    1         4                       4                       Insufficient SRLs
    2         4                       4                       Insufficient SRLs

--------------------------------------------------

Using emcli to verify Data Guard Configuration and Create Missing SRLs.



[oracle@emhost bin]$ $OMS_HOME/bin/emcli version
Oracle Enterprise Manager 13c EM CLI Version 13.4.0.0.0

[oracle@emhost bin]$ $OMS_HOME/bin/emcli login -username=sysman
Enter password :
Login successful

[oracle@emhost bin]$ $OMS_HOME/bin/emcli sync
Synchronized successfully

--- Find the target name and verify the targets do not have a domain name, as there is a bug that will cause failure.

[oracle@emhost bin]$ cd $OMS_HOME/bin/

[oracle@emhost bin]$ ./emcli get_targets -targets=rac_database -format=name:csv | grep XXXXXXX
1,Up,rac_database,XXXXXXX_PHOENIX
1,Up,rac_database,XXXXXXX_SYDNEY

--- Verify Data Guard configuration and create missing SRLs.
[oracle@emhost bin]$ ./emcli dg_verify_config -primary_target_name="XXXXXXX_SYDNEY" -primary_target_type="rac_database" -create_srls
Parsing command line arguments...
Verify Data Guard Configuration procedure VERIFY_DG_CONFIG_20210526091058954 has been submitted for primary database XXXXXXX_SYDNEY.

--- Check progress for VERIFY_DG_CONFIG_20210526091058954 
[oracle@emhost bin]$ ./emcli get_instances | grep -B1 VERIFY_DG_CONFIG_20210526091058954
GUID                              Execution GUID                    Type  Name                                Status     Owner   Instance Status
C32FCFCACD2E2650E05317279D0A7830  C32FCFCACD312650E05317279D0A7830  HA    VERIFY_DG_CONFIG_20210526091058954  Running    SYSMAN  Running

[oracle@emhost bin]$ ./emcli get_instances | grep -B1 VERIFY_DG_CONFIG_20210526091058954
GUID                              Execution GUID                    Type  Name                                Status     Owner   Instance Status
C32FCFCACD2E2650E05317279D0A7830  C32FCFCACD312650E05317279D0A7830  HA    VERIFY_DG_CONFIG_20210526091058954  Succeeded  SYSMAN  Succeeded

--- View results using GUID = C32FCFCACD2E2650E05317279D0A7830  
[oracle@emhost bin]$ ./emcli get_instance_status -instance=C32FCFCACD2E2650E05317279D0A7830 -xml -details -showJobOutput | grep -B50 "Data Guard configuration verification complete."
Processing is 0% complete.
Processing is 0% complete.
Processing is 0% complete.
Initializing
Connected to instance exasydad1x8anz-1nn7a1:XXXXXXX1
Starting alert log monitor...
Updating Data Guard link on database homepage...
Skipping verification of fast-start failover static services check.

Data Protection Settings:
  Protection mode : Maximum Performance
  Redo Transport Mode settings:
    XXXXXXX_PHOENIX: ASYNC
    XXXXXXX_SYDNEY: ASYNC

  Checking standby redo log files.....Done
   (Standby redo log files needed : 4)

Checking Data Guard status
  XXXXXXX_PHOENIX : Normal
  Write operation in progress
  XXXXXXX_SYDNEY : Normal

Checking inconsistent properties

Checking agent status
  XXXXXXX_PHOENIX
    exaphxad3x8na-kvnxa1.dbexaphoad3.phx.oraclevcn.com ... OK
    exaphxad3x8na-kvnxa2.dbexaphoad3.phx.oraclevcn.com ... OK

  XXXXXXX_SYDNEY
    exasydad1x8anz-1nn7a1.excsad1client.syd.oraclevcn.com ... OK
    exasydad1x8anz-1nn7a2.excsad1client.syd.oraclevcn.com ... OK


Checking applied log on XXXXXXX_PHOENIX...OK


Processing completed.

Standby Redo Log Files
Standby redo log files are recommended for all transport modes. 
They are required for certain features such as real-time apply and elevated protection modes.
Database Host/Cluster Size (MB) Log File Location Thread
XXXXXXX_PHOENIX  Unknown  4000.0  Oracle-managed file  2
XXXXXXX_PHOENIX  Unknown  4000.0  Oracle-managed file  1
XXXXXXX_SYDNEY   Unknown  4000.0  Oracle-managed file  2
XXXXXXX_SYDNEY   Unknown  4000.0  Oracle-managed file  1

--- The missing SRLs were created for both primary and standby.
Successfully created the required standby redo log files for all databases.
Successfully resolved the issues detected during verification.
Data Guard configuration verification complete.
$

Example of failure due to a bug or an inconsistent target name.

WARNING: 

DB_NAME=ORACLE_SID=YYYYYYY


$ $OMS_HOME/bin/emcli get_targets -targets=rac_database -format=name:csv | grep YYYYYYY
1,Up,rac_database,YYYYYYY.excsad2client.oraclevcn.com
1,Up,rac_database,YYYYYYY_ASHBURN


The message below during verify configuration is due to Bug 32252460 in 13.4, fixed in 14.1 GC.

WARNING: Broker name (YYYYYYY) and target name (YYYYYYY.excsad2client.oraclevcn.com) do not match.
Checking standby redo log files.....not checked due to broker name mismatch. 

As a workaround, the target name must match the broker name for emcli to create SRLs.
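Matching the names could be done by renaming the OEM target to the masked broker name above. This is a sketch only: I believe EM CLI provides a rename_target verb, but confirm it exists in your EM CLI version before relying on it, and review the echoed command before running anything.

```shell
# Sketch: rename the OEM target so it matches the broker name (YYYYYYY).
# rename_target is assumed available; the command is echoed, not executed.
OLD_NAME="YYYYYYY.excsad2client.oraclevcn.com"
NEW_NAME="YYYYYYY"
CMD="emcli rename_target -target_type=rac_database -target_name=$OLD_NAME -new_target_name=$NEW_NAME"
echo "$CMD"
```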

Query OEM mgmt$(target|target_properties)

Tue, 2021-06-01 16:41

TARGET_TYPE oracle_database can be a database or an instance; however, rac_database is always a database.

select t.TARGET_TYPE, t.TYPE_QUALIFIER3, count(*)
from mgmt$target_properties p, mgmt$target t
where p.TARGET_GUID=t.TARGET_GUID
and p.PROPERTY_NAME='DataGuardStatus'
group by t.TARGET_TYPE, t.TYPE_QUALIFIER3
order by 2,1 desc
;

There is 1 single-instance database, along with 105 RAC databases and 210 RAC instances.

TARGET_TYPE                    TYPE_QUALIFIER3        COUNT(*)
------------------------------ -------------------- ----------
rac_database                   DB                          105
oracle_database                DB                            1
oracle_database                RACINST                     210

A DataGuardStatus property does not mean Data Guard exists unless PROPERTY_VALUE is populated.

select 
t.TARGET_TYPE, t.TYPE_QUALIFIER3, 
NVL(REGEXP_REPLACE(p.property_value,'[[:space:]]'),'NO DataGuard') PROPERTY_VALUE, count(*)
from mgmt$target_properties p, mgmt$target t
where p.TARGET_GUID=t.TARGET_GUID
and p.PROPERTY_NAME='DataGuardStatus'
group by t.TARGET_TYPE, t.TYPE_QUALIFIER3, PROPERTY_VALUE
order by 2,1 desc
;

There are 48 RAC Primary and 49 RAC Physical Standby databases because 1 RAC database has 2 Physical Standbys.

TARGET_TYPE                    TYPE_QUALIFIER3      PROPERTY_VALUE                   COUNT(*)
------------------------------ -------------------- ------------------------------ ----------
rac_database                   DB                   NO DataGuard                            8
rac_database                   DB                   PhysicalStandby                        49
rac_database                   DB                   Primary                                48
oracle_database                DB                   NO DataGuard                            1
oracle_database                RACINST              NO DataGuard                           16
oracle_database                RACINST              PhysicalStandby                        98
oracle_database                RACINST              Primary                                96

Here’s how to determine the values for TYPE_QUALIFIER1-4

SQL> select distinct NVL(REGEXP_REPLACE(TYPE_QUALIFIER1,'[[:space:]]'),NULL) TYPE_QUALIFIER from mgmt$target order by 1;

SQL> c/TYPE_QUALIFIER1/TYPE_QUALIFIER2
SQL> c/TYPE_QUALIFIER2/TYPE_QUALIFIER3
SQL> c/TYPE_QUALIFIER3/TYPE_QUALIFIER4

SQL to gather primary and standby targets.

-- db.sql
set echo off lines 300 pages 500 trimsp on tab off
col HOST_NAME       for a30
col TARGET_TYPE     for a20
col TYPE1           for a9
col TYPE3           for a9
col TYPE4           for a9
col PROPERTY_VALUE  for a23
col PROPERTY_NAME   for a17
col TARGET_NAME     for a60
BREAK ON HOST_NAME SKIP 1 ON PROPERTY_VALUE ON TARGET_TYPE ON PROPERTY_NAME ON TYPE3
select
  REGEXP_SUBSTR(t.HOST_NAME,'[^.]+',1,1) host_name,
--  REGEXP_SUBSTR(t.TARGET_NAME,'[^.]+',1,1) target_name, t.TARGET_TYPE,
  t.TARGET_NAME, t.TARGET_TYPE,
  NVL(REGEXP_REPLACE(property_value,'[[:space:]]'), 'Primary: NO DataGuard') PROPERTY_VALUE,
  p.PROPERTY_NAME,
  TYPE_QUALIFIER1 type1, TYPE_QUALIFIER3 type3,
  (CASE TYPE_QUALIFIER4 WHEN 'FullLLFile+CDB' THEN 'CDB' WHEN 'FullLLFile' THEN 'DB' ELSE NULL END) type4
from mgmt$target_properties p, mgmt$target t
where p.TARGET_GUID=t.TARGET_GUID
and   p.PROPERTY_NAME='DataGuardStatus' -- Find Data Guard
and   t.TYPE_QUALIFIER3='DB'            -- Find Database
order by PROPERTY_VALUE desc, t.TARGET_TYPE, t.HOST_NAME, type1 ASC, type4
;

emcli dg_verify_config

Fri, 2021-05-21 20:43

I must love looking for trouble.

There are 47 RAC databases with Data Guard distributed among 7 clusters.

Being as lazy as I am, I did not want to connect to all the hosts to verify Data Guard configuration and create standby redo logs (SRL) for environments with Insufficient SRLs.

After some searching, I have found emcli dg_verify_config can be used; however, the documentation is not that great.

Currently working with Oracle Support and crossing my fingers.

$ $OMS_HOME/bin/emcli login -username=sysman
Enter password :

Login successful
$ $OMS_HOME/bin/emcli sync
Synchronized successfully

$ $OMS_HOME/bin/emcli dg_verify_config -primary_target_name="APEX18_XXXXXXX" -primary_target_type="rac_database" -verify_only
Parsing command line arguments...
Verify Data Guard Configuration procedure VERIFY_DG_CONFIG_20210520114056703 has been submitted for primary database APEX18_XXXXXXX.

$ $OMS_HOME/bin/emcli dg_verify_config -primary_target_name="APEX18_XXXXXXX" -primary_target_type="rac_database" -create_srls

The intention is to run emcli from the OMS host to verify the config and create SRLs where required.

However, there’s no documentation for how to retrieve results from verify_only at the OS level.

While I was struggling with Oracle Support, a Lead Database Consultant shared his implementation for verifying Data Guard status before performing a switchover.

Using the same concept, check for Insufficient SRLs on the standby databases.

--- Prerequisite: Data Guard Broker is implemented.

--- Find Oracle Data Guard monitor process (DMON) and write to sids.txt file.
ps -ef|grep [d]mon|grep -v ASM|sort -k8 | awk -F "_" '{print $3}' > sids.txt
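The awk split works because the DMON process is named ora_dmon_&lt;SID&gt;; splitting the entire ps line on underscores leaves the SID in field 3. A quick check on a sample line:

```shell
# Sample ps -ef line for a DMON process; the SID is the third underscore-delimited field.
line='oracle    1234     1  0 Jun01 ?        00:00:00 ora_dmon_OWL1'
echo "$line" | awk -F "_" '{print $3}'   # OWL1
```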

--- Create shell script dgmgrl_validate_srl.sh
arr=(`grep '^[A-Z].*' < sids.txt`)
for i in "${arr[@]}"
do
  echo ======================== $i ========================
  . oraenv <<< $i >/dev/null
  stby=`dgmgrl / "show configuration" | grep "Physical standby" | awk  '{print $1;}'`
  echo $stby
  dgmgrl / "validate database verbose '$stby'" | grep -B4 "Insufficient SRLs"
done


--- Run dgmgrl_validate_srl.sh
$ ./dgmgrl_validate_srl.sh
======================== ORACLE_SID ========================
XXXXXXXX_SPYBRACDR

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (XXXXXXXX_SPYBRAC)      (XXXXXXXX_SPYBRACDR)
    0         10                      0                       Insufficient SRLs
--

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (XXXXXXXX_SPYBRACDR)    (XXXXXXXX_SPYBRAC)
    0         10                      0                       Insufficient SRLs


Future Log File Groups Configuration applies when the current standby becomes the primary.

Checking is different from fixing, as 50%+ of the RAC databases have Insufficient SRLs.

Currently both processes (shell script and emcli) are deficient since only standby databases are checked.

After a switchover or failover of the primary databases, the check will have to be performed yet again.

At least using emcli dg_verify_config -create_srls simplifies the process, since it can be executed from the OMS host, if it works.
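If it does work, the whole fleet could be covered with a loop like the sketch below; primaries.txt, holding one primary target name per line, is a hypothetical file, and the commands are echoed rather than executed so they can be reviewed first.

```shell
# Sketch: print (not run) a dg_verify_config -create_srls command for every
# primary listed in primaries.txt (hypothetical file, one target name per line).
if [ -f primaries.txt ]; then
  while read -r target; do
    echo "emcli dg_verify_config -primary_target_name=\"$target\" -primary_target_type=\"rac_database\" -create_srls"
  done < primaries.txt
fi
```

Once the generated commands look right, drop the echo (or pipe the output to sh) to submit them.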

Gather Your Session Info For Killing

Thu, 2021-05-13 23:40

So there I was, running emremove.sql as part of a pre-upgrade task; however, it was taking longer than expected.

The session was stuck at the output shown below, and CTRL-C desperately did not work.

14:35:05 472  /
old  70:     IF (upper('&LOGGING') = 'VERBOSE')
new  70:     IF (upper('VERBOSE') = 'VERBOSE')

^C

^C^C

^C

I checked for blocking sessions, and there were blocking locks from SYS, which was really strange.

I made a gutsy call and killed the SYS session from the OS prompt based on the timestamp.

~ $ ps -ef|grep sysdba
oracle    57147 231962  0 May12 pts/4    00:00:00 sqlplus   as sysdba
oracle   155919 139352  0 14:34 pts/1    00:00:00 sqlplus   as sysdba
oracle   244619 216760  0 15:25 pts/5    00:00:00 grep --color=auto sysdba

~ $ kill -9 155919

As it turns out, another DBA was logged in as sysdba causing havoc.

I was lucky to have killed the correct SYS session; will you be as lucky as I was?

Based on my near disaster, it would be better to make a practice of gathering your own session info so you can kill the correct session.
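One way to make that a habit is to save the session-info query (the same one shown in the sessions below) as my.sql, so @my works from any sqlplus session. A sketch; put the file in the current directory or a directory on SQLPATH:

```shell
# Sketch: save the session-info query as my.sql so it can be run with @my.
cat > my.sql <<'EOF'
select b.sid, b.serial#, a.spid processid, b.process clientpid
from v$process a, v$session b
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and b.sid = userenv('sid')
;
EOF
```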

Here is the current session info.

SQL> @my
SQL> select b.sid, b.serial#, a.spid processid, b.process clientpid from v$process a, v$session b
  2  where a.addr = b.paddr
  3  and b.audsid = userenv('sessionid')
  4  and b.sid=userenv('sid')
  5  ;

       SID    SERIAL# PROCESSID                CLIENTPID
---------- ---------- ------------------------ ------------------------
         5        101 16428                    16427

SQL>

[oracle@ol7-112-dg1 ~]$ ps -ef|grep 16427
oracle   16427  8573  0 03:44 pts/0    00:00:00 sqlplus   as sysdba
oracle   16428 16427  0 03:44 ?        00:00:00 oraclehawk (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   16461 11677  0 03:45 pts/1    00:00:00 grep --color=auto 16427
[oracle@ol7-112-dg1 ~]$

Kill the OS process using the sqlplus PROCESSID; the client does not know the session has been killed until DML is performed.

[oracle@ol7-112-dg1 ~]$ kill -9 16428

SQL> select * from dual;
select * from dual
*
ERROR at line 1:
ORA-03135: connection lost contact
Process ID: 16428
Session ID: 5 Serial number: 101


SQL>

Another test

--- Session Info
SQL> @my
SQL> select b.sid, b.serial#, a.spid processid, b.process clientpid from v$process a, v$session b
  2  where a.addr = b.paddr
  3  and b.audsid = userenv('sessionid')
  4  and b.sid=userenv('sid')
  5  ;

       SID    SERIAL# PROCESSID                CLIENTPID
---------- ---------- ------------------------ ------------------------
         5        103 16533                    16532

SQL>

--- From another session, check waits for above session
SQL> r
  1  select NVL(s.username,'(oracle)') AS username, s.sid, s.serial#,
  2  sw.event, sw.seconds_in_wait, sw.state
  3  from v$session_wait sw, v$session s
  4  where s.sid = sw.sid and s.sid=&sid
  5*
Enter value for sid: 5
old   4: where s.sid = sw.sid and s.sid=&sid
new   4: where s.sid = sw.sid and s.sid=5

USERNAME               SID    SERIAL# EVENT                          SECONDS_IN_WAIT STATE
--------------- ---------- ---------- ------------------------------ --------------- -------------------
SYS                      5        115 SQL*Net message from client                169 WAITING

SQL>

Kill the OS process using the sqlplus CLIENTPID; feedback is immediate.

[oracle@ol7-112-dg1 ~]$ ps -ef|grep 16532
oracle   16532  8573  0 03:46 pts/0    00:00:00 sqlplus   as sysdba
oracle   16533 16532  0 03:46 ?        00:00:00 oraclehawk (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   16557 11677  0 03:47 pts/1    00:00:00 grep --color=auto 16532
[oracle@ol7-112-dg1 ~]$


[oracle@ol7-112-dg1 ~]$ kill -9 16532


SQL> Killed
[oracle@ol7-112-dg1 ~]$

Hopefully you will never have to kill your own session.

When you need to kill your session, it is better to have the correct information versus guessing.

Extract DB User Password

Fri, 2021-04-30 19:40

For some reason, I had a mental block in trying to extract passwords for database users.

Here are some options.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

SQL>

-- From notes:
SQL> set echo off head off verify off feedb off pages 0 long 10000 longchunk 10000 trimspool on lines 2000 timing off term off
SQL> exec dbms_metadata.set_transform_param(dbms_metadata.SESSION_TRANSFORM,'SQLTERMINATOR',TRUE);
SQL> exec dbms_metadata.set_transform_param(dbms_metadata.SESSION_TRANSFORM,'PRETTY',TRUE);

SQL> r
  1  select 'alter user '||username||' identified by values '||REGEXP_SUBSTR(DBMS_METADATA.get_ddl ('USER',USERNAME), '''[^'']+''')||';' ddl
  2  from dba_users where username='CTXSYS'
  3*
alter user CTXSYS identified by values 'S:29174843DD6989AA921A152BF37264659F042C2D8C216D97F6176AF13E4F;T:7B311EEA53E028F937CFABC4D6F6142E3027195E099798CD9E67910ABE6C621E9C23780121F208451B95AB558A4862F206A917C93B50D96E8F573FE7E1A4B2E98D77B9504BC2EBB457B63600127E34D';

SQL>


-- From colleague
SQL> select name, spare4 from sys.user$ where name='CTXSYS';
CTXSYS
S:29174843DD6989AA921A152BF37264659F042C2D8C216D97F6176AF13E4F;T:7B311EEA53E028F937CFABC4D6F6142E3027195E099798CD9E67910ABE6C621E9C23780121F208451B95AB558A4862F206A917C93B50D96E8F573FE7E1A34B2E98D77B9504BC2EBB457B63600127E34D

SQL>

-- From asktom
SQL> r
  1  with t as
  2  (select TO_CHAR(dbms_metadata.get_ddl('USER','CTXSYS')) ddl from dual )
  3  select replace(substr(ddl,1,instr(ddl,'DEFAULT')-1),'CREATE','ALTER')||';' sql
  4* from t

ALTER USER "CTXSYS" IDENTIFIED BY VALUES 'S:29174843DD6989AA921A152BF37264659F042C2D8C216D97F6176AF13E4F;T:7B311EEA53E028F937CFABC4D6F6142E3027195E099798CD9E67910ABE6C621E9C23780121F208451B95AB558A4862F206A917C93B50D96E8F573FE7E1A34B2E98D77B9504BC2EBB457B63600127E34D';

SQL>

RMAN Backup & Restore Review

Tue, 2021-04-27 17:08

I have seen many backup reviews without any information on Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

The environment I am reviewing has backup to tape only.

Here is the RMAN script I am using. Since I don’t know RTO and RPO, I chose an arbitrary RPO.

$ cat /tmp/restore_validate_$ORACLE_SID.out

Recovery Manager: Release 11.2.0.4.0 - Production on Tue Apr 27 12:02:26 2021

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> spool log to restore_validate.log
2> set echo on
3> connect target;
4> show all;
5> list backup by file;
6> restore spfile validate;
7> restore controlfile validate;
8> restore database until time "TRUNC(sysdate)+1/24" validate preview summary;
9> report schema;
10> exit

There are six development databases on the host.

After running restore validate, here are the pertinent results from mining the logs.

The following databases do not look to have a successful restore from tape alone, since there are references to disk.

What do you think is required from disk for the restore?

$ grep "scanning archived log" restore_validate*.log|grep ORA_DISK|awk -F ":" '{print $1}'|sort -u
restore_validate_DEVAAAD.log
restore_validate_DEVAAAQ.log
restore_validate_DEVBBBD.log
restore_validate_DEVBBBQ.log
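The pipeline keys on "scanning archived log" lines that mention an ORA_DISK channel; the awk keeps only what precedes the first colon, which is the file name grep prepends. Exercised on fabricated sample lines (the paths and channel output here are made up for illustration):

```shell
# Fabricated lines in grep's "file:match" output format, to show the parsing;
# only files with an ORA_DISK reference should survive the pipeline.
printf '%s\n' \
  'restore_validate_DEVAAAD.log:channel ORA_DISK_1: scanning archived log +FRA/thread_1_seq_42' \
  'restore_validate_DEVAAAU.log:channel ORA_SBT_TAPE_1: restoring archived log' \
  | grep ORA_DISK | awk -F ":" '{print $1}' | sort -u
```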

Here are the crontab entries for the archived log backups and the times restore testing was performed.

--- Restore until time TRUNC(sysdate)+1/24 will fail since archived logs are backed up every 12H.

FAILED: 
--- Archived logs backup starts at 12:30. Restore test started at 10:35; hence, archived logs on disk are required.
30 0,12 * * * /home/oracle/bin/rman_backup.sh DEVAAAD arch
-rw-r--r--    1 oracle   dba           38478 Apr 27 10:05 restore_validate_DEVAAAD.log

--- Archived logs backup starts at 12:55. Restore test started at 12:48; hence, archived logs on disk are required.
55 0,12 * * * /home/oracle/bin/rman_backup.sh DEVAAAQ arch
-rw-r--r--    1 oracle   dba           33440 Apr 26 12:48 restore_validate_DEVAAAQ.log

--- Archived logs backup starts at 12:10. Restore test started at 12:02; hence, archived logs on disk are required.
10 0,12 * * * /home/oracle/bin/rman_backup.sh DEVBBBD arch
-rw-r--r--    1 oracle   dba           16152 Apr 26 12:02 restore_validate_DEVBBBD.log

--- Archived logs backup starts at 12:10. Restore test started at 11:45; hence, archived logs on disk are required.
10 0,12 * * * /home/oracle/bin/rman_backup.sh DEVBBBQ arch
-rw-r--r--    1 oracle   dba           50474 Apr 22 11:48 restore_validate_DEVBBBQ.log


PASSED:
--- Archived logs backup starts 12:55. Restore test started at 13:22.
55 0,12 * * * /home/oracle/bin/rman_backup.sh DEVAAAU arch
-rw-r--r--    1 oracle   dba           38967 Apr 26 13:22 restore_validate_DEVAAAU.log

--- Archived logs backup starts 12:10. Restore test started at 12:16.
10 0,12 * * * /home/oracle/bin/rman_backup.sh DEVBBBU arch
-rw-r--r--    1 oracle   dba           43777 Apr 27 12:16 restore_validate_DEVBBBU.log

Is it acceptable to lose up to 12H of data?

Do You Tag RMAN Backups?

Tue, 2021-04-27 14:53

There were discussions if RMAN backups should be tagged.

Basically, I have been tagging backups as L0, L1, AL for backups to disk or tape only.

Then there are backups to both disk & tape, and I was planning to tag those backups as L0_DISK, L1_DISK, AL_DISK, L0_TAPE, L1_TAPE, AL_TAPE.

I was provided the following info.

Tags can’t be used for this purpose. A tag does not indicate whether a backup is on disk. A tag should be used when you want to create a backup for a specific purpose, and then use that tag for copy, keep, or something similar.

The above recommendations differ from Oracle’s documentation on Backup Tags.

Next, I was validating the backup using: restore database until time “TRUNC(sysdate)+1/24” validate preview summary;

Here is the result of mining the log; notice the info for the backup pieces.

Does it matter, or is it important to know, what the backup piece is?

$ grep "piece handle" restore_validate_$ORACLE_SID.log|grep tag|sort -u
channel ORA_SBT_TAPE_1: piece handle=5avt57n5_1_1 tag=LEVEL0
channel ORA_SBT_TAPE_1: piece handle=76vt7sog_1_1 tag=LEVEL1
channel ORA_SBT_TAPE_1: piece handle=78vt8rmb_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=79vt8rmc_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7avt8rmd_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7bvt8rmg_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7cvt8rmh_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7dvt8rmi_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7evt8rmj_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7fvt8rmk_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7gvt8rml_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7hvt8rmn_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7ivt8rmo_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7jvt8rmp_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7kvt8rmq_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7lvt8rmu_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7mvt8rmv_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7nvt8rn0_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7ovt8rn1_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7qvta5sc_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7rvta5t5_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7svta5t8_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7tvta5tc_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7uvta5te_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=7vvta5tg_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=80vta5tk_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=81vta5tm_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=82vta5tp_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=83vta5tr_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=84vta5tv_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=85vta5u3_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=86vta5u7_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=87vta5uc_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=88vta5ug_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=89vta5ui_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=8avta5un_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=8bvta5ur_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=8cvta5vb_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=8dvta5vg_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=8evta600_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=8fvta602_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=8gvta605_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=8hvta609_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=c-3547077149-20210427-01 tag=TAG20210427T034102
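For a quick summary, the tags can be counted straight from the log. This is a sketch using a small sample file that mimics the line format above; the path /tmp/restore_validate_sample.log is illustrative, not from the post.

```shell
# Build a tiny sample log with the same line format as the RMAN output above.
cat > /tmp/restore_validate_sample.log <<'EOF'
channel ORA_SBT_TAPE_1: piece handle=5avt57n5_1_1 tag=LEVEL0
channel ORA_SBT_TAPE_1: piece handle=76vt7sog_1_1 tag=LEVEL1
channel ORA_SBT_TAPE_1: piece handle=78vt8rmb_1_1 tag=ARCH
channel ORA_SBT_TAPE_1: piece handle=79vt8rmc_1_1 tag=ARCH
EOF

# Count backup pieces per tag: split each "piece handle" line on "tag=".
awk -F'tag=' '/piece handle/ {print $2}' /tmp/restore_validate_sample.log |
  sort | uniq -c | sort -rn
```

Run against the real restore_validate log, this gives a one-screen view of how many pieces each tag contributed.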

RMAN list backup

Fri, 2021-04-23 21:01

I have been using TAG when listing backups, and today I learned something new.

### Change disk to sbt for device type
c/disk/sbt

### Change database with any of the following:
c/database/archivelog all/archivelog sequence/controlfile/spfile/tablespace/datafile

list backup summary device type=disk;
list backup of database summary device type=disk;
list backup of database summary completed after 'sysdate-1' device type=disk;
list backup of database summary completed between 'sysdate-1' and 'sysdate' device type=disk;
list backup of tablespace 'SYSTEM' summary device type=disk;
list backup of datafile 1 summary device type=disk;
list backup of archivelog sequence between 46270 and 46274 summary device type=disk;
list backup of archivelog sequence 1 summary device type=disk;
list backup of archivelog all summary completed after 'sysdate-1' device type=disk;
list backup of archivelog all summary completed between 'sysdate-1' and 'sysdate' device type=disk;


[oracle@ol7-112-dg1 ~]$ rman checksyntax @ list.rman

Recovery Manager: Release 11.2.0.4.0 - Production on Sat Apr 24 01:54:10 2021

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> set echo on
2> list backup summary device type=disk;
3> list backup of database summary device type=disk;
4> list backup of database summary completed after 'sysdate-1' device type=disk;
5> list backup of database summary completed between 'sysdate-1' and 'sysdate' device type=disk;
6> list backup of tablespace 'SYSTEM' summary device type=disk;
7> list backup of datafile 1 summary device type=disk;
8> list backup of archivelog sequence between 46270 and 46274 summary device type=disk;
9> list backup of archivelog sequence 1 summary device type=disk;
10> list backup of archivelog all summary completed after 'sysdate-1' device type=disk;
11> list backup of archivelog all summary completed between 'sysdate-1' and 'sysdate' device type=disk;
12> exit
The cmdfile has no syntax errors

Recovery Manager complete.
[oracle@ol7-112-dg1 ~]$

If you are interested in using TAG then see post Simplify RMAN Restore With Meaningful Tag


Use Different Listener For Standby Duplication?

Sun, 2021-04-18 11:59

I know what you are thinking!

Why make this more complicated than it needs to be, and is an ounce of prevention worth a pound of cure?

Oracle support started patching Oracle Exadata Cloud@Customer (ExaCC) environment.

After patching, the listener did not start because listener.ora contained entries referencing a database and an Oracle home that had been removed.

There are multiple database homes for the same database version, and this is how it was implemented.

Primary database (DB_ASHBURN) was used to create a second standby (DB_PHOENIX).

The primary database (DB_ASHBURN) was switched over to the second standby (DB_PHOENIX).

DB_PHOENIX is now the new primary and DB_ASHBURN is the standby.

DB_ASHBURN (standby) was decommissioned and ORACLE_HOME was removed.

Unfortunately, listener.ora was not modified, and the listener failed to start after patching was completed.

Here is an example of LISTENER failing to start.

[oracle@ol7-112-dg1 admin]$ lsnrctl start

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 18-APR-2021 16:19:49

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.4.0 - Production
System parameter file is /u01/app/oracle/product/11.2.0.4/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ol7-112-dg1/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-112-dg1.local)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
TNS-01201: Listener cannot find executable /u01/app/oracle/product/11.2.0.4/dbhome_2/bin/oracle for SID DB_PHOENIX
[oracle@ol7-112-dg1 admin]$

---------------------------------------------------------
--- ORACLE_HOME may be in /etc/oratab but does not exist.
---------------------------------------------------------
[oracle@ol7-112-dg1 admin]$ sort -u -t : -k 2,2 /etc/oratab | grep -v "^#" | awk -F ":" '{print $2}'
/u01/app/oracle/product/11.2.0.4/dbhome_1
/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@ol7-112-dg1 admin]$ ls -ld /u01/app/oracle/product/11.2.0.4/
drwxr-xr-x. 3 oracle oinstall 22 Apr 14 18:29 /u01/app/oracle/product/11.2.0.4/
[oracle@ol7-112-dg1 admin]$
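The oratab check above can be turned into a small loop that flags homes that no longer exist on disk. This is a sketch using a sample oratab under /tmp; the file location and home paths are illustrative so the example is self-contained.

```shell
# Sample oratab: one home that exists, one that was removed.
mkdir -p /tmp/demo_home_1
cat > /tmp/oratab <<'EOF'
db1:/tmp/demo_home_1:N
db2:/tmp/demo_home_2:N
EOF

# Same sort/awk as above, then verify each home directory exists.
sort -u -t : -k 2,2 /tmp/oratab | grep -v "^#" | awk -F ":" '{print $2}' |
while read -r home; do
  [ -d "$home" ] || echo "MISSING: $home"
done
# prints: MISSING: /tmp/demo_home_2
```

Pointing the same loop at /etc/oratab would have flagged the removed dbhome_2 before the listener tried to start.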

With separate listeners, LISTENER started without issue.

[oracle@ol7-112-dg1 admin]$ lsnrctl start LISTENER

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 18-APR-2021 16:32:24

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.4.0 - Production
System parameter file is /u01/app/oracle/product/11.2.0.4/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ol7-112-dg1/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-112-dg1.local)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-112-dg1.local)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                18-APR-2021 16:32:25
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ol7-112-dg1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-112-dg1.local)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "hawk" has 1 instance(s).
  Instance "hawk", status UNKNOWN, has 1 handler(s) for this service...
Service "hawk_DGMGRL" has 1 instance(s).
  Instance "hawk", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@ol7-112-dg1 admin]$

What’s your preference, having separate listeners to play it safe or decommissioning the environment properly and completely?

Port Forwarding Using SSH Config File

Sun, 2021-04-11 15:03

Here is a good reference SSH config file for OpenSSH client

From a secured server, I am able to connect to two different environments, which seems counterintuitive, but I digress.

Since there are 2 different environments, the same ~/.ssh/config cannot be used as there may be IP overlap.

One environment will use ~/.ssh/config and ssh emhost

Other environment will use ~/.ssh/cbconfig and ssh -F ~/.ssh/cbconfig emhost

The default EM port for both hosts is 7803.

Accessing EM in Firefox via https://localhost:7803/em does not work well when saving usernames and passwords, as one environment’s saved credentials overwrite the other’s.

One solution that preserves sysman’s saved password is to use a URL with a different local port for each environment.

Hence, config will have EM port 7803 forward to 7803 while cbconfig will have EM port 7803 forward to 7804.

========================================
This is on cloud and looks complicated. 
========================================
I did not create the configuration and don't know how many hours it took.
~/.ssh/config

Host emhost
     HostName 10.157.38.66
     LocalForward 7001 10.157.38.66:7001
     LocalForward 7102 10.157.38.66:7102
     LocalForward 7803 10.157.38.66:7803
     LocalForward 9803 10.157.38.66:9803
     LocalForward 9851 10.157.38.66:9851

# DEFAULTS:
Host *
User dinh


========================================
This is on premise and looks simpler. 
========================================
ssh -F ~/.ssh/cbconfig emhost

Host emhost
     HostName 10.10.72.254
     # Port forwarding needs to use the IP address.
     # Equivalent to ssh -L 7804:10.10.72.254:7803 mdinh@10.10.72.254
     LocalForward 7804 10.131.28.227:7803

# DEFAULTS:
Host *
User mdinh
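A quick way to see which local ports a given config claims, and thus avoid collisions between the two configs, is to pull the LocalForward entries out with awk. This is a sketch using a sample file under /tmp mirroring the cbconfig above.

```shell
# Sample config mirroring the on-premise cbconfig above.
cat > /tmp/cbconfig <<'EOF'
Host emhost
     HostName 10.10.72.254
     LocalForward 7804 10.131.28.227:7803
EOF

# Second field of each LocalForward line is the local listening port.
awk '/LocalForward/ {print $2}' /tmp/cbconfig
# prints: 7804
```

Running it against both ~/.ssh/config and ~/.ssh/cbconfig shows at a glance that 7803 and 7804 do not overlap.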

Who Can Access

Wed, 2021-04-07 19:22

I had a request to list the users who have read access (or greater) to the APP schema.

Based on the results below:

User DINH has SELECT on APP.INTERVAL table (view)

Role APP_ROLE has SELECT/UPDATE on APP.INTERVAL table (view)

Users DINH, APP, and SYS have APP_ROLE

SQL> show con_name

CON_NAME
------------------------------
ORCLPDB1
SQL> show user
USER is "SYS"
SQL> @priv.sql

SQL> select username from dba_users where created  > (select created from v$database) order by 1;

USERNAME
------------------------------
APP
AUDIT_TEST
DINH
DINH099PD
PDBADMIN
WMS099PD

6 rows selected.

SQL> select * from DBA_TAB_PRIVS where owner='APP';

GRANTEE              OWNER                TABLE_NAME                GRANTOR              PRIVILEGE            GRANTABLE HIERARCHY COM TYPE                     INH
-------------------- -------------------- ------------------------- -------------------- -------------------- --------- --------- --- ------------------------ ---
DINH                 APP                  INTERVAL                  APP                  SELECT               NO        NO        NO  TABLE                    NO
APP_ROLE             APP                  INTERVAL                  APP                  SELECT               NO        NO        NO  TABLE                    NO
APP_ROLE             APP                  INTERVAL                  APP                  UPDATE               NO        NO        NO  TABLE                    NO

SQL> --- ROLE_TAB_PRIVS describes table privileges granted to roles.
SQL> --- Information is provided only about roles to which the user has access.
SQL> select * from ROLE_TAB_PRIVS where OWNER='APP';

ROLE                           OWNER                TABLE_NAME                COLUMN_NAME               PRIVILEGE            GRANTABLE COM INH
------------------------------ -------------------- ------------------------- ------------------------- -------------------- --------- --- ---
APP_ROLE                       APP                  INTERVAL                                            UPDATE               NO        NO  NO
APP_ROLE                       APP                  INTERVAL                                            SELECT               NO        NO  NO

SQL> select * from DBA_ROLE_PRIVS where GRANTED_ROLE='APP_ROLE' order by 1;

GRANTEE              GRANTED_ ADMIN DEL DEFAULT COM INH
-------------------- -------- ----- --- ------- --- ---
APP                  APP_ROLE YES   NO  YES     NO  NO
DINH                 APP_ROLE NO    NO  YES     NO  NO
SYS                  APP_ROLE NO    NO  YES     NO  NO

SQL>

I also used Pete Finnigan’s who_can_access.sql for comparison.

Note who_can_access.sql is per object vs per schema.

If there were hundreds or thousands of tables, I am not sure how this would scale.

who_can_access: Release 1.0.3.0.0 - Production on Wed Apr 07 19:00:04 2021
Copyright (c) 2004 PeteFinnigan.com Limited. All rights reserved.

NAME OF OBJECT TO CHECK       [USER_OBJECTS]: INTERVAL
OWNER OF THE OBJECT TO CHECK          [USER]: APP
OUTPUT METHOD Screen/File                [S]:
FILE NAME FOR OUTPUT              [priv.lst]:
OUTPUT DIRECTORY [DIRECTORY  or file (/tmp)]:
EXCLUDE CERTAIN USERS                    [N]:
USER TO SKIP                         [TEST%]:

Checking object => APP.INTERVAL
====================================================================


Object type is => TABLE (TAB)
        Privilege => SELECT is granted to =>
        Role => APP_ROLE (ADM = NO) which is granted to =>
                User => DINH (ADM = NO)
                User => SYS (ADM = NO)
                User => APP (ADM = YES)
        User => DINH (ADM = NO)
        Privilege => UPDATE is granted to =>
        Role => APP_ROLE (ADM = NO) which is granted to =>
                User => DINH (ADM = NO)
                User => SYS (ADM = NO)
                User => APP (ADM = YES)

PL/SQL procedure successfully completed.


For updates please visit http://www.petefinnigan.com/tools.htm

SQL>

Did I do this right?

Detect Linux Host Restart

Mon, 2021-04-05 22:15

Some time ago I blogged about Monitor Linux Host Restart

The simple solution: How to email admins automatically after a Linux server starts?

Here is the example from root’s cron:

# crontab -l
@reboot su oracle -c '/home/oracle/scripts/host_restart_alert.sh' > /tmp/host_restart_alert.out 2>&1

A shell script is used because mail cannot be sent from the local host and needs to be sent from a remote host.

#!/bin/bash -x
MAILFROM=
MAILTO=
SUBJECT="Node reboot detected for $(hostname)"
EMAILMESSAGE="$(hostname) was restarted `uptime -p| awk -F'up' '{print $2}'` ago at `uptime -s`"

# uptime reports minutely and need to sleep for at least 60s after host restart
sleep 63

ssh oracle@remotehost /bin/bash <<EOF
/home/oracle/scripts/send_email.sh "$EMAILMESSAGE" "$SUBJECT" "$MAILFROM" "$MAILTO"
EOF

exit
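The sleep 63 in the script works around uptime reporting only to the minute. As an aside, /proc/uptime exposes seconds since boot directly (Linux-specific; this is a sketch, not part of the original script):

```shell
# Seconds since boot, straight from the kernel (first field of /proc/uptime).
boot_secs=$(awk '{print int($1)}' /proc/uptime)
echo "Host has been up for ${boot_secs} seconds"
```

With the exact seconds available, the alert could fire immediately after @reboot instead of sleeping for uptime to tick over.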

Why is there a need to detect host restart and isn’t there monitoring for the host?

This is Oracle Exadata Cloud@Customer (ExaCC) environment.

When Oracle support performs patching, they do not provide any sort of communication or status, and monitoring is disabled for all hosts beforehand.

OPatchAuto to Patch a GI/RAC Environment.

After the patching is complete and your servers are restarted, you should check your product software to verify that the issue has been resolved.

This is why there is a need to detect and be notified for server restart.

Linux Find Week# Of Month

Sun, 2021-04-04 16:02

Unfortunately, the Linux date command does not have a format specifier for the week number of the month.

I found the solution at https://serverfault.com/questions/383666/how-to-determine-number-of-week-of-the-month

echo $((($(date +%-d)-1)/7+1))

Here is how I tested it.

[oracle@oracle-12201-vagrant ~]$ date -d '20210404'
Sun Apr  4 00:00:00 -05 2021

[oracle@oracle-12201-vagrant ~]$ echo $((($(date -d '20210404' +%-d)-1)/7+1))
1

[oracle@oracle-12201-vagrant ~]$ date -d '20210411'
Sun Apr 11 00:00:00 -05 2021

[oracle@oracle-12201-vagrant ~]$ echo $((($(date -d '20210411' +%-d)-1)/7+1))
2

[oracle@oracle-12201-vagrant ~]$ date -d '20210418'
Sun Apr 18 00:00:00 -05 2021

[oracle@oracle-12201-vagrant ~]$ echo $((($(date -d '20210418' +%-d)-1)/7+1))
3

[oracle@oracle-12201-vagrant ~]$ date -d '20210425'
Sun Apr 25 00:00:00 -05 2021

[oracle@oracle-12201-vagrant ~]$ echo $((($(date -d '20210425' +%-d)-1)/7+1))
4
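Wrapped in a small function (week_of_month is my name for it, not from the post), the one-liner reads:

```shell
# Week number of the month: days 1-7 -> 1, days 8-14 -> 2, and so on.
week_of_month() {
  local day
  day=$(date -d "$1" +%-d)        # day of month without leading zero (GNU date)
  echo $(( (day - 1) / 7 + 1 ))
}

week_of_month 20210404   # prints 1
week_of_month 20210430   # prints 5
```

The function takes any date string GNU date accepts, so it drops straight into a backup script's scheduling logic.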

Why is this even useful?

It can be a continuation of Simplify Log Management For Backup

Simplify Log Management For Backup

Sat, 2021-04-03 13:39

Currently, there is a cronjob to delete backup logs older than 7 days.

30 23 * * * find /home/oracle/scripts/logs -name "*.log" -mtime +7 -exec rm {} \;

Typically, it not’s a big deal; however, having to update crontab with 50 entries can be cumbersome when not necesary.

$ crontab -l|wc -l
50

Furthermore, there are 1,044 logs accumulated in the directory.

$ ls -l /home/oracle/scripts/logs/*.log|wc -l
1044

Here is an example of level0 / level1 backup logs; it seems redundant to have the timestamp as part of the log name.

$ ls -lt backup_$ORACLE_SID_level0*.log
-rw-r--r-- 1 oracle oinstall 1004854 Apr  3 07:04 backup_$ORACLE_SID_level0_sbt_202104030700_Sat.log
-rw-r--r-- 1 oracle oinstall  839713 Mar 28 05:08 backup_$ORACLE_SID_level0_202103280500_Sun.log
-rw-r--r-- 1 oracle oinstall 1292709 Mar 27 07:04 backup_$ORACLE_SID_level0_sbt_202103270700_Sat.log

$ ls -lt backup_$ORACLE_SID_level1*.log
-rw-r--r-- 1 oracle oinstall   31694 Apr  3 05:11 backup_$ORACLE_SID_level1_202104030510_Sat.log
-rw-r--r-- 1 oracle oinstall  801491 Apr  2 07:33 backup_$ORACLE_SID_level1_sbt_202104020730_Fri.log
-rw-r--r-- 1 oracle oinstall   31711 Apr  2 05:11 backup_$ORACLE_SID_level1_202104020510_Fri.log
-rw-r--r-- 1 oracle oinstall  767509 Apr  1 07:33 backup_$ORACLE_SID_level1_sbt_202104010730_Thu.log
-rw-r--r-- 1 oracle oinstall   31587 Apr  1 05:11 backup_$ORACLE_SID_level1_202104010510_Thu.log
-rw-r--r-- 1 oracle oinstall  733961 Mar 31 07:32 backup_$ORACLE_SID_level1_sbt_202103310730_Wed.log
-rw-r--r-- 1 oracle oinstall   32797 Mar 31 05:11 backup_$ORACLE_SID_level1_202103310510_Wed.log
-rw-r--r-- 1 oracle oinstall  700145 Mar 30 07:32 backup_$ORACLE_SID_level1_sbt_202103300730_Tue.log
-rw-r--r-- 1 oracle oinstall   31591 Mar 30 05:11 backup_$ORACLE_SID_level1_202103300510_Tue.log
-rw-r--r-- 1 oracle oinstall  666291 Mar 29 07:32 backup_$ORACLE_SID_level1_sbt_202103290730_Mon.log
-rw-r--r-- 1 oracle oinstall   31731 Mar 29 05:11 backup_$ORACLE_SID_level1_202103290510_Mon.log
-rw-r--r-- 1 oracle oinstall  631891 Mar 28 07:32 backup_$ORACLE_SID_level1_sbt_202103280730_Sun.log
-rw-r--r-- 1 oracle oinstall   32925 Mar 27 05:11 backup_$ORACLE_SID_level1_202103270510_Sat.log
-rw-r--r-- 1 oracle oinstall 1091718 Mar 26 07:33 backup_$ORACLE_SID_level1_sbt_202103260730_Fri.log
-rw-r--r-- 1 oracle oinstall   31523 Mar 26 05:11 backup_$ORACLE_SID_level1_202103260510_Fri.log

Here’s how to improve logging for level0 and level1 backups using the date function.

This will keep backup level0 and level1 logs for 7 days.
%a - locale's abbreviated weekday name (e.g., Sun)

$ echo "backup_${ORACLE_SID}_level0_$(date +%a).log"
backup_ORCLCDB_level0_Sat.log

$ echo "backup_${ORACLE_SID}_level0_sbt_$(date +%a).log"
backup_ORCLCDB_level0_sbt_Sat.log

$ echo "backup_${ORACLE_SID}_level1_$(date +%a).log"
backup_ORCLCDB_level1_Sat.log

$ echo "backup_${ORACLE_SID}_level1_sbt_$(date +%a).log"
backup_ORCLCDB_level1_sbt_Sat.log

Basically, the log will be overwritten on a weekly basis.

What will happen if the level0 backup fails on the weekend and is performed on Monday instead?

It’s a one-off; will you lose sleep over it?

Next, there are 204 logs for archived log backup.

$ ls -lt backup_$ORACLE_SID_arch*.log|wc -l
204

$ ls -lt backup_$ORACLE_SID_arch*.log|tail
-rw-r--r-- 1 oracle oinstall 14104 Mar 26 08:51 backup_$ORACLE_SID_arch_202103260850_Fri.log
-rw-r--r-- 1 oracle oinstall 14103 Mar 26 07:51 backup_$ORACLE_SID_arch_202103260750_Fri.log
-rw-r--r-- 1 oracle oinstall 14108 Mar 26 06:51 backup_$ORACLE_SID_arch_202103260650_Fri.log
-rw-r--r-- 1 oracle oinstall 13081 Mar 26 05:51 backup_$ORACLE_SID_arch_202103260550_Fri.log
-rw-r--r-- 1 oracle oinstall 14109 Mar 26 04:51 backup_$ORACLE_SID_arch_202103260450_Fri.log
-rw-r--r-- 1 oracle oinstall 14104 Mar 26 03:51 backup_$ORACLE_SID_arch_202103260350_Fri.log
-rw-r--r-- 1 oracle oinstall 14108 Mar 26 02:51 backup_$ORACLE_SID_arch_202103260250_Fri.log
-rw-r--r-- 1 oracle oinstall 14104 Mar 26 01:51 backup_$ORACLE_SID_arch_202103260150_Fri.log
-rw-r--r-- 1 oracle oinstall 14108 Mar 26 00:51 backup_$ORACLE_SID_arch_202103260050_Fri.log
-rw-r--r-- 1 oracle oinstall 14104 Mar 25 23:51 backup_$ORACLE_SID_arch_202103252350_Thu.log

$ ls -lt backup_WEUMIX1_arch*.log|head
-rw-r--r-- 1 oracle oinstall 14094 Apr  3 10:51 backup_$ORACLE_SID_arch_202104031050_Sat.log
-rw-r--r-- 1 oracle oinstall 14091 Apr  3 09:50 backup_$ORACLE_SID_arch_202104030950_Sat.log
-rw-r--r-- 1 oracle oinstall 14092 Apr  3 08:51 backup_$ORACLE_SID_arch_202104030850_Sat.log
-rw-r--r-- 1 oracle oinstall 14096 Apr  3 07:51 backup_$ORACLE_SID_arch_202104030750_Sat.log
-rw-r--r-- 1 oracle oinstall 14100 Apr  3 06:51 backup_$ORACLE_SID_arch_202104030650_Sat.log
-rw-r--r-- 1 oracle oinstall 13631 Apr  3 05:51 backup_$ORACLE_SID_arch_202104030550_Sat.log
-rw-r--r-- 1 oracle oinstall 14082 Apr  3 04:51 backup_$ORACLE_SID_arch_202104030450_Sat.log
-rw-r--r-- 1 oracle oinstall 14084 Apr  3 03:51 backup_$ORACLE_SID_arch_202104030350_Sat.log
-rw-r--r-- 1 oracle oinstall 14104 Apr  3 02:51 backup_$ORACLE_SID_arch_202104030250_Sat.log
-rw-r--r-- 1 oracle oinstall 14093 Apr  3 01:51 backup_$ORACLE_SID_arch_202104030150_Sat.log

Here’s how to improve logging for archived log backups using the date function.

This will keep backup archived logs for 7 days.
%H - hour (00..23)

$ echo "backup_${ORACLE_SID}_arch_$(date +%a"_H"%H).log"
backup_ORCLCDB_arch_Sat_H11.log

What if two versions of the log are required? Rename the current log file, appending .1, before creating the new one.

What if 3 versions of log are required? Hmm?

How can there be 3 versions for level0 and level1 backup logs when backup logs older than 7 days are deleted?
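The two-version rename can be sketched as follows; the ORACLE_SID value and the /tmp location are stand-ins for illustration, and the file naming follows the weekday convention above.

```shell
ORACLE_SID=ORCLCDB                    # stand-in value for illustration
LOG="/tmp/backup_${ORACLE_SID}_level0_$(date +%a).log"

echo "previous run" > "$LOG"          # pretend last week's log already exists
[ -f "$LOG" ] && mv "$LOG" "$LOG.1"   # keep exactly one previous version
echo "current run" > "$LOG"           # write the new backup log
```

Because the name only varies by weekday, the mv runs at most once per week and the .1 copy always holds the prior week's run.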

Cleanup Trace Files For Multiple Oracle Homes

Sat, 2021-03-27 11:36

I know what you are probably thinking. What’s the big deal and how many homes can there be?

For Exadata Cloud, I recall having seen as many as 18 database homes.

As shown below, there are 5 database homes with version 12.2 and 1 database home with version 19.0.

# dbaascli dbhome info
DBAAS CLI version 21.1.1.0.1
Executing command dbhome info
Enter a homename or just press enter if you want details of all homes

1.HOME_NAME=OraHome101
  HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_4
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0.200714
  DBs installed=
   OH Backup=NOT Configured 

2.HOME_NAME=OraHome100
  HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_7
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0.200714
  DBs installed=*****
   Agent DB IDs=d21b07df-20f2-439e-bc40-78a9597af362
 OH Backup=NOT Configured

3.HOME_NAME=OraHome105_12201_dbru200714_0
  HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_6
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0.200714
  DBs installed=******
   Agent DB IDs=f7d46615-a223-4002-9270-fa69465a7f2a
 OH Backup=NOT Configured

4.HOME_NAME=OraHome102_12201_dbru200714_0
  HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_3
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0.200714
  DBs installed=*****
   Agent DB IDs=dceed071-9655-4c84-bef4-74b20180c99b
 OH Backup=NOT Configured

5.HOME_NAME=OraHome101_12201_dbru200714_0
  HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_2
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0.200714
  DBs installed=*******
   Agent DB IDs=b2a5220d-844b-49b6-9351-7c72cf3c9d9b
 OH Backup=NOT Configured

6.HOME_NAME=OraHome100_19800_dbru200714_0
  HOME_LOC=/u02/app/oracle/product/19.0.0.0/dbhome_2
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0
  DBs installed=********
   Agent DB IDs=feedb0e0-2d10-4db7-997a-a78e4ab083ef

Checking oratab for Oracle Homes

$ sort -u -t : -k 2,2 /etc/oratab | grep -v "^#" | awk -F ":" '{print $2}'
/u01/app/19.0.0.0/grid
/u02/app/oracle/product/12.2.0/dbhome_2
/u02/app/oracle/product/12.2.0/dbhome_3
/u02/app/oracle/product/12.2.0/dbhome_4
/u02/app/oracle/product/12.2.0/dbhome_6
/u02/app/oracle/product/12.2.0/dbhome_7
/u02/app/oracle/product/19.0.0.0/dbhome_2

Here is the crontab schedule:

00 01 * * * find /u01/app/grid/diag/crs/*/crs/trace -name "*.tr?" -mtime +30 -exec rm -f {} \;
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/audit -name "*.aud" -mtime +366 -exec rm -f {} \;
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/log -name "*.tr?" -mtime +200 -exec rm -f {} \;
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/log -name "cdmp*" -mtime +200 -exec rm -rf {} \;
00 04 * * * find /u02/app/oracle/diag/rdbms/*/*/cdump -name "core*" -mtime +200 -exec rm -rf {} \;

Here is an explanation of what each (*) represents, with examples:

00 01 * * * find /u01/app/grid/diag/crs/*/crs/trace -name "*.tr?" -mtime +30 -exec rm -f {} \;

ls -ld /u01/app/grid/diag/crs/*/crs/trace
* = hostname

Example:
$ ls -ld /u01/app/grid/diag/crs/*/crs/trace
drwxrwxr-x 2 grid oinstall 135168 Mar 26 18:40 /u01/app/grid/diag/crs/hostname/crs/trace

==============================

00 01 * * * find /u02/app/oracle/product/*/*/rdbms/audit -name "*.aud" -mtime +366 -exec rm -f {} \;

ls -ld /u02/app/oracle/product/*/*/rdbms/audit
*/* = version/dbhome

Example:
$ ls -ld /u02/app/oracle/product/*/*/rdbms/audit
drwxr-xr-x 9 oracle oinstall  614400 Mar 26 18:32 /u02/app/oracle/product/12.2.0/dbhome_2/rdbms/audit
drwxr-xr-x 2 oracle oinstall  253952 Mar 26 18:40 /u02/app/oracle/product/12.2.0/dbhome_3/rdbms/audit
drwxr-xr-x 2 oracle oinstall  294912 Mar 26 18:32 /u02/app/oracle/product/12.2.0/dbhome_4/rdbms/audit
drwxr-xr-x 4 oracle oinstall   94208 Mar 26 18:32 /u02/app/oracle/product/12.2.0/dbhome_6/rdbms/audit
drwxr-xr-x 2 oracle oinstall    4096 Mar  1 02:31 /u02/app/oracle/product/12.2.0/dbhome_7/rdbms/audit
drwxr-xr-x 3 oracle oinstall 5783552 Mar 26 18:32 /u02/app/oracle/product/19.0.0.0/dbhome_2/rdbms/audit

==============================

00 01 * * * find /u02/app/oracle/product/*/*/rdbms/log -name "*.tr?" -mtime +200 -exec rm -f {} \;

ls -l /u02/app/oracle/product/*/*/rdbms/log/*.tr?
*/* = version/dbhome

Example:
$ ls -l /u02/app/oracle/product/*/*/rdbms/log/*.tr?
-rw-r----- 1 oracle asmadmin 868 Feb 19 17:41 /u02/app/oracle/product/12.2.0/dbhome_2/rdbms/log/*******2_ora_57506.trc
-rw-r----- 1 oracle asmadmin 868 Dec  4 18:06 /u02/app/oracle/product/12.2.0/dbhome_2/rdbms/log/*******2_ora_66404.trc
-rw-r----- 1 oracle asmadmin 862 Mar 24 19:38 /u02/app/oracle/product/12.2.0/dbhome_3/rdbms/log/*****2_ora_217755.trc
-rw-r----- 1 oracle asmadmin 869 Feb 18 21:51 /u02/app/oracle/product/12.2.0/dbhome_4/rdbms/log/*****2_ora_351349.trc
-rw-r----- 1 oracle asmadmin 867 Feb 19 17:41 /u02/app/oracle/product/12.2.0/dbhome_4/rdbms/log/*****2_ora_57519.trc
-rw-r----- 1 oracle asmadmin 866 Mar  1 20:01 /u02/app/oracle/product/12.2.0/dbhome_6/rdbms/log/******2_ora_167170.trc
-rw-r----- 1 oracle asmadmin 831 Mar  1 02:31 /u02/app/oracle/product/12.2.0/dbhome_7/rdbms/log/*****2_ora_314160.trc

==============================

00 01 * * * find /u02/app/oracle/product/*/*/rdbms/log -name "cdmp*" -mtime +200 -exec rm -rf {} \;

ls -ld /u02/app/oracle/diag/rdbms/*/*/cdump
*/* = db_unique_name/db_name

Example:
$ ls -ld /u02/app/oracle/diag/rdbms/*/*/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep  3  2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep  2  2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 21  2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Feb 17 02:35 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 21  2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Feb 18 21:51 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 25 07:13 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
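The glob-based find can be tried safely on a throwaway tree first. This is a sketch: the directory layout mimics the paths above, and the old file's mtime is faked with GNU touch -d.

```shell
# Build a disposable tree with one old and one recent trace file.
base=$(mktemp -d)
mkdir -p "$base/product/12.2.0/dbhome_2/rdbms/log" \
         "$base/product/19.0.0.0/dbhome_2/rdbms/log"
touch -d '300 days ago' "$base/product/12.2.0/dbhome_2/rdbms/log/old.trc"
touch "$base/product/19.0.0.0/dbhome_2/rdbms/log/new.trc"

# Same pattern as the crontab entry: */* matches version/dbhome.
find "$base"/product/*/*/rdbms/log -name "*.tr?" -mtime +200 -exec rm -f {} \;

ls "$base"/product/*/*/rdbms/log   # only new.trc should remain
```

Swapping rm -f for -print in the -exec clause gives a dry run, which is worth doing once per new home before trusting the cron entry.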

It’s also possible to use adrci to configure SHORTP_POLICY and LONGP_POLICY.

If new homes are created, would SHORTP_POLICY and LONGP_POLICY need to be updated for the new homes?

Alternatively, you can download and use purgeLogs: Cleanup traces, logs in one command (Doc ID 2081655.1)

Oracle9i Release 2 Data Guard Broker NF

Mon, 2021-03-22 09:11

Hopefully, you did not judge this blog post by its title, as it may be a gem.

From 12.2, Broker Controlled Database Initialization Parameters and SQL Statements

The following database initialization parameters are controlled by broker configurable properties. Therefore, you should not set these parameters manually:
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_LAG_TARGET
DB_FILE_NAME_CONVERT
LOG_ARCHIVE_FORMAT
LOG_ARCHIVE_MAX_PROCESSES
LOG_ARCHIVE_MIN_SUCCEED_DEST
LOG_ARCHIVE_TRACE
LOG_FILE_NAME_CONVERT
STANDBY_FILE_MANAGEMENT

What’s missing is information from prior releases that was not carried forward to the documentation for later releases.

Removed the FAL_SERVER and FAL_CLIENT properties; these are managed automatically by Data Guard broker.

https://docs.oracle.com/cd/B10501_01/server.920/a96629/whatsnew.htm

Unfortunately for me, I started with Data Guard in 10g and totally missed this 9i new feature for DG.

Best To Rename Both Table And Index Partitions

Sat, 2021-03-13 14:21

I have been working on compressing table and index partitions.

Currently, the system-generated partition names are the same for the table and its index.

Find partition size for TABLE (SYS_P2321):

SQL> @ partition_size_tab.sql
SQL> select s.owner, s.segment_name, s.segment_type,
  2  s.partition_name, s.tablespace_name tbs,
  3  -- t.partitioning_type type, t.interval,
  4  s.bytes/1024/1024/1024 size_gb
  5  from dba_segments s, dba_part_tables t
  6  where s.segment_type in ('TABLE PARTITION')
  7  and s.owner=UPPER('&&owner')
  8  and t.table_name=UPPER('&&table')
  9  and s.partition_name=UPPER('&&partition')
 10  and s.owner=t.owner
 11  and s.segment_name=t.table_name
 12  order by s.partition_name asc
 13  ;
Enter value for owner: app
Enter value for table: interval
Enter value for partition: SYS_P2321

OWNER           SEGMENT_NAME    SEGMENT_TYPE       PARTITION TBS                 SIZE_GB
--------------- --------------- ------------------ --------- --------------- -----------
APP             INTERVAL        TABLE PARTITION    SYS_P2321 USERS                   .01
SQL> set echo off

Find partition size for INDEX (SYS_P2321):

-- There is no need to enter variables.
SQL> @ partition_size_idx.sql
SQL> select s.owner, s.segment_name, s.segment_type,
  2  s.partition_name, s.tablespace_name tbs,
  3  -- i.partitioning_type type, i.interval,
  4  s.bytes/1024/1024/1024 size_gb
  5  from dba_segments s, dba_part_indexes i
  6  where s.segment_type in ('INDEX PARTITION')
  7  and s.owner=UPPER('&&owner')
  8  and i.table_name=UPPER('&&table')
  9  and s.partition_name=UPPER('&&partition')
 10  and s.owner=i.owner
 11  and s.segment_name=i.index_name
 12  order by s.partition_name asc
 13  ;

OWNER           SEGMENT_NAME    SEGMENT_TYPE       PARTITION TBS                 SIZE_GB
--------------- --------------- ------------------ --------- --------------- -----------
APP             INTERVAL_IDX    INDEX PARTITION    SYS_P2321 USERS                   .00
SQL> set echo off

When renaming system-generated partitions, it is best to rename both the table and index partitions to the same name, in order to simplify finding the partition size for both.

Here is an example where table partitions were renamed while index partitions were not renamed.

TABLE partitions were renamed.

SQL> @ partition_interval_delta_tab.sql
SQL> set echo off
SQL> declare
  2    l_date date;
  3  begin
  4    for x in (
  5    select p.partition_name, p.high_value
  6    from dba_tab_partitions p
  7    where table_owner=UPPER('&&owner') and table_name=UPPER('&&table')
  8    and p.compression='DISABLED' and p.interval='YES' order by 1
  9    )
 10    loop
 11      execute immediate
 12      'begin :h := ' || x.high_value || '; end;' using OUT l_date;
 13      if months_between(sysdate, l_date) > 36 then
 14        dbms_output.put_line(x.partition_name||' : '||to_char(l_date, 'DD-MON-YYYY'));
 15      end if;
 16    end loop;
 17  end;
 18  /
P201612 : 01-JAN-2017
P201701 : 01-FEB-2017
P201702 : 01-MAR-2017
P201703 : 01-APR-2017
P201704 : 01-MAY-2017
P201705 : 01-JUN-2017
P201706 : 01-JUL-2017
P201707 : 01-AUG-2017
P201708 : 01-SEP-2017
P201709 : 01-OCT-2017
P201710 : 01-NOV-2017
P201711 : 01-DEC-2017
P201712 : 01-JAN-2018
P201801 : 01-FEB-2018
P201802 : 01-MAR-2018
SQL> set echo off

INDEX partitions were NOT renamed.

SQL> @ partition_interval_delta_idx.sql
SQL> declare
  2    l_date date;
  3  begin
  4    for x in (
  5    select p.partition_name, p.high_value
  6    from dba_ind_partitions p, dba_part_indexes i
  7    where i.owner=UPPER('&&owner') and i.table_name=UPPER('&&table')
  8    and p.index_owner=i.owner and p.index_name=i.index_name
  9    and  p.compression='DISABLED' and p.interval='YES' order by 1
 10    )
 11    loop
 12      execute immediate
 13      'begin :h := ' || x.high_value || '; end;' using OUT l_date;
 14      if months_between(sysdate, l_date) > 36 then
 15        dbms_output.put_line(x.partition_name||' : '||to_char(l_date, 'DD-MON-YYYY'));
 16      end if;
 17    end loop;
 18  end;
 19  /
SYS_P2321 : 01-JAN-2017
SYS_P2322 : 01-FEB-2017
SYS_P2323 : 01-MAR-2017
SYS_P2324 : 01-APR-2017
SYS_P2325 : 01-MAY-2017
SYS_P2326 : 01-JUN-2017
SYS_P2327 : 01-JUL-2017
SYS_P2328 : 01-AUG-2017
SYS_P2329 : 01-SEP-2017
SYS_P2330 : 01-OCT-2017
SYS_P2331 : 01-NOV-2017
SYS_P2332 : 01-DEC-2017
SYS_P2333 : 01-JAN-2018
SYS_P2334 : 01-FEB-2018
SYS_P2335 : 01-MAR-2018
SQL> set echo off

Let’s find the size for partitioned table and index.

Find partition size for TABLE.

SQL> @ partition_size_tab.sql
SQL> select s.owner, s.segment_name, s.segment_type,
  2  s.partition_name, s.tablespace_name tbs,
  3  -- t.partitioning_type type, t.interval,
  4  s.bytes/1024/1024/1024 size_gb
  5  from dba_segments s, dba_part_tables t
  6  where s.segment_type in ('TABLE PARTITION')
  7  and s.owner=UPPER('&&owner')
  8  and t.table_name=UPPER('&&table')
  9  and s.partition_name=UPPER('&&partition')
 10  and s.owner=t.owner
 11  and s.segment_name=t.table_name
 12  order by s.partition_name asc
 13  ;
Enter value for partition: P201612

OWNER           SEGMENT_NAME    SEGMENT_TYPE       PARTITION TBS                 SIZE_GB
--------------- --------------- ------------------ --------- --------------- -----------
APP             INTERVAL        TABLE PARTITION    P201612   USERS                   .01
SQL> set echo off

Find partition size for INDEX.

A different partition name must be entered for the INDEX (SYS_P2321).

SQL> @ partition_size_idx.sql
SQL> select s.owner, s.segment_name, s.segment_type,
  2  s.partition_name, s.tablespace_name tbs,
  3  -- i.partitioning_type type, i.interval,
  4  s.bytes/1024/1024/1024 size_gb
  5  from dba_segments s, dba_part_indexes i
  6  where s.segment_type in ('INDEX PARTITION')
  7  and s.owner=UPPER('&&owner')
  8  and i.table_name=UPPER('&&table')
  9  and s.partition_name=UPPER('&&partition')
 10  and s.owner=i.owner
 11  and s.segment_name=i.index_name
 12  order by s.partition_name asc
 13  ;
SQL> set echo off

SQL> undefine partition

-- How did I know to use SYS_P2321 for index partitions?
-- Both table and index partition have the same date (01-JAN-2017).
TABLE : P201612   : 01-JAN-2017
INDEX : SYS_P2321 : 01-JAN-2017

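Instead of eyeballing the dates, the table-to-index partition mapping can be derived by evaluating HIGH_VALUE on both sides and matching on the date. This is a sketch (not from the original scripts), following the same pattern as the delta scripts above; it assumes DATE-based HIGH_VALUE and reuses the &&owner/&&table substitution variables:

-- Sketch: print table partition -> index partition pairs whose
-- evaluated HIGH_VALUE dates match. HIGH_VALUE is a LONG, so it
-- cannot be joined directly; evaluate it via dynamic PL/SQL instead.
declare
  l_tab_date date;
  l_idx_date date;
begin
  for t in (select partition_name, high_value
            from dba_tab_partitions
            where table_owner = UPPER('&&owner')
              and table_name  = UPPER('&&table'))
  loop
    execute immediate
    'begin :h := ' || t.high_value || '; end;' using OUT l_tab_date;
    for i in (select p.partition_name, p.high_value
              from dba_ind_partitions p, dba_part_indexes x
              where x.owner = UPPER('&&owner') and x.table_name = UPPER('&&table')
                and p.index_owner = x.owner and p.index_name = x.index_name)
    loop
      execute immediate
      'begin :h := ' || i.high_value || '; end;' using OUT l_idx_date;
      if l_idx_date = l_tab_date then
        dbms_output.put_line(t.partition_name || ' -> ' || i.partition_name);
      end if;
    end loop;
  end loop;
end;
/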
SQL> @ partition_size_idx.sql
SQL> select s.owner, s.segment_name, s.segment_type,
  2  s.partition_name, s.tablespace_name tbs,
  3  -- i.partitioning_type type, i.interval,
  4  s.bytes/1024/1024/1024 size_gb
  5  from dba_segments s, dba_part_indexes i
  6  where s.segment_type in ('INDEX PARTITION')
  7  and s.owner=UPPER('&&owner')
  8  and i.table_name=UPPER('&&table')
  9  and s.partition_name=UPPER('&&partition')
 10  and s.owner=i.owner
 11  and s.segment_name=i.index_name
 12  order by s.partition_name asc
 13  ;
Enter value for partition: SYS_P2321

OWNER           SEGMENT_NAME    SEGMENT_TYPE       PARTITION TBS                 SIZE_GB
--------------- --------------- ------------------ --------- --------------- -----------
APP             INTERVAL_IDX    INDEX PARTITION    SYS_P2321 USERS                   .00
SQL> set echo off

Compress Historical Interval Partitions

Sat, 2021-03-06 23:20

There is a requirement to compress monthly interval partitions older than 36 months.

First, the interval partitions were renamed to more intuitive names using Renaming Interval Partitions.

Even though the rename is not necessary, it does provide more clarity.

Here is a demo to find and compress partitions older than 36 months.

SQL> @ partition_interval_delta.sql
SQL> select
  2  partition_name, compression, high_value,
  3  TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) months_delta
  4  from dba_tab_partitions
  5  where table_owner = UPPER('&owner')
  6  and table_name = UPPER('&table')
  7  and compression='DISABLED'
  8  and TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) >= &months_delta
  9  order by 1
 10  ;
Enter value for owner: app
Enter value for table: interval
Enter value for months_delta: 36

PARTITION_NAME        COMPRESS HIGH_VALUE                                               MONTHS_DELTA
--------------------- -------- -------------------------------------------------------- ------------
P201702               DISABLED TO_DATE(' 2017-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           49
P201705               DISABLED TO_DATE(' 2017-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           46
P201706               DISABLED TO_DATE(' 2017-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           45
P201707               DISABLED TO_DATE(' 2017-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           44
P201708               DISABLED TO_DATE(' 2017-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           43
P201709               DISABLED TO_DATE(' 2017-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           42
P201710               DISABLED TO_DATE(' 2017-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           41
P201711               DISABLED TO_DATE(' 2017-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           40
P201712               DISABLED TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           39
P201801               DISABLED TO_DATE(' 2018-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           38
P201802               DISABLED TO_DATE(' 2018-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           37
P201803               DISABLED TO_DATE(' 2018-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           36
SQL> 
SQL> set echo off

SQL> set verify on
SQL> define partition_name = P201702
SQL> alter table APP.INTERVAL modify partition &partition_name compress for OLTP;
old   1: alter table APP.INTERVAL modify partition &partition_name compress for OLTP
new   1: alter table APP.INTERVAL modify partition P201702 compress for OLTP

SQL> alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8;
old   1: alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8
new   1: alter table APP.INTERVAL move partition P201702 ONLINE UPDATE INDEXES PARALLEL 8

SQL> define partition_name = P201705
SQL> alter table APP.INTERVAL modify partition &partition_name compress for OLTP;
old   1: alter table APP.INTERVAL modify partition &partition_name compress for OLTP
new   1: alter table APP.INTERVAL modify partition P201705 compress for OLTP

SQL> alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8;
old   1: alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8
new   1: alter table APP.INTERVAL move partition P201705 ONLINE UPDATE INDEXES PARALLEL 8

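Rather than defining each partition by hand, the compress and move statements could be generated from the same predicate. This is a sketch, not part of the original demo; it assumes the PYYYYMM naming convention and the 36-month cutoff used above. Spool the output to a script and review it before running:

-- Sketch: generate the two statements for every qualifying partition.
set pagesize 0 linesize 200 trimspool on
select 'alter table APP.INTERVAL modify partition ' || partition_name ||
       ' compress for OLTP;' || chr(10) ||
       'alter table APP.INTERVAL move partition ' || partition_name ||
       ' ONLINE UPDATE INDEXES PARALLEL 8;' stmt
from dba_tab_partitions
where table_owner = 'APP'
  and table_name  = 'INTERVAL'
  and compression = 'DISABLED'
  and TRUNC(MONTHS_BETWEEN(SYSDATE,
        TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'), 'YYYYMM'))) >= 36
order by partition_name;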
SQL> @ partition_interval_delta.sql
SQL> select
  2  partition_name, compression, high_value,
  3  TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) months_delta
  4  from dba_tab_partitions
  5  where table_owner = UPPER('&owner')
  6  and table_name = UPPER('&table')
  7  and compression='DISABLED'
  8  and TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) >= &months_delta
  9  order by 1
 10  ;
Enter value for owner: app
Enter value for table: interval
Enter value for months_delta: 36

PARTITION_NAME        COMPRESS HIGH_VALUE                                               MONTHS_DELTA
--------------------- -------- -------------------------------------------------------- ------------
P201706               DISABLED TO_DATE(' 2017-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           45
P201707               DISABLED TO_DATE(' 2017-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           44
P201708               DISABLED TO_DATE(' 2017-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           43
P201709               DISABLED TO_DATE(' 2017-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           42
P201710               DISABLED TO_DATE(' 2017-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           41
P201711               DISABLED TO_DATE(' 2017-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           40
P201712               DISABLED TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           39
P201801               DISABLED TO_DATE(' 2018-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           38
P201802               DISABLED TO_DATE(' 2018-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           37
P201803               DISABLED TO_DATE(' 2018-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'           36
SQL> set echo off
