Feed aggregator

Oracle Platinum Services for Engineered Systems—No Additional Charge

Chris Warticki - Fri, 2019-05-31 16:19
Fully Realize the Benefits of Your Oracle Engineered Systems with Oracle Platinum Services

We launched Oracle Platinum Services in 2012, and what began as a breakthrough support offering for Exadata customers has evolved into an integral aspect of Oracle Support, designed to help customers fully realize the benefits of their Oracle Engineered Systems.

Today there are over 5,700 Oracle Engineered Systems implemented on Oracle Platinum Services—all entitled to 24x7 remote fault monitoring, quarterly patch deployment services, and unprecedented response times for fault resolution and restoration—at no additional charge. The Oracle Platinum Services page has more information on what Oracle Platinum Services offers and how other customers are leveraging the features of the service.

Considering Oracle Platinum Services? Have Questions?

There are a few topics to consider before onboarding your systems to Oracle Platinum Services. First, ensure that the service addresses the needs of your organization while meeting any security or other guidelines your organization has in place. Next, consider how you can meet the requirements to qualify for Oracle Platinum Services.

The Oracle Platinum Services FAQ is an excellent place to start for answers to general questions. If you wish to investigate your options further, contact your Technical Account Manager, who can direct you to an Oracle Platinum Driver, a resource skilled in helping customers navigate questions related to implementing Oracle Platinum Services on their Oracle Engineered Systems.

Learn More

You can find more information in the Oracle Platinum Services Onboarding document, which lays out what customers can expect when onboarding their systems onto Oracle Platinum Services.


PostgreSQL 12: Control when generic plans are used

Yann Neuhaus - Fri, 2019-05-31 13:02

When you are using prepared statements in PostgreSQL you might get a custom or a generic plan. Custom plans come with the overhead of re-planning on every execution, while generic plans avoid that re-planning. As soon as you get a generic plan, that plan will be used for the lifetime of your session, and there are situations where you do not want that, because a generic plan can be more expensive than a custom plan. Starting with PostgreSQL 12 (which currently is in beta) you have more control over this.

Let's use the same simple test case as in the previous post about custom and generic plans:

postgres=# select version();
                                                  version                                                  
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
(1 row)

postgres=# create table demo ( a int, b text );
CREATE TABLE
postgres=# insert into demo select i, 'aaa' from generate_series (1,100) i;
INSERT 0 100
postgres=# insert into demo select i, 'bbb' from generate_series (101,200) i;
INSERT 0 100
postgres=# insert into demo select i, 'ccc' from generate_series (201,300) i;
INSERT 0 100
postgres=# analyze demo;
ANALYZE
postgres=# 

When we prepare the statement below and then execute it six times, the sixth execution switches to a generic plan (note the “Filter: (b = $1)” line in the last plan):

postgres=# prepare my_stmt as select * from demo where b = $1;
PREPARE
postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.027..0.107 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.431 ms
 Execution Time: 0.198 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.032..0.113 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.196 ms
 Execution Time: 0.155 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.032..0.113 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.168 ms
 Execution Time: 0.154 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.041..0.135 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.238 ms
 Execution Time: 0.193 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.031..0.112 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.169 ms
 Execution Time: 0.154 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.033..0.120 rows=100 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 200
 Planning Time: 0.163 ms
 Execution Time: 0.163 ms
(5 rows)

PostgreSQL 12 will come with a new parameter to control that:

postgres=# show plan_cache_mode;
 plan_cache_mode 
-----------------
 auto
(1 row)
postgres=# select extra_desc from pg_settings where name = 'plan_cache_mode';
                                                                            extra_desc                                                                             
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Prepared statements can have custom and generic plans, and the planner will attempt to choose which is better.  This can be set to override the default behavior.
(1 row)
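
As a side note, plan_cache_mode is a regular runtime parameter, so it can be set not only per session (as done below) but also, for example, persistently for a role. A sketch, assuming a hypothetical application role app_user:

postgres=# alter role app_user set plan_cache_mode = 'force_custom_plan';
ALTER ROLE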

The default value is “auto”, which gives the same behavior as in previous versions of PostgreSQL. If you want to force custom plans you can set it to “force_custom_plan”, or set it to “force_generic_plan” if you only ever want generic plans. Using the same example from above but setting the parameter to “force_custom_plan”, we will not see a generic plan anymore:

postgres=# set plan_cache_mode = force_custom_plan;
SET
postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.034..0.127 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.209 ms
 Execution Time: 0.172 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.152..0.236 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.170 ms
 Execution Time: 0.279 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.031..0.112 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.170 ms
 Execution Time: 0.152 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.029..0.122 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.206 ms
 Execution Time: 0.162 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.038..0.133 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.200 ms
 Execution Time: 0.244 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.032..0.114 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.169 ms
 Execution Time: 0.155 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.034..0.117 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.301 ms
 Execution Time: 0.162 ms
(5 rows)
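
For completeness, the opposite setting forces a generic plan right away. A minimal sketch (plan output omitted):

postgres=# set plan_cache_mode = force_generic_plan;
SET
postgres=# prepare my_stmt2 as select * from demo where b = $1;
PREPARE
postgres=# explain (analyze) execute my_stmt2 ( 'aaa' );
-- already the first execution should show "Filter: (b = $1)",
-- which tells us a generic plan was used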

This is a nice addition that can help when you have generic plans that actually perform worse than custom plans.

The article PostgreSQL 12: Control when generic plans are used appeared first on Blog dbi services.

Great Football Managers and Oracle Unified Auditing

The Anti-Kyte - Fri, 2019-05-31 11:48

It’s quite a good time for English football at the moment. Not only have English clubs monopolised the finals of the two main European Club competitions this year, but Manchester City have made history by winning all three domestic competitions in the same season.
Note that this isn’t a British footballing first. Glasgow Rangers managed it way back in 1949. And whilst the European Cup ( Champions League if you must) has eluded City this season, Celtic managed that particular clean sweep in 1967.
In English football however, this particular treble is unprecedented. In fact, there are remarkably few managers who have been able to win every one of the major domestic honours in their entire career.
All of which will come in handy when looking for examples to illustrate the topic at hand, namely Oracle Unified Auditing.
With the aid of 18c Express Edition, we’ll be looking at :

  • The Oracle supplied Unified Auditing Policies that are enabled by default
  • Where to find the Audit Trail
  • How to create our own Unified Auditing Policy to monitor DML operations on specific objects

Unified Auditing default settings

Unified Auditing was introduced in 12c. By default, it's configured to work alongside auditing from earlier database versions.
We can check that this is the case – and that traditional auditing has not been disabled – by checking that the Unified Auditing option is set to FALSE.
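
A quick way to do this is to query v$option :

select parameter, value
from v$option
where parameter = 'Unified Auditing';

PARAMETER            VALUE     
-------------------- ----------
Unified Auditing     FALSE     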

Looking at the audit initialization parameters is also useful at this point :

select name, value, description
from v$parameter
where name like '%audit%'
and value is not null
order by name
/

NAME                           VALUE                          DESCRIPTION                                       
------------------------------ ------------------------------ --------------------------------------------------
audit_file_dest                /opt/oracle/admin/XE/adump     Directory in which auditing files are to reside   
audit_sys_operations           TRUE                           enable sys auditing                               
audit_trail                    DB                             enable system auditing                            
unified_audit_sga_queue_size   1048576                        Size of Unified audit SGA Queue                   

We can see that we have a directory setup for audit files to be written to. However, the audit_trail is set to DB – i.e. a table in the database.
To start with, we can have a look at the audit_file_dest directory and see what, if anything, is being written there :

ls -lrt /opt/oracle/admin/XE/adump

...
-rw-r-----. 1 oracle oinstall     866 May 30 13:16 XE_ora_3880_20190530131639680705403060.aud
-rw-r-----. 1 oracle oinstall    1777 May 30 13:16 XE_ora_4375_20190530131639737578276396.aud
-rw-r-----. 1 oracle oinstall    1202 May 30 13:16 XE_ora_4381_20190530131643953498040745.aud

Closer inspection of these files reveals that they contain details of the operations performed when Oracle is started but before the database is open. For example :

cat XE_ora_3880_20190530131639680705403060.aud

Audit file /opt/oracle/admin/XE/adump/XE_ora_3880_20190530131639680705403060.aud
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
Build label:    RDBMS_18.4.0.0.0DBRU_LINUX.X64_181017
ORACLE_HOME:    /opt/oracle/product/18c/dbhomeXE
System name:    Linux
Node name:	frea.virtualbox
Release:        3.10.0-957.1.3.el7.x86_64
Version:        #1 SMP Thu Nov 29 14:49:43 UTC 2018
Machine:        x86_64
Instance name: XE
Redo thread mounted by this instance: 0 <none>
Oracle process number: 299
Unix process pid: 3880, image: oracle@frea.virtualbox (TNS V1-V3)

Thu May 30 13:16:39 2019 +01:00
LENGTH : '247'
ACTION :[7] 'STARTUP'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[7] 'UNKNOWN'
STATUS:[1] '0'
DBID:[0] ''
SESSIONID:[1] '0'
USERHOST:[15] 'frea.virtualbox'
CLIENT ADDRESS:[0] ''
ACTION NUMBER:[3] '138'

Well, that could come in useful, but is there any auditing of operations once the database is up and running?

Oracle supplied Unified Audit Policies

There are a number of policies that are set up by default :

select owner, object_name
from dba_objects
where object_type = 'UNIFIED AUDIT POLICY'
and oracle_maintained = 'Y'
order by object_name;

OWNER      OBJECT_NAME                                       
---------- --------------------------------------------------
SYS        ORA_ACCOUNT_MGMT                                  
SYS        ORA_CIS_RECOMMENDATIONS                           
SYS        ORA_DATABASE_PARAMETER                            
SYS        ORA_DV_AUDPOL                                     
SYS        ORA_DV_AUDPOL2                                    
SYS        ORA_LOGON_FAILURES                                
SYS        ORA_RAS_POLICY_MGMT                               
SYS        ORA_RAS_SESSION_MGMT                              
SYS        ORA_SECURECONFIG                                  

9 rows selected. 

We can confirm which of these are active by running :

select policy_name, user_name, 
    enabled_option, entity_name, entity_type, 
    success, failure
from audit_unified_enabled_policies
order by policy_name
/

POLICY_NAME          ENABLED_OPTION       ENTITY_NAME  ENTITY_TYPE  SUCCESS    FAILURE   
-------------------- -------------------- ------------ ------------ ---------- ----------
ORA_LOGON_FAILURES   ALL USERS            ALL USERS    USER         NO         YES       
ORA_SECURECONFIG     ALL USERS            ALL USERS    USER         YES        YES       

One point to note is that the ORA_LOGON_FAILURES policy is configured to record statements on failure and not on success.
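
If you did want successful logons to be recorded as well, the policy could be re-enabled without the failure-only condition. A sketch (run as a suitably privileged user) :

noaudit policy ora_logon_failures;

audit policy ora_logon_failures;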

ORA_LOGON_FAILURES

We can see which activities are covered by this policy by running :

select audit_option, audit_option_type 
from audit_unified_policies
where policy_name = 'ORA_LOGON_FAILURES';

AUDIT_OPTION         AUDIT_OPTION_TYPE 
-------------------- ------------------
LOGON                STANDARD ACTION   

From this, we can infer that this policy will record any failed logon attempts in the audit trail. Let’s test that hypothesis…

First off, I’m going to issue a failed login attempt :

sqlplus hr/thisisnotthepassword@xepdb1

SQL*Plus: Release 12.2.0.1.0 Production on Thu May 30 14:10:13 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name: 

Right, that should do it. Now to check the audit trail in the database :

select event_timestamp, audit_type, dbusername, action_name, return_code
from unified_audit_trail uat
where unified_audit_policies = 'ORA_LOGON_FAILURES'
and uat.event_timestamp > systimestamp - ((1/24/60) *5) -- happened in the last 5 minutes
order by event_timestamp desc
/

Note here that the record includes the return code, which looks to be the Oracle Error that was generated.
Also note that, in a multitenant environment, you need to be in the relevant container when you query the audit trail.
In this example, I was attempting to login to the XEPDB1 PDB. I would not see this logon failure if I was querying the audit trail from the CDB.
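
Since the return code appears to hold the number of the ORA- error, the failed attempt above could also be found by filtering on it directly. A sketch :

select dbusername, action_name, return_code
from unified_audit_trail
where unified_audit_policies = 'ORA_LOGON_FAILURES'
and return_code = 1017 -- ORA-01017: invalid username/password; logon denied
/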

The UNIFIED_AUDIT_TRAIL contains 99 columns and the relevance of each of them will vary according to what you’re looking at in the audit trail.
As usual, the columns themselves have been documented with comments which you can find by running :

select column_name, comments
from dba_col_comments
where owner = 'AUDSYS'
and table_name = 'UNIFIED_AUDIT_TRAIL'
order by column_name
/

COLUMN_NAME                    COMMENTS                                          
------------------------------ --------------------------------------------------
ACTION_NAME                    Name of the action executed by the user           
ADDITIONAL_INFO                Text comment on the audit trail entry             
APPLICATION_CONTEXTS           SemiColon seperate list of Application Context Nam
                               espace, Attribute, Value information in (APPCTX_NS
                               PACE,APPCTX_ATTRIBUTE=<value>) format             

AUDIT_OPTION                   Auditing option set with the audit statement      
AUDIT_TYPE                     Type of the Audit Record                          
AUTHENTICATION_TYPE            Type of Authentication for the session user       
CLIENT_IDENTIFIER              Client identifier in each Oracle session        
***snip***
XS_SESSIONID                   Real Application User Session Identifier          
XS_TARGET_PRINCIPAL_NAME       Target principal name in Real Application Security
                                operations                                       

XS_USER_NAME                   Real Application User name                        

ORA_SECURECONFIG

By contrast, ORA_SECURECONFIG audits a number of system privileges :

select audit_option, audit_option_type, object_schema
from audit_unified_policies
where policy_name = 'ORA_SECURECONFIG'
order by 1
/

AUDIT_OPTION                             AUDIT_OPTION_TYPE  OBJECT_SCHEMA                 
---------------------------------------- ------------------ ------------------------------
ADMINISTER KEY MANAGEMENT                SYSTEM PRIVILEGE   NONE                          
ALTER ANY PROCEDURE                      SYSTEM PRIVILEGE   NONE                          
ALTER ANY SQL TRANSLATION PROFILE        SYSTEM PRIVILEGE   NONE                          
ALTER ANY TABLE                          SYSTEM PRIVILEGE   NONE                          
ALTER DATABASE                           SYSTEM PRIVILEGE   NONE                          
ALTER DATABASE DICTIONARY                STANDARD ACTION    NONE                          
ALTER DATABASE LINK                      STANDARD ACTION    NONE                          
ALTER PLUGGABLE DATABASE                 STANDARD ACTION    NONE                          
ALTER PROFILE                            STANDARD ACTION    NONE                          
ALTER ROLE                               STANDARD ACTION    NONE                          
ALTER SYSTEM                             SYSTEM PRIVILEGE   NONE                          
ALTER USER                               STANDARD ACTION    NONE                          
AUDIT SYSTEM                             SYSTEM PRIVILEGE   NONE                          
BECOME USER                              SYSTEM PRIVILEGE   NONE                          
CREATE ANY JOB                           SYSTEM PRIVILEGE   NONE                          
CREATE ANY LIBRARY                       SYSTEM PRIVILEGE   NONE                          
CREATE ANY PROCEDURE                     SYSTEM PRIVILEGE   NONE                          
CREATE ANY SQL TRANSLATION PROFILE       SYSTEM PRIVILEGE   NONE                          
CREATE ANY TABLE                         SYSTEM PRIVILEGE   NONE                          
CREATE DATABASE LINK                     STANDARD ACTION    NONE                          
CREATE DIRECTORY                         STANDARD ACTION    NONE                          
CREATE EXTERNAL JOB                      SYSTEM PRIVILEGE   NONE                          
CREATE PLUGGABLE DATABASE                STANDARD ACTION    NONE                          
CREATE PROFILE                           STANDARD ACTION    NONE                          
CREATE PUBLIC SYNONYM                    SYSTEM PRIVILEGE   NONE                          
CREATE ROLE                              STANDARD ACTION    NONE                          
CREATE SQL TRANSLATION PROFILE           SYSTEM PRIVILEGE   NONE                          
CREATE USER                              SYSTEM PRIVILEGE   NONE                          
DROP ANY PROCEDURE                       SYSTEM PRIVILEGE   NONE                          
DROP ANY SQL TRANSLATION PROFILE         SYSTEM PRIVILEGE   NONE                          
DROP ANY TABLE                           SYSTEM PRIVILEGE   NONE                          
DROP DATABASE LINK                       STANDARD ACTION    NONE                          
DROP DIRECTORY                           STANDARD ACTION    NONE                          
DROP PLUGGABLE DATABASE                  STANDARD ACTION    NONE                          
DROP PROFILE                             STANDARD ACTION    NONE                          
DROP PUBLIC SYNONYM                      SYSTEM PRIVILEGE   NONE                          
DROP ROLE                                STANDARD ACTION    NONE                          
DROP USER                                SYSTEM PRIVILEGE   NONE                          
EXECUTE                                  OBJECT ACTION      SYS                           
EXECUTE                                  OBJECT ACTION      REMOTE_SCHEDULER_AGENT        
EXEMPT ACCESS POLICY                     SYSTEM PRIVILEGE   NONE                          
EXEMPT REDACTION POLICY                  SYSTEM PRIVILEGE   NONE                          
GRANT ANY OBJECT PRIVILEGE               SYSTEM PRIVILEGE   NONE                          
GRANT ANY PRIVILEGE                      SYSTEM PRIVILEGE   NONE                          
GRANT ANY ROLE                           SYSTEM PRIVILEGE   NONE                          
LOGMINING                                SYSTEM PRIVILEGE   NONE                          
PURGE DBA_RECYCLEBIN                     SYSTEM PRIVILEGE   NONE                          
SET ROLE                                 STANDARD ACTION    NONE                          
TRANSLATE ANY SQL                        SYSTEM PRIVILEGE   NONE   

It so happens that I’ve installed the HR demo application on this database by following Oracle’s instructions and running $ORACLE_HOME/demo/schema/human_resources/hr_main.sql as SYSTEM.

This policy has captured the DDL from this activity :

select scn, dbusername, system_privilege_used, action_name, object_schema, object_name
from unified_audit_trail
where unified_audit_policies = 'ORA_SECURECONFIG'
and scn >= 2038863
order by event_timestamp
/

       SCN DBUSERNAME   SYSTEM_PRIVILEGE_USED                    ACTION_NAME          OBJECT_SCHEM OBJECT_NAME         
---------- ------------ ---------------------------------------- -------------------- ------------ --------------------
   2038863 SYSTEM       CREATE USER                              CREATE USER                       HR                  
   2038869 SYSTEM       ALTER USER                               ALTER USER                        HR                  
   2038874 SYSTEM       ALTER USER                               ALTER USER                        HR                  
   2038880 SYSTEM       GRANT ANY PRIVILEGE                      GRANT                                                 
   2038886 SYSTEM       GRANT ANY ROLE, GRANT ANY PRIVILEGE      GRANT                                                 
   2038897 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           REGIONS             
   2038910 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           REGIONS             
   2038923 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           COUNTRIES           
   2038929 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           COUNTRIES           
   2038937 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           LOCATIONS           
   2038949 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           LOCATIONS           
   2038962 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           DEPARTMENTS         
   2038976 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
   2038988 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           JOBS                
   2039000 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           JOBS                
   2039016 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           EMPLOYEES           
   2039030 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           EMPLOYEES           
   2039036 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
   2039052 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           JOB_HISTORY         
   2039068 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           JOB_HISTORY         
   2040134 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
   2040179 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
   2040303 SYSTEM       CREATE ANY PROCEDURE                     CREATE PROCEDURE     HR           SECURE_DML          
   2040318 SYSTEM       CREATE ANY PROCEDURE                     CREATE PROCEDURE     HR           ADD_JOB_HISTORY     
   2041099 SYSTEM       ALTER USER                               ALTER USER                        HR                  

25 rows selected. 

NOTE that the audit trail contains the System Change Number (SCN) as well as a timestamp, which can make it easier to select the part of the audit trail you are interested in at any given time.

As well as the details of who did what, the audit trail holds the SQL that was executed as part of this operation. For example, if we wanted to check what statement was run when the HR.REGIONS table was created, we could run :

select sql_text
from unified_audit_trail
where scn = 2038897;

SQL_TEXT                                                                        
--------------------------------------------------------------------------------
CREATE TABLE regions                                                            
    ( region_id      NUMBER                                                     
       CONSTRAINT  region_id_nn NOT NULL                                        
    , region_name    VARCHAR2(25)                                               
    )                            

Note that an SCN is unique to a transaction. As we're looking for a DDL statement, which is almost always contained in its own transaction (except when it isn't), it's probably not such a good idea to rely on it as a de facto Primary Key when you're searching for DML statements, which may share a transaction (and therefore an SCN) with several others. Obviously, an SCN is completely useless when you're searching for SELECT statements.
Whilst we're on the subject of predicates when selecting from the audit trail, it's worth bearing in mind that the underlying table is actually partitioned by EVENT_TIMESTAMP. By default, these partitions cover one month each. Including EVENT_TIMESTAMP in the predicate allows the optimizer to perform partition pruning, which may help query performance.
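
For example, a query along these lines (a sketch, re-using the ORA_SECURECONFIG entries we have already seen) gives the optimizer the chance to prune partitions :

select event_timestamp, dbusername, action_name
from unified_audit_trail
where event_timestamp >= systimestamp - interval '1' day
and unified_audit_policies = 'ORA_SECURECONFIG'
order by event_timestamp
/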

Anyway, speaking of auditing DML, it’s time to introduce…

The Demo Application

We have an application to hold details of each manager who has won each of England’s major domestic trophies. The application is owned by MOTTY, who is a bit of a footie geek. We also have an application user called TEDDY who hasn’t been around long and still thinks that football didn’t exist before the Premier League…

create user motty identified by motty
/
alter user motty quota unlimited on users
/

grant connect, create table, create sequence, create procedure to motty
/

create user teddy identified by teddy
/

grant connect to teddy
/

The application objects are the WINNING_MANAGERS table…

create table winning_managers(
    wm_name varchar2(100) constraint wm_pk primary key,
    league_title number(4),
    fa_cup number(4),
    league_cup number(4))
/

comment on table winning_managers is 'Managers who have won a career treble and the years in which they first won each major English domestic trophy'
/

comment on column winning_managers.wm_name is 'The name of the manager'
/

comment on column winning_managers.league_title is 'The year the manager first won the Title'
/

comment on column winning_managers.fa_cup is 'The year the manager first won the FA Cup'
/

comment on column winning_managers.league_cup is 
    'The year the manager first won the League (Coca-Cola/Worthington/Carling/Capital One/EFL/Carabao) Cup'
/

…and a package…

create or replace package managers_api as
    procedure ins( 
        i_name winning_managers.wm_name%type, 
        i_title winning_managers.league_title%type, 
        i_fac winning_managers.fa_cup%type, 
        i_lc winning_managers.league_cup%type);
        
    procedure upd(
        i_name winning_managers.wm_name%type,
        i_title winning_managers.league_title%type, 
        i_fac winning_managers.fa_cup%type, 
        i_lc winning_managers.league_cup%type);
        
    procedure del( i_name winning_managers.wm_name%type);
end managers_api;
/

create or replace package body managers_api is

    procedure ins( 
        i_name winning_managers.wm_name%type, 
        i_title winning_managers.league_title%type, 
        i_fac winning_managers.fa_cup%type, 
        i_lc winning_managers.league_cup%type)
    is
    begin
        insert into winning_managers( wm_name, league_title, fa_cup, league_cup)
        values( i_name, i_title, i_fac, i_lc);
    end ins;
        
    procedure upd(
        i_name winning_managers.wm_name%type,
        i_title winning_managers.league_title%type, 
        i_fac winning_managers.fa_cup%type, 
        i_lc winning_managers.league_cup%type)
    is
    begin
        if i_name is null then
            raise_application_error(-20000, 'Manager name must be specified');
        end if;
        
        update winning_managers
        set league_title = nvl(i_title, league_title),
            fa_cup = nvl( i_fac, fa_cup),
            league_cup = nvl(i_lc, league_cup)
        where wm_name = i_name;
    end upd;
        
    procedure del(i_name winning_managers.wm_name%type)
    is
    begin
        if i_name is null then
            raise_application_error(-20000, 'Manager name must be specified');
        end if;
        delete from winning_managers
        where wm_name = i_name;
    end del;
end managers_api;
/

For reasons which will shortly become apparent, Teddy has been granted access to the application as follows :

grant select, insert, update, delete on winning_managers to teddy
/

grant execute on managers_api to teddy
/

Teddy is working on a project to identify all managers who have won each of the major English domestic trophies in their career.
He immediately takes advantage of these new privileges to add all of the Premier League winning managers to the application…

begin
    motty.managers_api.ins('SIR ALEX FERGUSON', 1993, 1994, 2006);
    motty.managers_api.ins('KENNY DALGLISH', 1995, null, 2012);
    motty.managers_api.ins('ARSENE WENGER', 1998, 1998, null);
    motty.managers_api.ins('JOSE MOURINHO', 2005, 2007, 2006);
    motty.managers_api.ins('CARLO ANCELOTTI', 2010, 2010, null);
    motty.managers_api.ins('ROBERTO MANCINI', 2012, 2011, null);
    motty.managers_api.ins('MANUEL PELLIGRINI', 2014, null, 2014);
    motty.managers_api.ins('CLAUDIO RANIERI', 2016, null, null);
    motty.managers_api.ins('ANTONIO CONTI', 2017, 2018, null);
    motty.managers_api.ins('PEP GUARDIOLA', 2018, 2019, 2018);

    commit;
end;
/

When Motty checks the table, he can see that there are a few problems :

select wm_name, league_title, fa_cup, league_cup
from winning_managers
/

WM_NAME              LEAGUE_TITLE     FA_CUP LEAGUE_CUP
-------------------- ------------ ---------- ----------
SIR ALEX FERGUSON            1993       1994       2006
KENNY DALGLISH               1995                  2012
ARSENE WENGER                1998       1998           
JOSE MOURINHO                2005       2007       2006
CARLO ANCELOTTI              2010       2010           
ROBERTO MANCINI              2012       2011           
MANUEL PELLIGRINI            2014                  2014
CLAUDIO RANIERI              2016                      
ANTONIO CONTI                2017       2018           
PEP GUARDIOLA                2018       2019       2018

10 rows selected. 

It looks like someone has not accounted for anything prior to the 1992/93 season.
Motty needs to find out who is doing this so that he can explain that the FA Cup has been going since 1872, the League since 1888, and the League Cup since 1961.

Auditing DML

In order to create a policy, you need to have the AUDIT SYSTEM privilege or the AUDIT_ADMIN role.
We’ll connect to the PDB as system and grant the role to MOTTY now :

grant audit_admin to motty;

Now, connected as MOTTY, we can create an audit policy for the table :

create audit policy wm_audit
    actions all on motty.winning_managers
    when 'sys_context(''userenv'', ''session_user'') not in (''SYS'', ''SYSTEM'')' evaluate per session
/

Audit created

…and enable it…

audit policy wm_audit
/

Audit succeeded.

Note that the when condition of the policy needs to be formatted as in this example. If you use the standard SQL method for escaping quotes – i.e. :

when q'['sys_context('userenv', 'session_user') not in ('SYS', 'SYSTEM')']' evaluate per session

you’ll find yourself on the wrong end of :

ORA-46368: Audit policy does not have a simple rule condition.

In terms of specifying which users should be included or excluded from this policy, you could re-create it as follows :

noaudit policy wm_audit
/

drop audit policy wm_audit
/

create audit policy wm_audit
    actions all on motty.winning_managers
/

audit policy wm_audit except sys, system	
/

Either way, with the policy in place and enabled, the next time Teddy connects and checks the table…

select wm_name 
from motty.winning_managers
/

…MOTTY gets to see an entry in the audit log :


select dbusername, sql_text
from unified_audit_trail
where unified_audit_policies = 'WM_AUDIT'
/

DBUSERNAME           SQL_TEXT                                          
-------------------- --------------------------------------------------
TEDDY                select wm_name                                    
                     from motty.winning_managers                       

Looking at the policy, we can confirm that any activities by SYS and SYSTEM on this table are excluded from any logging under this policy :


select audit_condition
from audit_unified_policies
where policy_name = 'WM_AUDIT'
/

AUDIT_CONDITION                                                       
----------------------------------------------------------------------
sys_context('userenv', 'session_user') not in ('SYS', 'SYSTEM')

So when SYSTEM runs a query against the table :

select wm_name
from motty.winning_managers
where league_title is not null
and fa_cup is not null
and league_cup is not null
/

WM_NAME                                 
----------------------------------------
SIR ALEX FERGUSON
JOSE MOURINHO
PEP GUARDIOLA

It does not show up in the Audit Trail :

select event_timestamp, sql_text
from unified_audit_trail
where unified_audit_policies = 'WM_AUDIT'
and dbusername = 'SYSTEM'
/

no rows selected

At this point, you may be wondering why you don’t simply dispense with any journalling triggers you have hanging around in your application and just use the audit trail instead. Well, to illustrate one major difference in how these mechanisms might behave consider the following…

Teddy has overheard Motty grumbling in the office and decides to insert another record into the table …

insert into motty.winning_managers(wm_name, league_title, fa_cup, league_cup)
values('MOTTY', 1888, 1872, 1961);

…before having second thoughts…

rollback;

and checking that the record has indeed not been committed :

select league_title, fa_cup, league_cup
from motty.winning_managers
where wm_name = 'MOTTY'
/

no rows selected

Unfortunately, when Motty checks the audit trail again, Teddy is busted…

select dbusername, action_name, sql_text
from unified_audit_trail
where unified_audit_policies = 'WM_AUDIT'
and event_timestamp > systimestamp - ((1/24/60) * 10) 
order by event_timestamp
/

DBUSERNAME           ACTION_NAME          SQL_TEXT                                
-------------------- -------------------- ----------------------------------------
TEDDY                INSERT               insert into motty.winning_managers(wm_na
                                          me, league_title, fa_cup, league_cup)   
                                          values('MOTTY', 1888, 1872, 1961)       
                                                                                 

TEDDY                SELECT               select league_title, fa_cup, league_cup 
                                          from motty.winning_managers             
                                          where wm_name = 'MOTTY'                 

So, even though the INSERT statement was not committed, it still appears in the audit trail. Probably not the sort of thing you want from a journalling trigger.
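
If you do need to work out which audited statements belonged to the same transaction, the TRANSACTION_ID column can help, although the audit trail itself will not tell you whether that transaction was ever committed. A sketch :

select dbusername, action_name, transaction_id
from unified_audit_trail
where unified_audit_policies = 'WM_AUDIT'
and action_name = 'INSERT'
/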

DML from stored procedure calls

Teddy has now started to clean up the data using the MANAGERS_API package :

begin
    -- Correct the dates for Sir Alex Ferguson's first FA Cup and League Cup wins as a manager
    motty.managers_api.upd(i_name => 'SIR ALEX FERGUSON', i_title => null, i_fac => 1990, i_lc => 1992);
    
    -- Delete Kenny Dalglish record...
    motty.managers_api.del(i_name => 'KENNY DALGLISH');
    
    --Re-insert the record to reflect the fact that he has been knighted
    motty.managers_api.ins(i_name => 'SIR KENNY DALGLISH', i_title => 1986, i_fac => 1986, i_lc => 2012);
end;
/

commit;

We can see each of the DML statements resulting from these stored program unit calls. However, whilst the SQL_TEXT shows the bind variable placeholders, the bind variable values themselves can be found in the SQL_BINDS column :

select dbusername, action_name, sql_text, sql_binds
from unified_audit_trail
where unified_audit_policies = 'WM_AUDIT'
and event_timestamp > systimestamp - ((1/24/60) * 10)
order by event_timestamp
/

DBUSERNAME      ACTION_NAME     SQL_TEXT                                                     SQL_BINDS                     
--------------- --------------- ------------------------------------------------------------ ------------------------------
TEDDY           UPDATE          UPDATE WINNING_MANAGERS SET LEAGUE_TITLE = NVL(:B4 , LEAGUE_  #1(0):  #2(4):1990 #3(4):1992
                                TITLE), FA_CUP = NVL( :B3 , FA_CUP), LEAGUE_CUP = NVL(:B2 ,   #4(17):SIR ALEX FERGUSON     
                                LEAGUE_CUP) WHERE WM_NAME = :B1                                                           

TEDDY           DELETE          DELETE FROM WINNING_MANAGERS WHERE WM_NAME = :B1             #1(14):KENNY DALGLISH        
TEDDY           INSERT          INSERT INTO WINNING_MANAGERS( WM_NAME, LEAGUE_TITLE, FA_CUP,  #1(18):SIR KENNY DALGLISH #2(
                                 LEAGUE_CUP) VALUES( :B4 , :B3 , :B2 , :B1 )                4):1986 #3(4):1986 #4(4):2012 

Realising that not completing the list of managers will annoy anyone who has read this far in the hope of finding out, Teddy knuckles down and finishes the job. Once he’s done, we can see the full list :

select wm_name, league_title, fa_cup, league_cup, 
    greatest( league_title, fa_cup, league_cup) as "Career Treble"
from motty.winning_managers
order by 5
/

MANAGER                        LEAGUE_TITLE       FA_CUP   LEAGUE_CUP Career Treble
------------------------------ ------------ ------------ ------------ -------------
Joe Mercer                             1968         1969         1961          1969
Bill Nicholson                         1961         1961         1971          1971
Don Revie                              1969         1972         1968          1972
Sir Alex Ferguson                      1993         1990         1992          1993
George Graham                          1989         1993         1987          1993
Jose Mourinho                          2005         2007         2006          2007
Sir Kenny Dalglish                     1986         1986         2012          2012
Pep Guardiola                          2018         2019         2018          2019

8 rows selected. 

Useful Links and Acknowledgements

As ever, the Oracle documentation is a mine of useful information.

The page for the AUDIT command is a useful starting point, as is the page for CREATE AUDIT POLICY.

As usual, Tim Hall’s Oracle Base has a number of useful articles on this topic, this one being of particular use to me in putting this post together.

Monitoring & Troubleshooting Oracle Cloud at Customer

Syed Jaffar - Fri, 2019-05-31 08:33
The prime advantage of cloud at customer is that it delivers all of the cloud benefits in your own data center, and that is exactly what Oracle Cloud at Customer provides. When Oracle Cloud at Customer is chosen, Oracle is responsible for installing, configuring and managing the software and hardware required to run it. Customers, however, are responsible for monitoring and troubleshooting the resources instantiated on Oracle Cloud at Customer.

Customers need to understand the difference between system and user-space monitoring and the tools each requires. An Oracle Cloud at Customer subscription consists of the following components:


  • Hardware and Software
  • Control panel software
  • The Oracle Advanced Support Gateway (OASG)
  • The Oracle Cloud Service

System monitoring vs User Space Monitoring

Typically, Oracle Cloud at Customer is monitored at two levels:
  1. System
  2. User space
Oracle monitors the system and the customer monitors the user space.

The system or machine resources, such as the hardware, the control panel software and the cloud services on Oracle Cloud at Customer, are managed remotely by Oracle using the Oracle Advanced Support Gateway (OASG). The gateway is used by, and accessible to, Oracle authorized personnel only.

The user space components consist of the following:


  • Oracle Cloud accounts
  • VMs instances on IaaS or PaaS
  • Databases provisioned within the PaaS subscription
  • Applications (Oracle or any third-party)
Oracle manages the following hardware and software components:
  • Ethernet switches
  • Power Supplies
  • Exadata Storage Servers
  • Hypervisor running on the physical servers
Customers can assign administrators to manage cloud accounts. Customers are also free to use any external monitoring agents to monitor user-space components.


Partner Webcast – Oracle APEX on Cloud: architecture, functionality and features

Oracle Application Express (APEX) is a low-code development platform that enables you to build scalable, secure enterprise apps, with world-class features, that can be deployed anywhere. With...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Securely store passwords in PostgreSQL

Yann Neuhaus - Fri, 2019-05-31 00:25

Every application somehow needs to deal with passwords. Some use external authentication methods such as LDAP, others use the framework the database already provides and create users and roles. But it is also not uncommon for applications to implement their own concept for managing users. If an application does this, it should be done the right way, and passwords should never be stored in plain text in the database. PostgreSQL comes with a handy extension that supports you with exactly that.

You might already be aware that PostgreSQL comes with a lot of additional modules by default. One of these modules is pgcrypto, and it can be used for the use case described above: encryption and decryption of strings, so you do not have to implement that on your own. Let's start with a simple table which contains usernames and their passwords:

postgres=# create table app_users ( id int generated always as identity ( cache 10 ) primary key
postgres(#                        , username text not null unique
postgres(#                        , password text not null
postgres(#                        );
CREATE TABLE
postgres=# \d app_users
                         Table "public.app_users"
  Column  |  Type   | Collation | Nullable |           Default            
----------+---------+-----------+----------+------------------------------
 id       | integer |           | not null | generated always as identity
 username | text    |           | not null | 
 password | text    |           | not null | 
Indexes:
    "app_users_pkey" PRIMARY KEY, btree (id)
    "app_users_username_key" UNIQUE CONSTRAINT, btree (username)
postgres=# 

Both the username and password columns are implemented as plain text. If you keep it like that and just insert data, the password will of course be stored in plain text. So how can we use pgcrypto to improve that? Obviously, the first step is to install the extension:

postgres=# create extension pgcrypto;
CREATE EXTENSION
postgres=# \dx
                   List of installed extensions
    Name    | Version |   Schema   |         Description          
------------+---------+------------+------------------------------
 pg_prewarm | 1.2     | public     | prewarm relation data
 pgcrypto   | 1.3     | public     | cryptographic functions
 plpgsql    | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)

Btw: There is a catalog view which you can use to list all available extensions:

postgres=# \d pg_available_extensions;
         View "pg_catalog.pg_available_extensions"
      Column       | Type | Collation | Nullable | Default 
-------------------+------+-----------+----------+---------
 name              | name |           |          | 
 default_version   | text |           |          | 
 installed_version | text | C         |          | 
 comment           | text |           |          | 

postgres=# select * from pg_available_extensions limit 3;
  name   | default_version | installed_version |                comment                 
---------+-----------------+-------------------+----------------------------------------
 plpgsql | 1.0             | 1.0               | PL/pgSQL procedural language
 plperl  | 1.0             |                   | PL/Perl procedural language
 plperlu | 1.0             |                   | PL/PerlU untrusted procedural language
(3 rows)

The function to use (provided by the pgcrypto module) for encrypting strings is crypt(). This function takes two arguments:

  • The actual string to encrypt
  • The salt (a random value) to use for encryption

Adding a user with an encrypted password is as easy as:

postgres=# insert into app_users (username, password) 
postgres-#        values ( 'myuser', crypt('mypassword', gen_salt('bf')) );
INSERT 0 1

In this case we used the Blowfish algorithm to generate the salt. You can also use md5, xdes and des.
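
For the Blowfish algorithm gen_salt() additionally accepts an iteration count between 4 and 31 (the default is 6), which makes the hashing slower and therefore more expensive to brute-force. A quick sketch (the user myuser2 is just for illustration):

postgres=# insert into app_users (username, password) 
postgres-#        values ( 'myuser2', crypt('mypassword', gen_salt('bf', 10)) );
INSERT 0 1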

When we look at the password for our user we will see that it is not plain text anymore:

postgres=# select password from app_users where username = 'myuser';
                           password                           
--------------------------------------------------------------
 $2a$06$8wu4VWVubv/RBYBSuj.1TOojPm0q7FkRwuDSoW0OTOC6FzBGEslIC
(1 row)

This is for the encryption part. For comparing this encrypted string against the plain text version of the string we use the crypt() function again:

postgres=# select (password = crypt('mypassword', password)) AS pwd_match 
postgres-#   from app_users
postgres-#  where username = 'myuser';
 pwd_match 
-----------
 t
(1 row)

Providing the wrong password of course returns false:

postgres=# select (password = crypt('Xmypassword', password)) AS pwd_match 
  from app_users
 where username = 'myuser';
 pwd_match 
-----------
 f
(1 row)
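
If you wrap this comparison in a small helper function, the application never needs to read the stored hash directly. A minimal sketch (the function name check_login is just an example, not something pgcrypto provides):

create or replace function check_login(p_username text, p_password text)
returns boolean
language sql
stable
as $$
    select exists ( select 1
                    from app_users
                    where username = p_username
                    and password = crypt(p_password, password) );
$$;

Calling check_login('myuser', 'mypassword') should then return true, while any other password returns false.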

So, finally, there is not much you need to do for storing encrypted strings in PostgreSQL. Just use it.

The article Securely store passwords in PostgreSQL appeared first on Blog dbi services.

Oracle Instant Client 19c for AIX and HP-UX Itanium is on OTN

Christopher Jones - Thu, 2019-05-30 18:12

A quick announcement: Oracle Instant Client 19.3 is now available on OTN for AIX and HP-UX Itanium.

Oracle Instant Client comprises tools, libraries and SDKs for building and connecting applications to Oracle Database.

I had a few comments here about 19.3.

New Release: ROracle 1.3-2, the DBI-compliant driver for R and Oracle Database

Christopher Jones - Thu, 2019-05-30 17:45

ROracle 1.3-2, the DBI-compliant driver for R and Oracle Database, has been released.

R is a powerful statistical language. ROracle can be used with Oracle Database to perform sophisticated analysis. Many users also use it as part of Oracle R Enterprise.

ROracle 1.3-2 source code, documentation, and binaries (built for R-3.6.0 on Linux and Windows) are available for ROracle on OTN. This patch update isn't on CRAN due to an ongoing issue with CRAN infrastructure.

Don't all rush! There are only a couple of small changes in this update:

  • you can now specify the type of the Oracle Database column using the ora.type attribute in a dataframe when using dbWriteTable

  • ora.encoding is used for DML, when specified

Questions and comments can be posted on the OTN R Technologies forum.

Workarounds for JPPD with view and table(kokbf$), xmltable or json_table functions

XTended Oracle SQL - Thu, 2019-05-30 09:05

You may know that the table() (kokbf$ collection) functions, xmltable and json_table functions block Join-Predicate PushDown (JPPD).

Simple example:

DDL

create table xtest(a, b, c) as
select mod(level,1000),level,rpad('x',100,'x')
from dual
connect by level<=1e4
/
create index itest on xtest(a)
/
create or replace view vtest as
select a,count(b) cnt
from xtest
group by a
/
call dbms_stats.gather_table_stats(user,'xtest');
/


select distinct v.* 
from table(sys.odcinumberlist(1,2,3)) c, vtest v
where v.a = c.column_value;

Plan hash value: 699667151

-------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |       |     1 |    19 |    80   (4)| 00:00:01 |
|   1 |  HASH UNIQUE                            |       |     1 |    19 |    80   (4)| 00:00:01 |
|*  2 |   HASH JOIN                             |       |     1 |    19 |    79   (3)| 00:00:01 |
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|       |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |    VIEW                                 | VTEST |  1000 | 17000 |    49   (3)| 00:00:01 |
|   5 |     HASH GROUP BY                       |       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   6 |      TABLE ACCESS FULL                  | XTEST | 10000 | 80000 |    48   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V"."A"=VALUE(KOKBF$))

same for json_table
select/*+ cardinality(c 1) use_nl(v) push_pred(v) */ * 
from json_table('{"a":[1,2,3]}', '$.a[*]' COLUMNS (a int PATH '$')) c
    ,vtest v
where c.a = v.a;

Plan hash value: 664523328

--------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |     1 |    28 |    78   (2)| 00:00:01 |
|   1 |  NESTED LOOPS          |       |     1 |    28 |    78   (2)| 00:00:01 |
|   2 |   JSONTABLE EVALUATION |       |       |       |            |          |
|*  3 |   VIEW                 | VTEST |     1 |    26 |    49   (3)| 00:00:01 |
|   4 |    SORT GROUP BY       |       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | XTEST | 10000 | 80000 |    48   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("V"."A"="P"."A")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  SEL$F534CA49 / V@SEL$1
         U -  push_pred(v)

same for xmltable

select/*+ leading(c v) cardinality(c 1) use_nl(v) push_pred(v) */ v.*
from  xmltable('(1,3)' columns a int path '.') c,vtest v
where  c.a = v.a(+);

Plan hash value: 564839666

------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                       |     1 |    28 |    78   (2)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER                |                       |     1 |    28 |    78   (2)| 00:00:01 |
|   2 |   COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |     1 |     2 |    29   (0)| 00:00:01 |
|*  3 |   VIEW                             | VTEST                 |     1 |    26 |    49   (3)| 00:00:01 |
|   4 |    SORT GROUP BY                   |                       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   5 |     TABLE ACCESS FULL              | XTEST                 | 10000 | 80000 |    48   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("V"."A"(+)=CAST(TO_NUMBER(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(VALUE(KOKBF$),0,0,54525952,0),
              50,1,2)) AS int ))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  SEL$6722A2F6 / V@SEL$1
         U -  push_pred(v)


And compare with this:

create global temporary table temp_collection(a number);

insert into temp_collection select * from table(sys.odcinumberlist(1,2,3));

select/*+ cardinality(c 1) no_merge(v) */
   distinct v.* 
from temp_collection c, vtest v
where v.a = c.a;

Plan hash value: 3561835411

------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |     1 |    26 |    41   (3)| 00:00:01 |
|   1 |  HASH UNIQUE                             |                 |     1 |    26 |    41   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                           |                 |     1 |    26 |    40   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL                     | TEMP_COLLECTION |     1 |    13 |    29   (0)| 00:00:01 |
|   4 |    VIEW PUSHED PREDICATE                 | VTEST           |     1 |    13 |    11   (0)| 00:00:01 |
|*  5 |     FILTER                               |                 |       |       |            |          |
|   6 |      SORT AGGREGATE                      |                 |     1 |     8 |            |          |
|   7 |       TABLE ACCESS BY INDEX ROWID BATCHED| XTEST           |    10 |    80 |    11   (0)| 00:00:01 |
|*  8 |        INDEX RANGE SCAN                  | ITEST           |    10 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter(COUNT(*)>0)
   8 - access("A"="C"."A")

You can see that JPPD works fine in the case of global temporary tables, so the first, obvious workaround is to avoid such functions with complex views.
But for simple queries like these you have two other simple options:
1. Avoid JPPD and get optimal plans using CVM (complex view merging) by rewriting the query using IN or EXISTS:

select * 
from vtest v
where v.a in (select/*+ cardinality(c 1) */ c.* from table(sys.odcinumberlist(1,2,3)) c);

Plan hash value: 1474391442

---------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT                     |       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                            |       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS                           |       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE                           |       |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|       |     1 |     2 |    29   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                      | ITEST |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID            | XTEST |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"=VALUE(KOKBF$))
The same works with json_table and xmltable:

select * 
from vtest t
where t.a in (select/*+ cardinality(v 1) */ v.a from json_table('{"a":[1,2,3]}', '$.a[*]' COLUMNS (a int PATH '$')) v);

Plan hash value: 2910004067

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT         |       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                |       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS               |       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE               |       |       |       |            |          |
|   5 |      JSONTABLE EVALUATION     |       |       |       |            |          |
|*  6 |     INDEX RANGE SCAN          | ITEST |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID| XTEST |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"="P"."A")

select v.*
from  vtest v
where exists(select/*+ cardinality(c 1) */ 1 from xmltable('(1,3)' columns a int path '.') c where c.a = v.a);

Plan hash value: 1646016183

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT                 |                       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                        |                       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS                       |                       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE                       |                       |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |     1 |     2 |    29   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | ITEST                 |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID        | XTEST                 |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"=CAST(TO_NUMBER(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(VALUE(KOKBF$),0,0,54525952,0),50,1,2)) AS int ))


2. Avoid JPPD using lateral():

select/*+ cardinality(c 1) no_merge(lat) */
   distinct lat.* 
from table(sys.odcinumberlist(1,2,3)) c, 
     lateral(select * from vtest v where v.a = c.column_value) lat;

Plan hash value: 18036714

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                 |    10 |   190 |    41   (3)| 00:00:01 |
|   1 |  HASH UNIQUE                            |                 |    10 |   190 |    41   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                          |                 |    10 |   190 |    40   (0)| 00:00:01 |
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |    VIEW                                 | VW_LAT_4DB60E85 |    10 |   170 |    11   (0)| 00:00:01 |
|   5 |     SORT GROUP BY                       |                 |    10 |    80 |    11   (0)| 00:00:01 |
|   6 |      TABLE ACCESS BY INDEX ROWID BATCHED| XTEST           |    10 |    80 |    11   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN                  | ITEST           |    10 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"=VALUE(KOKBF$))

Let’s look at a slightly more complex query:

Test tables 2

create table xtest1(id primary key, a) as
  select level,level from dual connect by level<=1000;

create table xtest2(a, b, c) as
   select mod(level,1000),level,rpad('x',100,'x')
   from dual
   connect by level<=1e4
/
create index itest2 on xtest2(a)
/
create or replace view vtest2 as
select a,count(b) cnt
from xtest2
group by a
/


select v.* 
from xtest1 t1,
     vtest2 v
where t1.id in (select/*+ cardinality(c 1) */ * from table(sys.odcinumberlist(1,2,3)) c)
  and v.a = t1.a;

Plan hash value: 4293766070

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |              |     1 |    36 |    80   (3)| 00:00:01 |
|*  1 |  HASH JOIN                                 |              |     1 |    36 |    80   (3)| 00:00:01 |
|   2 |   JOIN FILTER CREATE                       | :BF0000      |     1 |    10 |    31   (4)| 00:00:01 |
|   3 |    NESTED LOOPS                            |              |     1 |    10 |    31   (4)| 00:00:01 |
|   4 |     NESTED LOOPS                           |              |     1 |    10 |    31   (4)| 00:00:01 |
|   5 |      SORT UNIQUE                           |              |     1 |     2 |    29   (0)| 00:00:01 |
|   6 |       COLLECTION ITERATOR CONSTRUCTOR FETCH|              |     1 |     2 |    29   (0)| 00:00:01 |
|*  7 |      INDEX UNIQUE SCAN                     | SYS_C0026365 |     1 |       |     0   (0)| 00:00:01 |
|   8 |     TABLE ACCESS BY INDEX ROWID            | XTEST1       |     1 |     8 |     1   (0)| 00:00:01 |
|   9 |   VIEW                                     | VTEST2       |  1000 | 26000 |    49   (3)| 00:00:01 |
|  10 |    HASH GROUP BY                           |              |  1000 |  8000 |    49   (3)| 00:00:01 |
|  11 |     JOIN FILTER USE                        | :BF0000      | 10000 | 80000 |    48   (0)| 00:00:01 |
|* 12 |      TABLE ACCESS FULL                     | XTEST2       | 10000 | 80000 |    48   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("V"."A"="T1"."A")
   7 - access("T1"."ID"=VALUE(KOKBF$))
  12 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"A"))

As you see, CVM can’t help in this case, but we can use lateral():

select/*+ no_merge(lat) */ lat.* 
from xtest1 t1,
     lateral(select * from vtest2 v where v.a = t1.a) lat
where t1.id in (select/*+ cardinality(c 1) */ * from table(sys.odcinumberlist(1,2,3)) c);

Plan hash value: 1798023704

------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |    10 |   360 |    42   (3)| 00:00:01 |
|   1 |  NESTED LOOPS                            |                 |    10 |   360 |    42   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                           |                 |     1 |    10 |    31   (4)| 00:00:01 |
|   3 |    SORT UNIQUE                           |                 |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID           | XTEST1          |     1 |     8 |     1   (0)| 00:00:01 |
|*  6 |     INDEX UNIQUE SCAN                    | SYS_C0026365    |     1 |       |     0   (0)| 00:00:01 |
|   7 |   VIEW                                   | VW_LAT_A18161FF |    10 |   260 |    11   (0)| 00:00:01 |
|   8 |    SORT GROUP BY                         |                 |    10 |    80 |    11   (0)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID BATCHED  | XTEST2          |    10 |    80 |    11   (0)| 00:00:01 |
|* 10 |      INDEX RANGE SCAN                    | ITEST2          |    10 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("T1"."ID"=VALUE(KOKBF$))
  10 - access("A"="T1"."A")

There is also another workaround using the undocumented “precompute_subquery” hint:

select v.* 
from xtest1 t1,
     vtest2 v 
where t1.id in (select/*+ precompute_subquery */ * from table(sys.odcinumberlist(1,2,3)) c)
and v.a = t1.a;

Plan hash value: 1964829099

------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |              |    30 |   480 |    37   (3)| 00:00:01 |
|   1 |  HASH GROUP BY                  |              |    30 |   480 |    37   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                  |              |    30 |   480 |    36   (0)| 00:00:01 |
|   3 |    NESTED LOOPS                 |              |    30 |   480 |    36   (0)| 00:00:01 |
|   4 |     INLIST ITERATOR             |              |       |       |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID| XTEST1       |     3 |    24 |     3   (0)| 00:00:01 |
|*  6 |       INDEX UNIQUE SCAN         | SYS_C0026365 |     3 |       |     2   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN            | ITEST2       |    10 |       |     1   (0)| 00:00:01 |
|   8 |    TABLE ACCESS BY INDEX ROWID  | XTEST2       |    10 |    80 |    11   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("T1"."ID"=1 OR "T1"."ID"=2 OR "T1"."ID"=3)
   7 - access("A"="T1"."A")

It can help even in the most difficult cases, for example when you can’t rewrite the query (in that case you can create a SQL patch or SQL profile with “precompute_subquery”), but I wouldn’t recommend it, since “precompute_subquery” is undocumented, can be used only with simple collections, and is limited to 1000 values.
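For reference, creating such a SQL patch could look roughly like the sketch below. This is only a sketch, assuming Oracle 12.2 or later, where dbms_sqldiag.create_sql_patch is available as a function; the sql_id is purely illustrative, and you would have to verify on your own version that the hint actually reaches the intended query block:

declare
  patch_name varchar2(128);
begin
  -- Illustrative sql_id: replace it with the sql_id of your own statement.
  -- create_sql_patch returns the name of the created patch.
  patch_name := dbms_sqldiag.create_sql_patch(
                  sql_id    => '5v61z5g2whstq',
                  hint_text => 'PRECOMPUTE_SUBQUERY',
                  name      => 'patch_precompute_subquery');
  dbms_output.put_line('Created SQL patch: ' || patch_name);
end;
/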
Overall, I’d suggest using the lateral() workaround, since it’s the most reliable and very simple.

Categories: Development

[Solved] OIM/OIG – IDCS Integration : SSL Handshake Exception: Received fatal alert: protocol_version

Online Apps DBA - Thu, 2019-05-30 08:12

[Solved] OIM/OIG – IDCS Integration: SSL Handshake Exception TLS1.2: Protocol_version. To sync users from an on-premise Identity Management system to Oracle Identity Cloud Service (IDCS), the IDCS connector can be used with OIM/OIG. For security reasons, this connection must be configured to use TLS 1.2. If you integrate OIM/OIG with IDCS and get an Error like *”javax.net.ssl.SSLException Received […]

The post [Solved] OIM/OIG – IDCS Integration : SSL Handshake Exception: Received fatal alert: protocol_version appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Can you start two (or more) PostgreSQL instances against the same data directory?

Yann Neuhaus - Thu, 2019-05-30 06:41

As PostgreSQL does not know the concept of running multiple instances against the same files on disk (e.g. like Oracle RAC), it should not be possible to start two or more instances against the same data directory. If that worked, the result could only be corruption. In this post we will look at how PostgreSQL detects that, and what mechanisms are built in to avoid the situation of having multiple instances working against the same files on disk.

To start with we create a new cluster:

postgres@rhel8pg:/home/postgres/ [PGDEV] mkdir /var/tmp/pgtest
12:16:46 postgres@rhel8pg:/home/postgres/ [PGDEV] initdb -D /var/tmp/pgtest/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.utf8
  CTYPE:    en_US.utf8
  MESSAGES: en_US.utf8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/tmp/pgtest ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Europe/Zurich
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /var/tmp/pgtest/ -l logfile start

We use a dedicated port and then start it up:

postgres@rhel8pg:/home/postgres/ [PGDEV] export PGPORT=8888
postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest start
waiting for server to start....2019-05-16 12:17:22.399 CEST [7607] LOG:  starting PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
2019-05-16 12:17:22.403 CEST [7607] LOG:  listening on IPv6 address "::1", port 8888
2019-05-16 12:17:22.403 CEST [7607] LOG:  listening on IPv4 address "127.0.0.1", port 8888
2019-05-16 12:17:22.409 CEST [7607] LOG:  listening on Unix socket "/tmp/.s.PGSQL.8888"
2019-05-16 12:17:22.446 CEST [7608] LOG:  database system was shut down at 2019-05-16 12:16:54 CEST
2019-05-16 12:17:22.455 CEST [7607] LOG:  database system is ready to accept connections
 done
server started

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -p 8888 -c "select version()" postgres
                                                  version                                                  
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
(1 row)

What happens when we want to start another instance against that data directory?

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest start
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....2019-05-16 12:18:26.252 CEST [7629] FATAL:  lock file "postmaster.pid" already exists
2019-05-16 12:18:26.252 CEST [7629] HINT:  Is another postmaster (PID 7607) running in data directory "/var/tmp/pgtest"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

When PostgreSQL starts up it looks for a file called “postmaster.pid”, which exists in the data directory once an instance is started. If that file exists, PostgreSQL will not start another instance against the same data directory. Once the instance is stopped, the file is removed:

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ stop
waiting for server to shut down....2019-05-16 12:48:50.636 CEST [7896] LOG:  received fast shutdown request
2019-05-16 12:48:50.641 CEST [7896] LOG:  aborting any active transactions
2019-05-16 12:48:50.651 CEST [7896] LOG:  background worker "logical replication launcher" (PID 7903) exited with exit code 1
2019-05-16 12:48:50.651 CEST [7898] LOG:  shutting down
2019-05-16 12:48:50.685 CEST [7896] LOG:  database system is shut down
 done
server stopped
postgres@rhel8pg:/home/postgres/ [PGDEV] ls -al /var/tmp/pgtest/postmaster.pid
ls: cannot access '/var/tmp/pgtest/postmaster.pid': No such file or directory

So, at least by default, it is not possible to start two or more instances, as PostgreSQL checks whether postmaster.pid already exists. Let’s remove that file and try again:

postgres@rhel8pg:/home/postgres/ [PGDEV] rm /var/tmp/pgtest/postmaster.pid
postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest start
waiting for server to start....2019-05-16 12:20:17.754 CEST [7662] LOG:  starting PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
2019-05-16 12:20:17.756 CEST [7662] LOG:  could not bind IPv6 address "::1": Address already in use
2019-05-16 12:20:17.756 CEST [7662] HINT:  Is another postmaster already running on port 8888? If not, wait a few seconds and retry.
2019-05-16 12:20:17.756 CEST [7662] LOG:  could not bind IPv4 address "127.0.0.1": Address already in use
2019-05-16 12:20:17.756 CEST [7662] HINT:  Is another postmaster already running on port 8888? If not, wait a few seconds and retry.
2019-05-16 12:20:17.756 CEST [7662] WARNING:  could not create listen socket for "localhost"
2019-05-16 12:20:17.756 CEST [7662] FATAL:  could not create any TCP/IP sockets
2019-05-16 12:20:17.756 CEST [7662] LOG:  database system is shut down
 stopped waiting
pg_ctl: could not start server
Examine the log output.

Again, this does not work, and even the initial instance was shut down because PostgreSQL detected that the lock file was not there anymore:

2019-05-16 12:20:22.540 CEST [7607] LOG:  could not open file "postmaster.pid": No such file or directory
2019-05-16 12:20:22.540 CEST [7607] LOG:  performing immediate shutdown because data directory lock file is invalid
2019-05-16 12:20:22.540 CEST [7607] LOG:  received immediate shutdown request
2019-05-16 12:20:22.540 CEST [7607] LOG:  could not open file "postmaster.pid": No such file or directory
2019-05-16 12:20:22.544 CEST [7612] WARNING:  terminating connection because of crash of another server process
2019-05-16 12:20:22.544 CEST [7612] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2019-05-16 12:20:22.544 CEST [7612] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2019-05-16 12:20:22.549 CEST [7664] WARNING:  terminating connection because of crash of another server process
2019-05-16 12:20:22.549 CEST [7664] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.

Let’s start the first instance again:

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ start
waiting for server to start....2019-05-16 12:22:20.136 CEST [7691] LOG:  starting PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
2019-05-16 12:22:20.140 CEST [7691] LOG:  listening on IPv6 address "::1", port 8888
2019-05-16 12:22:20.140 CEST [7691] LOG:  listening on IPv4 address "127.0.0.1", port 8888
2019-05-16 12:22:20.148 CEST [7691] LOG:  listening on Unix socket "/tmp/.s.PGSQL.8888"
2019-05-16 12:22:20.193 CEST [7693] LOG:  database system was interrupted; last known up at 2019-05-16 12:17:22 CEST
.2019-05-16 12:22:21.138 CEST [7693] LOG:  database system was not properly shut down; automatic recovery in progress
2019-05-16 12:22:21.143 CEST [7693] LOG:  redo starts at 0/15D3420
2019-05-16 12:22:21.143 CEST [7693] LOG:  invalid record length at 0/15D3458: wanted 24, got 0
2019-05-16 12:22:21.143 CEST [7693] LOG:  redo done at 0/15D3420
2019-05-16 12:22:21.173 CEST [7691] LOG:  database system is ready to accept connections
 done
server started

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -p 8888 -c "select version()" postgres
                                                  version                                                  
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
(1 row)

Let’s change the port for the second instance and then try again to start it against the same data directory:

postgres@rhel8pg:/home/postgres/ [PGDEV] export PGPORT=8889
postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ start
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....2019-05-16 12:24:41.700 CEST [7754] FATAL:  lock file "postmaster.pid" already exists
2019-05-16 12:24:41.700 CEST [7754] HINT:  Is another postmaster (PID 7741) running in data directory "/var/tmp/pgtest"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

That does not work either, which is good. Let’s be a bit nastier and truncate the postmaster.pid file:

postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid 
7790
/var/tmp/pgtest
1558002434
8888
/tmp
localhost
  8888001    819201
ready   
postgres@rhel8pg:/home/postgres/ [PGDEV] cat /dev/null > /var/tmp/pgtest/postmaster.pid
postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid

The PID file is now empty, and right after emptying it we can see this in the PostgreSQL log file:

2019-05-16 12:30:14.140 CEST [7790] LOG:  lock file "postmaster.pid" contains wrong PID: 0 instead of 7790
2019-05-16 12:30:14.140 CEST [7790] LOG:  performing immediate shutdown because data directory lock file is invalid
2019-05-16 12:30:14.140 CEST [7790] LOG:  received immediate shutdown request
2019-05-16 12:30:14.149 CEST [7795] WARNING:  terminating connection because of crash of another server process
2019-05-16 12:30:14.149 CEST [7795] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2019-05-16 12:30:14.149 CEST [7795] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2019-05-16 12:30:14.160 CEST [7790] LOG:  database system is shut down

So even that case is detected, and PostgreSQL protects you from starting another instance against the same data directory. Let’s try something else and modify PGDATA in the postmaster.pid file:

postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid 
7896
/var/tmp/pgtest
1558002751
8888
/tmp
localhost
  8888001    851969
ready   

postgres@rhel8pg:/home/postgres/ [PGDEV] sed -i  's/\/var\/tmp\/pgtest/\/var\/tmp\/pgtest2/g' /var/tmp/pgtest/postmaster.pid 
postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid
7896
/var/tmp/pgtest2
1558002751
8888
/tmp
localhost
  8888001    851969
ready   

Although we changed PGDATA, PostgreSQL will not start another instance against this data directory:

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ start
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....2019-05-16 12:35:28.540 CEST [7973] FATAL:  lock file "postmaster.pid" already exists
2019-05-16 12:35:28.540 CEST [7973] HINT:  Is another postmaster (PID 7896) running in data directory "/var/tmp/pgtest"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

So by default you cannot get PostgreSQL to start two or more instances against the same data directory. There is a comment about this behaviour in src/backend/postmaster/postmaster.c:

                /*
                 * Once a minute, verify that postmaster.pid hasn't been removed or
                 * overwritten.  If it has, we force a shutdown.  This avoids having
                 * postmasters and child processes hanging around after their database
                 * is gone, and maybe causing problems if a new database cluster is
                 * created in the same place.  It also provides some protection
                 * against a DBA foolishly removing postmaster.pid and manually
                 * starting a new postmaster.  Data corruption is likely to ensue from
                 * that anyway, but we can minimize the damage by aborting ASAP.
                 */

“Once a minute” leaves a window, though, so we might be able to start a second instance if we are fast enough. Let’s try again: this time we start the first one, remove the lock file, and immediately start another one using a different port:

export PGPORT=8888
pg_ctl -D /var/tmp/pgtest start
rm -f /var/tmp/pgtest/postmaster.pid
export PGPORT=8889
pg_ctl -D /var/tmp/pgtest start

And here you have it:

postgres@rhel8pg:/home/postgres/ [pg120] ps -ef | grep postgres
postgres  1445     1  0 May27 ?        00:00:00 /usr/lib/systemd/systemd --user
postgres  1456  1445  0 May27 ?        00:00:00 (sd-pam)
root      9780   786  0 06:09 ?        00:00:00 sshd: postgres [priv]
postgres  9783  9780  0 06:09 ?        00:00:00 sshd: postgres@pts/1
postgres  9784  9783  0 06:09 pts/1    00:00:00 -bash
postgres 10302     1  0 06:19 ?        00:00:00 /u01/app/postgres/product/DEV/db_1/bin/postgres -D /var/tmp/pgtest
postgres 10304 10302  0 06:19 ?        00:00:00 postgres: checkpointer   
postgres 10305 10302  0 06:19 ?        00:00:00 postgres: background writer   
postgres 10306 10302  0 06:19 ?        00:00:00 postgres: walwriter   
postgres 10307 10302  0 06:19 ?        00:00:00 postgres: autovacuum launcher   
postgres 10308 10302  0 06:19 ?        00:00:00 postgres: stats collector   
postgres 10309 10302  0 06:19 ?        00:00:00 postgres: logical replication launcher   
postgres 10313     1  0 06:19 ?        00:00:00 /u01/app/postgres/product/DEV/db_1/bin/postgres -D /var/tmp/pgtest
postgres 10315 10313  0 06:19 ?        00:00:00 postgres: checkpointer   
postgres 10316 10313  0 06:19 ?        00:00:00 postgres: background writer   
postgres 10317 10313  0 06:19 ?        00:00:00 postgres: walwriter   
postgres 10318 10313  0 06:19 ?        00:00:00 postgres: autovacuum launcher   
postgres 10319 10313  0 06:19 ?        00:00:00 postgres: stats collector   
postgres 10320 10313  0 06:19 ?        00:00:00 postgres: logical replication launcher   
postgres 10327  9784  0 06:19 pts/1    00:00:00 ps -ef

Conclusion: PostgreSQL does some basic checks to avoid starting two instances against the same files on disk. But if you really want to (and of course you never should), you can achieve it, with all the consequences that follow. Don’t do it!

The post Can you start two (or more) PostgreSQL instances against the same data directory? appeared first on Blog dbi services.

Lastest Blog Posts from ACE Program Members - May 19-25, 2019

OTN TechBlog - Thu, 2019-05-30 05:00
How it all stacks up...

...depends on who's doing the stacking. The blog posts listed below, published by members of the Oracle ACE Program between May 19th and 25th, reveal a bit about how these people get things done.

 

Oracle ACE Director

Edward Whalen
Chief Technologist, Performance Tuning Corporation
Houston, Texas

 

Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, TX

 

Syed Jaffar Hussain
Freelance Consultant, Architect, Trainer
Saudi Arabia

 
Oracle ACE

Rodrigo Mufalani
Principal Database Architect, eProseed
Luxembourg

 

Satoshi Mitani
Database Platform Technical Lead, Yahoo! JAPAN
Tokyo, Japan

 

Sean Stuber
Database Analyst, American Electric Power
Columbus, Ohio

 

Stefan Oehrli
Platform Architect, Trivadis AG
Zurich, Switzerland

 
Oracle ACE Associate

Bruno Reis da Silva
Senior Oracle Database Administrator, IBM
Stockholm, Sweden

 

Tercio Costa
Analista de Dados, Unimed João Pessoa
João Pessoa, Paraíba, Brazil

 

Wadhah Daouehi
Manager/Consultant/Trainer, ORANUX
Riadh city, Tunisia

 

 

Additional Resources

Why Oracle Cloud @ Customer is a good option?

Syed Jaffar - Thu, 2019-05-30 03:53
For most organizations, security is one of the major concerns about moving to the cloud. Though the cloud concept has been around for quite some time, a majority of customers are still concerned about putting their data in the cloud. To gain their confidence, and at the same time let them take full advantage of cloud technologies, various cloud vendors have started offering cloud-at-customer solutions. In this blog post, I am going to discuss the Oracle Cloud at Customer solution: its advantages, subscription model, etc.

Oracle Cloud at Customer delivers the full advantages of cloud technologies in your own data center. You subscribe to hardware and software together when you go for the Cloud at Customer option. Though Oracle does the initial setup, configuration, and day-to-day system management, you still have all the advantages of your own data center's security and network.

Typically, the Cloud at Customer option consists of the following:


  • The hardware required to run Cloud at customer
  • Control panel software
  • The Oracle Advanced Support Gateway
  • Oracle Cloud services
As a customer, your responsibility involves managing your cloud account and subscribed services. At any time, you can check your account balance and your current Oracle Cloud at Customer service usage. You can also view your usage by region, by service, or for a specific time period.
To check your account balance and usage, Oracle recommends that you sign in to your Oracle Cloud Account in an Oracle data region. From there, you can view your overall account usage and Universal Credits balance. 

In a nutshell, Cloud at Customer brings cloud solutions to your data center, where you can apply all the rules of your data center while taking full advantage of the cloud.

Subscribe to business events in Fusion-based SaaS applications from Oracle Integration Cloud ...

For integration with all Oracle Fusion-based Cloud services - like Oracle Sales Cloud and Oracle ERP Cloud - each service provides Business Events which external applications or integrations can...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Build and Deploy a Helidon Microservice Using Oracle Developer Cloud

OTN TechBlog - Wed, 2019-05-29 19:53

Project Helidon was recently introduced by Oracle. It provides a new way to write microservices. This blog will help you understand how to use Oracle Developer Cloud to build and deploy your first Helidon-based microservice on Oracle Container Engine for Kubernetes.

Before we begin, let’s examine a few things:

What is Helidon?

Project Helidon is a set of Java Libraries for writing microservices.  Helidon supports two programming models: Helidon MP, based on MicroProfile 1.2, and Helidon SE, a small, functional style API.

Regardless of which model you choose, you’ll be writing an application that is a Java SE-based program. Helidon is open source and the code is available on GitHub.  To read and learn more about Project Helidon, see the following links:

Get Started

Helidon doesn’t have downloads. Instead, you’ll need to use the Maven releases. This means that you’ll be using the Maven Archetype to get started with your Helidon microservice project. In this blog, we’ll be using the Helidon SE programming model.

The following basic prerequisites should be installed on your machine to develop with Helidon:

  • Maven
  • Java 8
  • Git CLI (for pushing code to the Oracle Developer Cloud Git repository)

Download the Sample Microservice Project for Helidon with Maven

Open the command prompt, if you’re using a Windows machine. Then go to (or create) the directory or folder where you’d like to create the sample Helidon microservice project and execute the following Maven command.

mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-se \
    -DarchetypeVersion=1.1.0 \
    -DgroupId=io.helidon.examples \
    -DartifactId=helidon-quickstart-se \
    -Dpackage=io.helidon.examples.quickstart.se

 

When executed, this Maven command will create the helidon-quickstart-se folder.

The microservice application code, the build files, and the deployment files all reside in the helidon-quickstart-se folder.

 

These are the files and folder(s):

  • src folder –  Contains the microservice application source code
  • app.yaml – Describes the Kubernetes deployment
  • Dockerfile – Provides instructions for building the Docker image
  • Dockerfile.native – Provides instructions for building the Docker image using the Graal VM
  • pom.xml – Project descriptor for the Maven build
  • README.md –  File that contains a description of the project

 

Now let’s create an Oracle Developer Cloud project with a Git repository. We’ll call the Git repository Helidon.git.

 

Navigate to the helidon-quickstart-se folder in your command prompt window and execute the following Git commands to push the Helidon microservice application code to the Git repository you created.

Note: You need to have the Git CLI installed on your development machine to execute Git commands.

git init
git add --all
git commit -m "First commit"
git remote add origin <git repository url>
git push origin master

 

Your Helidon.git repository should have the structure shown below.

 

Configure the Build Job

In Developer Cloud, select Builds in the left navigation bar to display the Builds page. Then click the +Create Job button.  

In the New Job dialog, enter BuildHelidon for the Name and select a Template that has the Docker runtime. Then click the Create button. This build job will build a Docker image for the Helidon microservice code in the Git repository and push it to the DockerHub registry.

In the Git tab, select Git from the Add Git dropdown, select Helidon.git as the Git repository and, for the branch, select master.

 

In the Steps tab, use the Add Step dropdown to add Docker login, Docker build, and Docker push steps.

In the Docker login step, provide your DockerHub Username and Password. Leave the Registry Host empty, since we’re using DockerHub as the registry.

In the Docker build step, enter <DockerHub Username>/helidonmicro for the Image Name and 1.0 for the Version Tag. The full image name shown is <DockerHub Username>/helidonmicro:1.0.

 

In the Docker push step, enter <DockerHub Username>/helidonmicro for the Image Name and 1.0 for the Version Tag. Then click the Save button.

Before we create the build job that will deploy the Helidon microservice Docker container, you need to edit the app.yaml file and modify the Docker image name. To edit that file, go to the Git page, select the Helidon.git repository, and click the app.yaml file link.

 

Click the pencil icon to edit the file.

 

Replace the image name with <your DockerHub username>/helidonmicro:1.0, then click the Commit button to commit the code changes to the master branch.

 

To create another build job, navigate to the Builds page and click the +Create Job button. 

In the New Job dialog enter DeployHelidon for the Name, select the template with Kubectl, then click the Create button. This build job will deploy the Docker image built by the BuildHelidon build job to the Kubernetes cluster.

 

The first thing you’ll do to configure the DeployHelidon build job is to specify the repository where the code is found and select the branch where you’ll be working on the files.  To do this, in the Git tab, add Git from the dropdown, select Helidon.git as the Git repository and, for the branch, select master.

In the Steps tab, select OCIcli and Unix Shell from the Add Step drop down. Take a look at this blog link to see how and where to get the values for the OCIcli configuration. Then, in the Unix Shell build step, enter the following script. You can get the Kubernetes Cluster Id from the Oracle Cloud Infrastructure console. 

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id <cluster-ocid> --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl create -f app.yaml
sleep 30
kubectl get services helidon-quickstart-se
kubectl get pods
kubectl describe pods

When you’re done, click the Save button.

 

Create the Build Pipeline

Navigate to the Pipelines tab in the Builds page. Then click the +Create Pipeline button.

 

In the Create Pipeline dialog, you can enter the Name as HelidonPipeline. Then click the Create button.

Drag and drop the BuildHelidon and DeployHelidon build jobs and then connect them.

 

Double click the link that connects the build jobs and select Successful as the Result Condition. Then click the Apply button.

 

Click the Build button, as shown, to run the build pipeline. The BuildHelidon build job will be executed first and, if it is successful, then the DeployHelidon build job that deploys the container on the Kubernetes cluster on Oracle Cloud will be executed next.

 

After the jobs in the build pipeline finish executing, navigate to the Jobs tab and click the link for the DeployHelidon build job. Then click the log icon for the executed build. You should see messages confirming that the service and deployment were successfully created. Now search the log for the helidon-quickstart-se service and deployment that were created on the Kubernetes cluster, and find the public IP address and port used to access the microservice, as shown below.

 

 

Enter the IP address and port that you retrieved from the log, into the browser using the format shown in the following URL:

http://<retrieved IP address>:<retrieved port>/greet

You should see the “Hello World!” message in your browser.

 

 

So, you’ve seen how Oracle Developer Cloud can help you manage the complete DevOps lifecycle for your Helidon-based microservices and how out-of-the-box support for Build and Deploy to Oracle Container Engine for Kubernetes makes it easier.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Timestamp Oddity

Jonathan Lewis - Wed, 2019-05-29 12:17

[Editorial note: this is something I started writing in 2013, managed to complete in 2017, and still failed to publish. It should have been a follow-on to another posting on the oddities of timestamp manipulation.]

Just as national language support used to be, timestamps and time-related columns are still a bit of a puzzle to the Oracle world – so much so that OEM could cripple a system if it was allowed to do the check for “failed logins over the last 30 minutes”. And, just like NLS, it’s one of those things that you use so rarely that you keep forgetting what went wrong the last time you used it. Here’s one little oddity that I reminded myself about recently:

rem
rem     Script:         timestamp_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          April 2013
rem
create table t1 (
        ts_tz   timestamp(9) with time zone,
        ts_ltz  timestamp(9) with local time zone
);

insert into t1 values(systimestamp, systimestamp);
commit;

alter table t1 add constraint ts_ltz_uk unique (ts_ltz);
alter table t1 add constraint ts_tz_uk  unique (ts_tz);


Nothing terribly difficult – just a table with two variants on the timestamp data type and a unique constraint on both: except for one problem. Watch what happens as I create the unique constraints:

SQL> alter table t1 add constraint ts_ltz_uk unique (ts_ltz);

Table altered.

SQL> alter table t1 add constraint ts_tz_uk  unique (ts_tz);
alter table t1 add constraint ts_tz_uk  unique (ts_tz)
                                        *
ERROR at line 1:
ORA-02329: column of datatype TIME/TIMESTAMP WITH TIME ZONE cannot be unique or a primary key

Slightly unexpected – unless you’ve memorized the manuals, of course, which I hadn’t. I wonder if you can create a unique index on timestamp with time zone:


SQL> create unique index ts_tz_uk on t1(ts_tz);

Index created.

You can’t have a unique constraint, but you CAN create a unique index! How curious – did that really happen?

SQL> select index_name, column_name from user_ind_columns where table_name = 'T1';

INDEX_NAME           COLUMN_NAME
-------------------- --------------------
TS_LTZ_UK            TS_LTZ
TS_TZ_UK             SYS_NC00003$

The index is on a column called SYS_NC00003$ – which looks suspiciously like one of those “function-based-index” things:


SQL> select * from user_ind_expressions where table_name = 'T1';

INDEX_NAME           TABLE_NAME           COLUMN_EXPRESSION                        COLUMN_POSITION
-------------------- -------------------- ---------------------------------------- ---------------
TS_TZ_UK             T1                   SYS_EXTRACT_UTC("TS_TZ")                               1

Oracle has silently invoked the sys_extract_utc() function on our (free-floating) timestamp column to normalize it to UTC. This is really not very friendly but it does make sense, of course – it would be rather expensive to enforce uniqueness if there were (at least) 24 different ways of storing the same absolute value – and 24 is a conservative estimate.
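Working from that extracted expression, a couple of inserts against the table above ought to demonstrate the normalization – the two timestamps below use different time zones but represent the same absolute instant (10:00 UTC), so the second one should collide on the unique index (the schema name in the error message is, of course, just illustrative):

insert into t1 (ts_tz) values (timestamp '2013-04-01 12:00:00.000000000 +02:00');
insert into t1 (ts_tz) values (timestamp '2013-04-01 11:00:00.000000000 +01:00');

-- The second statement should fail with something like:
-- ORA-00001: unique constraint (TEST_USER.TS_TZ_UK) violated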

 

 

Securing the Oracle Cloud

Oracle Security Team - Wed, 2019-05-29 09:01

Greetings from sunny Seattle! My name is Eran Feigenbaum and I am the Chief Information Security Officer for the Oracle Cloud. Oracle Cloud Infrastructure (OCI) is what we call a Gen2 cloud, a fundamentally re-designed public cloud, architected for customer isolation and enterprise application performance superior to the cloud designs of ten years past. OCI is the platform for Autonomous Data Warehouse and Autonomous Transaction Processing and, in short order, for all Oracle applications (see Oracle CEO Mark Hurd on moving NetSuite to the Oracle Cloud). This is my inaugural post on our relaunched corporate security blog (thank you Mary Ann) and I’m thrilled to begin a substantive discussion with you about public cloud security. But first things first: with this blog I will describe how my group is organized and functions to protect the infrastructure for the literally thousands of applications and services moving to, and continuously being developed on, Oracle OCI.

My journey to Oracle was paved on over two decades-worth of experience in security. I was lucky to experience the cloud evolution from all sides in my various roles as pen tester, architect, cloud provider and cloud customer. Certainly, the core set of learnings came from nearly a decade of leading security for what is now Google Cloud. This was during a time when cloud business models were very much in their infancy, as were the protection mechanisms for customer isolation. Later, I would understand the challenges differently as the CISO of an e-commerce venture. Jet.com was a cloud-native business, so while we had no physical data centers, I understood well the limitations of first-generation cloud designs in dealing with cloud-borne threats and data protection requirements. So, when it came to joining OCI, the decision was an easy one. In its Gen2 offering, I saw that Oracle was building the future of enterprise cloud; a place where “enterprise-grade” had meaningful payoff in architecture choices like isolated network virtualization to control threat proliferation and as importantly, DevSecOps was foundational to OCI, not a transformation challenge. What security leader would not want to be a part of that?

OCI distinguishes itself among cloud providers for having predictable performance and a security-first design, so most of our customers are organizations with high sensitivity to data and information protection. They are building high performance computing applications, and that includes our Oracle internal customers, so security must be continuous, ubiquitous, agile and above all scalable. By extension then, the OCI Security Group is in many ways the modern Security Operations Center (SOC). Our job is to enable the continuous integration and continuous deployment (CI/CD) pipeline.

In building the team, I aimed at three main goals: 1) build a complete organization that could address not only detection and response but proactively ensure the security of services developed and deployed on OCI, 2) create a culture and operating practice of frequent communication and metrics sharing among teams to ensure continuous goal evaluation and 3) align with the practices that Oracle’s corporate security teams had set and refined over four decades of protecting customers’ most sensitive data.

To that end, the Chief Security Office at Oracle Cloud Infrastructure (OCI) consists of six teams. Between them, the OCI Security Group provides a comprehensive and proactive set of security services, technologies, guidance, and processes that ensure a good security posture and address security risks.

  • Security Assurance: Works collaboratively with the security teams and stakeholders throughout Oracle to drive the development and deployment of security controls, technologies, processes, and guidance for those building on OCI.
  • Product Security: This team really examines and evolves the OCI architecture, both hardware and software/services, to ensure we are taking advantage of innovations and making those changes that enhance our security posture.
  • Offensive Security: The work of this team is to understand and emulate the methods of bad actors. Some of the work involves research, penetration testing, and simulating advanced threats against our hardware and software. All of this work is about strengthening our architecture and defensive capability.
  • Defensive Security: These are really the first responders of cloud security. They work proactively to spot weaknesses and in the event of incidents, work to remediate them within the shortest possible window.
  • Security Automation Services: We know that automation is fundamental to scaling but it is also key to shortening detection and response time. The team aggregates and correlates information about risks and methods to develop visualizations and tools that expedite risk reduction.
  • Security Go-To-Market: One of the most common requests of me is to share information on our security architecture, methods, tooling and best practices. Our internal and external customers want reference architectures and information on how to benefit from our experience. Having this function as part of the group gives the team access to ground truth and aligns with a core value to “put customers first”.

While the team organization is set up for completeness of function in service to the CI/CD pipeline, the key to achieving continuous security and security improvement is how well all members operate as a unit. I think of each team as being essential to the others. Each area generates intelligence that informs the other units and propels them in a kind of virtuous cycle with security automation enabling accelerated revolutions through this cycle.

 Functionally interdependent and mission aligned

As Oracle engineers, for instance, plan for the re-homing or development of new applications and services on OCI, our security architecture works with them. Throughout the drawing board and design phases, we advise on best practices, compliance considerations, tooling and what the process for continuous security will look like during the integration and deployment phases. Security assurance personnel, experts in code review best practices, give guidance and create awareness about the benefits of a security mindset for code development. At time of implementation and execution, the offensive security team conducts tests looking for weaknesses and vulnerabilities which will be surfaced both to the development teams as well as to our defensive security teams for both near term and long-term strategic remediation. This process is continuous as changes and updates can quickly alter the security posture of an environment or an application, so our aim is rapid response and most importantly refining practices and processes that will reduce the risk from those same vulnerabilities for the long term. This latter includes continuous security awareness training so that a security mindset is the cultural norm even as we scale and grow at a rapid pace.

Agility and scale in security are an imperative for a cloud operator, especially one at Oracle’s size and scope which attracts the most security sensitive businesses, governments and organizations. Our approach to security automation applies to nearly every activity and process of OCI security. We observe that which can be replicated and actioned either without human intervention or through self service mechanisms. Automation provides innovations and tooling that help not only our OCI security group but internal security stakeholders and even customers. Through visibility and self-service mechanisms, we make developers and service owners part of the OCI security mission and consequently improve our ability to maintain consistent security.

I mentioned at the beginning of this post that key to security effectiveness is not only an organizational structure built for the modern cloud but also security functional areas that are interdependent and in constant communication. One of the best ways that I have found to do this in my career managing large teams is through the Objectives and Key Results (OKR) process. Similar to Key Performance Indicators (KPIs), OKRs enable measurement of success or failure, but unlike KPIs, OKRs encourage leaders, teams, and contributors to make big bets and stretch beyond what seems merely achievable toward what can be revolutionary. In his seminal book Measure What Matters (which I talk about to anyone who will listen), John Doerr outlines the structure by which agile enterprises stay aligned to mission even as they adjust to account for changes in business conditions. The key results confirm whether the direction is correct or needs adjusting. The teams of the OCI Security Group stay aligned and informed by one another through the OKR system. The focus on cross-communication, deduplication, and realignment gives us visibility into the incremental improvements and successes.

With this description of the OCI Security Group, I’ve given you some insights to how we secure the industry’s most technically advanced public cloud. Over the next months, I am eager to delve deeper on the architecture choices and innovations that set us apart. Let the journey of getting to know OCI security begin!

 

 

 

 

Utilities Test Drive Analytics from Oracle to Manage Influx of Electric Vehicles

Oracle Press Releases - Wed, 2019-05-29 07:00
Press Release
Utilities Test Drive Analytics from Oracle to Manage Influx of Electric Vehicles
Advanced analytics help utilities better plan for energy demand as cars move from gas to grid

Redwood Shores, Calif.—May 29, 2019

The use of electric vehicles (EVs) is growing at a record rate, with the International Energy Agency (IEA) predicting that the number of electric cars on the road will rise from 3.1 million in 2017 to 125 million in 2030. Enabling utilities to intelligently manage this new energy demand on the power grid, Oracle Utilities has unveiled a breakthrough in EV detection.

Tapping deep machine learning, Oracle Utilities Analytics Insights can identify the presence of an EV, show the time and frequency of charging, and disaggregate the energy consumed by the vehicle, all from advanced metering infrastructure (AMI) data. With this intelligence, utilities can reliably plan for the energy infusion needed to power EVs at scale and engage customers to charge at the times that are least expensive for them and best for the health of the energy grid. The new EV detection capabilities in Oracle Utilities Analytics Insights are currently being piloted by a number of utilities.
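
As a rough illustration of the detection idea described above (a minimal sketch, not Oracle’s proprietary model), a step-change heuristic over 15-minute AMI interval reads can flag candidate EV charging sessions: Level 2 charging typically shows up as a sustained jump of several kilowatts above the household baseline. All thresholds and names below are illustrative assumptions.

from typing import List, Tuple

def detect_ev_sessions(kw: List[float], interval_min: int = 15,
                       min_step_kw: float = 3.0,
                       min_duration_min: int = 60) -> List[Tuple[int, int, float]]:
    """Return (start_idx, end_idx, est_kw) for candidate EV charging sessions.

    A session is flagged when load steps up by at least min_step_kw over the
    preceding read and stays elevated for at least min_duration_min.
    Thresholds are illustrative, not a production disaggregation model.
    """
    sessions = []
    min_len = max(1, min_duration_min // interval_min)
    i = 1
    while i < len(kw):
        if kw[i] - kw[i - 1] >= min_step_kw:
            baseline = kw[i - 1]
            j = i
            # Extend the session while load stays near the elevated level.
            while j < len(kw) and kw[j] - baseline >= min_step_kw * 0.8:
                j += 1
            if j - i >= min_len:
                est_kw = sum(kw[i:j]) / (j - i) - baseline
                sessions.append((i, j, est_kw))
            i = j
        else:
            i += 1
    return sessions

# A day of 15-minute reads with an overnight charging block around 4 kW:
reads = [0.8] * 8 + [4.3, 4.2, 4.4, 4.3, 4.1] + [0.9] * 8
print(detect_ev_sessions(reads))   # -> [(8, 13, ~3.46 kW estimated charger load)]

A production disaggregation model would draw on far richer features (seasonality, weather, appliance signatures), but the step-change intuition is the same.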

“With solar, wind and storage technologies now constituting 90 percent of investment interest, the road is paved for deeper decarbonization of the electricity sector,” said Ben Kellison, director of grid research, Wood Mackenzie Power & Renewables. “The case for transport electrification has never been stronger and the rapid growth in investment interest from car manufacturers is a confirmation of the future consumer demand for EVs. Utilities are now faced with an increasingly clean and decentralized system and they need new data and analytic packages to support a new planning paradigm.”

Impact of the EV Explosion on the Energy Grid

The influx of EVs could represent average additional growth of 1-4 percent in peak load on the grid over the next few decades, according to a report by McKinsey. While this may seem modest, the impact will be highly volatile, causing unpredictable spikes at the local substation and feeder levels in residential areas. This load is projected to reach as high as 30 percent peak growth in certain urban areas that are hotspots for EV adoption.

While this transportation development represents an important step forward in reducing carbon emissions, most electricity grids were created long before EVs were a commercially viable consumer product. As transportation continues to evolve from gas to the grid, utilities must plan for an uptick in energy demand that will vary dramatically by area. 

“With almost every major auto manufacturer releasing new EV models in the coming years, the window of time for utilities to act is closing,” said Dan Byrnes, SVP of product development, Oracle Utilities. “The intelligence our analytics provide is essential for utilities to make needed assessments on grid investments and in tandem, work as trusted advisors to customers who may be in the dark as to how owning an EV is impacting their energy footprint and bill. From utility optimization to proven customer engagement, only Oracle offers a complete package to manage the explosion of EVs.”

Powering Better EV Planning and Engagement

The Oracle EV detection capabilities are powered by more than a decade of research and experience disaggregating household energy data from billions of data points collected from 60 million households across 100 utilities. Oracle’s trained data models can be deployed for each specific household’s usage to understand whether a customer has an EV, how they interact with their EV chargers, and where EVs are clustering on the distribution grid. As such, utilities will be able to better plan for and manage the operational impact of EVs as a new distributed energy resource (DER) on the grid.
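
To make the grid-planning use concrete, the sketch below (an illustration under assumed data shapes, not Oracle’s implementation) rolls hypothetical per-household detection results up to the feeder level to rank likely EV hotspots; all field names are made up.

from collections import defaultdict

def feeder_ev_summary(households):
    """Rank feeders by potential coincident EV charging load.

    `households` is an iterable of dicts such as
    {"feeder": "F-12", "has_ev": True, "charge_kw": 7.2};
    the fields are hypothetical, and real AMI metadata will differ.
    """
    summary = defaultdict(lambda: {"ev_count": 0, "coincident_kw": 0.0})
    for h in households:
        if h.get("has_ev"):
            s = summary[h["feeder"]]
            s["ev_count"] += 1
            s["coincident_kw"] += h.get("charge_kw", 0.0)
    # Feeders with the most potential simultaneous charging come first.
    return sorted(summary.items(),
                  key=lambda kv: kv[1]["coincident_kw"], reverse=True)

Ranking feeders this way mirrors the “where EVs are clustering on the distribution grid” question, albeit at toy scale.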

From a customer perspective, charging an EV can increase a typical household’s energy usage by 15 percent or more and potentially double usage during peak demand times. With the offering, utilities will have the tools to roll out intuitive, user-friendly EV adoption customer journeys and time-of-use (TOU) plans to engage, educate and reward owners for charging during non-peak times. In the future, the same kinds of engagement programs could also let utilities buy back unused energy from their customers’ EV batteries to help balance energy supply and demand in times of need.
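
The arithmetic behind such TOU incentives is easy to sketch. With made-up rates (not any utility’s actual tariff), roughly 285 kWh of monthly charging (about 1,000 miles at ~3.5 miles per kWh) costs nearly twice as much when most of it lands in peak hours:

def ev_charging_cost(monthly_kwh: float, peak_rate: float,
                     offpeak_rate: float, peak_share: float) -> float:
    """Monthly charging cost for a given split between peak and off-peak.

    Rates and shares are illustrative assumptions, not a real tariff.
    """
    return monthly_kwh * (peak_share * peak_rate
                          + (1 - peak_share) * offpeak_rate)

# Mostly peak vs. mostly off-peak charging, at $0.30/kWh peak, $0.12/kWh off-peak:
print(ev_charging_cost(285, 0.30, 0.12, peak_share=0.8))   # ~75.24
print(ev_charging_cost(285, 0.30, 0.12, peak_share=0.1))   # ~39.33

Under these assumed rates, shifting charging off-peak cuts the monthly cost from about $75 to about $39, which is exactly the kind of price signal TOU plans rely on.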

“EVs will have an impact on every part of a utility’s operations—from grid stability and regulatory affairs to customer billing and engagement,” added Byrnes. “With Oracle, our customers have the tools and intelligence they need to make better decisions, maximize outcomes, and increase customer satisfaction every step of the journey.”

Contact Info
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


PeopleSoft Administrator Podcast: #184 – nVision Performance

David Kurtz - Wed, 2019-05-29 06:42
I recorded a second podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about nVision.
(10 May 2019) #184 – nVision Performance
You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

[Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 1 Training Review

Online Apps DBA - Wed, 2019-05-29 05:22

[Q/A] 1Z0-932 Oracle Cloud Infra Architect: Day 1 Review & Feedback
Oracle Cloud Infrastructure (OCI) is used everywhere, be it for Database, EBS (R12), PeopleSoft, JD Edwards, SOA, OBIEE, WebLogic or Unix/Linux machines. There is huge demand in the market for certified cloud architects. Check what 56 students in our May batch learned on Day 1, including […]

The post [Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 1 Training Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
