Feed aggregator

AEM Forms – WebLogic Clustering synch issue for Workbench 2-way-SSL

Yann Neuhaus - Thu, 2019-10-31 13:00

In a previous blog, I described the process to set up AEM Forms so that the AEM Workbench can connect to AEM using "2-way-SSL". This setup is normally independent of the Application Server that you are using to host AEM. However, I already faced an issue (other than this one) which was caused by the 2-way-SSL setup for the Workbench when a WebLogic Cluster is used to host AEM.

As mentioned in the previous blog, I'm not an AEM expert but I know a few things about WebLogic, so the idea here was to set up a fully functional WebLogic Cluster composed of two Managed Servers on two hosts/machines, test it properly and then install the AEM Forms application on top of it. Obviously, AEM Forms was configured behind a Load Balancer for this purpose. At this point, AEM Forms was working perfectly in HA and stopping one of the nodes wasn't a problem.

Then I configured the Workbench for 2-way-SSL, and I did so while connected to AEM Node1 in the Workbench, creating the Hybrid Domain in the AEM AdminUI of Node1, and so on. At the end of the setup, the AEM Workbench was working properly with the 2-way-SSL setup as well, so it looked like the setup was complete. Just to be sure, I stopped AEM Node1 and tried to log in to the AEM Workbench with the exact same parameters (same keystore, same truststore, same passwords) except for the target Server, which I switched to AEM Node2. Doing so, the login failed and I could see the following message in the AEM Node2 Managed Server logs:

####<Feb 12, 2019 2:14:46,277 PM UTC> <Info> <EJB> <aem-node-2> <msAEM-02> <[ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <81fe4dac-31f0-4c25-bf37-17d5b327a901-0000005e> <1549980886277> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-010227> <EJB exception occurred during invocation from home or business: com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapter_fw85em_Intf generated exception: ALC-DSC-124-000: com.adobe.idp.dsc.DSCAuthorizationException: User does not have the Service Read Permission.>

 
Just looking at this message, it's clear that the user account that works properly for AEM Node1 isn't working for AEM Node2. After some investigation, it turned out that the Hybrid Domain wasn't shown in the AEM AdminUI of Node2, for some reason… Both nodes use the same Oracle Database and the same GDS (Global Document Storage) path, so I thought that the issue might be related to a cache somewhere in AEM. Therefore, I started re-creating the Hybrid Domain but cancelled this move right away because I assumed it could bring me more trouble than solutions (I didn't want to create two objects with the same name, risk corruption or whatever…):

  • Open the AEM AdminUI Node2 (Node1 is still down) (https://<AEM_HOST_2>:<AEM_PORT>/adminui)
  • Login with an administrative account (E.g.: administrator)
  • Navigate to: Settings > User Management > Domain Management
    • -> Only 1 domain is displayed, the default one: DefaultDom
  • Click on “New Hybrid Domain”
    • Click on “Cancel”

 
After doing that, the Hybrid Domain (the one created in this blog, named “SSLMutualAuthProvider“) magically appeared, so I assume that it forced a synchronization and an update of the cache on AEM Node2. Trying the AEM Workbench login again without changing the parameters printed the following in the AEM Node2 Managed Server logs:

####<Feb 12, 2019 2:30:43,208 PM UTC> <Info> <com.adobe.livecycle.usermanager.sslauthprovider.SSLMutualAuthProvider> <aem-node-2> <msAEM-02> <[ACTIVE] ExecuteThread: '117' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-24A18C6CA9D79C032EFA> <81fe4dac-31f0-4c25-bf37-17d5b327a901-00000067> <1549981843208> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Got Subject DN as CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH>

 
The above message means that the login is successful and the Workbench is able to load the data from AEM properly. I guess that there are other ways to fix this issue: there is a “Sync Now” as well as a “Refresh” button on the Domain Management page of the AdminUI, so maybe these would have done the same thing and forced a synchronization… Since my cancelled re-creation attempt already solved the issue, I couldn't test further, unfortunately. A restart of AEM Node2 is also sufficient to force a refresh, but this takes a few minutes and requires a downtime, so it's not ideal.
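One last debugging hint: when you face the permission error shown above, it is worth double-checking that the certificate in the Workbench keystore really carries the Subject DN that the Hybrid Domain is supposed to map. A minimal sketch with keytool (the keystore file name, alias and password placeholder are hypothetical):

# The "Owner" line must match the Subject DN that AEM logs on a successful login, e.g.:
# CN=aem-dev,OU=IT,O=dbi services,L=Delemont,ST=Jura,C=CH
keytool -list -v -keystore workbench-keystore.jks -alias workbench -storepass "<keystore_password>" | grep "Owner:"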

This article AEM Forms – WebLogic Clustering synch issue for Workbench 2-way-SSL appeared first on Blog dbi services.

IOT Hash

Jonathan Lewis - Thu, 2019-10-31 09:59

It’s another of my double-entendre titles. The optimizer can turn a hash join involving an index-organized table into a real performance disaster (though you may have to help it along the way by using a silly definition for your primary key columns). This post was inspired by a question posted on the Oracle Developer Community forum recently so the table and column names I’ve used in my model reflect (almost, with a few corrections) the names used in the post.

We start with a simple requirement expressed through the following SQL:


rem
rem     Script:         iot_hash.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

insert
        /*+
                qb_name(insert)
        */
into t_iot(
        id, inst_id, nr_time,
        o_time, st, null_col, svname
)
select
        /*+
                qb_name(main)
                unnest(@subq)
                leading(@sel$a93afaed apar@main ob@subq)
                use_hash(@sel$a93afaed ob@subq)
                swap_join_inputs(@sel$a93afaed ob@subq)
                index_ss_asc(@sel$a93afaed ob@subq (t_iot.st t_iot.inst_id t_iot.svname))
        */
        apar.id,
        'UP',
        to_date('2019-10-24','yyyy-mm-dd'),
        to_date('1969-12-31','yyyy-mm-dd'),
        'IDLE',
        null,
        'tkt007.jj.bb.com'
from
        t_base apar
where
        apar.id not in (
                select
                        /*+
                                qb_name(subq)
                        */
                        id
                from
                        t_iot ob
                where
                        inst_id = 'UP'
        )
and     nvl(apar.gp_nm,'UA') = 'UA'
and     rownum <= 5000
/

The requirement is simple – insert into table t_iot a set of values dictated by a subset of the rows in table t_base if they do not already exist in t_iot. To model the issue that appeared I’ve had to hint the SQL to get the following plan (which I pulled from memory after enabling rowsource execution stats):


---------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |             |      1 |        |   296 (100)|      0 |00:00:00.03 |     788 |    148 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL | T_IOT       |      1 |        |            |      0 |00:00:00.03 |     788 |    148 |       |       |          |
|*  2 |   COUNT STOPKEY          |             |      1 |        |            |    100 |00:00:00.03 |      99 |     91 |       |       |          |
|*  3 |    HASH JOIN RIGHT ANTI  |             |      1 |    100 |   296   (2)|    100 |00:00:00.03 |      99 |     91 |    14M|  1843K|   15M (0)|
|*  4 |     INDEX SKIP SCAN      | T_IOT_STATE |      1 |  12614 |   102   (0)|  10000 |00:00:00.01 |      92 |     91 |       |       |          |
|*  5 |     TABLE ACCESS FULL    | T_BASE      |      1 |    100 |     2   (0)|    100 |00:00:00.01 |       7 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(ROWNUM<=5000)
   3 - access("APAR"."ID"="ID")
   4 - access("INST_ID"='UP')
       filter("INST_ID"='UP')
   5 - filter(NVL("APAR"."GP_NM",'UA')='UA')

The optimizer has unnested (as hinted) the subquery and converted it to an anti-join using a right hash anti-join. Take a look at the Used-Mem for the hash join – would it surprise you to learn that the (not compressed in any way) IOT, all its indexes, and the t_base table together come to less than 4 MB? Something dramatically awful has happened in the hash join to generate a requirement of 14MB. (In the case of the OP this appeared as an unexpected 5GB written to the temporary tablespace.)

Before I address the source of the high memory usage, take a close look at the Predicate Information, particularly operation 3, and ask yourself what the definition of index t_iot_state might be. The predicate joins t_base.id to t_iot.id, and here’s the code to create both tables and all the indexes.

create table t_iot (
        nr_time         timestamp,
        id              varchar2(1024),
        inst_id         varchar2(200),
        o_time          timestamp,
        st              varchar2(200),
        null_col        varchar2(100),
        svname          varchar2(200),
        constraint t_iot_pk primary key(nr_time, id, inst_id)
)
organization index
/

insert into t_iot
select
        sysdate,
        dbms_random.string('l',10),
        'UP',
        sysdate,
        'IDLE',
        null,
        rpad('x',25,'x')
from
        all_objects
where
        rownum <= 1e4 -- > hint to avoid wordpress format issue
/

create index t_iot_state on t_iot(st, inst_id, svname); 
create index idx2        on t_iot(id, inst_id, svname);

create table t_base(
        id              varchar2(400) not null,
        gp_nm           varchar2(200)
)
/

insert into t_base
select
        dbms_random.string('l',10),
        'UA'
from
        all_objects
where
        rownum <= 100 -- > hint to avoid wordpress format issue
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 't_iot',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 't_base',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );
end;
/


The index t_iot_state that Oracle has used in the hash join is defined on the columns (st, inst_id, svname) – so the predicate is doing a comparison with a column that’s not in the index! At least, it’s not visibly declared in the index; but this is a secondary index on an IOT, and IOTs don’t have “normal” rowids: the rowid in a secondary index is the value of the primary key (plus a “block guess”). So the columns in the index (even though not declared in the index) are: (st, inst_id, svname, {nr_time, id, inst_id, blockguess}). So this index does supply the required id column.
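You can verify that the data dictionary reports only the declared columns, even though each index entry also carries the logical rowid; a quick check against the model (a sketch, run in the owning schema):

select  column_position, column_name
from    user_ind_columns
where   index_name = 'T_IOT_STATE'
order by
        column_position
;

-- reports only ST, INST_ID and SVNAME; the primary key columns
-- (nr_time, id, inst_id) travel along invisibly as the logical rowid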

Side note: you’ll see in the list of columns above that inst_id appears twice. In fact (since Oracle 9, I think) the code to handle secondary indexes has been smart enough to avoid this duplication. If the secondary index contains columns from the primary key then the “rowid” doesn’t store those columns; the code knows how to construct the primary key value from the stored PK columns combined with the needed columns from the index entry. This can make IOTs a very nice choice of implementation for “intersection” tables that are used to represent many-to-many joins between two other tables.

Unfortunately this “rowid” is the explanation for the massive memory demand. Take a look at the “Column Projection Information” for my execution plan:


Column Projection Information (identified by operation id):
-----------------------------------------------------------
   2 - "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200], ROWNUM[8]
   3 - (#keys=1) "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200]
   4 - "OB".ROWID[ROWID,1249], "NR_TIME"[TIMESTAMP,11], "ID"[VARCHAR2,1024], "INST_ID"[VARCHAR2,200], "OB".ROWID[ROWID,1249]
   5 - "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200]

The interesting line is operation 4. A hash join takes the rowsource from its first child (the build table) and creates an in-memory hash table (which may spill to disc, of course), so if I see an unreasonable memory allocation (or unexpected spill to disc) a good starting point is to look at what the first child is supplying. In this case the first child seems to be saying that it’s supplying (or allowing for) nearly 3,700 bytes to be passed up to the hash join.

On closer inspection we can see it’s reporting the “rowid” twice, and also reporting the three columns that make up that rowid. I think it’s reasonable to assume that it’s only supplying the rowid once, and maybe it’s not even supplying the other three columns because they are embedded in the rowid. Doing a quick arithmetic check, let’s multiply the size of the rowid by the value of A-rows: 1,249 * 10,000 = 12,490,000. That’s pretty close to the 14MB reported by the hash join in operation 3.

Hypothesis – to get at the id column, Oracle has used this index (actually a very bad choice of those available) to extract the rowid and then passed the rowid up to the parent in a (length padded) fixed format. Oracle has then created a hash table by extracting the id column from the rowid and building the hash table on it but also carrying the length-padded rowid into the hash table.  Possible variants on this theme are that some or all of the other columns in the Column Projection Information are also passed upwards so that the id doesn’t have to be extracted, but if they are they are not padded to their maximum length.

A simple test that this is broadly the right assumption is to re-run the model making the declared length of the rowid much larger to see what happens to the memory allocation. Changing the inst_id declaration from 200 bytes to 1000 bytes (note the stored value is only the 2 bytes needed for the value ‘UP’) the Used-mem jumps to 23 MB (which is an increment very similar to 800 * 10,000).  You’ll note that I chose to experiment with a column that wasn’t the column used in the join. It was a column in the secondary index definition, though, so another test would be to change the nr_time column from a timestamp (11 bytes) to a large varchar2, so I re-ran the test declaring the nr_time as a varchar2(1000) – reverting the inst_id to varchar2(200) – and the Used-mem increased to 25MB.
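If you want to reproduce those variations it’s a small change to the model; here’s a sketch for the first one, assuming the objects created above (if your version complains about modifying a primary key column of the IOT, simply rebuild the table with the new declaration):

alter table t_iot modify inst_id varchar2(1000);

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 't_iot',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );
end;
/

-- re-run the hinted insert and compare the Used-Mem of the hash join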

Preliminary Conclusion

If Oracle uses the contents of the rowid of a secondary index on an IOT in a join then it constructs a fixed format version for the rowid by padding every column in the primary key to its maximum length and concatenating the results. This can have catastrophic side effects on performance if you’ve declared some very long columns “just in case”. Any time you use index organized tables you should remember to check the Column Projection Information in any execution plans that use secondary indexes in case they are passing a large, padded, primary key through the plan to a point where a blocking operation (such as a hash join or merge join) has to accumulate a large number of rows.
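Getting at that projection information is just a matter of the format option you use when reporting the plan; for example, immediately after running the statement with rowsource execution statistics enabled:

-- report the plan of the last statement executed in this session,
-- including the Column Projection Information section
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last +projection'));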

Footnote

In my test case I had to hint the query heavily to force Oracle into the path I wanted to demonstrate.

It’s surprising that the optimizer should have chosen this path in the OP’s system, given that there’s another secondary index that contains the necessary columns in its definition. (So one thought is that there’s a statistics problem to address, or possibly the “good” index is subject to updates that make it become very inefficient (large) very quickly.)

Another oddity of the OP’s system was that Oracle should have chosen to do a right hash anti-join when it looked as if joining the tables in the opposite order would produce a much smaller memory demand and lower cost – there was an explicit swap_join_inputs() hint in the Outline Information (so copying the outline into the query and changing that to no_swap_join_inputs() might have been another viable workaround). In the end the OP hinted the query to use a nested loop (anti-)join from t_base to t_iot – which is another way to avoid the hash table threat with padded rowids.

 

How to Record your Own Music: a Beginner’s Manual

VitalSoftTech - Thu, 2019-10-31 09:56

If you’re an aspiring musician or independent artist, you might be wondering how to record your own music at home through a DIY music production process. By doing this, you could potentially save a lot of money, time, and effort that goes into producing a song. In the 21st century, there’s an app available for […]

The post How to Record your Own Music: a Beginner’s Manual appeared first on VitalSoftTech.

Categories: DBA Blogs

Database Design Question

Tom Kyte - Thu, 2019-10-31 09:47
Hello, Ask Tom Team. I have a table A with columns (client_id,invoice_number,invoice_type) with a composite primary key (client_id,invoice_number). At business level there are 5 invoice_types (70,71,72,73,74). The invoice_number always brings the ...
Categories: DBA Blogs

finding out the source of the data in a table

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I am working on a database application, and i need to know how a table is being populated in the database schema. I have tried querying xxx_dependencies and xxx_source but of no use. I believe that this table might be populated from an...
Categories: DBA Blogs

How to change the plan of a query in execution??

Tom Kyte - Thu, 2019-10-31 09:47
So I often face an issue when a query generates an execution plan assuming wrong carnality and since it assumes it as 1 it goes into a M3RG3 CART3SIAN join. Now if I gather the stats the query does not automatically picks up the new plan. Is there a ...
Categories: DBA Blogs

List of event codes for traces

Tom Kyte - Thu, 2019-10-31 09:47
Hello Masters, Can you give me a link on docs.oracle.com where are listed all the system event codes for the trace like 10046, 10053... and, most important, with there signification? I know the 10046, 10053 codes but I am sure there are many ...
Categories: DBA Blogs

Performance of a VIEW on multiple tables (historical plus current)

Tom Kyte - Thu, 2019-10-31 09:47
Good afternoon, I have a customer that manage millions of data with a lot of tables. The thing is that we have a big table from 2013 until now which we would like to divided in two, one for the last three months and one for the rest (which we call...
Categories: DBA Blogs

Restrict Application access to developers in same workspace

Tom Kyte - Thu, 2019-10-31 09:47
How to show a user only certain amount of applications in app builder(i.e user should not be able to see all applications in app builder)? In a Oracle Apex workspace, I need to create a new user(admin role) such that the new user can only see sele...
Categories: DBA Blogs

Find if event spanning several dates happened via SQL

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have data like below <code> event_flag event_date 1 date1 1 date2 0 date3 1 date4 0 date5 0 date6 1 date7 1 date8 1 date9 ...
Categories: DBA Blogs

pragma autonomous_transaction within procedure before creating synonyms

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have created a stored procedure with in oracle package which creates list of synonyms depending upon the change of dblink server name. I need to execute this procedure to create/replace synonym pointing to another dblink server. My quest...
Categories: DBA Blogs

Undo Tablespaces.

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, Waiting to ask u this question. What is a Undo Tablespace in 9i. Is this similar to Rollback Segments. What are NonStandard Block sizes Why that non-Standarad. Why am i not able to create a RS on a Locally Managed Automatically Si...
Categories: DBA Blogs

How to count no of records in table without count?

Tom Kyte - Thu, 2019-10-31 09:47
2)How to count no of records in table without count? ---Actually,this question asked when i had attended an interview in Dell company.I don't know why they people are asked these type of questions,but i said an answer like in my own way. --->...
Categories: DBA Blogs

Use of Blockchain Helps Speed Global Shipping Transactions

Oracle Press Releases - Thu, 2019-10-31 06:00
Blog
Use of Blockchain Helps Speed Global Shipping Transactions

By Guest Author, Oracle—Oct 31, 2019

While ocean shipping has been growing steadily in recent years, multiple pressures are impacting the predictability of far-flung supply chains and squeezing industry profits. Key to navigating these challenges for shippers, ocean carriers, terminal operators, and other parties involved with shipping ocean freight is better visibility and real-time access to information.

That’s a challenge given the sheer number of participants in a single shipment—and often leads to delays, disputes, and significant extra costs for shippers and carriers. But things are changing thanks to the help of emerging technology and the formation of a new blockchain consortium in this space pioneered by CargoSmart.

For the last year, CargoSmart and Oracle have been working together to develop a blockchain solution that aims to simplify the shipping documentation process and deliver a single source of truth for trusted, real-time sharing of information, thereby increasing trust and boosting efficiency.

More recently, CargoSmart announced a significant milestone in forming the Global Shipping Business Network (GSBN) blockchain consortium, comprising nine leading ocean carriers and terminal operators: CMA CGM, COSCO SHIPPING Lines, COSCO SHIPPING Ports, Hapag-Lloyd, Hutchison Ports, OOCL, Port of Qingdao, PSA International, and Shanghai International Port Group. This not-for-profit joint venture aims to accelerate the digital transformation of the shipping industry, with CargoSmart providing the resources underpinning it.

“The close cooperation between our R&D group and Oracle’s blockchain team has already helped to accelerate the development of some pilot applications. By leveraging Oracle’s enriched technical support and advice, CargoSmart has been able to achieve high levels of operational capability, reduce R&D time, and significantly improve the productivity of its blockchain application developers,” said Romney Wong, Chief Technology Officer of CargoSmart.

Watch the CargoSmart Video

Watch this video to learn more about how CargoSmart is leveraging Oracle Blockchain Platform to bring transparency and trust into the complex systems behind CargoSmart’s logistics and shipping management.

 


Read More Stories from Oracle Cloud

CargoSmart is one of thousands of customers on their journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes

SLES15 SP1 – New features

Yann Neuhaus - Thu, 2019-10-31 05:20

With SLES15, SUSE introduced the Multimodal OS and the unified installer: you only get what you really need. Your OS is flexible and you can easily add the features you need and remove them as well. But this article isn’t meant to be an explanation of the Multimodal OS; it will show you some of the new features of SLES15 SP1.

SUSE supports the migration from SLES15 to SLES15 SP1 in online mode.
You can upgrade using two methods: YaST migration (GUI) and Zypper migration (command line).
Be sure that your system is registered at the SUSE Customer Center or against a local RMT server. Afterwards, just run “zypper migration”, type the number of the product you want to migrate to and accept the license terms. That’s it.
The best way to check whether the migration was successful:

sles15:~ # cat /etc/os-release | grep PRETTY_NAME
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP1"

So let’s have a look at the new features and improvements of SLES15 SP1 .

Unified Installer

SUSE Manager Server and Proxy are now available as base products. Both can be installed using the unified installer.
Point of Service and SLE Real Time are also included in the unified installer now.

Transactional Update

In openSUSE Leap and SUSE CaaS, transactional update was already implemented; now it is also possible to run transactional updates with SLE. To install it, the Transactional Server Module needs to be activated first (no additional key is needed). Afterwards the transactional-update package and its dependencies can be installed.

sle15:~ #  SUSEConnect --product sle-module-transactional-server/15.1/x86_64
Registering system to SUSE Customer Center

Updating system details on https://scc.suse.com ...

Activating sle-module-transactional-server 15.1 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system
sle15:~ # zypper install transactional-update
Refreshing service 'Basesystem_Module_15_SP1_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_15_SP1_x86_64'.
Refreshing service 'Server_Applications_Module_15_SP1_x86_64'.
Refreshing service 'Transactional_Server_Module_15_SP1_x86_64'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 6 NEW packages are going to be installed:
  attr bc openslp psmisc rsync transactional-update

6 new packages to install.
Overall download size: 686.6 KiB. Already cached: 0 B. After the operation, additional 1.3 MiB will be used.
Continue? [y/n/v/...? shows all options] (y): y

As you may know, SUSE uses btrfs with snapper as the default for the file systems. This builds the basis for transactional updates: they are applied into a new snapshot, so the running system is not touched. Using this technology, the updated snapshot will be activated after the next reboot. So this is an update that is:
– Atomic: either fully applied or not at all.
– Easily reverted: after a failed update, the return to the previous (running) system is easy.
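The day-to-day workflow is then quite short; a sketch (the subcommands below ship with the transactional-update package, output omitted):

sle15:~ # transactional-update up        # install all available updates into a new snapshot
sle15:~ # systemctl reboot               # activate the updated snapshot
sle15:~ # transactional-update rollback  # if needed, revert to the previous snapshot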

Simplified btrfs layout

There is now only one single subvolume under /var instead of 10, for simplified and consistent snapshots. This only takes effect for fresh installations; upgraded systems still use the old layout.
Starting with SLES15 SP1 there is also the possibility to have each home directory as a single subvolume, but this is not the default.
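To check which layout your system uses, simply list the subvolumes; a fresh SP1 installation shows a single /var subvolume, while an upgraded system still lists the old ones (output differs per system):

sle15:~ # btrfs subvolume list / | grep '/var'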

Secure encrypted virtualization (SEV)

Data encryption is an important topic in today’s IT environments. Data stored on disk is widely encrypted, but what about the data in RAM? AMD’s SEV gives the opportunity to protect Linux KVM virtual machines by encrypting the memory of each VM with a unique key. It can also generate a signature that attests the correct encryption.
This increases system security a lot and protects VMs against memory-scraping attacks from the hypervisor.
With SLES15 SP1, SUSE provides full support for this technology. For further information about SEV, click here.
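Before configuring VMs for SEV, you can quickly check whether the host actually supports it; a sketch assuming the kvm_amd module is loaded (the sysfs path is standard on recent kernels):

sle15:~ # cat /sys/module/kvm_amd/parameters/sev   # prints 1 (or Y) when SEV is enabled
sle15:~ # dmesg | grep -i sev                      # kernel messages about SEV support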

Quarterly Updates

Starting with 15 SP1, SUSE offers quarterly updates of the installation and package media. They will be refreshed every quarter with all maintenance and security updates, so for the setup of new systems there is always a recent and up-to-date state.

Conclusion

This is not the full list of new features, only an excerpt. But nevertheless, especially the transactional updates make the upgrade to SLES15 SP1 worthwhile. And always think about the security improvements which come with every new release.

This article SLES15 SP1 – New features appeared first on Blog dbi services.

Square root in excel – A Step-By-Step Tutorial

VitalSoftTech - Wed, 2019-10-30 10:30

Have you ever stopped to wonder what life would have been like when there wasn’t a calculator to perform the arithmetical operations for people? It surely sends a shiver down your spine to even imagine the horror of having to survive without a calculator. But in the present times, it wouldn’t be wrong to state […]

The post Square root in excel – A Step-By-Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

Patroni Operations – Changing Parameters

Yann Neuhaus - Wed, 2019-10-30 10:17

Sooner or later all of us have to change a parameter on the database. But how is this done when using a Patroni cluster? Of course there are some specifics you have to consider.
This post will give you a short introduction to this topic.

When you want to change a parameter on a Patroni cluster you have several possibilities:
– Dynamic configuration in DCS. These changes are applied asynchronously to every node.
– Local configuration in patroni.yml (see the sketch after this list). This will take precedence over the dynamic configuration.
– Cluster configuration using “alter system”.
– Environment configuration using local environment variables.
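As a sketch of the second option, a node-local override sits in the postgresql section of patroni.yml; the parameters below are only examples (and note that some parameters, e.g. max_connections, are enforced through the DCS and ignore local values):

postgresql:
  parameters:
    work_mem: 16MB                   # node-local value, takes precedence over the DCS setting
    log_min_duration_statement: 10s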

Change PostgreSQL parameters using patronictl

1. Change parameters that do not need a restart

If you want to change a parameter (or more) for the whole cluster, you should use patronictl. If you want to change the initial configuration as well, you should also adjust patroni.yml.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config PG1

All parameters already set are shown and can be changed like in any other file using the vi commands:

loop_wait: 10
maximum_lag_on_failover: 1048576
postgresql:
  parameters:
    archive_command: /bin/true
    archive_mode: 'on'
    autovacuum_max_workers: '6'
    autovacuum_vacuum_scale_factor: '0.1'
    autovacuum_vacuum_threshold: '50'
    client_min_messages: WARNING
    effective_cache_size: 512MB
    hot_standby: 'on'
    hot_standby_feedback: 'on'
    listen_addresses: '*'
    log_autovacuum_min_duration: 60s
    log_checkpoints: 'on'
    log_connections: 'on'
    log_directory: pg_log
    log_disconnections: 'on'
    log_duration: 'on'
    log_filename: postgresql-%a.log
    log_line_prefix: '%m - %l - %p - %h - %u@%d - %x'
    log_lock_waits: 'on'
    log_min_duration_statement: 30s
    log_min_error_statement: NOTICE
    log_min_messages: WARNING
    log_rotation_age: '1440'
    log_statement: ddl
    log_temp_files: '0'
    log_timezone: Europe/Zurich
    log_truncate_on_rotation: 'on'
    logging_collector: 'on'
    maintenance_work_mem: 64MB
    max_replication_slots: 10
    max_wal_senders: '20'
    port: 5432
    shared_buffers: 128MB
    shared_preload_libraries: pg_stat_statements
    wal_compression: 'off'
    wal_keep_segments: 8
    wal_level: replica
    wal_log_hints: 'on'
    work_mem: 8MB
  use_pg_rewind: true
  use_slots: true
retry_timeout: 10
ttl: 30

Once saved, you get the following:

---
+++
@@ -2,7 +2,8 @@
 maximum_lag_on_failover: 1048576
 postgresql:
   parameters:
-    archive_command: /bin/true
+    archive_command: 'test ! -f /u99/pgdata/PG1/archived_wal/%f && cp %p /u99/pgdata/PG1/archived_wal/%f'
     archive_mode: 'on'
     autovacuum_max_workers: '6'
     autovacuum_vacuum_scale_factor: '0.1'

Apply these changes? [y/N]: y
Configuration changed

When connecting to the database you will see that the parameter is now changed. It is also changed on all the other nodes.

 postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] sq
psql (11.5)
Type "help" for help.

postgres=# show archive_command;
                                  archive_command
------------------------------------------------------------------------------------
 test ! -f /u99/pgdata/PG1/archived_wal/%f && cp %p /u99/pgdata/PG1/archived_wal/%f
(1 row)
2. Change parameters that need a restart

How can parameters that need a restart be changed, especially as we want a minimal downtime of the cluster?
First of all, the parameter can be changed the same way as the parameters that do not need a restart, using patronictl edit-config. Once the parameter is changed, the status overview of the cluster gets a new column showing which nodes need a restart.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+

Afterwards there are two possibilities.

2.1 Restart node by node

If you do not want to restart the whole cluster, you have the possibility to restart each node separately. Keep in mind that you have to restart the Leader node first, otherwise the change does not take effect. It is also possible to schedule the restart of a node.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni1
When should the restart take place (e.g. 2019-10-08T15:33)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni1? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni1
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni2
When should the restart take place (e.g. 2019-10-08T15:34)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |                 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni2? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni2
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni3
When should the restart take place (e.g. 2019-10-08T15:34)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |                 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |                 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni3? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni3
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
2.2 Restart the whole cluster

In case you don’t want to restart node by node and you can afford a downtime, it is also possible to restart the whole cluster (scheduled or immediately):

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1
When should the restart take place (e.g. 2019-10-08T15:37)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni1, patroni2, patroni3? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni1
Success: restart on member patroni2
Success: restart on member patroni3
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
Change PostgreSQL parameters using “alter system”

Of course you can change a parameter only on one node using “alter system”, too.

 postgres@patroni1:/home/postgres/ [PG1] sq
psql (11.5)
Type "help" for help.

postgres=# show archive_Command;
 archive_command
-----------------
 /bin/false
(1 row)

postgres=# alter system set archive_command='/bin/true';
ALTER SYSTEM

postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

postgres=# show archive_command;
 archive_command
-----------------
 /bin/true
(1 row)

Of course, the parameter change is not automatically applied to the replicas; the parameter is only changed on that node. All the other nodes will keep the value from the DCS. So you can change the parameter using “patronictl edit-config” or with an “alter system” command on each node. But you also have to keep in mind the order in which the parameters are applied: the “alter system” change will take precedence over the “patronictl edit-config” change, because it ends up in postgresql.auto.conf, which is applied last.
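If a node has drifted because someone used “alter system”, a sketch to bring it back in line with the DCS configuration is to reset the parameter, which removes the override from postgresql.auto.conf:

postgres=# alter system reset archive_command;
ALTER SYSTEM

postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)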

Conclusion

So if you keep in mind that there are some specifics to changing parameters in a Patroni cluster, it is quite easy to change a parameter. There are some parameters that need the same value on all nodes, e.g. max_connections, max_worker_processes, wal_level, and there are as well some parameters controlled by Patroni, e.g. listen_addresses and port. For more details check the Patroni documentation. And last but not least: if you change the configuration with patronictl and one node still has another configuration, look for a postgresql.auto.conf file in the PGDATA directory. Maybe there you will find the reason for the different parameters on your nodes.
If you are interested in more “Patroni Operations” blogs, check this one as well: Patroni operations: Switchover and Failover.

This article Patroni Operations – Changing Parameters appeared first on Blog dbi services.

PostgreSQL 13 will come with partitioning support for pgbench

Yann Neuhaus - Wed, 2019-10-30 08:32

A lot of people use pgbench to benchmark a PostgreSQL instance, and pgbench is also heavily used by the PostgreSQL developers. While declarative partitioning was introduced in PostgreSQL 10, there was no support for it in pgbench, even in the current version, which is PostgreSQL 12. With PostgreSQL 13, which is currently in development, this will change: pgbench will be able to create a partitioned pgbench_accounts table you can then run your benchmark against.

Having a look at the parameters of pgbench in PostgreSQL 13, two new ones pop up:

postgres@centos8pg:/home/postgres/ [pgdev] pgbench --help
pgbench is a benchmarking tool for PostgreSQL.

Usage:
  pgbench [OPTION]... [DBNAME]

Initialization options:
  -i, --initialize         invokes initialization mode
  -I, --init-steps=[dtgvpf]+ (default "dtgvp")
                           run selected initialization steps
  -F, --fillfactor=NUM     set fill factor
  -n, --no-vacuum          do not run VACUUM during initialization
  -q, --quiet              quiet logging (one message each 5 seconds)
  -s, --scale=NUM          scaling factor
  --foreign-keys           create foreign key constraints between tables
  --index-tablespace=TABLESPACE
                           create indexes in the specified tablespace
  --partitions=NUM         partition pgbench_accounts in NUM parts (default: 0)
  --partition-method=(range|hash)
                           partition pgbench_accounts with this method (default: range)
  --tablespace=TABLESPACE  create tables in the specified tablespace
  --unlogged-tables        create tables as unlogged tables

Options to select what to run:
  -b, --builtin=NAME[@W]   add builtin script NAME weighted at W (default: 1)
                           (use "-b list" to list available scripts)
  -f, --file=FILENAME[@W]  add script FILENAME weighted at W (default: 1)
  -N, --skip-some-updates  skip updates of pgbench_tellers and pgbench_branches
                           (same as "-b simple-update")
  -S, --select-only        perform SELECT-only transactions
                           (same as "-b select-only")

Benchmarking options:
  -c, --client=NUM         number of concurrent database clients (default: 1)
  -C, --connect            establish new connection for each transaction
  -D, --define=VARNAME=VALUE
                           define variable for use by custom script
  -j, --jobs=NUM           number of threads (default: 1)
  -l, --log                write transaction times to log file
  -L, --latency-limit=NUM  count transactions lasting more than NUM ms as late
  -M, --protocol=simple|extended|prepared
                           protocol for submitting queries (default: simple)
  -n, --no-vacuum          do not run VACUUM before tests
  -P, --progress=NUM       show thread progress report every NUM seconds
  -r, --report-latencies   report average latency per command
  -R, --rate=NUM           target rate in transactions per second
  -s, --scale=NUM          report this scale factor in output
  -t, --transactions=NUM   number of transactions each client runs (default: 10)
  -T, --time=NUM           duration of benchmark test in seconds
  -v, --vacuum-all         vacuum all four standard tables before tests
  --aggregate-interval=NUM aggregate data over NUM seconds
  --log-prefix=PREFIX      prefix for transaction time log file
                           (default: "pgbench_log")
  --progress-timestamp     use Unix epoch timestamps for progress
  --random-seed=SEED       set random seed ("time", "rand", integer)
  --sampling-rate=NUM      fraction of transactions to log (e.g., 0.01 for 1%)
  --show-script=NAME       show builtin script code, then exit

Common options:
  -d, --debug              print debugging output
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=USERNAME  connect as specified database user
  -V, --version            output version information, then exit
  -?, --help               show this help, then exit

Report bugs to .

That should give us partitions according to the number of partitions and the partitioning method we chose, so let’s populate a new database:

postgres@centos8pg:/home/postgres/ [pgdev] psql -c "create database pgbench" postgres
CREATE DATABASE
Time: 326.715 ms
postgres@centos8pg:/home/postgres/ [pgdev] pgbench -i -s 10 --partitions=10 --partition-method=range --foreign-keys pgbench
dropping old tables...
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
creating tables...
creating 10 partitions...
generating data...
100000 of 1000000 tuples (10%) done (elapsed 0.20 s, remaining 1.78 s)
200000 of 1000000 tuples (20%) done (elapsed 0.40 s, remaining 1.62 s)
300000 of 1000000 tuples (30%) done (elapsed 0.74 s, remaining 1.73 s)
400000 of 1000000 tuples (40%) done (elapsed 1.23 s, remaining 1.85 s)
500000 of 1000000 tuples (50%) done (elapsed 1.47 s, remaining 1.47 s)
600000 of 1000000 tuples (60%) done (elapsed 1.81 s, remaining 1.21 s)
700000 of 1000000 tuples (70%) done (elapsed 2.25 s, remaining 0.97 s)
800000 of 1000000 tuples (80%) done (elapsed 2.46 s, remaining 0.62 s)
900000 of 1000000 tuples (90%) done (elapsed 2.81 s, remaining 0.31 s)
1000000 of 1000000 tuples (100%) done (elapsed 3.16 s, remaining 0.00 s)
vacuuming...
creating primary keys...
creating foreign keys...
done in 5.78 s (drop tables 0.00 s, create tables 0.07 s, generate 3.29 s, vacuum 0.84 s, primary keys 0.94 s, foreign keys 0.65 s).

The pgbench_accounts table should now be partitioned by range:

postgres@centos8pg:/home/postgres/ [pgdev] psql -c "\d+ pgbench_accounts" pgbench
Partitioned table "public.pgbench_accounts"
Column  |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
----------+---------------+-----------+----------+---------+----------+--------------+-------------
aid      | integer       |           | not null |         | plain    |              |
bid      | integer       |           |          |         | plain    |              |
abalance | integer       |           |          |         | plain    |              |
filler   | character(84) |           |          |         | extended |              |
Partition key: RANGE (aid)
Indexes:
"pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Foreign-key constraints:
"pgbench_accounts_bid_fkey" FOREIGN KEY (bid) REFERENCES pgbench_branches(bid)
Referenced by:
TABLE "pgbench_history" CONSTRAINT "pgbench_history_aid_fkey" FOREIGN KEY (aid) REFERENCES pgbench_accounts(aid)
Partitions: pgbench_accounts_1 FOR VALUES FROM (MINVALUE) TO (100001),
pgbench_accounts_10 FOR VALUES FROM (900001) TO (MAXVALUE),
pgbench_accounts_2 FOR VALUES FROM (100001) TO (200001),
pgbench_accounts_3 FOR VALUES FROM (200001) TO (300001),
pgbench_accounts_4 FOR VALUES FROM (300001) TO (400001),
pgbench_accounts_5 FOR VALUES FROM (400001) TO (500001),
pgbench_accounts_6 FOR VALUES FROM (500001) TO (600001),
pgbench_accounts_7 FOR VALUES FROM (600001) TO (700001),
pgbench_accounts_8 FOR VALUES FROM (700001) TO (800001),
pgbench_accounts_9 FOR VALUES FROM (800001) TO (900001)

The same should work for hash partitioning:

postgres@centos8pg:/home/postgres/ [pgdev] pgbench -i -s 10 --partitions=10 --partition-method=hash --foreign-keys pgbench
dropping old tables...
creating tables...
creating 10 partitions...
generating data...
100000 of 1000000 tuples (10%) done (elapsed 0.19 s, remaining 1.69 s)
200000 of 1000000 tuples (20%) done (elapsed 0.43 s, remaining 1.71 s)
300000 of 1000000 tuples (30%) done (elapsed 0.67 s, remaining 1.55 s)
400000 of 1000000 tuples (40%) done (elapsed 1.03 s, remaining 1.54 s)
500000 of 1000000 tuples (50%) done (elapsed 1.22 s, remaining 1.22 s)
600000 of 1000000 tuples (60%) done (elapsed 1.59 s, remaining 1.06 s)
700000 of 1000000 tuples (70%) done (elapsed 1.80 s, remaining 0.77 s)
800000 of 1000000 tuples (80%) done (elapsed 2.16 s, remaining 0.54 s)
900000 of 1000000 tuples (90%) done (elapsed 2.36 s, remaining 0.26 s)
1000000 of 1000000 tuples (100%) done (elapsed 2.69 s, remaining 0.00 s)
vacuuming...
creating primary keys...
creating foreign keys...
done in 4.99 s (drop tables 0.10 s, create tables 0.08 s, generate 2.74 s, vacuum 0.84 s, primary keys 0.94 s, foreign keys 0.30 s).
postgres@centos8pg:/home/postgres/ [pgdev] psql -c "\d+ pgbench_accounts" pgbench
Partitioned table "public.pgbench_accounts"
Column  |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
----------+---------------+-----------+----------+---------+----------+--------------+-------------
aid      | integer       |           | not null |         | plain    |              |
bid      | integer       |           |          |         | plain    |              |
abalance | integer       |           |          |         | plain    |              |
filler   | character(84) |           |          |         | extended |              |
Partition key: HASH (aid)
Indexes:
"pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Foreign-key constraints:
"pgbench_accounts_bid_fkey" FOREIGN KEY (bid) REFERENCES pgbench_branches(bid)
Referenced by:
TABLE "pgbench_history" CONSTRAINT "pgbench_history_aid_fkey" FOREIGN KEY (aid) REFERENCES pgbench_accounts(aid)
Partitions: pgbench_accounts_1 FOR VALUES WITH (modulus 10, remainder 0),
pgbench_accounts_10 FOR VALUES WITH (modulus 10, remainder 9),
pgbench_accounts_2 FOR VALUES WITH (modulus 10, remainder 1),
pgbench_accounts_3 FOR VALUES WITH (modulus 10, remainder 2),
pgbench_accounts_4 FOR VALUES WITH (modulus 10, remainder 3),
pgbench_accounts_5 FOR VALUES WITH (modulus 10, remainder 4),
pgbench_accounts_6 FOR VALUES WITH (modulus 10, remainder 5),
pgbench_accounts_7 FOR VALUES WITH (modulus 10, remainder 6),
pgbench_accounts_8 FOR VALUES WITH (modulus 10, remainder 7),
pgbench_accounts_9 FOR VALUES WITH (modulus 10, remainder 8).

Looks fine. Now you can easily benchmark against a partitioned pgbench_accounts table.
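From here a first benchmark run against the partitioned table works exactly as before; a small example with 10 clients on 2 threads for 60 seconds (adjust the numbers to your hardware):

postgres@centos8pg:/home/postgres/ [pgdev] pgbench -c 10 -j 2 -T 60 pgbench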

This article PostgreSQL 13 will come with partitioning support for pgbench appeared first on Blog dbi services.

Cloning Oracle Homes in 19c Part 2

Michael Dinh - Wed, 2019-10-30 08:10

You didn’t think there was going to be a part 2, did you?

$ORACLE_HOME/log is not included by creategoldimage, which makes perfect sense (as I was discussing on Twitter, you don’t want garbage in a gold image), but why is it being read at all?

For ol7-19-rac2, permission for $GRID_HOME/log has not been changed; hence, creategoldimage failed.

-exclFiles $ORACLE_HOME/.patch_storage (succeeded)
-exclFiles $ORACLE_HOME/log (failed)

[oracle@ol7-19-rac1 ~]$ ls /u01/app/19.0.0/grid/.patch_storage/
29401763_Apr_11_2019_22_26_25  29585399_Apr_9_2019_19_12_47   29851014_Jul_5_2019_01_15_35    NApply
29517242_Apr_17_2019_23_27_10  29834717_Jul_10_2019_02_09_26  interim_inventory.txt           record_inventory.txt
29517247_Apr_1_2019_15_08_20   29850993_Jul_5_2019_05_08_35   LatestOPatchSession.properties
[oracle@ol7-19-rac1 ~]$

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -exclFiles $ORACLE_HOME/.patch_storage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /u01/app/19.0.0/grid5/grid_home_2019-10-30_12-05-43PM.zip

[oracle@ol7-19-rac1 ~]$ cd /u01/app/19.0.0/grid5/
[oracle@ol7-19-rac1 grid5]$ unzip -qo grid_home_2019-10-30_12-05-43PM.zip

[oracle@ol7-19-rac1 grid5]$ ls .patch_storage
ls: cannot access .patch_storage: No such file or directory

[oracle@ol7-19-rac1 grid5]$ ls -l
total 3203396
drwxr-xr-x.  3 oracle oinstall         22 Oct  8 20:33 acfs
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsccm
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsccreg
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfscm
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsiob
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsrd
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsrm
drwxr-xr-x.  2 oracle oinstall        102 Oct  1 23:45 addnode
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 advmccb
drwxr-xr-x. 10 oracle oinstall       4096 Apr 17  2019 assistants
drwxr-xr-x.  2 oracle oinstall      12288 Oct 30 12:05 bin
drwxr-x---.  3 oracle oinstall         18 Oct  1 23:47 cdp
drwxr-x---.  4 oracle oinstall         31 Oct  1 23:47 cha
drwxr-xr-x.  4 oracle oinstall         87 Oct  1 23:45 clone
drwxr-xr-x. 12 oracle oinstall       4096 Oct 30 12:05 crs
drwx--x--x.  5 oracle oinstall         41 Oct  1 23:47 css
drwxr-xr-x.  2 oracle oinstall          6 Oct  1 23:47 ctss
drwxrwxr-x.  7 oracle oinstall         71 Apr 17  2019 cv
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 dbjava
drwxr-xr-x.  2 oracle oinstall         39 Oct 30 12:05 dbs
drwxr-xr-x.  5 oracle oinstall       4096 Oct  1 23:45 deinstall
drwxr-xr-x.  3 oracle oinstall         20 Apr 17  2019 demo
drwxr-xr-x.  3 oracle oinstall         20 Apr 17  2019 diagnostics
drwxr-xr-x. 13 oracle oinstall       4096 Apr 17  2019 dmu
-rw-r--r--.  1 oracle oinstall        852 Aug 18  2015 env.ora
drwxr-x---.  6 oracle oinstall         53 Oct 30 12:05 evm
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 gipc
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 gnsd
drwxr-x---.  5 oracle oinstall         49 Oct 30 12:05 gpnp
-rw-r--r--.  1 oracle oinstall 3280127704 Oct 30 12:12 grid_home_2019-10-30_12-05-43PM.zip
-rwxr-x---.  1 oracle oinstall       3294 Mar  8  2017 gridSetup.sh
drwxr-xr-x.  4 oracle oinstall         32 Apr 17  2019 has
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 hs
drwxr-xr-x. 11 oracle oinstall       4096 Oct 30 12:12 install
drwxr-xr-x.  2 oracle oinstall         29 Apr 17  2019 instantclient
drwxr-x---. 13 oracle oinstall       4096 Oct 30 12:05 inventory
drwxr-xr-x.  8 oracle oinstall         82 Oct 30 12:05 javavm
drwxr-xr-x.  3 oracle oinstall         35 Apr 17  2019 jdbc
drwxr-xr-x.  6 oracle oinstall       4096 Oct 30 12:05 jdk
drwxr-xr-x.  2 oracle oinstall       8192 Oct  8 20:28 jlib
drwxr-xr-x. 10 oracle oinstall       4096 Apr 17  2019 ldap
drwxr-xr-x.  4 oracle oinstall      12288 Oct 30 12:05 lib
drwxr-xr-x.  5 oracle oinstall         42 Apr 17  2019 md
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 mdns
drwxr-xr-x. 10 oracle oinstall       4096 Oct 30 12:05 network
drwxr-xr-x.  5 oracle oinstall         46 Apr 17  2019 nls
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 ohasd
drwxr-xr-x.  2 oracle oinstall          6 Oct  1 23:47 ologgerd
drwxr-x---. 14 oracle oinstall       4096 Oct  1 23:45 OPatch
drwxr-xr-x.  8 oracle oinstall         77 Apr 17  2019 opmn
drwxr-xr-x.  4 oracle oinstall         34 Apr 17  2019 oracore
drwxr-xr-x.  6 oracle oinstall         52 Apr 17  2019 ord
drwxr-xr-x.  4 oracle oinstall         66 Apr 17  2019 ords
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 oss
drwxr-xr-x.  2 oracle oinstall          6 Oct  1 23:47 osysmond
drwxr-xr-x.  8 oracle oinstall       4096 Oct  1 23:45 oui
drwxr-xr-x.  4 oracle oinstall         33 Apr 17  2019 owm
drwxr-xr-x.  5 oracle oinstall         39 Apr 17  2019 perl
drwxr-xr-x.  6 oracle oinstall         78 Apr 17  2019 plsql
drwxr-xr-x.  5 oracle oinstall         42 Apr 17  2019 precomp
drwxr-xr-x.  2 oracle oinstall         26 Apr 17  2019 QOpatch
drwxr-xr-x.  5 oracle oinstall         42 Apr 17  2019 qos
drwxr-xr-x.  5 oracle oinstall         56 Oct 30 12:05 racg
drwxr-xr-x. 15 oracle oinstall       4096 Oct 30 12:05 rdbms
drwxr-xr-x.  3 oracle oinstall         21 Apr 17  2019 relnotes
drwxr-xr-x.  7 oracle oinstall        102 Apr 17  2019 rhp
-rwxr-xr-x.  1 oracle oinstall        405 Oct  1 23:45 root.sh
-rwx------.  1 oracle oinstall        490 Apr 17  2019 root.sh.old
-rw-r-----.  1 oracle oinstall         10 Apr 17  2019 root.sh.old.1
-rwx------.  1 oracle oinstall        405 Apr 18  2019 root.sh.old.2
-rw-r-----.  1 oracle oinstall         10 Apr 17  2019 root.sh.old.3
-rwxr-xr-x.  1 oracle oinstall        414 Oct  1 23:45 rootupgrade.sh
-rwxr-x---.  1 oracle oinstall        628 Sep  3  2015 runcluvfy.sh
drwxr-xr-x.  5 oracle oinstall       4096 Apr 17  2019 sdk
drwxr-xr-x.  3 oracle oinstall         18 Apr 17  2019 slax
drwxr-xr-x.  5 oracle oinstall       4096 Oct  8 20:26 sqlpatch
drwxr-xr-x.  6 oracle oinstall         53 Oct  1 23:44 sqlplus
drwxr-xr-x.  7 oracle oinstall         66 Oct 30 12:05 srvm
drwxr-x---.  5 oracle oinstall         63 Oct 30 12:05 suptools
drwxr-xr-x.  4 oracle oinstall         29 Apr 17  2019 tomcat
drwxr-xr-x.  3 oracle oinstall         35 Apr 17  2019 ucp
drwxr-xr-x.  7 oracle oinstall         71 Apr 17  2019 usm
drwxr-xr-x.  2 oracle oinstall         33 Apr 17  2019 utl
-rw-r-----.  1 oracle oinstall        500 Feb  6  2013 welcome.html
drwxr-xr-x.  3 oracle oinstall         18 Apr 17  2019 wlm
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 wwg
drwxr-xr-x.  5 oracle oinstall       4096 Oct  8 20:35 xag
drwxr-x---.  6 oracle oinstall         58 Apr 17  2019 xdk

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid/log/
crs  diag  ol7-19-rac1  procwatcher

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid5/log/*
ls: cannot access /u01/app/19.0.0/grid5/log/*: No such file or directory
[oracle@ol7-19-rac1 grid5]$

================================================================================

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /u01/app/19.0.0/grid5/grid_home_2019-10-30_12-23-31PM.zip

[oracle@ol7-19-rac1 ~]$ cd /u01/app/19.0.0/grid5/
[oracle@ol7-19-rac1 grid5]$ unzip -qo grid_home_2019-10-30_12-23-31PM.zip

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid/.patch_storage/
29401763_Apr_11_2019_22_26_25  29585399_Apr_9_2019_19_12_47   29851014_Jul_5_2019_01_15_35    NApply
29517242_Apr_17_2019_23_27_10  29834717_Jul_10_2019_02_09_26  interim_inventory.txt           record_inventory.txt
29517247_Apr_1_2019_15_08_20   29850993_Jul_5_2019_05_08_35   LatestOPatchSession.properties

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid5/.patch_storage/
29401763_Apr_11_2019_22_26_25  29585399_Apr_9_2019_19_12_47   29851014_Jul_5_2019_01_15_35    NApply
29517242_Apr_17_2019_23_27_10  29834717_Jul_10_2019_02_09_26  interim_inventory.txt           record_inventory.txt
29517247_Apr_1_2019_15_08_20   29850993_Jul_5_2019_05_08_35   LatestOPatchSession.properties

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid/log/
crs  diag  ol7-19-rac1  procwatcher

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid5/log/*
ls: cannot access /u01/app/19.0.0/grid5/log/*: No such file or directory
[oracle@ol7-19-rac1 grid5]$

================================================================================

[oracle@ol7-19-rac2 ~]$ . oraenv <<< +ASM2
ORACLE_SID = [cdbrac2] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-19-rac2 ~]$ ls $ORACLE_HOME/log/*
/u01/app/19.0.0/grid/log/diag:
adrci_dir.mif  asmtool  clients

/u01/app/19.0.0/grid/log/ol7-19-rac2:
acfs  admin  afd  chad  client  crsd  cssd  ctssd  diskmon  evmd  gipcd  gnsd  gpnpd  mdnsd  ohasd  racg  srvm  xag

/u01/app/19.0.0/grid/log/procwatcher:
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prw.sh: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/PRW_SYS_ol7-19-rac2: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prwinit.ora.org: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prwinit.ora: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prw_ol7-19-rac2.log: Permission denied
prwinit.ora  prwinit.ora.org  prw_ol7-19-rac2.log  prw.sh  PRW_SYS_ol7-19-rac2

[oracle@ol7-19-rac2 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -exclFiles $ORACLE_HOME/log -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-32700] The gold image creation failed. Check the install log /u01/app/oraInventory/logs/GridSetupActions2019-10-30_12-40-37PM for more details.
Setup failed.
[oracle@ol7-19-rac2 ~]$

Strange Estimates.

Jonathan Lewis - Wed, 2019-10-30 08:10

A question came up on the Oracle-L list server a couple of days ago expressing some surprise at the following execution plan:


--------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |       |       |   845 (100)|          |
|   1 |  SORT AGGREGATE                      |                 |     1 |     7 |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| ANY_TABLE       | 84827 |   579K|   845   (1)| 00:00:01 |
|   3 |    SORT CLUSTER BY ROWID             |                 | 68418 |       |    76   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                 | ANY_INDEX       | 68418 |       |    76   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("X"."ANY_COLUMN1"='J')
   4 - access("X"."ANY_COLUMN2"=89155)

You’ll notice that this is a very simple query accessing a table by index, yet the estimated number of table rows exceeds the estimated number of index entries used to probe the table. How can this happen? The answer (most frequently) is that there’s a mismatch between the table (or, more commonly, column) statistics and the index statistics. This happens very frequently when you start mixing partitioned tables with global (or globally partitioned) indexes, but it can appear in very simple cases, especially since a call to gather_table_stats() with cascade set to true and the auto_sample_size will take a small sample from the index while using a 100% “sample” from the table.
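To see that last effect for yourself, a quick check (just a sketch, using the t1 / t1_i1 names from the demo that follows) is to gather with the defaults and then compare the sample sizes that the dictionary reports for the table and the index:


begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                estimate_percent => dbms_stats.auto_sample_size,
                cascade          => true       -- gathers the index stats as a side effect
        );
end;
/

select 'TABLE' source, sample_size, num_rows from user_tables  where table_name = 'T1'
union all
select 'INDEX',        sample_size, num_rows from user_indexes where index_name = 'T1_I1'
/

With auto_sample_size the table’s sample_size will typically match its num_rows (effectively a full “sample”), while the index row shows the much smaller sample.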

Here’s an example I engineered very quickly to demonstrate the point. There’s no particular reason for the choice of DML I’ve used on the data beyond a rough idea of setting up a percentage of nulls and deleting a non-uniform pattern of rows.


rem
rem     Script:         table_index_mismatch.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum,1000)                n1,
        mod(rownum,1000)                n2,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1, for columns (n1,n2) size 1'
        );
end;
/

create index t1_i1 on t1(n1);

delete from t1 where mod(trunc(sqrt(n1)),7) = 0;
update t1 set n1 = null where mod(n1,10) = 0;
delete from t1 where mod(n1,10) = trunc(dbms_random.value(0,10));

execute dbms_stats.gather_table_stats(user,'t1',estimate_percent=>1)
execute dbms_stats.gather_index_stats(null,'t1_i1',estimate_percent=> 0.01)

Of course you’re not supposed to collect stats with arbitrary samples in any recent version of Oracle, so going for a 1% and a 0.01% sample seems a little daft, but I’m just doing that to demonstrate the problem with a very small data set.

After generating the data and gathering the stats I ran a few queries to pick out some critical numbers.


select
        table_name, sample_size, num_rows
from
        user_tables
where
        table_name = 'T1'
/

select 
        index_name, sample_size, num_rows, distinct_keys
from
        user_indexes
where
        table_name = 'T1'
and     index_name = 'T1_I1'
/

select
        column_name, sample_size, num_nulls, num_distinct
from
        user_tab_cols
where
        table_name = 'T1'
and     (
            column_name = 'N1'
         or virtual_column = 'YES'
        )
order by
        column_name
/

You’ll notice that I’ve picked only one of my original columns, plus any virtual columns. My gather_table_stats() call had a method_opt that included the creation of extended stats for the column group (n1,n2), and I want to report the stats on the resulting virtual column.
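Incidentally, if you lose track of which hash-named virtual column belongs to which column group, a quick lookup (a sketch, assuming the (n1,n2) extension created above) is:


select
        dbms_stats.show_extended_stats_name(user, 'T1', '(N1,N2)')     column_group
from
        dual
/

select
        extension_name, extension
from
        user_stat_extensions
where
        table_name = 'T1'
/

Either query maps the SYS_... name you’ll see in the output below back to the (n1,n2) extension.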


TABLE_NAME           SAMPLE_SIZE   NUM_ROWS
-------------------- ----------- ----------
T1                          7865     786500


INDEX_NAME           SAMPLE_SIZE   NUM_ROWS DISTINCT_KEYS
-------------------- ----------- ---------- -------------
T1_I1                     385779     713292           714


COLUMN_NAME                      SAMPLE_SIZE  NUM_NULLS NUM_DISTINCT
-------------------------------- ----------- ---------- ------------
N1                                      7012      85300          771
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS          7865          0          855

A couple of observations on the stats:

  • The table sample size is, as expected, 1% of the reported num_rows (the actual count is 778,154; see the cross-check after this list).
  • The index sample size is much bigger than expected – but that’s probably related to the normal “select 1,100 leaf blocks” strategy. Because of the skew in the pattern of deleted values, the sample size in this model could be anywhere between something in the region of 380,000 and 694,154.
  • The n1 sample size is about 10% smaller than the table sample size – but that’s because I set 10% of the column to null.
  • The column group sample size matches the table sample size because column group hash values are never null, even if an underlying column is null.
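For the cross-check mentioned above (a sketch against the same t1 data), you can get the real figures directly; count(n1) ignores nulls, so the difference between the two counts is the true number of nulls:


select
        count(*)                actual_rows,            -- 778,154 in this run
        count(n1)               non_null_n1,
        count(*) - count(n1)    actual_nulls,
        count(distinct n1)      actual_distinct_n1
from
        t1
/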

So let’s check the execution plan for a very simple query:


set autotrace on explain
select id from t1 where n1 = 140 and n2 = 140;
set autotrace off


---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |   920 | 11960 |   918   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |   920 | 11960 |   918   (1)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |   909 |       |     5   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N2"=140)
   2 - access("N1"=140)

The estimate for relevant index rowids is smaller than the estimate for the number of table rows! The difference is not as extreme as the case reported on Oracle-l, but I’m only trying to demonstrate a principle, not reproduce the exact results.

There are several ways in which contradictory results like this can appear – but in this case we can see the following:

  • For the table access: table.num_rows / column.num_distinct = 786,500 / 855 = 919.88 (using the column group num_distinct)
  • For the index range scan: (table.num_rows - column.num_nulls) / column.num_distinct = (786,500 - 85,300) / 771 = 909.47 (using the n1 statistics)

So, once the optimizer is able to take advantage of the column group, the change in strategy means the table and index cardinalities are calculated from incompatible sets of stats (in particular there’s that loss of information about NULLs). The question, then, is “how much is that likely to matter”, and the follow-up, if it can matter, is “in what circumstances could the effect be large enough to cause problems”. But that’s a topic for another day.
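To make the arithmetic explicit, here’s a sketch that reproduces both estimates straight from the dictionary stats (it assumes the t1 names used above, and that the column group is the table’s only virtual column):


select
        round(t.num_rows / cg.num_distinct, 2)                 table_estimate,         -- 786,500 / 855 = 919.88
        round((t.num_rows - c.num_nulls) / c.num_distinct, 2)  index_estimate          -- 701,200 / 771 = 909.47
from
        user_tables     t,
        user_tab_cols   c,
        user_tab_cols   cg
where
        t.table_name      = 'T1'
and     c.table_name      = 'T1'
and     c.column_name     = 'N1'
and     cg.table_name     = 'T1'
and     cg.virtual_column = 'YES'       -- the (n1,n2) column group
/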

Update / Footnote

In the case of the Oracle-l example there was no column group, and in some cases the optimizer would produce a plan where the table estimate was much smaller than the index estimate, while in other cases (like the opening plan above) the table estimate was significantly greater than the index estimate. This was a side effect of adaptive statistics: the low table estimate was due to the basic “multiply the separate selectivities” arithmetic; but with adaptive statistics enabled Oracle started sampling the table to check the correlation between the two columns, then produced an SQL Plan Directive to keep doing so, and arrived at the higher (and correct) result.
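If you want to check whether adaptive statistics has left this kind of footprint on one of your own tables, the directives are visible in the dictionary. Here’s a sketch of the lookup (the views are standard, though you need the privileges to read them; the owner/table filter is the assumption here):


select
        d.directive_id,
        o.object_name,
        o.subobject_name        column_name,    -- null for table-level directives
        d.type,                                 -- e.g. DYNAMIC_SAMPLING
        d.state,
        d.reason                                -- e.g. SINGLE TABLE CARDINALITY MISESTIMATE
from
        dba_sql_plan_directives         d
join    dba_sql_plan_dir_objects       o
on      o.directive_id = d.directive_id
where
        o.owner       = user
and     o.object_name = 'T1'
/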

