Bobby Durrett's DBA Blog

Oracle database performance

Rolling out patched 19c home using gold image

Thu, 2019-08-08 18:49

For Oracle versions 11.2 through 18 I have been applying quarterly patches to a test database and then gathering the Oracle home into a tar file. When we want to roll out a new database server with the latest patches we untar the file and clone the Oracle home with a command like this:

$ORACLE_HOME/oui/bin/runInstaller -clone -silent ...

This command no longer works for 19c so now I am using a new command like this:

$ORACLE_HOME/runInstaller -silent -responseFile ...

Before running this command I had to unzip my gold image zip file into $ORACLE_HOME. I created the gold image zip file using a command like this:

./runInstaller -createGoldImage -destinationLocation ...

I ran this through MobaXterm to use their X server. I created the response file when I initially installed 19c on this test server. Then I patched the Oracle home with the July 2019 PSU and finally ran the above command to create the gold image.
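Putting the steps together, the whole cycle looks roughly like this. The paths and the zip file name here are hypothetical, not my real ones:

# on the patched test server: create the gold image from the patched home
cd $ORACLE_HOME
./runInstaller -createGoldImage -destinationLocation /u01/install/goldimages

# on the new server: unzip the image into the new home and do a silent install,
# reusing the response file saved from the original 19c install
mkdir -p $ORACLE_HOME
unzip -q /u01/install/goldimages/db_home_2019.zip -d $ORACLE_HOME
$ORACLE_HOME/runInstaller -silent -responseFile /u01/install/db_install.rsp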

Some useful links that I ran into:

  • Franck Pachot’s post about doing a silent 18c install using the new runInstaller
  • Oracle support document that says the old Oracle home cloning does not work in 19c: 19.x:Clone.pl script is deprecated and how to clone using gold-image (Doc ID 2565006.1)
  • Oracle documentation about installing silently with a response file
  • DBA Stackexchange post about how you have to use a response file because the command line options don’t work

This is kind of terse but it has some of the key information. I may update it later if I can.

Bobby

Categories: DBA Blogs

Python Script To Backup Linux Directory To Windows

Thu, 2019-07-25 18:32

I found out that my blog backup script was failing so I had to rewrite it to handle dropped connections to my remote sftp server. In the process I broke out as much of the code as I could into a module that I could share. The module is backupremote.py in my miscpython repository. Might be helpful to someone else. It copies the directory tree on a remote Linux server down to a directory on a Windows machine (i.e. a laptop). Uses sftp.
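For anyone who wants the flavor of it without pulling the repository, here is a minimal sketch of the same recursive-copy idea using paramiko. This is not the backupremote.py module itself, the host, credentials, and paths are made up, and the real module also handles dropped connections, which this sketch does not:

import os
import stat
import paramiko

def download_tree(sftp, remote_dir, local_dir):
    """Recursively copy a remote directory tree to a local directory."""
    os.makedirs(local_dir, exist_ok=True)
    for entry in sftp.listdir_attr(remote_dir):
        remote_path = remote_dir + "/" + entry.filename
        local_path = os.path.join(local_dir, entry.filename)
        if stat.S_ISDIR(entry.st_mode):
            download_tree(sftp, remote_path, local_path)
        else:
            sftp.get(remote_path, local_path)

transport = paramiko.Transport(("myhost.example.com", 22))
transport.connect(username="bobby", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)
download_tree(sftp, "/home/bobby/blog", r"C:\backups\blog")
transport.close()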

The earlier version of this script was in this blog post: https://www.bobbydurrettdba.com/2018/05/30/python-script-to-backup-remote-directory-using-sftp/

Bobby

Categories: DBA Blogs

Batch Query Reduced from 12 hours to 45 Minutes

Thu, 2019-06-13 16:38

I was asked to look at a batch SQL query that ran for 12 hours on June 4th. I messed around with some other ideas and then went back to my best method for tuning long running batch queries. I ripped it apart. My code ran in 45 minutes.

This only works if you have the code and can change it. I work with a lot of PL/SQL code in Unix shell scripts running SQL*Plus. If someone comes to me with a SQL statement in this kind of code, I do not tear my hair out trying to tune it in some clever way. Also, this assumes that I cannot use a SQL Profile. SQL Profiles are my first choice for production performance problems. But for long running SQL that I have full control over and cannot use a SQL Profile I tear it apart.

Ripping or tearing a SQL statement apart means that I become the optimizer. In this case the problem query joined 5 tables. It summarized 3 months of data from a large fact table, and the other 4 tables were joined together with the fact table. I replaced the one query with 5 queries, each of which saved its results in a table. The first query summarized the fact table, and each of the remaining four joined one more table to the previous results. Something like this:

  • Summarize 3 months of fact table data – table 1
  • Join table 2 on surrogate key
  • Join table 3 on surrogate key
  • Join table 4 on natural keys
  • Join table 5 on natural keys

So, I created 5 tables each of which held the results of the previous joins. I dropped the tables as I was done with them to save space.
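In outline, the rewritten code looked something like the following sketch. The table and column names here are made up, and the real queries had many more columns and conditions:

create table step1 as
select f.key2, f.key4, f.key5, sum(f.amount) total_amount
from fact_table f
where f.sale_date >= add_months(trunc(sysdate), -3)
group by f.key2, f.key4, f.key5;

create table step2 as
select s.*, t2.description t2_description
from step1 s, table2 t2
where s.key2 = t2.key2;

drop table step1;

-- and the same pattern for tables 3, 4, and 5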

I have a feeling that I could use some clever hint to force the join order and access methods to match my 5 queries. But my time is short and this works, so I did not bother trying. When you have a query that runs for 12 hours it’s not very motivating to try various hints to get it to run faster. How long do you wait for each try before you give up? Working on one table at a time is nice. I have had this approach work for me time and time again. It is almost a mechanical process without a lot of painful thinking.

Anyway, I pass this on to the internet. People may think that breaking up a 5 table join into 5 queries is ugly, but it works.

Bobby

Categories: DBA Blogs

Another On Call Week, Another SQL Profile (or two)

Thu, 2019-06-13 16:14

I was on call again last week and Sunday night I used a SQL Profile to fix a performance problem. We rolled out some updates Saturday night and the modified query ran fine in some cases, but others ran for hours. When I got on there were 60 sessions running the bad plan and the load on the system was very high. It took me a while to identify the good plan, but then I found it, put in a SQL Profile, killed the 60 bad sessions, and the problem jobs ran in 2 minutes each. A similar situation came up Monday morning after I went off on call and a coworker took over and he ended up applying another SQL Profile on a similar query.

I spent the past couple of hours doing my typical SQL tuning exercise to see if I could figure out why Sunday’s query sometimes chose the bad plan.

The typical scenario includes these elements:

  1. Partitioned table with some near empty partitions and a lot of full partitions
  2. Bind variables used to determine partition choice

In our case we have certain tables that partition data by a code number. Certain numbers were used in the past or are no longer active for other reasons. My guess is that the bad plans we see come from bind variable values that point to the empty partitions. The bad plan works fine against an empty partition, but then the optimizer reuses it on full partitions and the query spins for hours.
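Here is a tiny made-up example of the kind of setup I mean, with one full partition and one empty one. The first hard parse peeks at the bind variable, and if that value points to the empty partition the resulting plan gets cached and reused against the full one:

create table sales_by_code (code_num number, amt number)
partition by list (code_num)
(partition p_active values (1),
partition p_retired values (2));

insert into sales_by_code select 1, level from dual connect by level <= 100000;
commit;

execute dbms_stats.gather_table_stats(user,'SALES_BY_CODE')

variable code number
execute :code := 2

-- the plan built for the empty partition p_retired is now cached and
-- can be reused when :code later points to the full partition
select sum(amt) from sales_by_code where code_num = :code;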

I started to research this further to see if I could come up with a better fix than putting in SQL Profiles but did not get very far. I thought I would just pass this post along as is.

Bobby

P.S. I originally wrote this June 4th, but decided to just publish as is today.

Categories: DBA Blogs

Slides and My Impressions from May 17th AZORA Meetup

Tue, 2019-06-11 18:44

We have the slides from the two talks at our May 17th AZORA Meetup.

Here are Stephen Andert’s slides: Networking is NOT just cables and fiber!

Here are Doug Hood’s slides in three parts:

  1. Using SQL and PLSQL for Mid-Tier Database Caching
  2. Oracle TimesTen Scaleout – World’s Fastest OLTP DB
  3. Oracle In-Memory Data Processing

I wanted to share my impressions of the two talks and the meeting as well as share the slides. Of course, these are my opinions shaped by my own experiences and not universally true of everyone!

This meetup had two very different types of talks. I thought of Stephen’s talk as a career development or soft skills sort of talk. I have been to a number of talks like that at national Oracle user group conferences such as Collaborate. They help balance out the purely technical Oracle database content. Once Stephen got into his talk, I really started to appreciate the quality of the content. To me he was talking about keeping in touch with people in an intentional but sincere way. I like the idea of planning on contacting people a certain number of times per year for example.

Years ago, in my first job I worked for Campus Crusade for Christ (now Cru) and I raised money to support myself and my family. I networked with people that I met through churches and friends and family members. It was different than networking as part of a DBA career because I was directly asking for money instead of making career-oriented connections. But the core idea that I remember from Stephen’s talk applied then. Stephen’s idea was to genuinely seek to help the folks in your network without focusing on what they could do for you. In my CCC days the support raising training told us that we were not “fundraising” but instead “friend raising”. I had some great experiences meeting people and getting to know them and I think it was best when my focus was on how to inspire and encourage the people I met rather than to anxiously think about whether they could give money to support what I did.

The other less serious connection I saw between Stephen’s presentation and my Cru days is that Stephen has a hand-written database setup to keep track of his people. Back in the day I had a Paradox database from Borland running on MS-DOS to do a lot of the same things. So, hearing Stephen talk about his contact database was a blast from the past for me.

I am not really doing much in the way of networking myself these days. I write this blog. I speak at conferences every couple of years or so. I help with the local Oracle user group AZORA. But I am not intentionally making and maintaining relationships with other technical people in the way Stephen described so his talk gave me something to think about.

Doug Hood’s talk was at the other end of the spectrum with some cool technology. Doug spoke on several things and with a lot of detail so I cannot do a good job of summarizing what he said. Check the slides for more details. But I do want to write down my impressions. Listening to Doug’s talk reminded me of some of the computer science study that I have been doing on my own recently. I have sort of gone back to school as an independent learner. When Doug talked about the memory hierarchy and caching it tied right back to the assembly language and algorithms study I have been doing.

Doug presented some cool hardware that puts persistent memory close enough to the CPU that it changes the way we think about memory hierarchy. What if you replaced your RAM with persistent RAM that did not get cleared when you powered off your computer? Evidently in some architectures (maybe all the modern ones these days, I don’t know) the RAM is closely connected to the CPU and does not have to be accessed over the bus the way I/O is. So, persistent RAM would be much faster than some solid-state disk being read over the bus, no matter how fast the SSD is. Anyway, see Doug’s slides. I am sure that I am butchering the details, but I am giving my impression and my memory so keep that in mind.

In general database work and database performance has a lot to do with caching. I have heard a little bit about how algorithms can be designed to work well with CPU caches. I recently read a chapter about the B-Tree data structure that is used in databases and it was a big contrast to the other data structures I had studied because it took disk reads and memory accesses into consideration. Anyway, at a high level I took away from Doug’s talk notions about memory and caching and different ways people can tweak the memory hierarchy to get better database performance.

I had to leave a little early to head for the mountains for the weekend but as always, I valued the time I spent at AZORA, my local Oracle user group. I appreciate Stephen and Doug stepping up and giving their presentations. I hope that my links and the way I characterized their talks are accurate enough. I am sure that I made mistakes, but I got something out of my time and appreciate their efforts.

AZORA is taking a break for the hot Arizona summer but coming back strong with our next meeting on September 27th. The details are being finalized so keep an eye on our Meetup page.

Bobby

Categories: DBA Blogs

MySQL Source Installs for each RDS version

Wed, 2019-05-22 18:26

I have been doing a lot of Oracle and PeopleSoft work this year, but I am trying to continue to develop my MySQL and Amazon Web Services (AWS) knowledge at the same time. My goal is to learn some new thing about MySQL and AWS each month and then document it either on this blog or on my company’s internal web site.

This month I decided to focus on building a Linux virtual machine on VirtualBox that has the source code for each version of MySQL that we support on AWS through RDS. I already had an Oracle Linux VM with MySQL 5.7.20 installed from source code from the MySQL GitHub site. So, all I had to do was get the source code to the correct release in git and then recompile it and create a test database. Then I could save a VirtualBox snapshot for that release.

I don’t want to spend time here describing how I did the initial MySQL 5.7.20 install except to say that I followed the steps in the 5.7 reference manual section titled “2.9 Installing MySQL from Source”. The GitHub-specific instructions were in the section titled “2.9.3 Installing MySQL Using a Development Source Tree”. I can’t remember why it was a problem, but I could not get Boost installed correctly for CMake to pick it up, so I passed the path to Boost to CMake using the following command:

cmake . -DWITH_BOOST=/home/bobby/boost_1_59_0

instead of what is in the manual. Otherwise I just followed the manual.

I looked at our AWS MySQL RDS databases and found 4 versions: 5.5.46, 5.6.34, 5.7.17, and 5.7.25. So, I wanted to install each of these from source. My idea is that if we hit a bug or unexpected behavior, I can try different versions and see if that behavior is version specific. We could also dive into the source if needed and have the correct version.

Here are the steps that I put together that worked for our 5.7 databases:

cd /home/bobby/mysql-server
make clean
rm CMakeCache.txt
git checkout 5.7
git pull
git checkout mysql-5.7.17
cmake . -DWITH_BOOST=/home/bobby/boost_1_59_0
make
su - root
cd /home/bobby/mysql-server
make install
cd /usr/local/mysql/data
rm -rf *
cd ..
bin/mysqld --initialize --user=mysql
bin/mysql_ssl_rsa_setup
mysqld_safe --user=mysql &
mysql -p
(log in with the temporary root password that --initialize generates; it is written to the error log)
ALTER USER 'root'@'localhost' IDENTIFIED BY 'My!Password1234';
mysql -p
(log in with My!Password1234)

My git repository was /home/bobby/mysql-server and my Linux username was bobby. The database is in /usr/local/mysql/data. The 5.6 and 5.5 databases had a different way to create the database and change the password:

Replace these lines:

bin/mysqld --initialize --user=mysql
bin/mysql_ssl_rsa_setup
mysqld_safe --user=mysql &
mysql -p
(log in with the temporary root password that --initialize generates)
ALTER USER 'root'@'localhost' IDENTIFIED BY 'My!Password1234';

with these:

scripts/mysql_install_db --user=mysql
./bin/mysqld_safe &
./bin/mysqladmin -u root password 'My!Password1234'
./bin/mysqladmin -u root -h mysqlsrc password 'My!Password1234'

Here are some pictures from the 5.7.17 final working install:

[Screenshots, in order: clean up from previous install; switch to 5.7 branch; go to 5.7.17 release; end of cmake step to set up options for make; end of make; end of make install; database initialized after old database deleted; set up keys; up on correct version 5.7.17; all four versions on their own snapshots.]

Time will tell if this setup really helps us during some sort of problem, but I like having the source code in case we hit a bug or unexpected behavior. The great thing about open source is that we can see the code, so why not use it?

Bobby

Categories: DBA Blogs

Tried Live SQL

Thu, 2019-05-09 18:13

Quick post. I tried https://livesql.oracle.com/ for the first time.

Looks like you can try out the latest version of Oracle for free. You can type in SQL statements in your web browser.

There seem to be a boatload of scripts and tutorials to try out. Might be good for someone who is new to SQL.

Bobby

Categories: DBA Blogs

Installed 19.3 on Linux on VirtualBox

Thu, 2019-05-02 15:14

I noticed that Oracle database 19.3 was available to download so I installed it on a Linux VM under VirtualBox.

I cloned my base Oracle Linux 7 VM and did a yum -y update to get it up to the latest Linux version.

I installed the preinstall RPM:

yum install oracle-database-preinstall-19c

Created the Oracle home, set some variables, and disabled the firewall.

mkdir -p /home/oracle/product/db/19.0.0
chgrp oinstall /home/oracle/product/db/19.0.0
cd /home/oracle/product/db/19.0.0

export ORACLE_BASE=/home/oracle/product
export ORACLE_HOME=/home/oracle/product/db/19.0.0

[root@ora19 ~]# systemctl stop firewalld
[root@ora19 ~]# systemctl disable firewalld

Edited my .bashrc

export ORACLE_BASE=/home/oracle/product
export ORACLE_HOME=/home/oracle/product/db/19.0.0
export ORAENV_ASK=NO
export ORACLE_SID=orcl
. oraenv

Unzipped the downloaded file LINUX.X64_193000_db_home.zip in my ORACLE_HOME. Then I ran runInstaller from the ORACLE_HOME through MobaXterm. For some reason this decided to put the install window overlapping my two screens so I couldn’t tell what I was typing, and it didn’t allow me to move it. Fun times.

It seemed to hang on the last step, but it eventually finished. The process named ora_mz00_orcl was spinning on the CPU for part of the time that it seemed hung. I guess it was 5 or 10 minutes, but I didn’t time it.

Seems to be up and working:

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 2 13:00:59 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Thu May 02 2019 12:51:54 -07:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL>

Bobby

Categories: DBA Blogs

May 17th AZORA Meetup – Last until September!

Tue, 2019-04-30 14:28
AZORA Meetup before summer heats up! Two presentations – Friday, May 17th

Friday, May 17, 2019, 12:00 PM

Republic Services – 2nd Floor Conference Rooms
14400 N 87th St (AZ101 & Raintree) Scottsdale, AZ

AZORA Meetup before the summer heats up! We will be featuring two presentations on Friday May 17th meeting. Logistics are the same at friendly Republic Services, an afternoon with other Oracle types between Noon till 4pm. We will follow typical agenda of Lunch – Learn – Learn some more – Let us call it a weekend! Here are the details: Meeting Agend…

This is our last Meetup until our Summer break. Come check out two great presentations.

Doug Hood from Oracle will talk about the Oracle In-Memory database feature. We appreciate Oracle providing us with this technical content to support AZORA.

AZORA’s own Stephen Andert will be sharing a non-technical presentation on Networking. He just gave the same talk at the national Oracle user group meeting called Collaborate 19 so it will be great to have him share with his local user group.

Looking forward to seeing you there.

Bobby

P.S. AZORA is the Arizona Oracle User Group, and we meet in the Phoenix, Arizona area.

Categories: DBA Blogs

Example of coe_xfr_sql_profile force_match TRUE

Wed, 2019-04-17 10:57

Monday, I used the coe_xfr_sql_profile.sql script from Oracle Support’s SQLT scripts to resolve a performance issue. I had to set the parameter force_match to TRUE so that the SQL Profile I created would apply to all SQL statements with the same FORCE_MATCHING_SIGNATURE value.

I had just gone off the on-call rotation at 8 am Monday, and around 4 pm a coworker came up to me with a performance problem. A PeopleSoft Financials job was running longer than it normally did. Since it had run for several hours, I got an AWR report of the last hour and looked at the SQL ordered by Elapsed Time section and found a number of similar INSERT statements with different SQL_IDs.

The inserts were the same except for certain constant values. So, I used my fmsstat2.sql script with ss.sql_id = '60dp9r760ja88' to get the FORCE_MATCHING_SIGNATURE value for these inserts. Here is the output:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------------------ ------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
     5442820596869317879 60dp9r760ja88         3334601 15-APR-19 05.00.34.061 PM                1         224414.511     224412.713         2.982                  0                      0                   .376             5785269                 40                   3707
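I will not reproduce fmsstat2.sql here, but at its heart it is a query against the AWR history views. A simplified sketch of that kind of query looks like this (the DBA_HIST views require the Diagnostics Pack license):

select ss.sql_id, ss.plan_hash_value,
sn.end_interval_time, ss.executions_delta,
ss.elapsed_time_delta/1000 elapsed_total_ms
from dba_hist_sqlstat ss, dba_hist_snapshot sn
where ss.snap_id = sn.snap_id
and ss.dbid = sn.dbid
and ss.instance_number = sn.instance_number
and ss.force_matching_signature = 5442820596869317879
order by sn.end_interval_time, ss.sql_id;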

Now that I had the FORCE_MATCHING_SIGNATURE value 5442820596869317879 I reran fmsstat2.sql with ss.FORCE_MATCHING_SIGNATURE = 5442820596869317879 instead of ss.sql_id = '60dp9r760ja88' and got all of the insert statements and their PLAN_HASH_VALUE values. I needed these to use coe_xfr_sql_profile.sql to generate a script to create a SQL Profile to force a better plan onto the insert statements. Here is the beginning of the output of the fmsstat2.sql script:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------------------ ------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
     5442820596869317879 0yzz90wgcybuk      1314604389 14-APR-19 01.00.44.945 PM                1            558.798        558.258             0                  0                      0                      0               23571                  0                    812
     5442820596869317879 5a86b68g7714k      1314604389 14-APR-19 01.00.44.945 PM                1            571.158        571.158             0                  0                      0                      0               23245                  0                    681
     5442820596869317879 9u1a335s936z9      1314604389 14-APR-19 01.00.44.945 PM                1            536.886        536.886             0                  0                      0                      0               21851                  0                      2
     5442820596869317879 a922w6t6nt6ry      1314604389 14-APR-19 01.00.44.945 PM                1            607.943        607.943             0                  0                      0                      0               25948                  0                   1914
     5442820596869317879 d5cca46bzhdk3      1314604389 14-APR-19 01.00.44.945 PM                1            606.268         598.11             0                  0                      0                      0               25848                  0                   1763
     5442820596869317879 gwv75p0fyf9ys      1314604389 14-APR-19 01.00.44.945 PM                1            598.806        598.393             0                  0                      0                      0               24981                  0                   1525
     5442820596869317879 0u2rzwd08859s         3334601 15-APR-19 09.00.53.913 AM                1          18534.037      18531.635             0                  0                      0                      0              713757                  0                     59
     5442820596869317879 1spgv2h2sb8n5         3334601 15-APR-19 09.00.53.913 AM                1          30627.533      30627.533          .546                  0                      0                      0             1022484                 27                    487
     5442820596869317879 252dsf173mvc4         3334601 15-APR-19 09.00.53.913 AM                1          47872.361      47869.859          .085                  0                      0                      0             1457614                  2                    476
     5442820596869317879 25bw3269yx938         3334601 15-APR-19 09.00.53.913 AM                1         107915.183     107912.459         1.114                  0                      0                      0             2996363                 26                   2442
     5442820596869317879 2ktg1dvz8rndw         3334601 15-APR-19 09.00.53.913 AM                1          62178.512      62178.512          .077                  0                      0                      0             1789536                  3                   1111
     5442820596869317879 4500kk2dtkadn         3334601 15-APR-19 09.00.53.913 AM                1         106586.665     106586.665         7.624                  0                      0                      0             2894719                 20                   1660
     5442820596869317879 4jmj30ym5rrum         3334601 15-APR-19 09.00.53.913 AM                1          17638.067      17638.067             0                  0                      0                      0              699273                  0                    102
     5442820596869317879 657tp4jd07qn2         3334601 15-APR-19 09.00.53.913 AM                1          118948.54      118890.57             0                  0                      0                      0             3257090                  0                   2515
     5442820596869317879 6gpwwnbmch1nq         3334601 15-APR-19 09.00.53.913 AM                0          48685.816      48685.816          .487                  0                      0                  1.111             1433923                 12                      0
     5442820596869317879 6k1q5byga902a         3334601 15-APR-19 09.00.53.913 AM                1            2144.59        2144.59             0                  0                      0                      0              307369                  0                      2

The first few lines show the good plan that these inserts ran on earlier runs. The good plan has PLAN_HASH_VALUE 1314604389 and runs in about 600 milliseconds. The bad plan has PLAN_HASH_VALUE 3334601 and runs in 100 or so seconds. I took a look at the plans before doing the SQL Profile but did not really dig into why the plans changed. It was 4:30 pm or so and I was trying to get out the door since I was not on call and wanted to get home at a normal time and leave the problems to the on-call DBA. Here is the good plan:

Plan hash value: 1314604389

------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                |                    |       |       |  3090 (100)|          |
|   1 |  HASH JOIN RIGHT SEMI           |                    |  2311 |  3511K|  3090   (1)| 00:00:13 |
|   2 |   VIEW                          | VW_SQ_1            |   967 | 44482 |  1652   (1)| 00:00:07 |
|   3 |    HASH JOIN                    |                    |   967 | 52218 |  1652   (1)| 00:00:07 |
|   4 |     TABLE ACCESS FULL           | PS_PST_VCHR_TAO4   |    90 |  1980 |    92   (3)| 00:00:01 |
|   5 |     NESTED LOOPS                |                    | 77352 |  2417K|  1557   (1)| 00:00:07 |
|   6 |      INDEX UNIQUE SCAN          | PS_BUS_UNIT_TBL_GL |     1 |     5 |     0   (0)|          |
|   7 |      TABLE ACCESS BY INDEX ROWID| PS_DIST_LINE_TMP4  | 77352 |  2039K|  1557   (1)| 00:00:07 |
|   8 |       INDEX RANGE SCAN          | PS_DIST_LINE_TMP4  | 77352 |       |   756   (1)| 00:00:04 |
|   9 |   TABLE ACCESS BY INDEX ROWID   | PS_VCHR_TEMP_LN4   | 99664 |   143M|  1434   (1)| 00:00:06 |
|  10 |    INDEX RANGE SCAN             | PSAVCHR_TEMP_LN4   | 99664 |       |   630   (1)| 00:00:03 |
------------------------------------------------------------------------------------------------------

Here is the bad plan:

Plan hash value: 3334601

---------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |                    |       |       |  1819 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID       | PS_VCHR_TEMP_LN4   |  2926 |  4314K|  1814   (1)| 00:00:08 |
|   2 |   NESTED LOOPS                     |                    |  2926 |  4446K|  1819   (1)| 00:00:08 |
|   3 |    VIEW                            | VW_SQ_1            |     1 |    46 |     4   (0)| 00:00:01 |
|   4 |     SORT UNIQUE                    |                    |     1 |    51 |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID   | PS_PST_VCHR_TAO4   |     1 |    23 |     1   (0)| 00:00:01 |
|   6 |       NESTED LOOPS                 |                    |     1 |    51 |     4   (0)| 00:00:01 |
|   7 |        NESTED LOOPS                |                    |     1 |    28 |     3   (0)| 00:00:01 |
|   8 |         INDEX UNIQUE SCAN          | PS_BUS_UNIT_TBL_GL |     1 |     5 |     0   (0)|          |
|   9 |         TABLE ACCESS BY INDEX ROWID| PS_DIST_LINE_TMP4  |     1 |    23 |     3   (0)| 00:00:01 |
|  10 |          INDEX RANGE SCAN          | PS_DIST_LINE_TMP4  |     1 |       |     2   (0)| 00:00:01 |
|  11 |        INDEX RANGE SCAN            | PS_PST_VCHR_TAO4   |     1 |       |     1   (0)| 00:00:01 |
|  12 |    INDEX RANGE SCAN                | PSAVCHR_TEMP_LN4   |   126K|       |  1010   (1)| 00:00:05 |
---------------------------------------------------------------------------------------------------------

Notice that in the bad plan the Rows column has 1 in it on many of the lines, but in the good plan it has larger numbers. Something about the statistics and the values in the where clause caused the optimizer to build the bad plan as if no rows would be accessed from these tables even though many rows would be accessed. So, it made a plan based on wrong information. But I had no time to dig further. I did ask my coworker if anything had changed about this job and nothing had.

So, I created a SQL Profile script by going to the utl subdirectory under sqlt where it was installed on the database server. I generated the script by running coe_xfr_sql_profile.sql with the arguments gwv75p0fyf9ys and 1314604389. I edited the generated script, coe_xfr_sql_profile_gwv75p0fyf9ys_1314604389.sql, changed the setting force_match=>FALSE to force_match=>TRUE, and ran it. The long running job finished shortly thereafter, and no new incidents have occurred in future runs.
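For reference, the sequence was something like this, run from the sqlt utl directory as a user with the needed privileges:

cd sqlt/utl
sqlplus / as sysdba
SQL> @coe_xfr_sql_profile.sql gwv75p0fyf9ys 1314604389
-- edit coe_xfr_sql_profile_gwv75p0fyf9ys_1314604389.sql and change
-- force_match=>FALSE to force_match=>TRUE
SQL> @coe_xfr_sql_profile_gwv75p0fyf9ys_1314604389.sql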

The only thing that confuses me is that when I run fmsstat2.sql now with ss.FORCE_MATCHING_SIGNATURE = 5442820596869317879 I do not see any runs with the good plan. Maybe future runs of the job have a different FORCE_MATCHING_SIGNATURE and the SQL Profile only helped the one job. If that is true, the future runs may have had the correct statistics and run the good plan on their own.

I wanted to post this to give an example of using force_match=>TRUE with coe_xfr_sql_profile. I had an earlier post about this subject, but I thought another example could not hurt. I also wanted to show how I use fmsstat2.sql to find multiple SQL statements by their FORCE_MATCHING_SIGNATURE value. I realize that SQL Profiles are a kind of band aid rather than a solution to the real problem. But I got out of the door by 5 pm on Monday and did not get woken up in the middle of the night so sometimes a quick fix is what you need.

Bobby

Categories: DBA Blogs

Check your hints carefully

Tue, 2019-04-16 16:32

Back in 2017 I wrote about how I had to disable the result cache after upgrading a database to 11.2.0.4. This week I found one of our top queries and it looked like removing the result cache hints made it run 10 times faster. But this did not make sense because I disabled the result cache. Then I examined the hints closer. They looked like this:

/*+ RESULT CACHE */

There should be an underscore between the two words. I looked up hints in the manuals and found that CACHE is a real hint. So, I tried the query with these three additional combinations:

 
/*+ RESULT */
 
/*+ CACHE */
 
/*+ RESULT_CACHE */

It ran slow with the original hint and with just the CACHE hint, but not with the others. So, the moral of the story is to check your hints carefully because they may not be what you think they are.
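One way to catch this kind of mistake, at least on 19c and later where DBMS_XPLAN includes a hint usage report, is to ask the optimizer how it interpreted the hints. This is a sketch with a made-up table name:

select /*+ RESULT CACHE */ count(*) from my_table;

select * from table(dbms_xplan.display_cursor(format => 'BASIC +HINT_REPORT'));

The hint report at the bottom of the plan output should show that RESULT was not recognized as a hint while CACHE was applied.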

Bobby

Categories: DBA Blogs

Huge Pages Speeds Startup of Web Servers

Tue, 2019-04-02 10:50

We had issues deploying changes to a large web farm a few weeks back. We had automated the startup of the 25 web servers using a DevOps tool but when it tried to bring them all up at once it took forever and the web servers got timeout errors. We resolved this by putting in huge pages on the metadata database.

I mentioned the unexpectedly large impact of huge pages on login time in a previous post. But, we had not realized that a small metadata database with a 5 gigabyte SGA would need huge pages. I guess I should mention that this is 11.2.0.3 Oracle on Linux. The web farm has 25 servers with 25 database connections each. I think the number of connections from each server should be 5 or less but I have not convinced others to reduce the number. If you have a metadata database with 4 cores and 625 connections from 25 web servers, the web servers cannot use all of those connections. A 4-core database server can only process 10 or 20 SQL statements at once if they are I/O centric. If they are all CPU you are looking at 4 concurrent active SQL statements. If you kick off 625 SQL statements at once the load will go through the roof and everything will timeout. I thought about putting in shared servers on the metadata database to force the 625 sessions to funnel through 12 shared servers so the database server would not be overwhelmed should all 625 connections choose to run at once. But the only problem we had was with logins. All 625 were logging in at the same time when the web farm was started by our DevOps tool. Huge pages resolved this issue by making the login time 10-20 times faster.

The database was in a 16 gigabyte Linux VM with 4 cores and a 5 gigabyte SGA. Without huge pages each login took about 2.5 seconds. With it the logins took about .15 seconds. Without huge pages the load on the database server went over 100 when all 625 sessions started at once. With huge pages the load never exceeded 1. Without huge pages the web farm never came up cleanly and with it the farm came up quickly without error. So, if you have a Linux VM with several gigabytes in your SGA you might want to use huge pages.
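To give an idea of the Linux side of it, here is roughly what configuring huge pages for a 5 gigabyte SGA involves. The numbers are a sketch, not our exact production settings:

# 5 GB SGA / 2 MB huge page size = 2560 pages; allow a little slack
echo "vm.nr_hugepages = 2600" >> /etc/sysctl.conf
sysctl -p

# in /etc/security/limits.conf let the oracle user lock the SGA in memory (values in KB)
# oracle soft memlock 5452800
# oracle hard memlock 5452800

Setting the use_large_pages init parameter to ONLY makes the instance refuse to start rather than silently fall back to regular pages, which is one way to be sure the SGA really is in huge pages.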

Bobby

Categories: DBA Blogs

High Processes Cause Semaphore Errors

Thu, 2019-03-28 10:38

I could not create a new Delphix virtual database because I kept getting semaphore limit errors. I ended up reducing the processes parameter value to resolve the issue.

I got these errors creating the new VDB:

ERROR : SQL> ORA-27154: post/wait create failed 
ERROR : ORA-27300: OS system dependent operation:semget failed with status: 28
ERROR : ORA-27301: OS failure message: No space left on device
ERROR : ORA-27302: failure occurred at: sskgpcreates

The processes parameter was set to 3000 on production so Delphix tried to create the VDB with the same value. I reduced processes to 100 and the VDB came up cleanly.

I have been living with this problem for a year or two but yesterday I really needed to resolve it and I finally figured it out. I got the idea of reducing processes from this Oracle support document:

Instance Startup Fails With Error ORA-27154,ORA-27300,ORA-27301,ORA-27302 (Doc ID 314179.1)

I looked these errors up on Oracle’s support site several times in the past and never saw the idea of reducing processes so I thought I would mention it here if only to remind myself.
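For the record, the other direction would be to raise the kernel's semaphore limits instead of lowering processes. A couple of Linux commands for checking the current limits, in case they help someone:

# semmsl semmns semopm semmni
cat /proc/sys/kernel/sem

# semaphore limits as ipcs reports them
ipcs -ls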

Bobby

Categories: DBA Blogs

Python Practice Produced Pretty Pictures

Sat, 2019-03-23 18:57

I wrote a Python program that made some fun pictures so I thought I would share them even though this is not really a database post.

I practice Python programming by doing Rosetta Code programming tasks that no one has implemented in Python. This is a fun way of keeping up my Python skills. My most recent contribution made pretty pictures so I thought I would show them here. The code takes a cube and breaks up the faces into smaller and smaller pieces that change the cube into a rounder shape.

Here is the input:

Input to the program, a cube

Here is the output after one subdivision:

After one subdivision. Chunky.

Here is the output after two subdivisions:

Two subdivisions. Pretty round.

Note that it is getting rounder. Lastly, after four subdivisions it is remarkably round considering that it started as a cube:

Four subdivisions. Quite round.

The main point of this post was to show the pretty pictures. But, to be more serious, if you are reading this blog and looking for a programming task to do for practice, you can do what I do: find a Rosetta Code task for the language you are learning and get some good practice.

Bobby

Categories: DBA Blogs

Speed of Light

Wed, 2019-03-20 16:30

Looking at cloud databases has me thinking about the speed of light. Wikipedia says that the speed of light is about 186,000 miles per second. If my calculations are correct that is 5.37 microseconds per mile. The United States is about 2680 miles wide so it would take light about 14.4 milliseconds to cross the US. If I ping one of my favorite web sites it takes tens of milliseconds to ping so that kind of makes sense because those sites are in other cities and I am going through various routers. I did some tests with my company’s storage and found that reading from our storage when the data is cached in the storage server takes around 200 microseconds. That is 200 microseconds for a round trip. I’m sure that our database servers and storage are a lot less than a mile apart so most of that time has nothing to do with the speed of light. I heard about a cloud vendor whose fast network connection took 100 microseconds plus the speed of light. I guess 100 microseconds is the cost of getting your data to fiber and light does the rest. If your cloud database was on the other side of the country, I guess it could take 14 milliseconds each way at least for each SQL request. If the cloud database was in your own city and say 10 miles away that would only tack on about 53.7 microseconds each way to the 100 microseconds overhead. I guess it makes sense. Maybe 100 microseconds plus the speed of light is the cost of moving data in the best case?
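Here is the arithmetic, using Python as a calculator. The 100 microseconds is the number I heard, not something I measured:

# speed of light is about 186,000 miles per second
us_per_mile = 1e6 / 186000.0
print(us_per_mile)                # about 5.37 microseconds per mile

# about 2680 miles across the US
print(2680 * us_per_mile / 1000)  # about 14.4 milliseconds one way

# a 10 mile hop plus the 100 microsecond overhead
print(100 + 10 * us_per_mile)     # about 153.7 microseconds one way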

Bobby

Categories: DBA Blogs

New utility Python scripts for DBAs

Tue, 2019-03-19 14:45

I pushed out three new Python scripts that might be helpful to Oracle DBAs. They are in my miscpython repository.

Might be helpful to some people.

Bobby

Categories: DBA Blogs

Slides from March AZORA meeting

Thu, 2019-03-14 16:27

Here are the slides from our March Arizona Oracle User Group (AZORA) meeting:

Daniel Morgan Security Master Class

We really appreciate Daniel Morgan taking the time to share this information about the increasingly important topic of database security.

Also, AZORA is always looking for people to present at future meetings. We have one more meeting in May before the blazing hot Arizona summer and then we start up again in September. Email me at bobby@bobbydurrettdba.com if you would like to speak at a future meeting.

Bobby

Categories: DBA Blogs

Oracle Database Security Master Class AZORA Meetup Thursday

Fri, 2019-03-01 16:33

Next Thursday Daniel Morgan from TidalScale will be giving a free Oracle Database Security Master Class for our March Arizona Oracle User Group meeting. See the details in our Meetup link:

Oracle Database Security Master Class

Thursday, Mar 7, 2019, 12:00 PM

Republic Services – 2nd Floor Conference Rooms
14400 N 87th St (AZ101 & Raintree) Scottsdale, AZ

Oracle Database Security Master Class: Year after year Oracle databases that should be secure are violated. In some cases data is stolen, in some cases data is altered, in some cases the database is not the target but rather is means to an end. In almost all cases there is no excuse. No excuse … but there is an explanation. Oracle professionals d…

Please RSVP so that we know how many people will be there.

Given the ever-increasing number of security threats to our databases this session on Oracle database security should be valuable to us all.

I hope to see you there!

Bobby

Categories: DBA Blogs

ORA-30926 on MERGE statement

Tue, 2019-02-26 15:19

I was on call last week and I got a ticket about a batch job that was failing on a MERGE statement with an ORA-30926 error. Our support team worked around it by deleting some duplicate rows in a table that was feeding into the MERGE. This week I wanted to go back and try to understand what would cause ORA-30926 errors on MERGE statements.

I read through some blog posts and Oracle support documents relating to ORA-30926 and merge. Then I tried building some simple test scripts to see when you get ORA-30926 on a MERGE. At the end of my search I came back to this simple statement from the 18c SQL Language Reference manual’s description of the MERGE statement:

MERGE is a deterministic statement. You cannot update the same row of the target table multiple times in the same MERGE statement.

I was puzzled by the term “deterministic” but “cannot update the same row…multiple times” seems simple enough. I created test scripts to show whether you can update the same row multiple times with a MERGE: zip

Here are the two test tables:

create table target (a number,b number);

create table source (a number,b number);

Here is data and a MERGE statement that causes the error:

SQL> insert into target values (1,1);
SQL> insert into source values (1,2);
SQL> insert into source values (1,3);

SQL> merge into target
  2  using source
  3  on (target.a = source.a)
  4  when matched then
  5  update set target.b=source.b;
using source
      *
ERROR at line 2:
ORA-30926: unable to get a stable set of rows in the source tables 

This seems very simple. We join the source and target tables on column A with value 1. The merge statement tries to update column B on the target table row twice. It tries to set it to 2 and then to 3. I guess this is where the term “deterministic” comes in. If the merge statement updated B to 2 and 3 which would it do first and which second?

The example above with values 2 and 3 for B makes good sense but I saw some odd behavior when I used source rows with 1 and 2 for the B values of the two rows. With B values of 1 and 2 in this case the MERGE does not get an error:

SQL> insert into target values (1,1);
SQL> insert into source values (1,1);
SQL> insert into source values (1,2);

SQL> merge into target
  2  using source
  3  on (target.a = source.a)
  4  when matched then
  5  update set target.b=source.b;

2 rows merged.

SQL> select * from target;

         A          B                                                           
---------- ----------                                                           
         1          2                                                           

This seems to contradict the idea that you cannot update the same row multiple times with a MERGE. In this case it seems like B is updated twice, once to 1 which is what it already was set to and then to 2. I guess this works because B was already set to 1 so the update of B to the same value does not count as an update. It seems like only one update took place and then B ends up set to 2.

This example does not work with a slightly different MERGE statement on Oracle 12.1 or earlier:

SQL> select banner from v$version;

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production    

SQL> merge into target
  2  using source
  3  on (target.a = source.a)
  4  when matched then
  5  update set target.b=source.b
  6  when not matched then
  7  insert values (source.a,source.b);
using source
      *
ERROR at line 2:
ORA-30926: unable to get a stable set of rows in the source tables 

SQL> select * from source;

         A          B                                                           
---------- ----------                                                           
         1          1                                                           
         1          2                                                           

SQL> 
SQL> select * from target;

         A          B                                                           
---------- ----------                                                           
         1          1                                                           

All that I can say based on these two tests is that sometimes an update of a column to the same value counts as an update and sometimes it does not. The preceding example works on 12.2:

SQL> select banner from v$version;

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production    

SQL> merge into target
  2  using source
  3  on (target.a = source.a)
  4  when matched then
  5  update set target.b=source.b
  6  when not matched then
  7  insert values (source.a,source.b);

2 rows merged.

SQL> select * from source;

         A          B                                                           
---------- ----------                                                           
         1          1                                                           
         1          2                                                           

SQL> 
SQL> select * from target;

         A          B                                                           
---------- ----------                                                           
         1          2                                                           

It all seemed so simple when I read the documentation. Maybe instead of saying:

You cannot update the same row of the target table multiple times in the same MERGE statement.

Instead it should say:

You may not be able to update the same row of the target table multiple times in the same MERGE statement.

We should not count on being able to update the same row multiple times with a single MERGE statement, but clearly there are some cases in which we can. If you are like me and you get paged on a job that fails with ORA-30926 on a MERGE, it makes sense to get rid of the multiple updates on a single row because you most likely have hit a case where it is not allowed.
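For example, if the source table can have more than one row per key, collapsing it to one row per key in the USING clause gets rid of the multiple updates. Here is a sketch using my test tables, assuming that taking the maximum B value is the right business rule, which it may not be in your case:

merge into target
using (select a, max(b) b from source group by a) src
on (target.a = src.a)
when matched then
update set target.b = src.b
when not matched then
insert values (src.a, src.b);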

Bobby

Categories: DBA Blogs

One query in MySQL Performance Schema

Tue, 2019-02-12 16:48

I am learning about MySQL performance tuning. I read the Performance Schema chapter of the MySQL 5.7 manual and I have a MySQL 5.7.20 database running under Linux. I have a test table and I am running a simple query that does a full scan of the table and returns the sum of a couple of columns. The goal was to look at some Performance Schema tables to see what they would tell me about this simple query. I bounce the MySQL database service before running my test script so that the query pulls data from disk and not memory. A zip of my test script and its output is here: zip

I looked at the tables that had this pattern: events_*_history_long. These are the four tables:

  • events_statements_history_long
  • events_transactions_history_long
  • events_stages_history_long
  • events_waits_history_long

There seems to be a hierarchical relationship among these tables something like this:

  • statements->transactions
  • statements->stages->waits->waits

There may be other relationships that my simple test did not capture. If you look at the zip you will see all the levels of the hierarchy that I explored but, in this case, the bottom level seems the most interesting. Here is the query that explores the bottom of the statements->stages->waits->waits path:

select * from events_waits_history_long
where
NESTING_EVENT_ID in 
(select EVENT_ID from events_waits_history_long
where
NESTING_EVENT_ID in 
(select EVENT_ID from events_stages_history_long
where
NESTING_EVENT_ID in 
(select EVENT_ID from events_statements_history_long
where SQL_TEXT like concat('%te','st%'))
and NESTING_EVENT_TYPE = 'STATEMENT')
and NESTING_EVENT_TYPE = 'STAGE')
and NESTING_EVENT_TYPE = 'WAIT'
order by EVENT_ID;

The output reminds me of a trace in Oracle. It shows the individual disk reads and waits for internal locks and mutexes.

+-----------+----------+--------------+-------------------------------------------+-----------------+---------------+---------------+------------+-------+---------------+-------------------------------+------------+-------------+-----------------------+------------------+--------------------+----------------+-----------------+-------+
| THREAD_ID | EVENT_ID | END_EVENT_ID | EVENT_NAME                                | SOURCE          | TIMER_START   | TIMER_END     | TIMER_WAIT | SPINS | OBJECT_SCHEMA | OBJECT_NAME                   | INDEX_NAME | OBJECT_TYPE | OBJECT_INSTANCE_BEGIN | NESTING_EVENT_ID | NESTING_EVENT_TYPE | OPERATION      | NUMBER_OF_BYTES | FLAGS |
+-----------+----------+--------------+-------------------------------------------+-----------------+---------------+---------------+------------+-------+---------------+-------------------------------+------------+-------------+-----------------------+------------------+--------------------+----------------+-----------------+-------+
|        28 |    11162 |        11162 | wait/io/file/innodb/innodb_data_file      | fil0fil.cc:5778 | 5744476160960 | 5744479752316 |    3591356 |  NULL | NULL          | /var/lib/mysql/bobby/test.ibd | NULL       | FILE        |       139648405042752 |              203 | WAIT               | read           |           16384 |  NULL |
|        28 |    11163 |        11163 | wait/synch/mutex/innodb/buf_pool_mutex    | buf0lru.cc:1320 | 5744479934370 | 5744479974470 |      40100 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66655496 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11164 |        11164 | wait/synch/mutex/innodb/buf_pool_mutex    | buf0buf.cc:5150 | 5744480619278 | 5744480646546 |      27268 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66655496 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11165 |        11165 | wait/synch/sxlock/innodb/hash_table_locks | buf0buf.cc:5153 | 5744480749202 | 5744480858274 |     109072 |  NULL | NULL          | NULL                          | NULL       | NULL        |              70197752 |              203 | WAIT               | exclusive_lock |            NULL |  NULL |
|        28 |    11166 |        11166 | wait/synch/mutex/innodb/fil_system_mutex  | fil0fil.cc:1032 | 5744481202332 | 5744481236016 |      33684 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66654712 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11167 |        11167 | wait/io/file/innodb/innodb_data_file      | fil0fil.cc:5778 | 5744481464586 | 5744485206718 |    3742132 |  NULL | NULL          | /var/lib/mysql/bobby/test.ibd | NULL       | FILE        |       139648405042752 |              203 | WAIT               | read           |           16384 |  NULL |
|        28 |    11168 |        11168 | wait/synch/mutex/innodb/buf_pool_mutex    | buf0lru.cc:1320 | 5744485374336 | 5744485415238 |      40902 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66655496 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11169 |        11169 | wait/synch/mutex/innodb/buf_pool_mutex    | buf0buf.cc:5150 | 5744485590876 | 5744485618144 |      27268 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66655496 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11170 |        11170 | wait/synch/sxlock/innodb/hash_table_locks | buf0buf.cc:5153 | 5744485730424 | 5744485815436 |      85012 |  NULL | NULL          | NULL                          | NULL       | NULL        |              70197624 |              203 | WAIT               | exclusive_lock |            NULL |  NULL |
|        28 |    11171 |        11171 | wait/synch/mutex/innodb/fil_system_mutex  | fil0fil.cc:1032 | 5744486328716 | 5744486357588 |      28872 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66654712 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11172 |        11172 | wait/io/file/innodb/innodb_data_file      | fil0fil.cc:5778 | 5744486586960 | 5744490523176 |    3936216 |  NULL | NULL          | /var/lib/mysql/bobby/test.ibd | NULL       | FILE        |       139648405042752 |              203 | WAIT               | read           |           16384 |  NULL |
... edited for length...

Most of the time seems to be taken up on reads from a file, which is what I expected. If you look at the wait/io/file/innodb/innodb_data_file waits they seem to be 16384 byte reads from the file associated with the table. I could use this information to build a query to show a profile of the time spent by the query based on EVENT_NAME. It would be kind of like a tkprof of an Oracle trace.
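A first cut at that kind of profile query might look like this, run from the performance_schema database. TIMER_WAIT is in picoseconds by default, so dividing by 1,000,000,000 gives milliseconds. To limit it to one statement you would nest it inside the same NESTING_EVENT_ID subqueries as above:

select EVENT_NAME,
count(*) waits,
sum(TIMER_WAIT)/1000000000 total_ms
from events_waits_history_long
group by EVENT_NAME
order by total_ms desc;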

This post is just a simple first test. I am new to MySQL tuning so feel free to leave a comment or email me if you have suggestions or corrections related to this post.

Bobby

Categories: DBA Blogs
