The Anti-Kyte

Oracle - for when it was like that when you got there

VirtualBox – running a Windows 10 Guest on an Ubuntu Host

Fri, 2019-08-02 10:02

Yes, you read that right. There are lots of guides out there on how to set up and run Ubuntu in VirtualBox on a Windows host.
These days, you even have access to an Ubuntu sub-system in Windows itself.
If, like me, your OS of choice is Ubuntu but you need to test how something behaves in Windows – is it possible to knock up an appropriate environment ?
The answer is, of course, yes – otherwise this would be quite a short post.

The following steps will work for VirtualBox on any host – Linux, Mac, even Windows.

What I’m going to cover is :

  • Finding a Windows ISO
  • Configuring the VM in VirtualBox
  • Persuading VirtualBox to use a sensible screen size for your new VM

But first…

A quick word about versions

The Host OS I’m running is Ubuntu 16.04 LTS.
I’m using version 5.0 of VirtualBox.
NOTE – steps to install VirtualBox on a Debian-based host such as Ubuntu can be found here.
The Guest OS I’m installing is, as you’d expect, Windows 10.

Finding a Windows ISO

Depending on which Windows edition you are after, there are a couple of places you can look.
Microsoft provides an ISO for a 180-day evaluation version of Windows Server here.

In this case, I simply want to try Windows 10 so I need to go to this page.

Once here, I need to select an edition…

…and the language…

…before we’re presented with a choice of 32 or 64-bit :

I’ve chosen 64-bit. After the download, I am now the proud owner of :

-rw-rw-r-- 1 mike mike 4.7G Jul 10 17:10 Win10_1903_V1_English_x64.iso

Creating the VirtualBox VM

Fire up VirtualBox and click on the New button to start the Create Virtual Machine wizard :

…Next, assign it some memory…


I’m going to create a Virtual Hard Disk :

…using the default type…

…and being dynamically allocated…

…of the size recommended by VirtualBox :

I now have a new VM, which I need to point at the Windows ISO I downloaded so that I can install Windows itself :

All I have to do now is follow the Windows installation prompts, a process which I’ll not bore you with here.
However, you may be interested to learn that you don’t necessarily require a Product Key for this installation.
Chris Hoffman has produced an excellent guide on the subject.

Installing Guest Additions

Now I’ve configured Windows, I still need to install VirtualBox Guest Additions. Among other things, this will help to control the screen size of the VM so that I don’t need a magnifying glass !

First of all, we need to virtually eject the virtual CD containing the Windows ISO. To do this, we actually go to the VM’s VirtualBox menu and select Devices/Optical Drives/Remove disk from virtual drive :

Now, using the same menu (Devices), we select Insert Guest Additions CD Image :

When Windows prompts you, choose to install :

Accept the defaults when prompted and then reboot the VM.

If, by some chance, you are still faced with a small viewport for your Windows VM, you can try the following…

Resizing the VM display

Go to the VirtualBox application itself and with the VM selected, go to the File/Preferences menu.

Click on Display, and set the Maximum Guest Screen Size to Automatic

When you next re-start the VM, the window should now be a more reasonable size.
In fact, with any luck, your desktop should now look something like this :

The best way to run Windows !

Customizing DML in an APEX Interactive Grid

Tue, 2019-07-23 16:06

It should have been quite a relaxing Cricket World Cup final. After all, it was England v New Zealand. I was guaranteed to be on the winning side.
After several hours of nerve-shredding tension had failed to separate the teams, England were awarded the trophy on the basis of dumb luck, er, hitting more boundaries. The result was borne with stoicism by the Black Caps, whose philosophy would, in other countries, be known as “Elite Niceness”. By a cruel twist of fate, Ben Stokes – England’s star all-rounder and Man of the Match – was actually born in Christchurch.
Oracle APEX has its own star all-rounder in the shape of the Editable Interactive Grid ( see what I did there ?)
As well as presenting information in the same way as an Interactive Report, it allows users to perform DML operations on the records it displays – provided it’s based on a single table.
What we’re going to look at here is how to base an Interactive Grid (IG) on a Query rather than a table whilst retaining the ability to perform DML operations on the displayed records. To achieve this, we’ll be customizing the PL/SQL that is executed when a DML operation is invoked in the IG.

The Application

For what follows, I’ll be using APEX 18.2 running against an Oracle 18cXE database.

We have two related tables which hold information about Men’s 50-over Cricket World Cup Finals :

The tables were created as follows :

create table teams(
    cid varchar2(3) primary key,
    team_name varchar2(100) not null)
/

create table finals(
    tournament_year number(4) primary key,
    date_played date,
    venue varchar2(100),
    winning_tcid varchar2(3) references teams(cid),
    losing_tcid varchar2(3) references teams(cid),
    winning_margin varchar2(100))
/    

… and have been populated with some data. The TEAMS table first…

insert into teams( cid, team_name)
values('AUS', 'AUSTRALIA');

insert into teams( cid, team_name)
values('ENG', 'ENGLAND');

insert into teams( cid, team_name)
values('RSA', 'SOUTH AFRICA');

insert into teams( cid, team_name)
values('WI', 'WEST INDIES');

insert into teams( cid, team_name)
values('IND', 'INDIA');

insert into teams( cid, team_name)
values('NZL', 'NEW ZEALAND');

insert into teams( cid, team_name)
values('PAK', 'PAKISTAN');

insert into teams( cid, team_name)
values('SL', 'SRI LANKA');

insert into teams( cid, team_name)
values('ZIM', 'ZIMBABWE');

insert into teams( cid, team_name)
values('BAN', 'BANGLADESH');

insert into teams( cid, team_name)
values('AFG', 'AFGHANISTAN');

insert into teams( cid, team_name)
values('IRL', 'IRELAND');

commit;

…and then FINALS…

insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1975, to_date('21-JUN-1975', 'DD-MON-YYYY'), 'LORDS', 'WI', 'AUS', '17 runs');

insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1979, to_date('23-JUN-1979' , 'DD-MON-YYYY'), 'LORDS', 'WI', 'ENG', '92 runs');

insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1983, to_date('25-JUN-1983' , 'DD-MON-YYYY'), 'LORDS', 'IND', 'WI', '43 runs');

insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1987, to_date('08-NOV-1987' , 'DD-MON-YYYY'), 'EDEN GARDENS', 'AUS', 'ENG', '7 runs');

insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1992, to_date('25-MAR-1992' , 'DD-MON-YYYY'), null, 'PAK', 'ENG', '22 runs');

-- deliberate mistake to be corrected later
insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1997, to_date('17-MAR-1996' , 'DD-MON-YYYY'), 'QADDAFI STADIUM', 'SL', 'AUS', '8 wickets');

commit;

The data that we wish to present to application users can be retrieved with the following query :

select f.tournament_year, f.date_played, 
    initcap(f.venue) as venue,
    initcap(win.team_name) as winners,
    initcap(ru.team_name) as runners_up,
    f.winning_margin
from finals f
inner join teams win on f.winning_tcid = win.cid
inner join teams ru on f.losing_tcid = ru.cid
/

There are a couple of issues with the data as it stands so we want users to be able to edit the existing application records and add new ones.
As we’re using APEX, it would be good if we could use an Editable Interactive Grid as this would mean only needing to write a single page to handle all of these actions.
Of course, we could simply create a view using this query and then knock up an Instead Of trigger to handle any DML. As a rough sketch (the view and trigger names here are my own invention, and only the INSERT branch is shown) :
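create or replace view finals_vw as
    select f.tournament_year, f.date_played,
        initcap(f.venue) as venue,
        initcap(win.team_name) as winners,
        initcap(ru.team_name) as runners_up,
        f.winning_margin
    from finals f
    inner join teams win on f.winning_tcid = win.cid
    inner join teams ru on f.losing_tcid = ru.cid
/

create or replace trigger finals_vw_ioi
    instead of insert on finals_vw
    for each row
begin
    -- resolve the team names back to their CIDs before hitting the base table
    insert into finals( tournament_year, date_played, venue,
        winning_tcid, losing_tcid, winning_margin)
    values( :new.tournament_year, :new.date_played, :new.venue,
        ( select cid from teams where team_name = upper(:new.winners)),
        ( select cid from teams where team_name = upper(:new.runners_up)),
        :new.winning_margin);
end;
/

Alternatively…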

Creating the Interactive Grid

The first step is to create a Region…

…and define it as an IG…

…using the above query as the Source SQL Query

When we run this (after saving our changes), we can see that the ROW_SELECTOR and ROW_ACTION widgets are missing :

Making the IG Editable

In order to persuade APEX to add these widgets, we need to make the IG Editable. We can do this by going to the Region’s Attributes and setting the Edit Enabled property to Yes

Among other things, this automatically creates a Save Interactive Grid Data process :

However, if we attempt to run the page now (after saving these changes), we’ll hit an error…

…so we need to select a Primary Key.
TOURNAMENT_YEAR will fulfil this purpose in our IG, so we just need to adjust the properties of the column :

When we save the change and run the page we can see that the ROW_SELECTOR and ROW_ACTION are now present :

The IG is not based on a table or view, remember, so we still need to tell APEX what to do when any DML actions are initiated by the user.

Customising the Save Interactive Grid Data process

Returning to the Processing Tab in the Page Designer we need to change the Type of this process to PL/SQL Code :

In the Source PL/SQL Code box, we need to enter a PL/SQL block which will be run whenever the Page processes a DML action.
In order to tell what specific DML action a row is subject to, we can look at the value of the built-in APEX$ROW_STATUS variable.
The possible values are :

  • C – for Create
  • U – for Update
  • D – for Delete

Therefore, one approach for our PL/SQL block would be simply to include the actual DML statements we want to execute in-line like this :

declare 
    l_win_tcid finals.winning_tcid%type;
    l_ru_tcid finals.losing_tcid%type;
    
    cursor c_team_cid( i_name teams.team_name%type)
    is
        select cid
        from teams
        where team_name = upper(i_name);
begin
    -- reset the variables for each pass through this process
    l_win_tcid := null;
    l_ru_tcid := null;
    
    if :APEX$ROW_STATUS = 'D' then
        -- DELETE the record
        delete from finals
        where tournament_year = :TOURNAMENT_YEAR;
        
    else

        -- As we're either doing an UPDATE or an INSERT, we need to find the 
        -- CID value for each of the team names ( if specified)

        if :WINNERS is not null then
            open c_team_cid(:WINNERS);
            fetch c_team_cid into l_win_tcid;
            close c_team_cid;
        end if;

        if :RUNNERS_UP is not null then
            open c_team_cid(:RUNNERS_UP);
            fetch c_team_cid into l_ru_tcid;
            close c_team_cid;
        end if;
        
        if :APEX$ROW_STATUS = 'U' then
            -- UPDATE the record
            -- Note that, although DATE_PLAYED is a DATE field, the bind variable
            -- contains a string so we need to handle the conversion to a date here
            update finals
            set date_played = nvl(to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')), date_played),
                venue = nvl(:VENUE, venue),
                winning_tcid = nvl(l_win_tcid, winning_tcid),
                losing_tcid = nvl(l_ru_tcid, losing_tcid),
                winning_margin = nvl(:WINNING_MARGIN, winning_margin)
            where tournament_year = :TOURNAMENT_YEAR;
        
        elsif :APEX$ROW_STATUS = 'C' then
            -- CREATE (INSERT) as new record
            -- We need to return the Primary Key of the new record as APEX will
            -- use it to refresh the IG display and show the newly created row
            insert into finals( tournament_year, date_played, venue, 
                winning_tcid, losing_tcid, winning_margin)
            values( :TOURNAMENT_YEAR, to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')), :VENUE,
                l_win_tcid, l_ru_tcid, :WINNING_MARGIN)
            returning tournament_year into :TOURNAMENT_YEAR;
        end if;
    end if;
end;

The main points to note are :

  • The possible values of APEX$ROW_STATUS
  • the bind variables for the IG column values return a string, hence the explicit date conversion of :DATE_PLAYED
  • when the insert code generates a new primary key value (e.g. an ID taken from a sequence), APEX needs to know what it is so that it can display the new record once processing is completed.

On that last point, I believe that we may not need to return the PK value in this case because we’re already providing it explicitly in the UI. I’ve left it here as an illustration that this may be the case in other circumstances.

Anyhow, let’s give this a test.
First of all, we’re going to update the 1992 record with the venue :

The record for 1997 is wrong. The Tournament actually took place in 1996, so we’ll create a corrected record and delete the incorrect one :

In both cases, we get a success message :

…and if we check in the database, we can see that the DML has worked as expected :
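Something like this will do the job :

select tournament_year, to_char(date_played, 'DD-MON-YYYY') as date_played,
    venue, winning_tcid, losing_tcid, winning_margin
from finals
where tournament_year in ( 1992, 1996, 1997)
order by tournament_year;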

NOTE : for any cricketing pedants reading – yes, I know that Sri Lanka won by 7 wickets, not 8. I’ll be correcting this when I “notice it” in a few paragraphs time.

At this point you may be less than thrilled at the prospect of having to maintain large chunks of PL/SQL in your APEX application.
Luckily for you, as the process code is a PL/SQL block, you can do pretty much anything you like…

Calling Stored Program Units from our APEX process

I’ve created a database package to handle DML operations on the FINALS table :

create or replace package edit_finals as

    procedure save_final( 
        i_year in finals.tournament_year%type, 
        i_date in finals.date_played%type,
        i_venue in finals.venue%type default null,
        i_winners in teams.team_name%type default null,
        i_losers in teams.team_name%type default null,
        i_margin in finals.winning_margin%type default null,
        o_year out finals.tournament_year%type);

    procedure delete_final( i_year in finals.tournament_year%type);
end edit_finals;
/

create or replace package body edit_finals as

    function get_team_cid( i_team in teams.team_name%type)
        return teams.cid%type 
    is 
        rtn_cid teams.cid%type;
    begin
        select cid 
        into rtn_cid
        from teams 
        where team_name = upper( i_team);

        return rtn_cid;
    end get_team_cid;

    procedure save_final( 
        i_year in finals.tournament_year%type, 
        i_date in finals.date_played%type,
        i_venue in finals.venue%type default null,
        i_winners in teams.team_name%type default null,
        i_losers in teams.team_name%type default null,
        i_margin in finals.winning_margin%type default null,
        o_year out finals.tournament_year%type)
    is 
        win_tcid teams.cid%type := null;
        ru_tcid teams.cid%type := null;
    begin
        if i_winners is not null then 
            win_tcid := get_team_cid( i_winners);
        end if;

        if i_losers is not null then 
            ru_tcid := get_team_cid( i_losers);
        end if;

        merge into finals 
            using dual 
            on ( tournament_year = i_year)
        when matched then update
            set date_played = nvl(i_date, date_played),
                venue = nvl(i_venue, venue),
                winning_tcid = nvl(win_tcid, winning_tcid),
                losing_tcid = nvl(ru_tcid, losing_tcid),
                winning_margin = nvl(i_margin, winning_margin)
            where tournament_year = i_year
        when not matched then 
            insert( tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
            values( i_year, i_date, i_venue, win_tcid, ru_tcid, i_margin);

        o_year := i_year;
    end save_final;

    procedure delete_final( i_year in finals.tournament_year%type) is 
    begin
        delete from finals
        where tournament_year = i_year;
    end delete_final;
end edit_finals;
/

This means that the PL/SQL code for the Save IG Process is a bit more compact :

begin
    if :APEX$ROW_STATUS = 'D' then
        edit_finals.delete_final(i_year => :TOURNAMENT_YEAR);
    elsif :APEX$ROW_STATUS in ('C', 'U') then
        edit_finals.save_final( 
           i_year => :TOURNAMENT_YEAR, 
           i_date => to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')),
           i_venue => :VENUE,
           i_winners => :WINNERS,
           i_losers => :RUNNERS_UP,
           i_margin => :WINNING_MARGIN,
           o_year => :TOURNAMENT_YEAR);
   end if;
end;

The behaviour is the same…

…which we can confirm in the database…

The benefit of this approach is that, if you decide to migrate from APEX to another front-end technology, the Package is there in the database and does not need to change.

References

I’ve not managed to find too much else out there on this topic, hence this post.
There is a very good forum post by Patrick Wolf.
This Oracle Tips and Tricks article may be worth a read.
Finally, there’s this rather comprehensive look at Interactive Grids by John Snyders.

Oracle Express Edition – features new to 18cXE

Sat, 2019-07-13 15:55

I learned a number of things watching the recently concluded Women’s Soccer World Cup.

  • it is possible for a human body to be fouled in the penalty area without then falling over as if it has just been shot (see Lisa-Marie Utland for Norway against England for proof)
  • England have developed a happy knack of reaching the Semi-Final of every tournament they enter
  • Alex Morgan is a tea-drinker

There were some complaints that Morgan’s celebration of her goal against England was disrespectful. Personally, I thought it was rather witty. Let’s face it, if she’d really wanted to stir up some controversy, she’d have mimed putting the milk in first.
That said, she is going to face a challenge at the Olympics next year, where she may find herself up against a united Great Britain team.
If you’re not up on your sporting geopolitics, Great Britain (for now at least) comprises four nations – England, Wales, Northern Ireland and Scotland.
Should Morgan need to celebrate in a similar vein, the tea will be just the start. She’ll then need to neck a pint of Brains SA (known as “Skull Attack” in Cardiff) followed by a Guinness ( there is no border in Ireland when it comes to the Black Stuff) before moving on to a Scotch single-malt chaser.

Anyone looking for an object lesson in how to up their game could do far worse than have a look at how Oracle Express Edition has evolved from 11g to 18c…

“Hey Megan, how much extra stuff did Oracle squeeze into 18c Express Edition ?”

Using the License documentation for 18c XE and that of 11g XE, I’ve compiled a list of features which are now included in Express Edition but were not in 11gXE.
This is mainly for my own benefit as I keep being surprised when I find another – previously Enterprise Edition only – feature in Express Edition.
I’ve also listed the new stuff that wasn’t previously available in any edition of Oracle 11g.

Anyhow, for anyone who might find it useful…

Extra functionality in 18c

Using the Functional Categories mentioned in the license documents as a template, the features newly available in 18c Express Edition are :

Consolidation

Perhaps the most profound structural change is the advent of Multitenant functionality.
18c XE comes with Oracle Multitenant and allows up to three Pluggable Databases (PDBs).
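If you fancy a quick look at the containers in your own installation, a query along these lines (run from the root container as a suitably privileged user) will do the trick :

select con_id, name, open_mode
from v$pdbs;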

Development Platform

SQLJ is now available.

High Availability
  • Online Index Rebuild
  • Online table organization
  • Online table redefinition
  • Trial recovery
  • Fast-start fault recovery
  • Flashback Table
  • Flashback Database
  • Cross-platform Backup and Recovery
Integration

Sharded Queues, which were introduced to Oracle after 11g, are included in 18c XE.

Networking

Network Compression, another feature introduced to Oracle after 11g, is also included.

Performance
  • Client Side Query Cache
  • Query Results Cache
  • PL/SQL Function Result Cache
  • Adaptive Execution Plans
  • In-Memory Column Store
  • In-Memory Aggregation
  • Attribute Clustering
Security
  • Column-Level Encryption
  • Tablespace Encryption
  • Oracle Advanced Security
  • Oracle Database Vault
  • Oracle Label Security
  • Centrally Managed Users
  • Fine-grained auditing
  • Privilege Analysis
  • Real Application Security
  • Redaction
  • Transparent Sensitive Data Protection
  • Virtual Private Database
Spatial and Graph Data

11g XE contained no spatial functionality at all. In 18c you get :

  • Oracle Spatial and Graph
  • Property Graph and RDF Graph Technologies (RDF/OWL)
  • Partitioned spatial indexes
VLDB, Data Warehousing, and Business Intelligence
  • Oracle Partitioning
  • Oracle Advanced Analytics
  • Oracle Advanced Compression
  • Advanced Index Compression
  • Prefix Compression (also called Key Compression)
  • Basic Table Compression
  • Deferred Segment Creation
  • Bitmapped index, bitmapped join index, and bitmap plan conversions
  • Transportable tablespaces, including cross-platform and full transportable export and import
  • Summary management—Materialized View Query Rewrite
Stuff that’s not included

Unlike its predecessor, 18cXE does not come with a version of Application Express (APEX). Fortunately, you can still get APEX and Oracle Rest Data Services for the same low, low price of – well, nothing – and install them separately.

SQLcl ALIAS – because you can’t remember everything.

Tue, 2019-06-25 08:47

I want to find out which file is going to hold any trace information generated by my database session. Unfortunately, I keep forgetting the query that I need to run to find out.
Fortunately I’m using SQLcl, which includes the ALIAS command.
What follows is a quick run-through of this command including :

  • listing the aliases that are already set up in SQLcl
  • displaying the code that an alias will execute
  • creating your own alias interactively
  • deleting an alias
  • using files to manage custom aliases

Whilst I’m at it, I’ll create the alias for the code to find that pesky trace file too.

In the examples that follow, I’m connected to an Oracle 18cXE PDB using SQLcl 18.4 from my Ubuntu 16.04 LTS laptop via the Oracle Thin Client. Oh, and the Java details are :

Meet the ALIAS command

As so often in SQLcl, it’s probably a good idea to start with the help :

help alias

…which explains that :

“Alias is a command which allows you to save a sql, plsql or sqlplus script and assign it a shortcut command.”

A number of aliases are already included in SQLcl. To get a list of them simply type :

alias

…which returns :

locks
sessions
tables
tables2

If we want to see the code that will be run when an alias is invoked, we simply need to list the alias :

alias list tables

tables - tables <schema> - show tables from schema
--------------------------------------------------

select table_name "TABLES" from user_tables

Connected as HR, I can run the alias to return a list of tables that I own in the database :
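With a vanilla HR schema, that returns something like :

tables

TABLES                        
------------------------------
COUNTRIES
DEPARTMENTS
EMPLOYEES
JOBS
JOB_HISTORY
LOCATIONS
REGIONS

7 rows selected.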

Creating an ALIAS

To create an alias of my own, I simply need to specify the alias name and the statement I want to associate it with. For example, to create an alias called whoami :

alias whoami =
select sys_context('userenv', 'session_user')
from dual;

I can now confirm that the alias has been created :

alias list whoami
whoami
------

select sys_context('userenv', 'session_user')
from dual

…and run it…
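For me, that looks like this – complete with a somewhat unwieldy default column heading :

whoami

SYS_CONTEXT('USERENV','SESSION_USER')
--------------------------------------------------------------------------------
HR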

I think I want to tidy up that column heading. I could do this by adding a column alias in the query itself. However, alias does support the use of SQL*Plus commands…

alias whoami =
column session_user format a30
select sys_context('userenv', 'session_user') session_user
from dual;

…which can make the output look slightly more elegant :

A point to note here is that, whilst it is possible to include SQL*Plus statements in an alias for a PL/SQL block (well, sort of)…

alias whoami=set serverout on
exec dbms_output.put_line(sys_context('userenv', 'session_user'));

…when the alias starts with a SQL*Plus statement, it will terminate at the first semi-colon…

Where you do have a PL/SQL alias that contains multiple statement terminators (‘;’) you will need to run any SQL*Plus commands required prior to invoking it.
Of course, if you find setting output on to be a bit onerous, you can save valuable typing molecules by simply running :

alias output_on = set serverout on size unlimited

I can also add a description to my alias so that there is some documentation when it’s listed :

alias desc whoami The current session user

When I now list the alias, the description is included…more-or-less…

I’m not sure if the inclusion of the text desc whoami is simply a quirk of the version and OS that I’m running on. In any case, we’ll come to a workaround for this minor annoyance in due course.

In the meantime, I’ve decided that I don’t need this alias anymore. To remove it, I simply need to run the alias drop command :

alias drop whoami


At this point, I know enough about the alias command to implement my first version of the session tracefile alias that started all this.
The query, that I keep forgetting, is :

select value
from v$diag_info
where name = 'Default Trace File'
/

To create the new alias :

alias tracefile =
select value "Session Trace File"
from v$diag_info
where name = 'Default Trace File';

I’ll also add a comment at this point :

alias desc tracefile The full path and filename on the database server of the tracefile for this session

My new alias looks like this :
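That is, listing it returns something like this (complete with the description quirk noted above) :

alias list tracefile

tracefile - desc tracefile The full path and filename on the database server of the tracefile for this session
--------------------------------------------------------------------------------

select value "Session Trace File"
from v$diag_info
where name = 'Default Trace File'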

The aliases.xml file

Unlike the pre-supplied aliases, the code for any alias you create will be held in a file called aliases.xml.

On Windows, this file will probably be somewhere under your OS user’s AppData directory.
On Ubuntu, it’s in $HOME/.sqlcl

With no custom aliases defined the file looks like this :

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<aliases/>

Note that, even though I have now defined a custom alias, it won’t be included in this file until I end the SQLcl session in which it was created.

Once I disconnect from this session, the file includes the new alias definition :

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<aliases>
<alias name="tracefile">
<description><![CDATA[desc tracefile The full path and filename on the database server of the tracefile for this session
]]></description>
<queries>
<query>
<sql><![CDATA[select value "Session Trace File"
from v$diag_info
where name = 'Default Trace File']]></sql>
</query>
</queries>
</alias>
</aliases>

Incidentally, if you’ve played around with SQLDeveloper extensions, you may find this file structure rather familiar.

The file appears to be read by SQLcl once on startup. Therefore, before I run SQLcl again, I can tweak the description of my alias to remove the extraneous text…

<description><![CDATA[The full path and filename on the database server of the tracefile for this session]]></description>

Sure enough, next time I start an SQLcl session, this change is now reflected in the alias definition :

Loading an alias from a file

The structure of the aliases.xml file gives us a template we can use to define an alias in the comfort of a text editor rather than on the command line. For example, we have the following PL/SQL block, which reads a bind variable :

declare
    v_msg varchar2(100);
begin
    -- fm suppresses the trailing spaces that to_char would otherwise pad the day name with
    if upper(:mood) = 'BAD' then
        if to_char(sysdate, 'fmDAY') != 'MONDAY' then
            v_msg := q'[At least it's not Monday!]';
        elsif to_number(to_char(sysdate, 'HH24MI')) > 1200 then
            v_msg := q'[At least it's not Monday morning!]';
        else
            v_msg := q'[I'm not surprised. It's Monday morning !]';
        end if;
    elsif upper(:mood) = 'GOOD' then
        v_msg := q'[Don't tell me West Ham actually won ?!]';
    else
        v_msg := q'[I'm just a simple PL/SQL block and I can't handle complex emotions, OK ?!]';
    end if;
    dbms_output.new_line;
    dbms_output.put_line(v_msg);
end;
/

Rather than typing this in on the command line, we can create a file ( called pep_talk.xml) which looks like this :

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<aliases>
<alias name="pep_talk">
<description><![CDATA[How are you feeling ? Usage is pep_talk <emotion>]]></description>
<queries>
<query>
<sql><![CDATA[
declare
v_msg varchar2(100);
begin
if upper(:mood) = 'BAD' then
if to_char(sysdate, 'DAY') != 'MONDAY' then
v_msg := q'[At least it's not Monday!]';
elsif to_number(to_char(sysdate, 'HH24MI')) > 1200 then
v_msg := q'[At least it's not Monday morning!]';
else
v_msg := q'[I'm not surprised. It's Monday morning !]';
end if;
elsif upper(:mood) = 'GOOD' then
v_msg := q'[Don't tell me West Ham actually won ?!]';
else
v_msg := q'[I'm just a simple PL/SQL block and I can't handle complex emotions, OK ?!]';
end if;
dbms_output.new_line;
dbms_output.put_line(v_msg);
end;
]]></sql>
</query>
</queries>
</alias>
</aliases>

Now, we can load this alias from the file as follows :

alias load pep_talk.xml
Aliases loaded

We can now execute our new alias. First though, we need to remember to turn serveroutput on before we invoke it :
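For example (the argument is bound to :mood and, of course, the output will depend on when you run it) :

set serverout on
pep_talk bad

At least it's not Monday!

PL/SQL procedure successfully completed.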

Once you’ve terminated your SQLcl session, the new alias will be written to aliases.xml.

Exporting custom aliases

There may come a time when you want to share your custom aliases with your colleagues. After all, it’s always useful to know where the trace file is and who doesn’t need a pep talk from time to time ?

To “export” your aliases, you can issue the following command from SQLcl :

alias save mike_aliases.xml

This writes the file to the same location as your aliases.xml :

You can then import these aliases to another SQLcl installation simply by sharing the file and then using the alias load command.
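On the receiving installation, that’s simply :

alias load mike_aliases.xml
Aliases loaded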

References

As you can imagine, there are a wide variety of possible uses for the ALIAS command.

As Kris Rice is the original author of this feature, his post on the subject is probably worth a read.
Jeff Smith has also written on this topic several times.

Menno Hoogendijk has an example which employs some Javascript wizardry which he has published on GitHub.

Right, back to my trace files.

Installing the OE demo schema on 18cXE

Sat, 2019-06-15 13:18

It’s always a good idea to be wary of assuming too much.
Looking at the current Conservative Party Leadership contest, you might assume that a fantasy Brexit policy and a history of class A drug use were pre-requisites for the job of Prime Minister.
You may further assume that one is a result of the other.
That last assumption is unlikely however, unless the smoking, snorting and otherwise ingesting of illicit substances is widespread across all of the other major political parties. Then again…

For my part, I’ve had to correct some of my assumptions about the process for installing the OE sample schema into Oracle 18cXE running on CentOS 7.
What follows is a quick guide on how to accomplish this…without all the head-scratching over apparently spurious errors.
Specifically, I will be covering :

  • getting the OE schema installation scripts
  • checking the pre-requisites for the OE schema installation have been met
  • preparing the scripts for execution
  • performing the installation

Before we go too much further though, it’s probably wise to state some assumptions…

Assumptions

These steps assume that you’re running 18cXE on CentOS or some other Red Hat compatible distro ( e.g. Oracle Linux, Fedora).
We’re only installing the OE schema here. I already have the HR schema installed on the database and I do not want to drop and re-create it.
If you want to install all of the demo schemas then you’ll need to check the instructions in the README.md file once you’ve downloaded the installation scripts.
Speaking of which…

Finding the demo scripts

As stated in the documentation, only the scripts for the HR schema are included in the Oracle software.
If you want the other schemas, you need to download them from Oracle’s GitHub repo.

Although we’re only interested in the OE schema at the moment, the source code is provided in a single archive file.
Download the zip for the appropriate database release ( 18c in my case) and we should now have a file looking something like :

-rw-rw-r-- 1 mike mike 28882177 Jun  9 17:03 db-sample-schemas-18c.zip

Pre-requisites for installing the OE schema

Before I run off and start playing with my new zip file, I should really check that I’ve got everything I need to ensure that the setup will go smoothly.
The pre-requisites are :

  • the HR schema must already be installed and unlocked in the PDB that you are installing into
  • Oracle Spatial must be enabled
  • the installation scripts need to run on the database server (or a filesystem visible to it)

To check that HR is already available in the PDB (xepdb1 in my case) :

alter session set container = xepdb1;

select username, account_status
from cdb_users
where username = 'HR'
and con_id = sys_context('userenv', 'con_id');

USERNAME   ACCOUNT_STATUS 
---------- ---------------
HR         OPEN           

If the query does not return any rows then you will need to install the HR schema. This can be done following the instructions in the aforementioned documentation.

NOTE – before you do this it’s a good idea to double check to make sure that you are in the correct container database :

select sys_context('userenv', 'con_name') from dual;

If the ACCOUNT_STATUS is LOCKED then you need to unlock the HR user as the OE creation script will attempt to connect to the database as HR. To do this, connect to the target PDB as a user with the ALTER USER privilege (e.g. SYSTEM) and run :

alter user hr account unlock;

User HR altered

As I’m on 18cXE, Oracle Spatial should be enabled. Thanks to Norman, I know that I can confirm this by running :

select parameter, value       
from v$option 
where regexp_like(parameter, 'spatial', 'i')
/

PARAMETER            VALUE     
-------------------- ----------
Spatial              TRUE      

Next, we need to upload the zip file to the Database Server (which I’ve done using sftp).

Preparing the Install scripts

Now it’s on the correct machine, I need to change the ownership to oracle…

sudo chown oracle:oinstall db-sample-schemas-18c.zip

ls -l db-sample-schemas-18c.zip 
-rw-rw-r--. 1 oracle oinstall 28882177 Jun 10 12:34 db-sample-schemas-18c.zip

…because I want to move it to the appropriate directory in ORACLE_HOME…

sudo mv db-sample-schemas-18c.zip $ORACLE_HOME/demo/schema
cd $ORACLE_HOME/demo/schema

…and extract it as oracle…

sudo su oracle
unzip db-sample-schemas-18c.zip

cd db-sample-schemas-18c

… and create a directory to hold the log file that’s output when we run the installation…

mkdir order_entry/logs

Now we’ve extracted the files, you may be tempted to have a quick peek at the code.

Initially you may be somewhat surprised. For example, in order_entry/oe_main.sql you can see :

DEFINE vscript = __SUB__CWD__/order_entry/coe_&vrs

Is __SUB__CWD__ some SQL*Plus magic that has eluded you all these years ?

Well, no. Looking at the README.md file, we can confirm that it’s simply a placeholder that we need to replace with a valid absolute path to the scripts.

Fortunately, the README.md file also contains the code required to achieve this.
Simply ensure that we’re in the db-sample-schemas-18c directory and run :

perl -p -i.bak -e 's#__SUB__CWD__#'$(pwd)'#g' *.sql */*.sql */*.dat 

After running this, we can see that the line in oe_main.sql now reads :

DEFINE vscript = /opt/oracle/product/18c/dbhomeXE/demo/schema/db-sample-schemas-18c/order_entry/coe_&vrs

Performing the installation

The main order entry script is in the order_entry sub-directory and is called oe_main.sql.
This script accepts 9 positional parameters :

  1. the password for the OE user that will be created as part of this installation
  2. the default tablespace for OE
  3. the temporary tablespace for OE
  4. the password for the pre-existing HR user
  5. the SYS password
  6. the directory path for the data files (these are in the current directory as they are included in the zip file)
  7. the absolute path for the log directory we just created ( including a trailing slash)
  8. the version (it’s v3 in this case)
  9. the SQL*Plus connect string for the target database

The “version” the script asks for is appended to the basename of some of the scripts that will be run as part of the install.
To obtain this, simply run :

ls -1 *_v?.sql

ccus_v3.sql
cidx_v3.sql
cmnt_v3.sql
coe_v3.sql
cord_v3.sql
cwhs_v3.sql
doe_v3.sql
loe_v3.sql
pcus_v3.sql
poe_v3.sql
pord_v3.sql
pwhs_v3.sql

From this, I can see that the value of version that the script needs is “v3”.

Now, connect to the database via SQL*Plus as SYSTEM. Note that, as I’ve not setup an Oracle client on the server, I need to specify the server name, tns port and pdb name in the connect string :

sqlplus system@frea.virtualbox:1522/xepdb1

Finally, we can run the install script :

@oe_main.sql oe_user_password users temp hr_user_password sys_password . /opt/oracle/product/18c/dbhomeXE/demo/schema/db-sample-schemas-18c/order_entry/logs/ v3 frea.virtualbox:1522/xepdb1

The log file generated during the installation will contain everything that now whizzes up the screen.
You can find it after the run in the logs directory we created earlier :

ls -l logs
total 8
-rw-r--r--. 1 oracle oinstall 6244 Jun 10 13:50 oe_oc_v3.log

Even if there aren’t any particularly alarming errors in the logfile, it would be nice to verify that all has gone as intended.
First we can check that the OE schema now contains the expected number of objects of each type :

select object_type, 
    count(object_name) as "Number of objects"
from dba_objects
where owner = 'OE'
group by object_type
order by 2 desc;

OBJECT_TYPE          Number of objects
-------------------- -----------------
INDEX                               48
TYPE                                37
LOB                                 15
TABLE                               14
VIEW                                13
SYNONYM                              6
TRIGGER                              4
TYPE BODY                            3
SEQUENCE                             1
FUNCTION                             1

10 rows selected. 

We can also confirm that data has been loaded into the tables :

set serverout on size unlimited
declare
    l_count pls_integer;
begin
    for r_tab in (
        select dbms_assert.sql_object_name(owner||'.'||table_name) oe_table
        from dba_tables
        where owner = 'OE'
        and nested = 'NO'
        order by table_name)
    loop
        execute immediate 'select count(*)  from '||r_tab.oe_table 
            into l_count;
        dbms_output.put_line(upper(r_tab.oe_table)||' contains '||l_count||' records.');
    end loop;
end;
/

OE.CUSTOMERS contains 319 records.
OE.INVENTORIES contains 1112 records.
OE.ORDERS contains 105 records.
OE.ORDER_ITEMS contains 665 records.
OE.PRODUCT_DESCRIPTIONS contains 8640 records.
OE.PRODUCT_INFORMATION contains 288 records.
OE.PROMOTIONS contains 2 records.
OE.WAREHOUSES contains 9 records.


PL/SQL procedure successfully completed.

The Country may be descending into chaos but at least we’ve got a working OE schema to play around with.

Great Football Managers and Oracle Unified Auditing

Fri, 2019-05-31 11:48

It’s quite a good time for English football at the moment. Not only have English clubs monopolised the finals of the two main European Club competitions this year, but Manchester City have made history by winning all three domestic competitions in the same season.
Note that this isn’t a British footballing first. Glasgow Rangers managed it way back in 1949. And whilst the European Cup ( Champions League if you must) has eluded City this season, Celtic managed that particular clean sweep in 1967.
In English football however, this particular treble is unprecedented. In fact, there are remarkably few managers who have been able to win every one of the major domestic honours in their entire career.
All of which will come in handy when looking for examples to illustrate the topic at hand, namely Oracle Unified Auditing.
With the aid of 18c Express Edition, we’ll be looking at :

  • The Oracle supplied Unified Auditing Policies that are enabled by default
  • Where to find the Audit Trail
  • How to create our own Unified Auditing Policy to monitor DML operations on specific objects

Unified Auditing default settings

Unified Auditing was introduced in 12c. By default, it’s configured to work alongside auditing from earlier database versions.
We can check that this is the case – and that traditional auditing has not been disabled – by checking that the Unified Auditing parameter is set to FALSE :
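A quick query against V$OPTION will confirm this :

select parameter, value
from v$option
where parameter = 'Unified Auditing';

PARAMETER            VALUE
-------------------- ----------
Unified Auditing     FALSE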

Looking at the audit initialization parameters is also useful at this point :

select name, value, description
from v$parameter
where name like '%audit%'
and value is not null
order by name
/

NAME                           VALUE                          DESCRIPTION                                       
------------------------------ ------------------------------ --------------------------------------------------
audit_file_dest                /opt/oracle/admin/XE/adump     Directory in which auditing files are to reside   
audit_sys_operations           TRUE                           enable sys auditing                               
audit_trail                    DB                             enable system auditing                            
unified_audit_sga_queue_size   1048576                        Size of Unified audit SGA Queue                   

We can see that we have a directory setup for audit files to be written to. However, the audit_trail is set to DB – i.e. a table in the database.
To start with, we can have a look at the audit_file_dest directory and see what, if anything, is being written there :

ls -lrt /opt/oracle/admin/XE/adump

...
-rw-r-----. 1 oracle oinstall     866 May 30 13:16 XE_ora_3880_20190530131639680705403060.aud
-rw-r-----. 1 oracle oinstall    1777 May 30 13:16 XE_ora_4375_20190530131639737578276396.aud
-rw-r-----. 1 oracle oinstall    1202 May 30 13:16 XE_ora_4381_20190530131643953498040745.aud

Closer inspection of these files reveal that they contain details of the operations performed when Oracle is started but before the database is open. For example :

cat XE_ora_3880_20190530131639680705403060.aud

Audit file /opt/oracle/admin/XE/adump/XE_ora_3880_20190530131639680705403060.aud
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
Build label:    RDBMS_18.4.0.0.0DBRU_LINUX.X64_181017
ORACLE_HOME:    /opt/oracle/product/18c/dbhomeXE
System name:    Linux
Node name:	frea.virtualbox
Release:        3.10.0-957.1.3.el7.x86_64
Version:        #1 SMP Thu Nov 29 14:49:43 UTC 2018
Machine:        x86_64
Instance name: XE
Redo thread mounted by this instance: 0 <none>
Oracle process number: 299
Unix process pid: 3880, image: oracle@frea.virtualbox (TNS V1-V3)

Thu May 30 13:16:39 2019 +01:00
LENGTH : '247'
ACTION :[7] 'STARTUP'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[7] 'UNKNOWN'
STATUS:[1] '0'
DBID:[0] ''
SESSIONID:[1] '0'
USERHOST:[15] 'frea.virtualbox'
CLIENT ADDRESS:[0] ''
ACTION NUMBER:[3] '138'

Well that could come in useful, but is there any auditing of operations once the database is up and running ?

Oracle supplied Unified Audit Policies

There are a number of policies that are set up by default :

select owner, object_name
from dba_objects
where object_type = 'UNIFIED AUDIT POLICY'
and oracle_maintained = 'Y'
order by object_name;

OWNER      OBJECT_NAME                                       
---------- --------------------------------------------------
SYS        ORA_ACCOUNT_MGMT                                  
SYS        ORA_CIS_RECOMMENDATIONS                           
SYS        ORA_DATABASE_PARAMETER                            
SYS        ORA_DV_AUDPOL                                     
SYS        ORA_DV_AUDPOL2                                    
SYS        ORA_LOGON_FAILURES                                
SYS        ORA_RAS_POLICY_MGMT                               
SYS        ORA_RAS_SESSION_MGMT                              
SYS        ORA_SECURECONFIG                                  

9 rows selected. 

We can confirm which of these are active by running :

select policy_name, user_name, 
    enabled_option, entity_name, entity_type, 
    success, failure
from audit_unified_enabled_policies
order by policy_name
/

POLICY_NAME          ENABLED_OPTION       ENTITY_NAME  ENTITY_TYPE  SUCCESS    FAILURE   
-------------------- -------------------- ------------ ------------ ---------- ----------
ORA_LOGON_FAILURES   ALL USERS            ALL USERS    USER         NO         YES       
ORA_SECURECONFIG     ALL USERS            ALL USERS    USER         YES        YES       

One point to note is that the ORA_LOGON_FAILURES policy is configured to record statements on failure and not on success.

ORA_LOGON_FAILURES

We can see which activities are covered by this policy by running :

select audit_option, audit_option_type 
from audit_unified_policies
where policy_name = 'ORA_LOGON_FAILURES';

AUDIT_OPTION         AUDIT_OPTION_TYPE 
-------------------- ------------------
LOGON                STANDARD ACTION   

From this, we can infer that this policy will record any failed logon attempts in the audit trail. Let’s test that hypothesis…

First off, I’m going to issue a failed login attempt :

sqlplus hr/thisisnotthepassword@xepdb1

SQL*Plus: Release 12.2.0.1.0 Production on Thu May 30 14:10:13 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name: 

Right, that should do it. Now to check the audit trail in the database :

select event_timestamp, audit_type, dbusername, action_name, return_code
from unified_audit_trail uat
where unified_audit_policies = 'ORA_LOGON_FAILURES'
and uat.event_timestamp > systimestamp - ((1/24/60) *5) -- happened in the last 5 minutes
order by event_timestamp desc
/
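In my case, the output looks something like this :

EVENT_TIMESTAMP                 AUDIT_TYPE DBUSERNAME ACTION_NAME RETURN_CODE
------------------------------- ---------- ---------- ----------- -----------
30-MAY-19 14.10.13.541872 PM    Standard   HR         LOGON              1017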

Note here that the record includes the return code, which looks to be the Oracle Error that was generated.
Also note that, in a multitenant environment, you need to be in the relevant container when you query the audit trail.
In this example, I was attempting to login to the XEPDB1 PDB. I would not see this logon failure if I was querying the audit trail from the CDB.

The UNIFIED_AUDIT_TRAIL contains 99 columns and the relevance of each of them will vary according to what you’re looking at in the audit trail.
As usual, the columns themselves have been documented with comments which you can find by running :

select column_name, comments
from dba_col_comments
where owner = 'AUDSYS'
and table_name = 'UNIFIED_AUDIT_TRAIL'
order by column_name
/

COLUMN_NAME                    COMMENTS                                          
------------------------------ --------------------------------------------------
ACTION_NAME                    Name of the action executed by the user           
ADDITIONAL_INFO                Text comment on the audit trail entry             
APPLICATION_CONTEXTS           SemiColon seperate list of Application Context Nam
                               espace, Attribute, Value information in (APPCTX_NS
                               PACE,APPCTX_ATTRIBUTE=<value>) format             

AUDIT_OPTION                   Auditing option set with the audit statement      
AUDIT_TYPE                     Type of the Audit Record                          
AUTHENTICATION_TYPE            Type of Authentication for the session user       
CLIENT_IDENTIFIER              Client identifier in each Oracle session        
***snip***
XS_SESSIONID                   Real Application User Session Identifier          
XS_TARGET_PRINCIPAL_NAME       Target principal name in Real Application Security
                                operations                                       

XS_USER_NAME                   Real Application User name                        

ORA_SECURECONFIG

By contrast, ORA_SECURECONFIG audits a number of system privileges :

select audit_option, audit_option_type, object_schema
from audit_unified_policies
where policy_name = 'ORA_SECURECONFIG'
order by 1
/

AUDIT_OPTION                             AUDIT_OPTION_TYPE  OBJECT_SCHEMA                 
---------------------------------------- ------------------ ------------------------------
ADMINISTER KEY MANAGEMENT                SYSTEM PRIVILEGE   NONE                          
ALTER ANY PROCEDURE                      SYSTEM PRIVILEGE   NONE                          
ALTER ANY SQL TRANSLATION PROFILE        SYSTEM PRIVILEGE   NONE                          
ALTER ANY TABLE                          SYSTEM PRIVILEGE   NONE                          
ALTER DATABASE                           SYSTEM PRIVILEGE   NONE                          
ALTER DATABASE DICTIONARY                STANDARD ACTION    NONE                          
ALTER DATABASE LINK                      STANDARD ACTION    NONE                          
ALTER PLUGGABLE DATABASE                 STANDARD ACTION    NONE                          
ALTER PROFILE                            STANDARD ACTION    NONE                          
ALTER ROLE                               STANDARD ACTION    NONE                          
ALTER SYSTEM                             SYSTEM PRIVILEGE   NONE                          
ALTER USER                               STANDARD ACTION    NONE                          
AUDIT SYSTEM                             SYSTEM PRIVILEGE   NONE                          
BECOME USER                              SYSTEM PRIVILEGE   NONE                          
CREATE ANY JOB                           SYSTEM PRIVILEGE   NONE                          
CREATE ANY LIBRARY                       SYSTEM PRIVILEGE   NONE                          
CREATE ANY PROCEDURE                     SYSTEM PRIVILEGE   NONE                          
CREATE ANY SQL TRANSLATION PROFILE       SYSTEM PRIVILEGE   NONE                          
CREATE ANY TABLE                         SYSTEM PRIVILEGE   NONE                          
CREATE DATABASE LINK                     STANDARD ACTION    NONE                          
CREATE DIRECTORY                         STANDARD ACTION    NONE                          
CREATE EXTERNAL JOB                      SYSTEM PRIVILEGE   NONE                          
CREATE PLUGGABLE DATABASE                STANDARD ACTION    NONE                          
CREATE PROFILE                           STANDARD ACTION    NONE                          
CREATE PUBLIC SYNONYM                    SYSTEM PRIVILEGE   NONE                          
CREATE ROLE                              STANDARD ACTION    NONE                          
CREATE SQL TRANSLATION PROFILE           SYSTEM PRIVILEGE   NONE                          
CREATE USER                              SYSTEM PRIVILEGE   NONE                          
DROP ANY PROCEDURE                       SYSTEM PRIVILEGE   NONE                          
DROP ANY SQL TRANSLATION PROFILE         SYSTEM PRIVILEGE   NONE                          
DROP ANY TABLE                           SYSTEM PRIVILEGE   NONE                          
DROP DATABASE LINK                       STANDARD ACTION    NONE                          
DROP DIRECTORY                           STANDARD ACTION    NONE                          
DROP PLUGGABLE DATABASE                  STANDARD ACTION    NONE                          
DROP PROFILE                             STANDARD ACTION    NONE                          
DROP PUBLIC SYNONYM                      SYSTEM PRIVILEGE   NONE                          
DROP ROLE                                STANDARD ACTION    NONE                          
DROP USER                                SYSTEM PRIVILEGE   NONE                          
EXECUTE                                  OBJECT ACTION      SYS                           
EXECUTE                                  OBJECT ACTION      REMOTE_SCHEDULER_AGENT        
EXEMPT ACCESS POLICY                     SYSTEM PRIVILEGE   NONE                          
EXEMPT REDACTION POLICY                  SYSTEM PRIVILEGE   NONE                          
GRANT ANY OBJECT PRIVILEGE               SYSTEM PRIVILEGE   NONE                          
GRANT ANY PRIVILEGE                      SYSTEM PRIVILEGE   NONE                          
GRANT ANY ROLE                           SYSTEM PRIVILEGE   NONE                          
LOGMINING                                SYSTEM PRIVILEGE   NONE                          
PURGE DBA_RECYCLEBIN                     SYSTEM PRIVILEGE   NONE                          
SET ROLE                                 STANDARD ACTION    NONE                          
TRANSLATE ANY SQL                        SYSTEM PRIVILEGE   NONE   

It so happens that I’ve installed the HR demo application on this database by following Oracle’s instructions and running $ORACLE_HOME/demo/schema/human_resources/hr_main.sql as SYSTEM.

This policy has captured the DDL from this activity :

select scn, dbusername, system_privilege_used, action_name, object_schema, object_name
from unified_audit_trail
where unified_audit_policies = 'ORA_SECURECONFIG'
and scn >= 2038863
order by event_timestamp
/

       SCN DBUSERNAME   SYSTEM_PRIVILEGE_USED                    ACTION_NAME          OBJECT_SCHEM OBJECT_NAME         
---------- ------------ ---------------------------------------- -------------------- ------------ --------------------
   2038863 SYSTEM       CREATE USER                              CREATE USER                       HR                  
   2038869 SYSTEM       ALTER USER                               ALTER USER                        HR                  
   2038874 SYSTEM       ALTER USER                               ALTER USER                        HR                  
   2038880 SYSTEM       GRANT ANY PRIVILEGE                      GRANT                                                 
   2038886 SYSTEM       GRANT ANY ROLE, GRANT ANY PRIVILEGE      GRANT                                                 
   2038897 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           REGIONS             
   2038910 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           REGIONS             
   2038923 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           COUNTRIES           
   2038929 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           COUNTRIES           
   2038937 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           LOCATIONS           
   2038949 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           LOCATIONS           
   2038962 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           DEPARTMENTS         
   2038976 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
   2038988 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           JOBS                
   2039000 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           JOBS                
   2039016 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           EMPLOYEES           
   2039030 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           EMPLOYEES           
   2039036 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
   2039052 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           JOB_HISTORY         
   2039068 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           JOB_HISTORY         
   2040134 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
   2040179 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
   2040303 SYSTEM       CREATE ANY PROCEDURE                     CREATE PROCEDURE     HR           SECURE_DML          
   2040318 SYSTEM       CREATE ANY PROCEDURE                     CREATE PROCEDURE     HR           ADD_JOB_HISTORY     
   2041099 SYSTEM       ALTER USER                               ALTER USER                        HR                  

25 rows selected. 

NOTE that the audit trail contains the System Change Number (SCN) as well as a timestamp, which can make it easier to select the part of the audit trail you are interested in at any given time.

As well as the details of who did what, the audit trail holds the SQL that was executed as part of this operation. For example, if we wanted to check what statement was run when the HR.REGIONS table was created, we could run :

select sql_text
from unified_audit_trail
where scn = 2038897;

SQL_TEXT                                                                        
--------------------------------------------------------------------------------
CREATE TABLE regions                                                            
    ( region_id      NUMBER                                                     
       CONSTRAINT  region_id_nn NOT NULL                                        
    , region_name    VARCHAR2(25)                                               
    )                            

Note that an SCN relates to a transaction rather than an individual statement. As we’re looking for a DDL statement, which is almost always contained in its own transaction (except when it isn’t), it serves well enough as a de facto Primary Key here. It’s not such a good idea to rely on it when you’re searching for DML statements, which may share a transaction ( and therefore an SCN) with several others. Obviously, an SCN is completely useless when you’re searching for SELECT statements.
Whilst we’re on the subject of predicates when selecting from the audit trail, it’s worth bearing in mind that the underlying table is actually partitioned by EVENT_TIMESTAMP. By default, these partitions cover one month each. Including EVENT_TIMESTAMP in the predicate will cause the optimizer to perform partition pruning so may help query performance.

Anyway, speaking of auditing DML, it’s time to introduce…

The Demo Application

We have an application to hold details of each manager who has won each of England’s major domestic trophies. The application is owned by MOTTY, who is a bit of a footie geek. We also have an application user called TEDDY who hasn’t been around long and still thinks that football didn’t exist before the Premier League…

create user motty identified by motty
/
alter user motty quota unlimited on users
/

grant connect, create table, create sequence, create procedure to motty
/

create user teddy identified by teddy
/

grant connect to teddy
/

The application objects are the WINNING_MANAGERS table…

create table winning_managers(
    wm_name varchar2(100) constraint wm_pk primary key,
    league_title number(4),
    fa_cup number(4),
    league_cup number(4))
/

comment on table winning_managers is 'Managers who have won a career treble and the years in which they first won each major English domestic trophy'
/

comment on column winning_managers.wm_name is 'The name of the manager'
/

comment on column winning_managers.league_title is 'The year the manager first won the Title'
/

comment on column winning_managers.fa_cup is 'The year the manager first won the FA Cup'
/

comment on column winning_managers.league_cup is 
    'The year the manager first won the League (Coca-Cola/Worthington/Carling/Capital One/EFL/Carabao) Cup'
/

…and a package…

create or replace package managers_api as
    procedure ins( 
        i_name winning_managers.wm_name%type, 
        i_title winning_managers.league_title%type, 
        i_fac winning_managers.fa_cup%type, 
        i_lc winning_managers.league_cup%type);
        
    procedure upd(
        i_name winning_managers.wm_name%type,
        i_title winning_managers.league_title%type, 
        i_fac winning_managers.fa_cup%type, 
        i_lc winning_managers.league_cup%type);
        
    procedure del( i_name winning_managers.wm_name%type);
end managers_api;
/

create or replace package body managers_api is

    procedure ins( 
        i_name winning_managers.wm_name%type, 
        i_title winning_managers.league_title%type, 
        i_fac winning_managers.fa_cup%type, 
        i_lc winning_managers.league_cup%type)
    is
    begin
        insert into winning_managers( wm_name, league_title, fa_cup, league_cup)
        values( i_name, i_title, i_fac, i_lc);
    end ins;
        
    procedure upd(
        i_name winning_managers.wm_name%type,
        i_title winning_managers.league_title%type, 
        i_fac winning_managers.fa_cup%type, 
        i_lc winning_managers.league_cup%type)
    is
    begin
        if i_name is null then
            raise_application_error(-20000, 'Manager name must be specified');
        end if;
        
        update winning_managers
        set league_title = nvl(i_title, league_title),
            fa_cup = nvl( i_fac, fa_cup),
            league_cup = nvl(i_lc, league_cup)
        where wm_name = i_name;
    end upd;
        
    procedure del(i_name winning_managers.wm_name%type)
    is
    begin
        if i_name is null then
            raise_application_error(-20000, 'Manager name must be specified');
        end if;
        delete from winning_managers
        where wm_name = i_name;
    end del;
end managers_api;
/

For reasons which will shortly become apparent, Teddy has been granted access to the application as follows :

grant select, insert, update, delete on winning_managers to teddy
/

grant execute on managers_api to teddy
/

Teddy is working on a project to identify all managers who have won each of the major English domestic trophies in their career.
He immediately takes advantage of these new privileges to add all of the Premier League winning managers to the application…

begin
    motty.managers_api.ins('SIR ALEX FERGUSON', 1993, 1994, 2006);
    motty.managers_api.ins('KENNY DALGLISH', 1995, null, 2012);
    motty.managers_api.ins('ARSENE WENGER', 1998, 1998, null);
    motty.managers_api.ins('JOSE MOURINHO', 2005, 2007, 2006);
    motty.managers_api.ins('CARLO ANCELOTTI', 2010, 2010, null);
    motty.managers_api.ins('ROBERTO MANCINI', 2012, 2011, null);
    motty.managers_api.ins('MANUEL PELLIGRINI', 2014, null, 2014);
    motty.managers_api.ins('CLAUDIO RANIERI', 2016, null, null);
    motty.managers_api.ins('ANTONIO CONTI', 2017, 2018, null);
    motty.managers_api.ins('PEP GUARDIOLA', 2018, 2019, 2018);

    commit;
end;
/

When Motty checks the table, he can see that there are a few problems :

select wm_name, league_title, fa_cup, league_cup
from winning_managers
/

WM_NAME              LEAGUE_TITLE     FA_CUP LEAGUE_CUP
-------------------- ------------ ---------- ----------
SIR ALEX FERGUSON            1993       1994       2006
KENNY DALGLISH               1995                  2012
ARSENE WENGER                1998       1998           
JOSE MOURINHO                2005       2007       2006
CARLO ANCELOTTI              2010       2010           
ROBERTO MANCINI              2012       2011           
MANUEL PELLIGRINI            2014                  2014
CLAUDIO RANIERI              2016                      
ANTONIO CONTI                2017       2018           
PEP GUARDIOLA                2018       2019       2018

10 rows selected. 

It looks like someone has not accounted for anything prior to the 1992/93 season.
Motty needs to find out who is doing this so that he can explain that the FA Cup has been going since 1872, the League since 1888, and the League Cup since 1961.

Auditing DML

In order to create a policy, you need either the AUDIT SYSTEM privilege or the AUDIT_ADMIN role.
We’ll connect to the PDB as system and grant the role to MOTTY now :

grant audit_admin to motty;

Now, connected as MOTTY, we can create an audit policy for the table :

create audit policy wm_audit
    actions all on motty.winning_managers
    when 'sys_context(''userenv'', ''session_user'') not in (''SYS'', ''SYSTEM'')' evaluate per session
/

Audit policy created.

…and enable it…

audit policy wm_audit
/

Audit succeeded.

Note that the when condition of the policy needs to be formatted as in this example. If you use the standard SQL method for escaping quotes – i.e. :

when q'['sys_context('userenv', 'session_user') not in ('SYS', 'SYSTEM')']' evaluate per session

you’ll find yourself on the wrong end of :

ORA-46368: Audit policy does not have a simple rule condition.

In terms of specifying which users should be included or excluded from this policy, you could re-create it as follows :

noaudit policy wm_audit
/

drop audit policy wm_audit
/

create audit policy wm_audit
    actions all on motty.winning_managers
/

audit policy wm_audit except sys, system	
/

Either way, with the policy in place and enabled, the next time Teddy connects and checks the table…

select wm_name 
from motty.winning_managers
/

…MOTTY gets to see an entry in the audit log :


select dbusername, sql_text
from unified_audit_trail
where unified_audit_policies = 'WM_AUDIT'
/

DBUSERNAME           SQL_TEXT                                          
-------------------- --------------------------------------------------
TEDDY                select wm_name                                    
                     from motty.winning_managers                       

Looking at the policy, we can confirm that any activities by SYS and SYSTEM on this table are excluded from any logging under this policy :


select audit_condition
from audit_unified_policies
where policy_name = 'WM_AUDIT'
/

AUDIT_CONDITION                                                       
----------------------------------------------------------------------
sys_context('userenv', 'session_user') not in ('SYS', 'SYSTEM')

So when SYSTEM runs a query against the table :

select wm_name
from motty.winning_managers
where league_title is not null
and fa_cup is not null
and league_cup is not null
/

WM_NAME                                 
----------------------------------------
SIR ALEX FERGUSON
JOSE MOURINHO
PEP GUARDIOLA

It does not show up in the Audit Trail :

select event_timestamp, sql_text
from unified_audit_trail
where unified_audit_policies = 'WM_AUDIT'
and dbusername = 'SYSTEM'
/

no rows selected

At this point, you may be wondering why you don’t simply dispense with any journalling triggers you have hanging around in your application and just use the audit trail instead. Well, to illustrate one major difference in how these mechanisms might behave consider the following…

Teddy has overheard Motty grumbling in the office and decides to insert another record into the table …

insert into motty.winning_managers(wm_name, league_title, fa_cup, league_cup)
values('MOTTY', 1888, 1872, 1961);

…before having second thoughts…

rollback;

and checking that the record has indeed not been committed :

select league_title, fa_cup, league_cup
from motty.winning_managers
where wm_name = 'MOTTY'
/

no rows selected

Unfortunately, when Motty checks the audit trail again, Teddy is busted…

select dbusername, action_name, sql_text
from unified_audit_trail
where unified_audit_policies = 'WM_AUDIT'
and event_timestamp > systimestamp - ((1/24/60) * 10) 
order by event_timestamp
/

DBUSERNAME           ACTION_NAME          SQL_TEXT                                
-------------------- -------------------- ----------------------------------------
TEDDY                INSERT               insert into motty.winning_managers(wm_na
                                          me, league_title, fa_cup, league_cup)   
                                          values('MOTTY', 1888, 1872, 1961)       
                                                                                 

TEDDY                SELECT               select league_title, fa_cup, league_cup 
                                          from motty.winning_managers             
                                          where wm_name = 'MOTTY'                 

So, even though the INSERT statement was not committed, it still appears in the audit trail. Probably not the sort of thing you want from a journalling trigger.

DML from stored procedure calls

Teddy has now started to clean up the data using the MANAGERS_API package :

begin
    -- Correct the dates for Sir Alex Ferguson's first FA Cup and League Cup wins as a manager
    motty.managers_api.upd(i_name => 'SIR ALEX FERGUSON', i_title => null, i_fac => 1990, i_lc => 1992);
    
    -- Delete Kenny Dalglish record...
    motty.managers_api.del(i_name => 'KENNY DALGLISH');
    
    --Re-insert the record to reflect the fact that he has been knighted
    motty.managers_api.ins(i_name => 'SIR KENNY DALGLISH', i_title => 1986, i_fac => 1986, i_lc => 2012);
end;
/

commit;

We can see each of the DML statements resulting from these stored program unit calls. However, whilst the SQL_TEXT shows the bind variable placeholders, the bind variable values themselves can be found in the SQL_BINDS column :

select dbusername, action_name, sql_text, sql_binds
from unified_audit_trail
where unified_audit_policies = 'WM_AUDIT'
and event_timestamp > systimestamp - ((1/24/60) * 10)
order by event_timestamp
/

DBUSERNAME      ACTION_NAME     SQL_TEXT                                                     SQL_BINDS                     
--------------- --------------- ------------------------------------------------------------ ------------------------------
TEDDY           UPDATE          UPDATE WINNING_MANAGERS SET LEAGUE_TITLE = NVL(:B4 , LEAGUE_  #1(0):  #2(4):1990 #3(4):1992
                                TITLE), FA_CUP = NVL( :B3 , FA_CUP), LEAGUE_CUP = NVL(:B2 ,   #4(17):SIR ALEX FERGUSON     
                                LEAGUE_CUP) WHERE WM_NAME = :B1                                                           

TEDDY           DELETE          DELETE FROM WINNING_MANAGERS WHERE WM_NAME = :B1             #1(14):KENNY DALGLISH        
TEDDY           INSERT          INSERT INTO WINNING_MANAGERS( WM_NAME, LEAGUE_TITLE, FA_CUP,  #1(18):SIR KENNY DALGLISH #2(
                                 LEAGUE_CUP) VALUES( :B4 , :B3 , :B2 , :B1 )                4):1986 #3(4):1986 #4(4):2012 

Realising that not completing the list of managers will annoy anyone who has read this far in the hope of finding out, Teddy knuckles down and finishes the job. Once he’s done, we can see the full list :

select wm_name as manager, league_title, fa_cup, league_cup, 
    greatest( league_title, fa_cup, league_cup) as "Career Treble"
from motty.winning_managers
order by 5
/

MANAGER                        LEAGUE_TITLE       FA_CUP   LEAGUE_CUP Career Treble
------------------------------ ------------ ------------ ------------ -------------
Joe Mercer                             1968         1969         1961          1969
Bill Nicholson                         1961         1961         1971          1971
Don Revie                              1969         1972         1968          1972
Sir Alex Ferguson                      1993         1990         1992          1993
George Graham                          1989         1993         1987          1993
Jose Mourinho                          2005         2007         2006          2007
Sir Kenny Dalglish                     1986         1986         2012          2012
Pep Guardiola                          2018         2019         2018          2019

8 rows selected. 

Useful Links and Acknowledgements

As ever, the Oracle documentation is a mine of useful information.

The page for the AUDIT command is a useful starting point, as is the page for CREATE AUDIT POLICY.

As usual, Tim Hall’s Oracle Base has a number of useful articles on this topic, this one being of particular use to me in putting this post together.

ORA-01775 – Looping Chain of Synonyms and Epic Football Punditry

Tue, 2019-04-30 04:59

In 1990, Liverpool became English League Champions for the 10th time in 15 seasons.
Despite this impressive track record, my Dad came over all Yoda-esque and confidently predicted that they would not win The Title again in his lifetime.
Since then, Liverpool have won everything else, including the Champions League – OK Dad, the European Cup – but the prediction has held.
In fact, it’s gone on so long that it probably qualifies as a prophecy by now.
Before the start of each season, I can assess Liverpool’s prospects, by simply enquiring after his health.
“Musn’t grumble, ‘cos if you do no-one bloody listens !” can be taken as a synonym for “I’ll be around for a while yet, so don’t waste your money on backing Liverpool to win it this season”.
Which brings us to the subject of this post – namely the apparently random nature of the ORA-01775 error, where synonyms are concerned…

To demonstrate what’s going on and (hopefully) why, I’ve created a table in the MIKE schema and granted SELECT on it to HR :

create table predictions (
    predictor varchar2(128), 
    prediction varchar2(4000))
/

insert into predictions values('MY DAD', 'Liverpool will not win the League again in my lifetime')
/

grant select on predictions to hr
/

As a result, I can query the table when connected as HR :
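
select predictor, prediction
from mike.predictions
/

PREDICTOR            PREDICTION                                             
-------------------- -------------------------------------------------------
MY DAD               Liverpool will not win the League again in my lifetime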

I can also create a local synonym for the table so I don’t have to remember which schema it’s in …

create or replace synonym predictions for mike.predictions;

Now, if we were to drop the table (back in the MIKE schema, remember) …

drop table predictions;

… and then attempt to reference it directly, the result is entirely, well, predictable…
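
select predictor, prediction
from mike.predictions
/

ORA-00942: table or view does not exist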

Meanwhile, the synonym is unaffected by the removal of its underlying table …

…which means we get a different error when we use it to try to access the dropped table…
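
select predictor, prediction
from predictions
/

ORA-00980: synonym translation is no longer valid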

I’m using SQLCL here so we only see the first line of the error. However, we can use SQLCL’s OERR function to get the full error text, which is far more informative :

The error message makes it very clear what might be causing the error and gives you a good idea how to fix it.
Of course, you’ll always get this error if the synonym you’re referencing is pointing to a non-existent table, right ?

When Oracle starts yanking your chain

Let’s re-create the table in MIKE again …

create table predictions (
    predictor varchar2(128), 
    prediction varchar2(4000))
/

insert into predictions values('MY DAD', 'Liverpool will not win the League again in my lifetime')
/

grant select on predictions to hr
/

This time however, we’re going to create a public synonym (connecting as a user with CREATE PUBLIC SYNONYM privilege) …

create public synonym predictions for mike.predictions
/

…and drop HR’s local synonym…

drop synonym predictions;

…which leaves us with only the Public synonym pointing to the table…

HR can still access the table, but this time it’s using the public synonym…

Of course, if we drop the table again, we’ll get the same error when we attempt to access it via the synonym…
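
select predictor, prediction
from predictions
/

ORA-01775: looping chain of synonyms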

Wait, what ? What happened to that nice, explanatory ORA-00980 ?

OERR isn’t saying anything :


According to the docs :

Oracle Database attempts to resolve references to objects at the schema level before resolving them at the PUBLIC synonym level

In this instance, it seems that Oracle thinks that the synonym is pointing to itself.

We can confirm this hypothesis by using a public synonym with a different name to that of the table it’s pointing to.

First of all though, we need to drop our current public synonym or we’ll wind up right back here in looping chain lunacy :

drop public synonym predictions;

create table predictions (
    predictor varchar2(128), 
    prediction varchar2(4000))
/

insert into predictions values('MY DAD', 'Liverpool will not win the League again in my lifetime')
/

grant select on predictions to hr
/

create public synonym prophecies for mike.predictions;

HR now accesses the table using the PROPHECIES public synonym…


If we drop the underlying table again and attempt to use the synonym we’re back to a rather more reassuring error :

The moral of this tale ? Well, if you get an ORA-01775 error then it’s probably worth checking the availability of any tables that are a target of a public synonym as a first step to debugging the issue.

Oh, and my Dad knows best.

Easy Listening with Datapump in the SQLDeveloper DBA Module

Mon, 2019-04-01 12:05

There are a number of ways to transfer data between Oracle Databases, one of which is to use the PL/SQL Datapump API – DBMS_DATAPUMP.
If you wish to avail yourself of this utility but find the syntax a bit fiddly, you always have the option of getting SQLDeveloper to do (most of) it for you.
What we’re talking about here is how to persuade the SQLDeveloper DB module to :

  • Create and execute a custom Datapump export job
  • Do most of the work creating an import of a subset of the exported data

The Application

I’ve got a simple application which consists of a single table and some data :

select title, artist, original_artist, is_mellow
from mike.tracks
/

TITLE                          ARTIST                ORIGINAL_ARTIST                                    IS_MELLOW
------------------------------ --------------------- -------------------------------------------------- ---------
MOTORHEAD                      COURDUROY             HAWKWIND                                           Y        
MOTORHEAD                      MOTORHEAD             HAWKWIND                                           N        
THUNDERSTRUCK                  STEVE N SEAGULLS      AC/DC                                              Y        
WHISKY IN THE JAR              METALLICA             THIN LIZZY                                         N        
HIGHWAY TO HELL                CARLA BRUNI           AC/DC                                              Y        
SHIPPING UP TO BOSTON          GODDESSES OF BAGPIPES DROPKICK MURPHYS                                   Y        
ANOTHER GIRL ANOTHER PLANET    BLINK 182             THE ONLY ONES                                      N        
SMOKE ON THE WATER             PAT BOONE             DEEP PURPLE                                        Y        
EMERGENCY                      MOTORHEAD             GIRL SCHOOL                                        N        
SMELLS LIKE TEEN SPIRIT        2 CELLOS              NIRVANA                                            Y        
WHOLE LOTTA LOVE               CHICO ARNEZ           LED ZEPPLIN                                        Y        
KING OF TWILIGHT               IRON MAIDEN           NEKTAR                                             N        
PARANOID                       GUNTER NORIS          BLACK SABBATH                                      Y        
C'MON FEEL THE NOISE           OASIS                 SLADE                                              N        

14 rows selected. 

The Datapump Export

Note that I’ve deliberately kept things simple in this example. If you were looking at ways of transferring this kind of data-volume then Datapump is probably overkill.

Anyhow, I want to export this table and all of its data. To do this in SQLDeveloper, connected as a DBA user, I can go to the View menu and select DBA.
In the DBA tree, I need to right-click the Data Pump node and select the Data Pump Export Wizard

This brings up the first screen of the Export Wizard. Slightly confusingly, it’s labelled as Step 2.
The table I want to export is in a different Schema to the one I’m connected as so I select Schemas from the radio group :

After clicking the Next button we get the chance to choose one or more schemas to include in the export. In my case, I just want the one schema :

The next screen affords us the opportunity to include or exclude types of object. The Filter Types are the same as those you’d see in the export log file :

On this occasion however, we’re not going use these filters so the screen for this step looks like this :

The next screen allows us to select which specific tables we may want to include in the export.

If I enter a search string in the Name field and click the Lookup button, I will eventually get a list of matching tables :

I can then highlight the required table(s) and add them to the list for export by clicking the down arrow…

In this case I want to export all of the data in all of the columns in the table so I have not specified a column list or a predicate.

Next, I can specify the name and location of a logfile for the export.
I’ve left the other options on this screen at their default setting…

I now need to specify the name and location of the dump file. Once again, all other settings are left at the default :

I’d like to run the job immediately so I don’t need to change any of the scheduling information :

Finally, I can see a summary of the job :

Interestingly, if I click on the PL/SQL tab, I can see the code that SQLDeveloper has generated for this export :

That’s quite interesting. Maybe this feature will come in handy in a bit.
For now though, I want to run the export immediately, so I just click the Finish button and SQLDeveloper starts the job whilst providing me a nice interactive view of it :

If I go to the Data Pump/Export Jobs node in the DBA tree, I can see a record of the job.
It will also display the contents of the log file :

If we now check the directory on the OS – the path for which can be found with…

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

DIRECTORY_PATH                                              
------------------------------------------------------------
/opt/oracle/admin/XE/dpdump/7DE2A3259D9C3747E0530F84F25CE87C

… we can see that the export file has been created, with a timestamp appended in the filename :

cd /opt/oracle/admin/XE/dpdump/7DE2A3259D9C3747E0530F84F25CE87C

ls -l
-rw-r-----. 1 oracle oinstall 331776 Mar 28 12:30 mike_tracks-12_30_11.dmp

We need to make a note of the file name because we’ll need it when we come to do…

The Datapump Import

I want to import this table into the HR schema. However, I don’t want all of the data, just the mellow tracks because I really need something to chill out to. Oh, and drown out the sound of the country going down the toilet.

Fortunately, SQLDeveloper can get us most of the way.

Once again, we need to right-click the Data Pump node of the DBA tree and select Data Pump Import Wizard.
We need to start by telling the Wizard the type of Import we want to do ( Tables) and the name of the dump file ( the export file we’ve just created) :

Next, we select the table using the shuttle buttons :

We want to import into the HR schema rather than the original schema :

I only want to create the table if it does not already exist :

Once again, I don’t want to schedule the job :

Finally, we have the summary :

Unlike the export, you’ll notice that the import Wizard has not given us the option to restrict the actual table rows we will be importing.
However, we do still get the code that has been generated in the PL/SQL tab of the summary window :

This time, rather than executing the job, I’m going to copy the code into an editor and add a manual “tweak” – namely :

    --
    -- Make sure we only get the mellow tracks because I really need to chill out...
    --
    dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => q'[where is_mellow = 'Y']', table_name => 'TRACKS', schema_name => 'MIKE');

The full script now looks like this :

set scan off
set serveroutput on
set escape off
whenever sqlerror exit 
DECLARE
    s varchar2(1000); 
    h1 number;
    errorvarchar varchar2(100):= 'ERROR';
    tryGetStatus number := 0;
begin
    h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'TABLE', job_name => 'mike_playlist', version => 'COMPATIBLE'); 
    tryGetStatus := 1;
    dbms_datapump.set_parallel(handle => h1, degree => 1); 
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks_imp.log', directory => 'DATA_PUMP_DIR', filetype=>DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE); 
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 1); 
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''MIKE'')'); 
    dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'IN(''TRACKS'')'); 
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks-12_30_11.dmp', directory => 'DATA_PUMP_DIR', filetype => 1); 
    dbms_datapump.metadata_remap(handle => h1, name => 'REMAP_SCHEMA', old_value => 'MIKE', value => UPPER('HR') ); 
    
    --
    -- Make sure we only get the mellow tracks because I really need to chill out...
    --
    dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => q'[where is_mellow = 'Y']', table_name => 'TRACKS', schema_name => 'MIKE');
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1); 
    dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC'); 
    dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP'); 
    dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0); 
    dbms_datapump.detach(handle => h1); 
    errorvarchar := 'NO_ERROR'; 
EXCEPTION
    WHEN OTHERS THEN
    BEGIN 
        IF ((errorvarchar = 'ERROR')AND(tryGetStatus=1)) THEN 
            DBMS_DATAPUMP.DETACH(h1);
        END IF;
    EXCEPTION 
    WHEN OTHERS THEN 
        NULL;
    END;
    RAISE;
END;
/

I can now run this code in a SQLDeveloper Worksheet…

Whilst the feedback is not quite as good in this environment, you can still find the job itself in the DBA module, where you can see all of the details…

If I now check the new HR.TRACKS table :
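
select count(*)
from hr.tracks
/

  COUNT(*)
----------
         8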

Appropriately enough given current events, eight tracks is the precise number needed to form a Desert Island Discs playlist.

Adding an item to the Ubuntu Unity Launcher

Sun, 2019-03-24 17:20

I would begin this post by saying something pithy about the latest Brexit crisis gripping the nation. However, watching any news or current affairs program at the moment leaves me feeling rather like this :

Fortunately, I had a new version of SQLDeveloper to install on my Ubuntu 16.04 laptop to take my mind off things.
After installing the software, I forgot – as I almost always do – how to add a new item to the Unity Launcher, so I thought I’d write down the steps this time.

Unity is similar to Gnome, KDE and – as it turns out – XFCE in that it seems to follow the freedesktop.org Desktop Entry Specification.
So, all I need to do is :

  • create a .desktop file in a suitable location to tell Unity how to run SQLDeveloper
  • add it to the Unity Launcher

Creating the desktop file

The desktop files in this Ubuntu version are in /usr/share/applications so…

sudo nano /usr/share/applications/sqldeveloper184.desktop

The file itself looks like this :

[Desktop Entry]
Name=SQLDeveloper 18.4
Exec=/opt/sqldeveloper184/sqldeveloper/sqldeveloper.sh
Icon=/opt/sqldeveloper184/sqldeveloper/icon.png
Type=Application
Categories=Utility;Development;IDE;

It’s probably worth briefly covering some of the entries in the file…

In Unity, the Name is the string which appears in the tooltip on the Launcher.
Exec is the fully qualified path of the command to run for this menu item.
Type is mandatory and must be one of Application, Link, or Directory.
Categories are optional but may be relevant if you are using a desktop which offers access to applications via a menu.
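
Incidentally, if you want to sanity-check the file, the desktop-file-validate tool (part of the desktop-file-utils package, assuming you have it installed) will report any syntax errors it finds :

desktop-file-validate /usr/share/applications/sqldeveloper184.desktop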

Now we’ve created the file, we need to add it to the Launcher.

Adding the application to the launcher

The easiest way to do this, is to simply open the directory where the file is located in the File Manager…

nautilus /usr/share/applications

Sorting the files by Modified date (latest first), I can easily find my new file :

Then, I just need to drag it over to the Launcher…

Useful Links

As is so often the case, this sort of thing is easy when you know how…or when someone is kind enough to tell you. So…

Installing APEX and ORDS on Oracle 18cXE on CentOS

Fri, 2019-03-01 12:06

It’s been rather a trying week.
Wales beat England in the Rugby on Saturday and every Welsh person alive has been observing the ancient tradition of rubbing English noses in it ever since.
My claims to Welsh heritage by marriage have been given short shrift by Deb, whose accent has become rather more pronounced ever since the final whistle.

All in all, the onslaught of Welsh chauvinism has left me feeling rather like this :

Until things blow over, I’ve decided to spend more time in the shed. Fortunately, the Wifi signal is still pretty good so I’ve decided to use the free time by installing APEX 18.2 into an Oracle 18c RDBMS. As I’ve got time on my hands ( celebrations are unlikely to fizzle out for a couple of months yet), I’ve decided to follow Oracle’s recommendation and configure it to run on ORDS 18.4.
Specifically, what I’ll be covering here is :

  • installing APEX 18.2
  • installing ORDS 18.4
  • configuring APEX to run on ORDS
  • configuring ORDS to run on HTTPS with self-signed SSL certificates
  • using systemd to start ORDS automatically on boot

That should keep me occupied for a while…

The Environment

The server is running CentOS 7, a Linux distro that is functionally identical to the corresponding RHEL (Red Hat Enterprise Linux) release.
The Oracle Database is 18c. In this context, the edition doesn’t really matter, but I’m using Express Edition.
The APEX (Application Express) version is 18.2.
The ORDS (Oracle Rest Data Services) version is 18.4.

Whilst using ORDS with APEX makes your application architecturally N-tier – the ORDS server is a separate piece of software from the RDBMS hosting APEX – you can physically run ORDS on the same server as the database itself and this is what I’m doing here.
Once again, this should make little (if any) difference to the steps required to complete the installation.

I’m assuming that the server you want to install ORDS on will be headless. Therefore, all of the server-side steps described here are performed on the command line.

Oracle Recommendations

There are several recommendations spread through the relevant Oracle documentation which I have followed :

  • I’m running a multi-tenant database so APEX is installed in a PDB
  • I’ve installed APEX before ORDS
  • I’ve configured ORDS to run on HTTPS

I’ll link to the relevant documentation for each recommendation as and when we get to it.

Helpful Links

I was going to put these at the end but then I realised you might want to refer to them before you get thoroughly confused by my ramblings. So…

First up then…

Installing APEX

APEX itself sits entirely within the Oracle RDBMS.

Checking Requirements

According to the Installation Guide, the database requirements for APEX 18.2 are :

  • Oracle Database 11.2.0.4 or higher
  • a database MEMORY_TARGET of at least 300MB
  • At least 220MB plus 60MB for each additional language in the “Apex” tablespace
  • At least 100MB free in the SYSTEM tablespace
  • If installing the development environment, Oracle XML DB

Note that “Apex” tablespace is in quotes because, by default, the APEX users tend to get installed into the SYSAUX tablespace.

Let’s have a quick check to make sure that all these requirements are met on our system before we go any further.

For the Database version, we can run the following query :

select banner
from v$version
/

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production             

The MEMORY_TARGET setting is a bit more convoluted as it’s set to 0 by default :

select display_value, isdefault, description
from v$parameter
where name = 'memory_target'
/

DISPLAY_VALUE        ISDEFAULT  DESCRIPTION                             
-------------------- ---------- ----------------------------------------
0                    TRUE       Target size of Oracle SGA and PGA memory

The description of MEMORY_TARGET in this query provides a clue as to how we can still verify the memory requirement when it’s set to 0.

The SGA_TARGET parameter holds the target size of the SGA, whilst the PGA_AGGREGATE_TARGET parameter holds the “Target size for the aggregate PGA memory consumed by the instance”.

So…

select sum(value)/1024/1024 as "Total Size (MB)"
from v$parameter
where name in ('sga_target', 'pga_aggregate_target')
/

Total Size (MB)
---------------
            378

Alternatively, if you’re running 12c or later, you can simply use Enterprise Manager Express :

As for the tablespace space availability :

select tablespace_name, 
    round((sum(maxbytes) - sum(bytes))/1024/1024) as "MB Free"
from dba_data_files
where tablespace_name in ('SYSTEM', 'SYSAUX')
group by tablespace_name;

TABLESPACE_NAME                   MB Free
------------------------------ ----------
SYSTEM                              32398
SYSAUX                              32148

Finally, we can check that Oracle XML DB is present with :

select comp_name, version_full
from dba_registry
where upper(comp_name) like 'ORACLE XML%'
/

COMP_NAME            VERSION_FULL                  
-------------------- ------------------------------
Oracle XML Database  18.4.0.0.0                    

Now that’s all done, we can go and get the software.

Downloading APEX

Head over to the APEX Download Page and pick up the latest version ( 18.2.0.0.12 at the time of writing). Note that there’s no OS specific options because APEX sits entirely within the RDBMS.

You can choose between the “All languages” version (705MB uncompressed) or “English language only” (310MB uncompressed). I’ve gone for the latter and therefore ended up with this file :

-rw-rw-r-- 1 mike mike 94421975 Feb 20 11:51 apex_18.2_en.zip

First we need to change the ownership of the file to the oracle user as that’s the os user we’ll be running the install as :

sudo chown oracle:oinstall apex_18.2_en.zip

Now we can switch to the oracle user and unzip the file to what is usually the $ORACLE_BASE directory (/opt/oracle) :

sudo su oracle
unzip -d /opt/oracle apex_18.2_en.zip

echo $ORACLE_BASE
/opt/oracle
cd $ORACLE_BASE/apex

Before we connect to the database, it’s worth noting that the Installation Guide has this to say about APEX on a Multitenant Database :

“Oracle recommends removing Oracle Application Express from the root container database for the majority of use cases, except for hosting companies or installations where all pluggable databases (PDBs) utilize Oracle Application Express and they all need to run the exact same release and patch set of Oracle Application Express. ”

In my case I’m installing into an 18cXE database which does not have APEX pre-installed. Either way, I want to install into a PDB rather than the CDB.
It’s also worth noting that you’ll be prompted for the following when you run the installation script :

  • The tablespace for the APEX application user (usually SYSAUX)
  • The tablespace for the APEX files user (SYSAUX)
  • The temporary tablespace (TEMP)
  • The virtual images directory (“/i/”)

So, still as the oracle user, from /opt/oracle/apex :

sqlplus /nolog
conn / as sysdba
alter session set container = xepdb1;

If you want to make sure that you are where you should be :

select sys_context('userenv', 'session_user') as session_user,
    sys_context('userenv', 'con_name') as container
from dual
/

SESSION_USER		       CONTAINER
------------------------------ ------------------------------
SYS			       XEPDB1

Next, we need to check that the Default Profile’s password complexity function is disabled :

select limit
from dba_profiles
where profile = 'DEFAULT'
and resource_type = 'PASSWORD'
and resource_name = 'PASSWORD_VERIFY_FUNCTION'
/

LIMIT
----------------------------------------
NULL

If there is a password complexity function assigned, you’ll need to disable it.
Remember to make a note of its name first so that you can put it back once the installation is complete.
To unset it :

alter profile default password_verify_function null;

Finally, we can start the installation. We want the full development environment so…

@apexins.sql SYSAUX SYSAUX TEMP /i/

This causes screens of messages and can run for some considerable time.
Eventually though, you should end up with :

Thank you for installing Oracle Application Express 18.2.0.00.12

Oracle Application Express is installed in the APEX_180200 schema.

The structure of the link to the Application Express administration services is as follows:
http://host:port/pls/apex/apex_admin (Oracle HTTP Server with mod_plsql)
http://host:port/apex/apex_admin     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex/apex_admin     (Oracle REST Data Services)

The structure of the link to the Application Express development interface is as follows:
http://host:port/pls/apex (Oracle HTTP Server with mod_plsql)
http://host:port/apex     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex     (Oracle REST Data Services)

timing for: Phase 3 (Switch)
Elapsed: 00:00:06.44
timing for: Complete Installation
Elapsed: 00:05:51.38

PL/SQL procedure successfully completed.





1 row selected.

...null1.sql
SYS> 

According to the Installation Guide, we should now have three new users. However, it seems that four are actually created…

select username, 
    default_tablespace as default_ts, 
    temporary_tablespace as temp_ts
from cdb_users
where trunc(created) = trunc(sysdate)
/

USERNAME                  DEFAULT_TS                     TEMP_TS
------------------------- ------------------------------ ------------------------------
APEX_PUBLIC_USER          USERS                          TEMP
FLOWS_FILES               SYSAUX                         TEMP
APEX_180200               SYSAUX                         TEMP
APEX_INSTANCE_ADMIN_USER  USERS                          TEMP

4 rows selected.

APEX_INSTANCE_ADMIN_USER is not mentioned in the documentation but seems to have been created in addition to the three expected accounts.

Setting up the APEX Admin User

The apxchpwd.sql script we run for this purpose will prompt for a password. The script enforces the following password complexity rules :

  • Password must contain at least 6 characters
  • New password must differ from old password by at least 2 characters
  • Password must contain at least one numeric character (0123456789)
  • Password must contain at least one punctuation character (!"#$%&()`*+,-/:;?_)
  • Password must contain at least one upper-case alphabetic character

So…
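
Still connected as SYSDBA in the PDB, we run the script from the /opt/oracle/apex directory and follow the prompts – it should ask us for an administrator username (ADMIN by default), an e-mail address and a password that satisfies the rules above :

@apxchpwd.sql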

Setting up the APEX_PUBLIC_USER database account

As we saw, the APEX_PUBLIC_USER account has been created as part of the installation.
At this point, it has been created with a randomly generated password, which we’ll need to change to something we know.
Additionally, you may feel it prudent to make sure that the password, once reset, won’t expire as, if it does, your application will stop working until you change it again.
Note that this is something you need to consider carefully – does the convenience of not having to worry about password expiration for this account outweigh the security risks raised by never changing it ? In my case I think it’s fine because I’m messing about with a VM on my laptop. If you’re in a more formal environment, you may have a somewhat different risk appetite.

If you’re horrified by the flagrant disregard for password security that I’m about to demonstrate, look away now…

First, we need to create a profile where the password does not expire :

create profile apex_pu limit password_life_time unlimited;

Note that all of the other profile properties will have default values :

select resource_name, limit
from dba_profiles
where profile = 'APEX_PU'
order by resource_type, resource_name
/

RESOURCE_NAME		       LIMIT
------------------------------ --------------------
COMPOSITE_LIMIT 	       DEFAULT
CONNECT_TIME		       DEFAULT
CPU_PER_CALL		       DEFAULT
CPU_PER_SESSION 	       DEFAULT
IDLE_TIME		       DEFAULT
LOGICAL_READS_PER_CALL	       DEFAULT
LOGICAL_READS_PER_SESSION      DEFAULT
PRIVATE_SGA		       DEFAULT
SESSIONS_PER_USER	       DEFAULT
FAILED_LOGIN_ATTEMPTS	       DEFAULT
INACTIVE_ACCOUNT_TIME	       DEFAULT
PASSWORD_GRACE_TIME	       DEFAULT
PASSWORD_LIFE_TIME	       UNLIMITED
PASSWORD_LOCK_TIME	       DEFAULT
PASSWORD_REUSE_MAX	       DEFAULT
PASSWORD_REUSE_TIME	       DEFAULT
PASSWORD_VERIFY_FUNCTION       DEFAULT

Next, we assign this profile to APEX_PUBLIC_USER :

alter user apex_public_user profile apex_pu;

To confirm that the profile has been assigned :

select profile
from dba_users
where username = 'APEX_PUBLIC_USER';

PROFILE
------------------------------
APEX_PU

Security conscientious objectors can look again now

To change the password :

alter user apex_public_user identified by Land0fmyfath3rs;

…replacing Land0fmyfath3rs with your own choice of non-rugby referencing password.

Finally, if you’ve unset the password verify function before starting, you now need to put it back :

alter profile default password_verify_function myfunc;

…where myfunc was the original password verify function.

At this point, the APEX installation is pretty much done.

Until now, I’ve been content to use the PL/SQL Gateway to serve my APEX pages. This involves using the Web Listener that is embedded in the database.
If you want go down this route, the installation steps can be found in Appendix A of the Installation Guide.

However, the Installation Guide has this to say on the subject of choosing a web listener :

“Oracle HTTP Server and Embedded PL/SQL gateway are considered legacy web listeners and their configuration instructions are in the appendix.”

This time, I’m going to go with Oracle’s recommendation and use ORDS.

Installing ORDS

ORDS – or Oracle Rest Data Services to give it its full name – is a Java EE based Web Listener. For this installation, we’ll be running it standalone using its built-in Jetty Web Server.
The official Installation and Configuration Guide can be found here.

Pre-Requisites

ORDS 18.4 requires an Oracle Database running release 11.1 or later. As the APEX install we’ve just done requires a slightly later minimum database version, we should be fine.

If you really feel the need, you can confirm that we’re good to go with the following query :

select banner
from v$version
/

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production             

The other requirement for ORDS is the presence of a JDK for version 8 or later.
This can be ascertained from the server command line by running :

java -version
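
The exact output varies between JDK builds, but we’re looking for a major version of 8 (i.e. 1.8) or later. For example :

java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)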

Downloading ORDS

Back to Oracle we go, this time to the ORDS Download Page.
Unsurprisingly given that we’re downloading a Java application, the download is not OS specific. The current version is 18.4.

A short time later, you should now be the proud owner of…

-rwxrwx---. 1 mike mike 59777214 Feb 20 12:01 ords-18.4.0.354.1002.zip

As with the APEX install, we’ll want to transfer ownership of the file to oracle…

sudo chown oracle:oinstall ords-18.4.0.354.1002.zip

…as this is the account we’ll be using for the installation…

sudo su oracle

echo $ORACLE_BASE
/opt/oracle

mkdir $ORACLE_BASE/ords

Now we extract the file into the new directory…

unzip -d /opt/oracle/ords ords-18.4.0.354.1002.zip

…which produces screens of output ending with…

...
inflating: /opt/oracle/ords/examples/soda/getting-started/indexSpec1.json  
  inflating: /opt/oracle/ords/examples/db_auth/index.html  
  inflating: /opt/oracle/ords/examples/pre_hook/sql/install.sql  
  inflating: /opt/oracle/ords/examples/pre_hook/sql/uninstall.sql  
  inflating: /opt/oracle/ords/examples/plugins/plugin-echo-cmd/src/EchoMessages.properties  
  inflating: /opt/oracle/ords/examples/plugins/plugin-echo-cmd/src/EchoCommand.java  
  inflating: /opt/oracle/ords/examples/plugins/plugin-demo/build.xml  
  inflating: /opt/oracle/ords/examples/plugins/plugin-demo/.classpath  
  inflating: /opt/oracle/ords/examples/plugins/plugin-demo/.project  
  inflating: /opt/oracle/ords/examples/plugins/lib/javax.inject-1.jar  
  inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.2.json  
  inflating: /opt/oracle/ords/examples/db_auth/sql/install.sql  
  inflating: /opt/oracle/ords/examples/pre_hook/sql/custom_auth_api.pls  
  inflating: /opt/oracle/ords/examples/soda/getting-started/poUpdated.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.3.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.5.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/poPatchSpec.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.1.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.4.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/po.json  
  inflating: /opt/oracle/ords/examples/pre_hook/README.md  
  inflating: /opt/oracle/ords/examples/soda/getting-started/qbePatch.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/POList.json  
  inflating: /opt/oracle/ords/examples/pre_hook/index.html  

Make sure the PL/SQL Gateway is disabled

You can check whether the PL/SQL Gateway is currently enabled by running…

select dbms_xdb.gethttpport
from dual;


GETHTTPPORT
-----------
          0

If this query returns something other than zero then you can disable the PL/SQL Gateway as follows :

exec dbms_xdb.sethttpport(0)

Copy the APEX images

Before we configure ORDS, we need to copy the APEX images somewhere that is visible to the ORDS server so…

cd /opt/oracle/ords
mkdir apex

…then…

cd apex
pwd 
/opt/oracle/ords/apex
cp -r $ORACLE_BASE/apex/images .
ls -l
total 44
drwxr-xr-x. 33 oracle oinstall 28672 Feb 20 16:57 images

Initial ORDS Installation

To start with, we’re going to install ORDS and configure it to run on HTTP. This is simply so that we can sanity check that ORDS and APEX are working together as expected.
Note that I’m accepting the default location for the default tablespace of the two new users that will be created as part of the installation. If you’re planning to do the same then you should make sure that you have a USERS tablespace available in your PDB.

Finally, we can now run the installation. Still connected as oracle :

cd $ORACLE_BASE/ords
java -jar ords.war install advanced

At this point, we now have the option to start ORDS…

…which causes a lot of feedback…

It’s perhaps unsurprising that we hit the ORA-28000 error at this stage…

alter session set container = xepdb1;

select username, account_status
from dba_users
where username like 'ORDS%'
or username like 'APEX%'
/

USERNAME                       ACCOUNT_STATUS                
------------------------------ ------------------------------
APEX_180200                    LOCKED                        
APEX_INSTANCE_ADMIN_USER       OPEN                          
APEX_PUBLIC_USER               LOCKED                        
ORDSYS                         EXPIRED & LOCKED              
ORDS_METADATA                  EXPIRED & LOCKED              
ORDS_PUBLIC_USER               OPEN                          

6 rows selected. 

We’ll sort that out in a bit. For now though, let’s just check that ORDS’ Jetty server is accessible.
As ORDS is running in the foreground, we’ll need to leave it running and start a separate session.
Then we can test it with :

curl -ISs http://frea.virtualbox:8080

…which should return the HTTP header :

Now we’re happy that ORDS itself is running, we can stop it by pressing [CTRL]+C in the Terminal session it’s running in.

Next, we need to run ords with the validate option :

cd $ORACLE_BASE/ords
java -jar ords.war validate 

The output looks innocuous enough :

However, if we look at the log file that has been written, we can see that there’s been a fair bit of activity…

[*** script: ords_alter_session_script.sql] 

PL/SQL procedure successfully completed.

[*** script: ords_schema_mapping.sql] 
INFO: Configuring ORDS_PUBLIC_USER to map APEX Workspaces and ORDS schemas

Session altered.

Configuring APEX and ORDS schemas for url mapping
Made APEX_PUBLIC_USER proxiable from ORDS_PUBLIC_USER
APEX_REST_PUBLIC_USER does not exist
APEX_LISTENER.POOL_CONFIG synonym does not exist, stubbing out
ORDS_METADATA.APEX_POOL_CONFIG


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


Grant succeeded.

INFO: Completed configuring ORDS_PUBLIC_USER to map APEX Workspaces and ORDS Schemas

Session altered.

[*** script: ords_repair_proxy_connect.sql] 
INFO: Checking ords enabled schemas and its proxy user

Session altered.


PL/SQL procedure successfully completed.

[*** script: ords_migrate_grant_priv.sql] 

Session altered.

INFO: Verify if Application Express exists to setup the migration privileges for
ORDS.
INFO: Completed setting up the APEX REST migration privileges for ORDS.


PL/SQL procedure successfully completed.

[*** script: ords_validate_objects.sql] 

Session altered.

INFO: 15:25:18 Validating objects for Oracle REST Data Services.
VALIDATION: 15:25:18 Starting validation for schema: ORDS_METADATA
VALIDATION: 15:25:18 Validating objects
VALIDATION: 15:25:19 Validating ORDS Public Synonyms
VALIDATION: 15:25:20 Total objects: 262, invalid objects: 0
VALIDATION: 15:25:20     72  INDEX
VALIDATION: 15:25:20      1  JOB
VALIDATION: 15:25:20     12  PACKAGE
VALIDATION: 15:25:20     12  PACKAGE BODY
VALIDATION: 15:25:20     44  PUBLIC SYNONYM
VALIDATION: 15:25:20      1  SEQUENCE
VALIDATION: 15:25:20     14  SYNONYM
VALIDATION: 15:25:20     27  TABLE
VALIDATION: 15:25:20     26  TRIGGER
VALIDATION: 15:25:20     20  TYPE
VALIDATION: 15:25:20      6  TYPE BODY
VALIDATION: 15:25:20     27  VIEW
VALIDATION: 15:25:20 Validation completed.
INFO: 15:25:20 Completed validating objects for Oracle REST Data Services.


PL/SQL procedure successfully completed.


Session altered.


Commit complete.

[*** script: ords_alter_session_script.sql] 

PL/SQL procedure successfully completed.

In case you’re wondering, the scripts referenced in this log file are located in ords.war itself.

Now we’re ready to…

Configure APEX to run on ORDS

As oracle…

cd $ORACLE_BASE/apex
sqlplus / as sysdba

Once connected to the database…

alter session set container = xepdb1;

@apex_rest_config.sql

This will create two new user accounts :

  • APEX_LISTENER
  • APEX_REST_PUBLIC_USER

You will be prompted for a password for each of them.

Once the script has completed, we can check on the status of the APEX_LISTENER and APEX_PUBLIC_USER accounts :

select username, account_status
from dba_users
where username in ('APEX_LISTENER', 'APEX_PUBLIC_USER')
order by 1
/

USERNAME                       ACCOUNT_STATUS                
------------------------------ ------------------------------
APEX_LISTENER                  OPEN                          
APEX_PUBLIC_USER               LOCKED                        

Granting access to the APEX owner via ORDS

Once again, connect to the database as sysdba :

alter session set container = xepdb1;

begin
    dbms_network_acl_admin.append_host_ace(
        host => 'localhost',
        ace => xs$ace_type(
            privilege_list => xs$name_list('connect'),
            principal_name => 'APEX_180200',
            principal_type => xs_acl.ptype_db));
end;
/

alter user apex_public_user account unlock
/

Now if we re-start ORDS…

java -jar ords.war standalone

…and in a separate session we should be able to get a sensible header from the apex_admin URL :

curl -ISs http://frea.virtualbox:8080/ords/apex_admin

We could just call it a day at this point. However, if you like your applications to be a little more secure than an England three-quarter under a Dan Biggar garryowen, you’ll want to follow Oracle’s recommendation :
“If you want to use RESTful services that require secure access, you should use HTTPS.”

In order to do this, we’re going to have to do some messing about with SSL certificates.

Generating a self-signed SSL Certificate

We need to connect as root and create a directory to hold the key :

sudo -s
mkdir -p /etc/ssl/private/frea.virtualbox
cd /etc/ssl
chmod -R 700 private/frea.virtualbox

Now we can generate the key and the certificate :

cd /etc/ssl/private/frea.virtualbox
openssl req -newkey rsa:2048 -nodes -keyout frea.virtualbox.key -x509 -days 3650 -out frea.virtualbox.crt

Note that the days value I’m using will create a certificate that does not expire for 10 years. Whilst this does mean I won’t have to worry about the certificate expiring and stopping my application from working at an unexpected moment, it’s probably not strictly in line with security best practices. If you find yourself doing this in a production environment, you may want to consider a rather shorter lifetime for your certificate.
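
If you do want a shorter-lived certificate, you simply need to adjust the days value – for example, for one year :

openssl req -newkey rsa:2048 -nodes -keyout frea.virtualbox.key -x509 -days 365 -out frea.virtualbox.crt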

Anyhow, we will be prompted to supply some values. The ones I’m using are :

  • Country Name : UK
  • State or Province Name : England
  • Organization Name : The Anti-Kyte
  • Organizational Unit Name : Mike
  • Common Name : frea.virtualbox

…all of which looks like this :

You should now have two new files :

ls -l
total 8
-rw-r--r--. 1 root root 1363 Feb 22 11:45 frea.virtualbox.crt
-rw-r--r--. 1 root root 1708 Feb 22 11:45 frea.virtualbox.key
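
Incidentally, if you find yourself scripting this step, openssl will also accept the subject details via the -subj flag rather than prompting for them. Something like this should be equivalent to the interactive session above :

openssl req -newkey rsa:2048 -nodes \
    -keyout frea.virtualbox.key \
    -x509 -days 3650 -out frea.virtualbox.crt \
    -subj "/C=UK/ST=England/O=The Anti-Kyte/OU=Mike/CN=frea.virtualbox"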

OK, we can stop being root now.

Incidentally, if you want to verify the expiry date of your new certificate :

sudo openssl x509 -text -noout -in /etc/ssl/private/frea.virtualbox/frea.virtualbox.crt |grep 'Not After'
 Not After : Feb 19 11:45:07 2029 GMT

The easiest way to reconfigure ORDS to use the certificate – and HTTPS – is to stop any running instances of ORDS, connect as oracle and then start it again, using the appropriate command line parameters :

cd $ORACLE_BASE/ords

java -jar ords.war standalone --secure-port 8443 --secure-host frea.virtualbox --secure-cert-path /etc/ssl/private/frea.virtualbox/frea.virtualbox.crt --secure-cert-key-path /etc/ssl/private/frea.virtualbox/frea.virtualbox.key

If we test using curl…

curl -ISs https://frea.virtualbox:8443/ords/apex_admin

The presence of a self-signed certificate will cause curl to complain :

…so we’ll have to use a bit of TLC ( or rather, the -k flag, which tells curl to skip verification of the certificate)…

curl -kISs https://frea.virtualbox:8443/ords/apex_admin

This does mean that your web browser is also likely to object to the cert the first time we point it at this site. We’ll come onto that in a bit.

For now though, we can see that the SSL settings have been added to the properties file in the standalone sub-directory :

cd $ORACLE_BASE/ords/ords/standalone
cat standalone.properties 

The file now looks like this :

#Fri Feb 22 11:48:35 GMT 2019
jetty.secure.port=8443
ssl.cert=/etc/ssl/private/frea.virtualbox/frea.virtualbox.crt
ssl.cert.key=/etc/ssl/private/frea.virtualbox/frea.virtualbox.key
ssl.host=frea.virtualbox
standalone.context.path=/ords
standalone.doc.root=/opt/oracle/ords/ords/standalone/doc_root
standalone.scheme.do.not.prompt=true
standalone.static.context.path=/i
standalone.static.path=/opt/oracle/ords/apex/images

From now on, when ORDS starts, it will use these properties.

Now ORDS is installed and configured, we need to get it to start when the server boots…

Creating a Systemd service for ORDS

The documentation notes that there is a limit on the size of POST data when running standalone and suggests increasing this limit by starting ORDS like this :

java -Dorg.eclipse.jetty.server.Request.maxFormContentSize=3000000 -jar ords.war

We will implement this suggestion in our service.

So, as root :

cd /etc/systemd/system
nano ords.service

The file should look something like this :

[Unit]

Description=Oracle Rest Data Services (ORDS) Embedded Jetty WEB Server for APEX
After=network.target

[Service]
User=oracle
TimeoutStartSec=0

Type=simple
KillMode=process
ExecStart=/usr/bin/java -Dorg.eclipse.jetty.server.Request.maxFormContentSize=3000000 -jar /opt/oracle/ords/ords.war standalone
Restart=always
RestartSec=2
LimitNOFILE=5555

[Install]

WantedBy=multi-user.target

We now need to make the file executable :

chmod a+x ords.service

Finally, we need to add the ORDS service to systemctl :

systemctl daemon-reload
systemctl enable ords
systemctl start ords

Now we can check that ORDS is up :

systemctl status ords

It’s alive !!!

We can test once again with…

curl -kISs https://frea.virtualbox:8443/ords/apex_admin

…which should return something like :

HTTP/1.1 302 Found
Date: Thu, 28 Feb 2019 22:27:34 GMT
Content-Type: text/html;charset=utf-8
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
Cache-Control: no-store
Pragma: no-cache
Expires: Sun, 27 Jul 1997 13:00:00 GMT
Set-Cookie: ORA_WWV_USER_250198699356158=ORA_WWV-R7rbCSQ886zYN9Q6CXIOpnb2; path=/ords; secure; HttpOnly
Location: https://frea.virtualbox:8443/ords/f?p=4550:10:14143531583026:::::
Transfer-Encoding: chunked

Making APEX available to remote machines

Now we’ve got everything configured, we simply need to update the server firewall to allow traffic to the HTTPS port :

sudo firewall-cmd --zone=public --permanent --add-port=8443/tcp
sudo firewall-cmd --reload

We can now confirm that the port is available :

sudo firewall-cmd --list-ports
1522/tcp 5500/tcp 8443/tcp

Finally, we can now access APEX from a remote machine.

When we first hit the APEX URL, Firefox is understandably skeptical of my self-signed certificate…

…so I need to convince it that I’m trustworthy (or just add an exception)…

…before I can finally see APEX in the browser :

That’s it, I can now leave my sanctuary safe in the knowledge that APEX and ORDS are configured and that the Welsh Nationalist fervour has abated…except that it’s now St. David’s Day. On the plus side, it looks like I’m having Cheese and Leek Pie for tea rather than the Humble Pie I’ve been eating all week.

Oracle Create Schema – multiple DDL statements in a single transaction

Thu, 2019-02-14 16:08

I was going to begin with some extended Brexit metaphor to illustrate the chaos and confusion that can ensue when you first encounter Oracle’s CREATE SCHEMA command.
Fortunately, the Dutch Government saved me the trouble :

dutch_brexit_monster

Much as I’d like to believe that the Cookie Monster has finally overcome his Sesame Street type casting, I can’t help noticing that the Brexit Monster never seems to be in the same room as this guy…

boris

In Oracle, the term “schema” is used interchangeably with the term “user”. Creating a user in Oracle automatically creates a schema of the same name belonging to that user.
The process is so seamless that it’s almost impossible to tell where one begins and the other ends.
You may therefore be somewhat confused the first time you encounter Oracle’s CREATE SCHEMA command…

Contrary to what you might expect, CREATE SCHEMA does not enable you to create a database user. What it does do is to enable you to run multiple DDL statements inside a single transaction.
Now, if you thought that any DDL statement in Oracle would end with an implicit commit of the current transaction, well that makes two of us (at least).

To demonstrate why this is not necessarily true, let’s say we have a user created like this :

create user brexit_monster identified by lets_go_dutch 
    default tablespace users quota 5M on users
/

grant create session, create table, create view, create procedure to brexit_monster
/

…and a role created like this :

create role erg
/

If we want to create a table in the brexit_monster schema, then a view on that table, followed by a grant to the ERG role we could achieve this with three separate statements in a script…

create table brexit_types( brexit_cid varchar2(30),  classification varchar2(65000))
/

create or replace view promised_land as select brexit_cid from brexit_types where classification = 'HARD'
/

grant select on promised_land to erg
/

…however, if the first statement fails, the next two will also fail leaving something rather messy…

script_errors

If instead, we were to wrap these statements into a single CREATE SCHEMA…

create schema authorization brexit_monster
    create table brexit_types( 
        brexit_cid varchar2(30),  
        classification varchar2(65000))
    create view promised_land as 
        select brexit_cid 
        from brexit_types 
        where classification = 'HARD'
    grant select on promised_land to erg
/   
    

…the error stack is somewhat more manageable

cs_err1

Note however, that the View statement has changed a bit. It’s now a straight CREATE rather than CREATE OR REPLACE.
In fact, if you try to plug any DDL statement into CREATE SCHEMA that is not either a CREATE TABLE, CREATE VIEW or GRANT then you will get :

ORA-02422: missing or invalid schema element
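
For example, trying to sneak an index in ( a quick sketch using a made-up table) :

create schema authorization brexit_monster
    create table red_lines( id number)
    create index red_lines_idx on red_lines( id)
/

…should fail with ORA-02422 before anything is executed.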

If we issue the correct statement…

create schema authorization brexit_monster
    create table brexit_types( 
        brexit_cid varchar2(30),  
        classification varchar2(5))
    create view promised_land as 
        select brexit_cid 
        from brexit_types 
        where classification = 'HARD'
    grant select on promised_land to erg
/   


Schema created.


…we can see that all of the DDL has been executed…

select object_name, object_type
from user_objects
order by 2
/

OBJECT_NAME		       OBJECT_TYPE
------------------------------ -----------------------
BREXIT_TYPES		       TABLE
PROMISED_LAND		       VIEW

select grantee, privilege
from user_tab_privs
where owner = 'BREXIT_MONSTER'
and table_name = 'PROMISED_LAND'
/

GRANTEE 		       PRIVILEGE
------------------------------ ------------------------------
ERG			       SELECT

At this point you may still be somewhat skeptical about whether all this really happens in a single transaction.

I mean, how do we know that Oracle isn’t just parsing each statement to make sure they’re all valid and then trusting it won’t hit a runtime error ?
One way to find out is, of course, to engineer a runtime error.

You remember when I created the BREXIT_MONSTER user and you thought that I was a bit stingy with the tablespace quota allocation ? Well…

set serverout on
exec dbms_output.put_line('Current transaction = '||dbms_transaction.local_transaction_id(true));

create schema authorization brexit_monster
    create table wto_terms(is_problem varchar2(3) default 'NO')
    create table little_objects as select * from all_objects fetch first 5 rows only
    create table my_objects as select * from all_objects
/   
exec dbms_output.put_line('Current transaction='||dbms_transaction.local_transaction_id(true));

When we run this we get …

runtime_err

We can see that the local_transaction_id has changed. So the transaction that the CREATE SCHEMA was running in has ended. Question is – has it been committed or rolled back ?
Now to check if any of the tables have been created…

select table_name
from user_tables
where table_name in ('WTO_TERMS', 'LITTLE_OBJECTS')
/

no rows selected

We could go into great detail here and do some digging around in trace files.
Then again, there’s been quite enough procrastination around this whole Brexit business already.
As we can see, CREATE SCHEMA really does execute multiple DDL statements in a single database transaction – i.e. all statements succeed or none do.
In other words, if it walks like a Brexit Monster and talks about sunlit uplands it’s probably Boris Johnson.

Pi in a time of Brexit – Remote Controlling Raspberry Pi from Ubuntu using VNC

Sun, 2019-01-27 14:52

What with Larry the Downing Street Cat and Palmerston, his counterpart at the Foreign Office, Teddy suspects he knows the real reason for the Country’s current travails.
Here he is, doing his best Boris Johnson impression :

“No wonder Brexit’s a Cat-astrophe !”

In an attempt to distract myself from the prospect of the country being ruined by this feline conspiracy, I’ve been playing with my new Raspberry Pi 3 Model B.
At some point, I’m going to want to connect remotely to the Desktop on the Pi. What follows is how I can do this using VNC…

Why VNC ?

Regular readers (hello Mum!) may be wondering why I’m returning to this topic, having previously used RDP to remote to a Pi.

Well, this newer model Raspberry Pi is running Raspbian Stretch (version 9) as opposed to the older machine, which was running Jessie (version 8).
Stretch has VNC included by default so it makes sense to use this protocol for connecting to the desktop remotely.

Now, the more observant among you will notice that you can simply and easily enable VNC in the same way as you can enable SSH during initial setup.
You can see this option in the Preferences/Raspberry Pi Configuration menu when you click on the Interfaces tab :


If, like me, you don’t discover that this is the way to go until after you’ve put away all those old peripherals you had to dig out of the attic to set up your Pi, then fear not – you can also do this from the command line…

On the Pi

First of all, we want to make sure that we do, in fact, have the required VNC software on the Pi.
So, once I’ve connected to the Pi via SSH, I can run this from the command line :

apt list realvnc*

…which should come back with :

Now we want to configure VNC on the Pi so, on the command line, we need to enter …

sudo raspi-config

This will bring up the Software Configuration Tool screen below.
Using the arrow keys on the keyboard, navigate to the line that starts 5 Interface Options and hit the [Enter] key.


…which brings up a sub-menu. Here, you need to navigate to P3 VNC and hit [Enter]


…and [Enter] again to confirm you want to enable VNC…

…before you receive a message confirming that VNC is now enabled :

To exit, hit [Enter]

I’m not sure if it’s strictly necessary, but at this point, I re-started the Pi by entering :

sudo reboot
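
Incidentally, if you’d rather script the whole thing, recent versions of raspi-config also have a non-interactive mode. I believe this should achieve the same result as the menu navigation above ( 0 meaning “enabled”) :

sudo raspi-config nonint do_vnc 0
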
In Ubuntu

Meanwhile, on the Ubuntu machine (I’m running Ubuntu 16.04), we need to head over to the VNC Viewer download site.
As I’m on a 64-bit version of Ubuntu, I chose the DEB x64 version to download.

Incidentally, if you want to tell whether you’re running a 32-bit or 64-bit Linux distro, you can run :

uname -i

If this returns x86_64 then you’re on a 64-bit platform.

Anyhow, when prompted, I opted to open the downloaded file – VNC-Viewer-6.19.107-Linux-x64.deb with Software Install


…which results in…


Now we simply click Install and enter our password when prompted.

Once the installation is completed we’re ready to connect remotely.

Running VNC Viewer

To start the viewer, you can simply open a Terminal and run :

vncviewer

After you’ve accepted the licence, enter the address of the server to connect to (in my case pithree) :

You’ll then be prompted to enter the username and password of a user on the Pi :


Press OK and…


You can tweak the display to make it a bit more practical.
In the VNC Window, move the cursor to the top of the screen so that the Viewer menu slides down then select the cog-wheel icon (second from the right) :

In the Options tab, set Picture Quality to High and Scaling to Scale to fit window :

After this, the VNC viewport should scale to the size of the VNC window itself.

Now all I need is something else to distract myself from the ongoing battle between Project Fear and Project Farce.

Using a Proxy User for Code Deployment in Oracle

Mon, 2019-01-14 12:54

“Proxy Users !” exclaimed Debbie.
“I say, that’s rather harsh don’t you think ?” came the rather startled reply from her boss.
Debbie sighed. They were in the midst of a discussion on the subject of how best to deploy database changes to multiple schemas.
“I meant”, she replied with iron patience, “that we could set up a proxy user to connect as each application owner in turn. That way, we wouldn’t have to grant those CREATE ANY privileges that get auditors so worried”.
“Oh, I see”, said Mike, who didn’t.
Not for the first time, Debbie wondered whether she had been lumbered with a less competent man as her boss simply in order to imbue this post with a semblance of social realism.
“I think”, she said, “that it’s time to move on to the techie bit.”
Debbie is right, as usual…

In order to make a change in Oracle ( or any database for that matter), you need at some point to connect to the database and run some SQL.
This is relatively straightforward if you are using the schema that is – or will be – the owner of the objects you are creating or changing.

However, this may not be possible if the account is identified externally or – in more recent releases – it’s a schema only account.
So, what is the best way to setup and use an account to make such changes in other schemas ?

The changes

NOTE : whilst the code examples that follow are written using Oracle 18c, this technique will also work in releases as far back as 10g.
I’ll point out any container specific syntax as it comes up.

We want to create a new table in the HR schema :

create table new_dept as select * from departments
/

create or replace view it_vw as 
    select * from departments where department_id in (60, 210, 230)
/

As you’d expect, HR itself has the required permissions to do this.
In 18c, you may have to switch to the container in which the HR user is installed first…

alter session set container = xepdb1

…but the privileges are pretty much the same…

select privilege 
from dba_sys_privs
where grantee = 'HR'
union
select privilege
from dba_sys_privs
where grantee in ( 
    select granted_role 
    from dba_role_privs 
    where grantee = 'HR')
order by 1
/


PRIVILEGE                               
----------------------------------------
ALTER SESSION                           
CREATE CLUSTER                          
CREATE DATABASE LINK                    
CREATE INDEXTYPE                        
CREATE OPERATOR                         
CREATE PROCEDURE                        
CREATE SEQUENCE                         
CREATE SESSION                          
CREATE SYNONYM                          
CREATE TABLE                            
CREATE TRIGGER                          
CREATE TYPE                             
CREATE VIEW                             
UNLIMITED TABLESPACE                    

14 rows selected. 

Oracle’s system privileges are not really that granular. You can either have the privileges to create objects in your own schema, or you can have privileges to create them in any schema.
So, if we follow Mike’s suggestion, we’d end up with something like…

create user sledgehammer identified by nutcracker
/

grant create session,
    create any table,
    create any view,
    select any table
    to sledgehammer
/

Which would do what we need…

-- If we're using a container database, check that we're in the right one...
select sys_context('userenv', 'con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')                                                                                                                                      
---------------------------------
XEPDB1                                                                                                                                                                 

alter session set current_schema = HR;

Session altered.

create table new_dept as select * from departments;

Table created.

create or replace view it_vw as
    select *
    from departments
    where department_id in (60, 210, 230)
/

View created.

The problem here is that it doesn’t so much ignore the principle of least privilege as walk up to it and make a rude gesture…

create table system.oops(unexpected varchar2(1));

Table created.
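
Before going any further, we should probably tidy up after that demonstration ( as a suitably privileged user) :

drop table system.oops purge;
drop user sledgehammer;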

Fortunately, as Debbie knows, there is a better way…

The Proxy Schema

To start with, it’s probably worth taking a quick peek at the documentation for Proxy Users, which suggests that the only privilege such a user should be granted is CREATE SESSION.

Therefore, we can create our proxy user like this :

grant create session to app_deploy identified by debbieknowsbest
/

Next, we need to configure the HR user to allow the proxy user to connect to it :

alter user hr grant connect through app_deploy
/

Now we can connect to the HR schema using the proxy…

connect app_deploy[hr]@xepdb1
Enter password: 
Connected.
SQL> 

…which we can confirm with this query…

select sys_context('userenv', 'session_user') as "Session User",
    sys_context('userenv', 'proxy_user') as "Proxy User" 
from dual;

Session User  Proxy User
------------ ------------------------------
HR           APP_DEPLOY

Now we can run our DDL statements in the HR schema…

create table new_dept as select * from departments
/

Table created.

create or replace view it_vw as
    select *
    from departments
    where department_id in (60, 210, 230)
/

View created

If we now connect as a sufficiently privileged user, we can confirm that these objects have been created in the HR schema :

select owner, object_name, object_type
from dba_objects
where object_name in ('NEW_DEPT', 'IT_VW')
/

OWNER                          OBJECT_NAME                    OBJECT_TYPE            
------------------------------ ------------------------------ -----------------------
HR                             IT_VW                          VIEW                   
HR                             NEW_DEPT                       TABLE                  

We can now use the APP_DEPLOY schema for any code releases. All we need to do to make an application owner account accessible in this way is :

alter user <application_owner> grant connect through app_deploy
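
If you later want to check which application owners are reachable through the proxy, the PROXY_USERS dictionary view should list them :

select proxy, client
from proxy_users
where proxy = 'APP_DEPLOY'
/
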
Further Reading

Debbie isn’t the only person to wear clogs of cleverness when it comes to Proxy Users.

There’s this article by Norm on the subject.
Additionally, it’s covered in the context of Schema Only accounts in this post on the Pythian site by Simon Pane.

Installing and Configuring Oracle 18cXE on CentOS

Thu, 2019-01-03 11:50

After seven years, the much anticipated Oracle 18c Express Edition (XE) has finally seen the light of day.
Reaction to the new version can be summed up as…

It’s the Dog’s Wotsits !

Gerald Venzl, the person we have to thank for this new and vastly improved version of XE, has already published an installation guide.

At this point you may well ask yourself what – apart from gratuitous puppy pics and cheesy-snack-based puns – is the difference between that post and this.

Well, if you’re a long-time user of 11gXE and you’re looking to upgrade then you will find 18cXE a rather different proposition.
The introduction of Multitenant databases aside, 18cXE differs greatly from its predecessor in terms of its functional scope.
Whereas 11gXE was – broadly speaking – functionally equivalent to Oracle Standard Edition, the approach for 18cXE has been to shoe-horn in as many Enterprise Edition features as possible.
No doubt, this will leave you anxious to play with the new version. However, there are some “home comforts” that were present in the old version that you’ll need to configure yourself this time around.
What I’m going to go through is :

  • Installing 18cXE on a Red Hat compatible distro (CentOS7)
  • Connecting to the database and exploring the containers
  • Checking the TNS Listener
  • Manual and Automatic Startup and Shutdown of the database and listener
  • Setting and persisting the Oracle environment variables
  • Accessing Enterprise Manager Express
  • Installing the HR demo application in a Pluggable Database (PDB)
  • Configuring the firewall to allow remote access to Oracle

The steps documented here have been performed on a vanilla installation of CentOS7. As such, they should work pretty much unaltered for other Red Hat based distros based on or similar to Red Hat Enterprise Linux (RHEL) version 7.

Before all of that though…

Resource Limits for 18cXE

As with previous versions of XE, there are limitations on the system resources that will be used. These include :

  • 2 CPU cores ( up from 1 in 11gXE)
  • 2 GB RAM ( 1 GB in 11g)
  • 12GB of User Data ( 11GB in 11g)
  • A maximum of 3 PDBs

In addition, you can only install one instance of XE per host. However, it does seem to be possible to install XE alongside other Oracle Editions on the same host.

One final point to note – the amount of space taken up by the initial installation is not insignificant. The space usage in the $ORACLE_BASE is :

sudo du -h -d1
5.3G	./product
76K	./oraInventory
0	./checkpoints
12M	./diag
20M	./cfgtoollogs
4.9M	./admin
0	./audit
3.4G	./oradata
8.7G	.

This is worth bearing in mind when sizing your environment.
Additionally, if you’re tight on space, you may also consider removing the rpm files once the installation is complete as this frees up 2.4GB (although not in $ORACLE_BASE).
Speaking of rpm files…

Downloading and installation

Head over to the Downloads page and download the Oracle Database 18c Express Edition for Linux x64 version.

If you’re running a Red Hat compatible distro that’s not Oracle Linux, you’ll also need the Oracle Database Preinstall RPM for RHEL and CentOS. I’m running on CentOS7 so I’ll get the Release 7 version of this file.

At this point, we should now have two rpm files :

-rw-rw-r--. 1 mike mike      18244 Dec 25 17:37 oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
-rw-rw-r--. 1 mike mike 2574155124 Dec 25 17:37 oracle-database-xe-18c-1.0-1.x86_64.rpm

Next, we need to become root for a bit. If you’re worried that all this power might go to your head, fear not, I’ll let you know when we can become mere mortals again. For now though :

sudo -s
[sudo] password for mike: 
#

Now we can install the RPMs. The preinstall first (note that you need to have an internet connection available when running this)…

yum localinstall oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm 

…This results in :

Loaded plugins: fastestmirror, langpacks
Examining oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm: oracle-database-preinstall-18c-1.0-1.el7.x86_64
Marking oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-preinstall-18c.x86_64 0:1.0-1.el7 will be installed
--> Processing Dependency: compat-libcap1 for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
Loading mirror speeds from cached hostfile
 * base: mirrors.vooservers.com
 * extras: mirror.sov.uk.goscomb.net
 * updates: mirrors.vooservers.com
base                                                                                     | 3.6 kB  00:00:00     
extras                                                                                   | 3.4 kB  00:00:00     
updates                                                                                  | 3.4 kB  00:00:00     
--> Processing Dependency: compat-libstdc++-33 for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
--> Processing Dependency: ksh for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
--> Processing Dependency: libaio-devel for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
--> Running transaction check
---> Package compat-libcap1.x86_64 0:1.10-7.el7 will be installed
---> Package compat-libstdc++-33.x86_64 0:3.2.3-72.el7 will be installed
---> Package ksh.x86_64 0:20120801-139.el7 will be installed
---> Package libaio-devel.x86_64 0:0.3.109-13.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================
 Package                        Arch   Version           Repository                                        Size
================================================================================================================
Installing:
 oracle-database-preinstall-18c x86_64 1.0-1.el7         /oracle-database-preinstall-18c-1.0-1.el7.x86_64  55 k
Installing for dependencies:
 compat-libcap1                 x86_64 1.10-7.el7        base                                              19 k
 compat-libstdc++-33            x86_64 3.2.3-72.el7      base                                             191 k
 ksh                            x86_64 20120801-139.el7  base                                             885 k
 libaio-devel                   x86_64 0.3.109-13.el7    base                                              13 k

Transaction Summary
================================================================================================================
Install  1 Package (+4 Dependent packages)

Total size: 1.1 M
Total download size: 1.1 M
Installed size: 4.0 M
Is this ok [y/d/N]: 

Enter ‘y’ and…

Downloading packages:
(1/4): compat-libcap1-1.10-7.el7.x86_64.rpm                                              |  19 kB  00:00:00     
(2/4): compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm                                       | 191 kB  00:00:00     
(3/4): libaio-devel-0.3.109-13.el7.x86_64.rpm                                            |  13 kB  00:00:00     
(4/4): ksh-20120801-139.el7.x86_64.rpm                                                   | 885 kB  00:00:00     
----------------------------------------------------------------------------------------------------------------
Total                                                                           1.8 MB/s | 1.1 MB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : compat-libstdc++-33-3.2.3-72.el7.x86_64                                                      1/5 
  Installing : libaio-devel-0.3.109-13.el7.x86_64                                                           2/5 
  Installing : compat-libcap1-1.10-7.el7.x86_64                                                             3/5 
  Installing : ksh-20120801-139.el7.x86_64                                                                  4/5 
  Installing : oracle-database-preinstall-18c-1.0-1.el7.x86_64                                              5/5 
  Verifying  : ksh-20120801-139.el7.x86_64                                                                  1/5 
  Verifying  : compat-libcap1-1.10-7.el7.x86_64                                                             2/5 
  Verifying  : oracle-database-preinstall-18c-1.0-1.el7.x86_64                                              3/5 
  Verifying  : libaio-devel-0.3.109-13.el7.x86_64                                                           4/5 
  Verifying  : compat-libstdc++-33-3.2.3-72.el7.x86_64                                                      5/5 

Installed:
  oracle-database-preinstall-18c.x86_64 0:1.0-1.el7                                                             

Dependency Installed:
  compat-libcap1.x86_64 0:1.10-7.el7   compat-libstdc++-33.x86_64 0:3.2.3-72.el7 ksh.x86_64 0:20120801-139.el7
  libaio-devel.x86_64 0:0.3.109-13.el7

Complete!

Now for the main event…

yum localinstall oracle-database-xe-18c-1.0-1.x86_64.rpm

…which results in ( after quite a while) …

Loaded plugins: fastestmirror, langpacks
Examining oracle-database-xe-18c-1.0-1.x86_64.rpm: oracle-database-xe-18c-1.0-1.x86_64
Marking oracle-database-xe-18c-1.0-1.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-xe-18c.x86_64 0:1.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================
 Package                        Arch           Version       Repository                                    Size
================================================================================================================
Installing:
 oracle-database-xe-18c         x86_64         1.0-1         /oracle-database-xe-18c-1.0-1.x86_64         5.2 G

Transaction Summary
================================================================================================================
Install  1 Package

Total size: 5.2 G
Installed size: 5.2 G
Is this ok [y/d/N]: 

Once again, enter ‘y’…then go and get a coffee (or other beverage if you prefer), this next bit takes a while…

Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-database-xe-18c-1.0-1.x86_64                                                          1/1 
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure Oracle Database XE, optionally modify the parameters in '/etc/sysconfig/oracle-xe-18c.conf' and then execute '/etc/init.d/oracle-xe-18c configure' as root.
  Verifying  : oracle-database-xe-18c-1.0-1.x86_64                                                          1/1 

Installed:
  oracle-database-xe-18c.x86_64 0:1.0-1                                                                         

Complete!

Finally, we need to run the configuration.
NOTE : I quit the previous session and began a new one as root before running this.

sudo -s
/etc/init.d/oracle-xe-18c configure

…once again this takes a while to complete, which is fair enough because there’s a bit going on :

Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password: 
*********
Enter SYSTEM user password: 
*********
Enter PDBADMIN User Password: 
**********
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.

Connect to Oracle Database using one of the connect strings:
     Pluggable database: frea.virtualbox:1522/XEPDB1
     Multitenant container database: frea.virtualbox:1522
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE

At this point we can stop being root.

Connecting to the database

First up, we need to make sure that the appropriate environment variables are set. So run the following, entering XE when prompted for the ORACLE_SID…

 . oraenv
ORACLE_SID = [mike] ? XE
ORACLE_BASE environment variable is not being set since this
information is not available for the current user ID mike.
You can set ORACLE_BASE manually if it is required.
Resetting ORACLE_BASE to its previous value or ORACLE_HOME
The Oracle base has been set to /opt/oracle/product/18c/dbhomeXE

Now we should be able to connect to the database via sqlplus :

sqlplus system

We can now confirm that the database is up :

select instance_name, version, status
from v$instance;

INSTANCE_NAME    VERSION	      STATUS
---------------- ----------------- ------------
XE		    18.0.0.0.0	      OPEN

One significant new feature of 18c XE as compared with its predecessor is the capability to use the database as a container (CDB) for zero or more Pluggable Databases (PDBs).
In the case of XE, you can have up to three PDBs and we can see that one has already been created as part of the installation :

select con_id, name
from v$containers;

    CON_ID NAME
---------- ------------------------------
	 1 CDB$ROOT
	 2 PDB$SEED
	 3 XEPDB1

In this case :

  • CDB$ROOT is the Container Database
  • PDB$SEED is a read-only template for creating PDBs
  • XEPDB1 is a PDB

In the CDB, we can see details of the PDB seed database and the PDB itself :

select con_id, name, open_mode 
from v$pdbs;

    CON_ID NAME 			  OPEN_MODE
---------- ------------------------------ ----------
	 2 PDB$SEED			  READ ONLY
	 3 XEPDB1			  READ WRITE

However, if we switch to the PDB…

alter session set container = XEPDB1;

…the same query returns information only about the current PDB…

select con_id, name, open_mode 
from v$pdbs;

    CON_ID NAME 			  OPEN_MODE
---------- ------------------------------ ----------
	 3 XEPDB1			  READ WRITE

If you want to check which PDB you are in you can use :

select sys_context('userenv', 'con_name') from dual;

In the CDB this should return :

CDB$ROOT

In our PDB however, we should get :

XEPDB1

Checking the Listener

For ongoing administration operations from the OS, you’ll need to add your user to a couple of groups. In my case, my user is “mike” :

sudo usermod -a -G dba mike
sudo usermod -a -G oinstall mike

Once you’ve added these groups to your user you need to log off and log on again for them to take effect.
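
You can confirm that the groups have been assigned by running :

id mike

…and checking that dba and oinstall appear in the list of groups.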
You should now be able to check the status of the Net Listener by means of the lsnrctl utility.

Having first run oraenv as before to set your environment…

lsnrctl status

When the listener is up, you should get something like :

LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 26-DEC-2018 20:38:31

Copyright (c) 1991, 2018, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=frea.virtualbox)(PORT=1522)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 18.0.0.0.0 - Production
Start Date                26-DEC-2018 19:24:54
Uptime                    0 days 1 hr. 13 min. 37 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           XE
Listener Parameter File   /opt/oracle/product/18c/dbhomeXE/network/admin/listener.ora
Listener Log File         /opt/oracle/diag/tnslsnr/frea/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=frea.virtualbox)(PORT=1522)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1522)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=127.0.0.1)(PORT=5500))(Security=(my_wallet_directory=/opt/oracle/product/18c/dbhomeXE/admin/XE/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "7de2a3259d9c3747e0530f84f25ce87c" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "xepdb1" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully

If however, it’s not running, you’ll get :


LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 26-DEC-2018 20:40:30

Copyright (c) 1991, 2018, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=frea.virtualbox)(PORT=1522)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1522)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 2: No such file or directory

This brings us onto…

Starting and Stopping Oracle

The first time you restart the server after the installation, you will find that neither the database nor the TNS Listener are running.

To start them up from the command line you can run :

sudo /etc/init.d/oracle-xe-18c start

To shut them down, it’s :

sudo /etc/init.d/oracle-xe-18c stop

If, like me, you are configuring your server for the sole or main purpose of running Oracle, then you may want the database and listener to start when the server does.

To do this, switch to root…

sudo -s

…and set the oracle-xe-18c service to start on boot…

systemctl daemon-reload
systemctl enable oracle-xe-18c

The output will probably be something like :

oracle-xe-18c.service is not a native service, redirecting to /sbin/chkconfig.
Executing /sbin/chkconfig oracle-xe-18c on
[root@frea mike]# systemctl status oracle-xe-18c
 oracle-xe-18c.service - SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services.
   Loaded: loaded (/etc/rc.d/init.d/oracle-xe-18c; bad; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:systemd-sysv-generator(8)

If you then reboot the server, you should be able to confirm that the service is up by running…

systemctl status -l oracle-xe-18c

…which should return something like …

 oracle-xe-18c.service - SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services.
   Loaded: loaded (/etc/rc.d/init.d/oracle-xe-18c; bad; vendor preset: disabled)
   Active: active (exited) since Fri 2018-12-28 13:20:23 GMT; 1min 48s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 3475 ExecStart=/etc/rc.d/init.d/oracle-xe-18c start (code=exited, status=0/SUCCESS)
    Tasks: 0

Dec 28 13:19:59 frea.virtualbox systemd[1]: Starting SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services....
Dec 28 13:19:59 frea.virtualbox oracle-xe-18c[3475]: Starting Oracle Net Listener.
Dec 28 13:19:59 frea.virtualbox su[3510]: (to oracle) root on none
Dec 28 13:20:02 frea.virtualbox oracle-xe-18c[3475]: Oracle Net Listener started.
Dec 28 13:20:02 frea.virtualbox oracle-xe-18c[3475]: Starting Oracle Database instance XE.
Dec 28 13:20:02 frea.virtualbox su[3864]: (to oracle) root on none
Dec 28 13:20:23 frea.virtualbox oracle-xe-18c[3475]: Oracle Database instance XE started.
Dec 28 13:20:23 frea.virtualbox systemd[1]: Started SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services..

Setting Oracle Environment Variables

You’ll have noticed that, up until now, we have had to set some environment variables every time we want to interact with the database from the server command line.
To do this, we need to run :

. oraenv

When you run this script, you will always get asked for the ORACLE_SID value :

. oraenv
ORACLE_SID = [mike] ? XE
The Oracle base has been set to /opt/oracle

We can see that the oraenv script affects four environment variables :

echo $ORACLE_SID
XE
echo $ORACLE_HOME
/opt/oracle/product/18c/dbhomeXE
echo $ORACLE_BASE
/opt/oracle
echo $PATH
...snip.../opt/oracle/product/18c/dbhomeXE/bin

You have some options as to how you can manage these environment variables.
One option is to set up some environment variables to prevent oraenv prompting for the SID every time it’s run.
Another is to set the environment variables automatically for all sessions.

Stopping oraenv prompting for input

To do this we need to ensure that the ORACLE_SID environment variable is set prior to invoking the script and also that the ORAENV_ASK variable is set to NO.
We can see the result of this with the following quick test :

export ORACLE_SID=XE
export ORAENV_ASK=NO
. oraenv
The Oracle base has been set to /opt/oracle

To set these environment variables automatically, we can simply define them in the /etc/profile.d/sh.local script :

sudo nano /etc/profile.d/sh.local

Add the two variable assignments :

export ORACLE_SID=XE
export ORAENV_ASK=NO

You will need to logout and login again for this change to take effect.

Setting the environment variables automatically

If you want to dispense with the need to call the oraenv script altogether, you can simply add a script with a .sh extension to the /etc/profile.d directory…

sudo nano /etc/profile.d/set_oraenv.sh

…which should contain…

export ORACLE_SID=XE
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
export ORACLE_BASE=/opt/oracle
export PATH=$PATH:/opt/oracle/product/18c/dbhomeXE/bin

Next time you fire up a shell ( assuming you’re using bash – the default on CentOS), these environment variables will be set.

Enterprise Manager Express

Whilst 11gXE came with a database home page which consisted of an Apex application to administer XE, 18c does not come with Apex.
The good news is that, in common with all other 18c Editions, it comes with Enterprise Manager Express – a purpose built Admin tool.
Accessing it should be fairly simple. You just open a web browser (Firefox being the default on CentOS) and point it at the address specified in the output from our configuration run earlier. In our case this is :

https://localhost:5500/em

However, you may be ever so slightly disappointed…

Fortunately for us, someone was good enough to document the solution for this particular problem.

In short, we need to follow the link to the Adobe download site and select the .tar.gz option for the Flash Download :

After this, we should now have the following :

ls -l *.tar.gz
-rwxrwx---. 1 mike mike 9045426 Dec 22 15:02 flash_player_npapi_linux.x86_64.tar.gz

Next, we extract the libflashplayer.so file from the archive…

tar -xf flash_player_npapi_linux.x86_64.tar.gz *libflashplayer.so
ls -l libflashplayer.so
-rw-rw-r--. 1 mike mike 16607528 Nov 29 23:06 libflashplayer.so

…and copy it to the location that Firefox expects it to be…

sudo cp libflashplayer.so /usr/lib64/mozilla/plugins/.

…before finally setting the file ownership and permissions…

cd /usr/lib64/mozilla/plugins
sudo chmod 755 libflashplayer.so
sudo chgrp root libflashplayer.so
sudo chown root libflashplayer.so

Our file should now look like this :

libflashplayer.so
-rwxr-xr-x. 1 root root 16607528 Dec 31 17:05 libflashplayer.so

If we go to the EM page now :

Activate the plugin and login as sys (as sysdba) :

Eventually, you should see the Enterprise Manager Home Page :

I’ll leave you to explore for a bit.

Installing the HR demo application

Unlike its predecessor, 18cXE does not come with the HR demo application pre-installed. However, it does include the scripts that enable us to perform this installation ourselves.

As this is an application as opposed to a system-wide utility, we’re going to install it in the PDB rather than the main CDB.

We’ll need to switch to the oracle OS user so that we have permissions to write to the log file that we’re going to specify. Then we connect to the database…

sudo su oracle
sqlplus system

Once connected :

alter session set container = XEPDB1;
select sys_context('userenv', 'con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
XEPDB1

Now we’ve confirmed that we’re in the PDB, simply run :

@$ORACLE_HOME/demo/schema/human_resources/hr_main.sql

This script will prompt for :

  1. the password for the HR user – enter an appropriate password and remember it as you will need it to access the new HR schema
  2. the default tablespace to use for the HR user – enter USERS
  3. the temporary tablespace to use for the HR user – enter TEMP
  4. the path of the log file written by this installation script – enter $ORACLE_HOME/demo/schema/log

NOTE – the script does not obfuscate the password you enter but echoes it to the screen. In any case, you may consider that changing it shortly after installation is a wise move.
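
Incidentally, the script will also accept these four values as positional parameters so, if you’re scripting the install, something like this should skip the prompts ( the password is still visible, of course) :

@$ORACLE_HOME/demo/schema/human_resources/hr_main.sql mysupersecretpassword USERS TEMP $ORACLE_HOME/demo/schema/log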

The output will look something like this :


specify password for HR as parameter 1:
Enter value for 1: mysupersecretpassword

specify default tablespeace for HR as parameter 2:
Enter value for 2: USERS

specify temporary tablespace for HR as parameter 3:
Enter value for 3: TEMP

specify log path as parameter 4:
Enter value for 4: $ORACLE_HOME/demo/schema/log


PL/SQL procedure successfully completed.


User created.


User altered.


User altered.


Grant succeeded.


Grant succeeded.


Session altered.

...snip...
Comment created.


Comment created.


Comment created.


Commit complete.


PL/SQL procedure successfully completed.

We should now see that we have a “local” user called HR :

select account_status, default_tablespace, temporary_tablespace, common
from dba_users
where username = 'HR';

ACCOUNT_STATUS                   DEFAULT_TABLESPACE             TEMPORARY_TABLESPACE           COM
-------------------------------- ------------------------------ ------------------------------ ---
OPEN                             USERS                          TEMP                           NO

As the account is not locked, we can connect to it from SQL*Plus. Note that we’ll have to use the connect string for the PDB (as specified in the installation feedback earlier) as the schema does not exist in the CDB :

sqlplus hr/mysupersecretpassword@frea.virtualbox:1522/xepdb1

Alternatively we could use a method which doesn’t record the password in the bash history…

sqlplus /nolog

SQL*Plus: Release 18.0.0.0.0 - Production on Wed Jan 2 16:55:31 2019
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

SQL> conn hr/mysupersecretpassword@frea.virtualbox:1522/xepdb1
Connected.
SQL> 

If you want to see the difference, just try both connection methods in the same Terminal session and then run :

history |grep sqlplus

It’s probably worth remembering this if you are running on a shared environment.

Anyway, we can now see that the HR schema has the following objects :

Accessing the database from remote machines

Up to this point we’ve been working on the database server itself. This is fine if you’re running your Red-Hat based system as your desktop ( although in that case it’s more likely to be Fedora than CentOS), but if you want to be able to access it remotely, you’ll need to configure the firewall to allow remote access to specific ports.

Our objectives here are :

  1. to allow access to the database from a client machine via TNS
  2. to allow access to the Enterprise Manager Express site

For CentOS 7 the default firewall is firewalld :

systemctl status firewalld

 firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-01-01 14:53:08 GMT; 4min 30s ago
     Docs: man:firewalld(1)
 Main PID: 2842 (firewalld)
    Tasks: 2
   CGroup: /system.slice/firewalld.service
           └─2842 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Jan 01 14:53:07 frea.virtualbox systemd[1]: Starting firewalld - dynamic fir....
Jan 01 14:53:08 frea.virtualbox systemd[1]: Started firewalld - dynamic fire....
Hint: Some lines were ellipsized, use -l to show in full.

On my client machine, I’ve added the following entries to the $ORACLE_HOME/network/admin/tnsnames.ora file :

XE18 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = frea.virtualbox)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = XE)
    )
  )
  
xepdb1  =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = frea.virtualbox)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = xepdb1)
    )
  )  

XE18 will allow me to connect to the CDB and xepdb1 will let me connect to the PDB.

At the moment, when we try to connect to the database from a client machine we hit…

sqlplus system@XE18

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 1 15:10:34 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password: 
ERROR:
ORA-12543: TNS:destination host unreachable

Back on the server, we can remedy this by issuing the following command to open the port that the TNS Listener is listening on ( in my case 1522) :

sudo firewall-cmd --permanent --add-port=1522/tcp
success

…and verify with :

sudo firewall-cmd --list-ports
1522/tcp

This then allows the remote connection :

sqlplus system@XE18

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 1 15:12:44 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password: 
Last Successful login time: Mon Dec 31 2018 23:22:40 +00:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production

SQL> 

As for EM Express, we need to do the same for the port it’s running on (5500 in this case) :

sudo firewall-cmd --permanent --add-port=5500/tcp

However, we also need to connect to the database as SYSTEM and allow remote access by running:

exec dbms_xdb_config.SetListenerLocalAccess(false);

Once this is done we should now be able to access the EM Express home page remotely …
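
Incidentally, if you ever forget which port EM Express is listening on, you can ask the database itself :

select dbms_xdb_config.gethttpsport
from dual;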

References

As ever, I’ve found a number of articles that have proved useful in writing this post. I’d like to make a point of saying thank-you to the people who have essentially provided free consultancy to me (and you, if you’ve followed this guide). So thanks to…

Debbie Saves Christmas – Database Development in a Devops Wonderland : Re-runnable DDL

Wed, 2018-12-19 11:45

Debbie felt a shiver run down her spine. To be fair, that wasn’t much of a surprise since Lapland at this time of the year does tend to be a little chilly.
However, it wasn’t the weather that was the cause of her discomfort. Someone high up in the IT Department of her employer, The National Elf ( aka Santa’s Grotto) had decided that Continuous Integration was the way to go and had decreed that it should be used forthwith across all projects and technologies in the Company.
This included the application that Debbie was responsible for.
Written around 15 years ago, this Stock Control Application had already survived one major database upgrade but was now resolutely “stuck” on Oracle 11g.
The thing about so many modern software development techniques was that they were based on the premise that code was file-based. Of course, this was also true ( or at least, true enough) for some database objects, but tables were a little different.
You couldn’t simply “replace” a table like you could any other program as doing so would destroy any data in that table. For this reason, any changes required to tables for a mature application such as this would be applied by means of DDL ALTER statements.
Of course, there are tools around for this sort of thing. Liquibase, FlexDeploy – these were just two of the tools that Debbie had no chance of getting approval to use in the face of a bureaucracy that made the Vogon Civil Service look like it was following Extreme Programming.
If she was going to get her changes through by her Christmas Eve deadline, she would have to get creative…

Object types and their inherent “re-runnability”

Debbie began by making a list of the types of database object in her application and whether or not they could be generated by a re-runnable DDL statement – i.e. a CREATE OR REPLACE statement.
Those that could included :

  • packages (specifications and bodies)
  • functions
  • procedures
  • views
  • triggers
  • types (specifications and bodies)

These were more of a challenge :

  • tables
  • sequences
  • indexes
  • constraints
  • materialized views

The code changes that Debbie needed to deploy mostly consisted of object types of the latter group…

Elf and Safety Gone Mad !

The staff in the warehouse had gotten fed up with loading parcels that didn’t actually need to be delivered. Elf and Safety had gotten involved, saying something about repetitive heavy lifting and…well…Debbie had come up with the following code.

First of all, there were a couple of new tables :

create table countries(
    iso_code varchar2(3),
    country_name varchar2(4000)
)
/

…and…

create table deliveries(
    id number constraint del_pk primary key,
    recipient_name varchar2(4000) not null,
    date_of_birth date not null,
    country_code varchar2(3),
    belief_end_date date, 
    constraint del_uk unique ( recipient_name, date_of_birth)
)
/

…then a sequence for the DELIVERIES.ID values (this is 11g remember – identity columns are not yet a thing)…

create sequence del_id_seq
    start with 1
    increment by 1
    nocycle
/

In subsequent iterations of her development, Debbie decided to add a Primary Key…

alter table countries add constraint coun_pk primary key (iso_code)
/

…rename a column…

alter table deliveries 
    rename column country_code to coun_iso_code
/

… add a Foreign Key…

alter table deliveries add constraint 
    del_coun_fk foreign key (coun_iso_code) references countries(iso_code)
/

…and a Not Null constraint…

alter table deliveries modify coun_iso_code not null
/

…a physical column…

alter table deliveries add naughty_flag varchar2(1) not null
/

…a check constraint…

alter table deliveries add constraint del_nf_chk check (naughty_flag in ('Y', 'N'))
/

…and a virtual column…

alter table deliveries add 
    active_flag generated always as ( 
        case when belief_end_date is not null then 'Y' else 'N' end) virtual visible
/

Finally, there was a view on the table

create or replace view deliveries_vw as
    select del.recipient_name, coun.country_name
    from deliveries del
    inner join countries coun 
        on coun.iso_code = del.coun_iso_code
    where del.active_flag = 'Y'
    and del.naughty_flag = 'N'
/

After all of that, Debbie then needed to add some static (reference) data :

-- Furthest stop on the round !
insert into countries(iso_code, country_name)
values('ATA', 'ANTARCTICA')
/

insert into countries(iso_code, country_name)
values('CAN', 'CANADA')
/

insert into countries(iso_code, country_name)
values('COL', 'COLOMBIA')
/

-- Company Head Office is here...
insert into countries(iso_code, country_name)
values('FIN', 'FINLAND')
/


insert into countries(iso_code, country_name)
values('DEU', 'GERMANY')
/


insert into countries(iso_code, country_name)
values('IND', 'INDIA')
/


insert into countries(iso_code, country_name)
values('MDG', 'MADAGASCAR')
/

insert into countries(iso_code, country_name)
values('NZL', 'NEW ZEALAND')
/


-- Can't find the ISO code for Wales for some reason !
insert into countries(iso_code, country_name)
values('GBR', 'UNITED KINGDOM')
/


insert into countries(iso_code, country_name)
values('USA', 'UNITED STATES OF AMERICA')
/

commit;

It’s probably worth noting that Debbie was following the company’s coding standards which – among other things – specified that :

  1. Primary Key, Foreign Key and Check constraints were always explicitly named (although not NOT NULL constraints)
  2. The definitions of tables, constraints etc would not change between runs – i.e. if the Primary Key columns on a table were to change then the appropriate DDL would be written to drop the existing Primary Key
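
So, if (say) the COUNTRIES Primary Key definition did need to change, the deployment would be expected to include something like this purely hypothetical sketch :

-- hypothetical example - COUN_PK is changing, so drop the old definition first...
alter table countries drop constraint coun_pk
/
-- ...then re-create it with the new definition
alter table countries add constraint coun_pk primary key (iso_code, country_name)
/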

These changes were executed via a master release script :

prompt Creating new tables

@countries_tbl.sql
@deliveries_tbl.sql

prompt Creating a sequence

@del_id_seq.sql

prompt Adding PK to a table

@coun_pk.sql

prompt Renaming a column 

@rename_del_coun_code.sql

prompt Adding Foreign Key

@del_coun_fk.sql

prompt adding a Not Null constraint

@del_iso_code_nn.sql

prompt adding new column 

@naughty_flag.sql

prompt creating check constraint

@del_nf_chk.sql

prompt adding virtual column

@del_active_virt.sql

prompt creating view

@deliveries_vw.sql

prompt creating COUNTRIES records...

@add_countries.sql

prompt Deployment completed.

Whilst the first run of this script would work as expected…

…second and subsequent runs would cause the CI server to light up like a Christmas Tree (and not in a good way)…

So, armed with a roll of PL/SQL and a sprinkling of SQL*Plus fairy dust, Debbie prepared to write some re-runnable DDL…

Option 1 – Exceptional Scripting

The first option was simply to anticipate the errors that might come up when the same DDL statement was executed multiple times.
The general format of such a script would be something like :

declare
    e_obj_exists exception;
    
    -- ORA-00955: name is already used by an existing object
    pragma exception_init( e_obj_exists, -955);    
begin
    -- nested blocks so that each exception can be handled individually and the script can then continue
    begin
        execute immediate 'create table sleeps_till_xmas( sleeps number)';
    exception when e_obj_exists then
        dbms_output.put_line('Someone started counting early this year !');
    end;
end;
/

Using this pattern, Debbie could knock up something like…

set serveroutput on size unlimited
spool rerun_master1.log

declare
    --
    -- Create exceptions for the errors we may anticipate in the event of second or subsequent
    -- execution of DDL. 
    --

    -- Table and object exceptions
    e_obj_exists exception;
    e_no_such_tab exception;
    
    -- Sequence exceptions
    e_no_such_seq exception;

    -- Column change exceptions
    e_dup_col_name exception;
    e_col_exists exception;
    e_no_such_col exception;
    e_col_already_not_null exception;

    -- Constraint exceptions
    e_tab_has_pk exception;
    e_cons_exists exception;
    e_fk_exists exception;
    
    -- ORA-00955: name is already used by an existing object
    pragma exception_init( e_obj_exists, -955);

    -- ORA-00942: table or view does not exist 
    pragma exception_init( e_no_such_tab, -942);

    -- ORA-02289: sequence does not exist
    pragma exception_init( e_no_such_seq, -2289);

    -- ORA-00957: duplicate column name
    pragma exception_init( e_dup_col_name, -957);
    
    -- ORA-01430: column being added already exists in table
    pragma exception_init( e_col_exists, -1430);

    -- ORA-00904: "%s": invalid identifier
    pragma exception_init( e_no_such_col, -904);

    -- ORA-01442: column to be modified to NOT NULL is already NOT NULL
    pragma exception_init( e_col_already_not_null, -1442);

    -- ORA-02260 : table can have only one primary key
    pragma exception_init( e_tab_has_pk, -2260);

    -- ORA-02264: name already used by an existing constraint
    pragma exception_init( e_cons_exists, -2264);

    -- ORA-02275: such a referential constraint already exists in the table
    pragma exception_init( e_fk_exists, -2275);

begin
    dbms_output.put_line('Creating new tables');
    -- each DDL statement will need to be in its own block so we can handle the exceptions separately
    begin
        dbms_output.put_line('COUNTRIES');
        execute immediate 
            'create table countries(
                iso_code varchar2(3),
                country_name varchar2(4000))';
    exception when e_obj_exists then 
        dbms_output.put_line('Table exists - skipping.');
    end;

    begin
        dbms_output.put_line('DELIVERIES');
        execute immediate
            'create table deliveries(
                id number not null,
                recipient_name varchar2(4000) not null,
                country_code varchar2(3),
                belief_end_date date,
                date_of_birth date)';
    exception when e_obj_exists then 
        dbms_output.put_line('Table exists - skipping.');
    end;

    dbms_output.put_line('Creating Sequence');
    begin
        execute immediate 
            'create sequence del_id_seq
                start with 1
                increment by 1
                nocycle';
        exception when e_obj_exists then
            dbms_output.put_line('Sequence exists - skipping');
    end;

    dbms_output.put_line('Adding PK to a table');
    begin
        execute immediate 'alter table countries add constraint coun_pk primary key (iso_code)';
    exception when e_tab_has_pk then
        dbms_output.put_line('PK already exists - skipping');
    end;

    dbms_output.put_line('Renaming a column');
    begin
        execute immediate 'alter table deliveries rename column country_code to coun_iso_code';
    exception when e_dup_col_name then
        dbms_output.put_line('Column already renamed - skipping');
    end;

    dbms_output.put_line('Adding a Foreign Key');
    begin
        execute immediate 
            'alter table deliveries add constraint 
                del_coun_fk foreign key (coun_iso_code) references countries(iso_code)';
    exception when e_fk_exists then
        dbms_output.put_line('FK already exists - skipping');
    end;

    dbms_output.put_line('adding a Not Null constraint');
    begin
        execute immediate 'alter table deliveries modify coun_iso_code not null';
    exception when e_col_already_not_null then
        dbms_output.put_line('Column is already Not Null - skipping');
    end;

    dbms_output.put_line('adding new column');
    begin
        execute immediate 'alter table deliveries add naughty_flag varchar2(1) not null';
    exception when e_col_exists then
        dbms_output.put_line('Column already exists - skipping');
    end;

    dbms_output.put_line('creating check constraint');
    begin
        execute immediate q'[alter table deliveries add constraint del_nf_chk check (naughty_flag in ('Y', 'N'))]';
    exception when e_cons_exists then 
        dbms_output.put_line('Constraint already exists - skipping');
    end;

    dbms_output.put_line('adding virtual column');
    begin
        execute immediate 
            q'[alter table deliveries add 
                active_flag generated always as ( 
                    case when belief_end_date is not null then 'Y' else 'N' end) virtual visible]';
    exception when e_col_exists then
        dbms_output.put_line('Column already exists - skipping');
    end;

end;
/

rem 
rem View statement is inherently re-runnable (CREATE OR REPLACE) so just execute the script...
rem

prompt Creating View
@deliveries_vw.sql


prompt Creating COUNTRIES records

declare

    procedure ins( i_code in countries.iso_code%type, i_name countries.country_name%type) 
    is
    begin
        merge into countries
        using dual 
        on ( iso_code = i_code)
        when not matched then 
            insert( iso_code, country_name)
            values( i_code, i_name);
    end ins;

begin
    -- Furthest stop on the round !
    ins('ATA', 'ANTARCTICA');
    ins('CAN', 'CANADA');
    ins('COL', 'COLOMBIA');
    -- Company Head Office is here...
    ins('FIN', 'FINLAND');
    ins('DEU', 'GERMANY');
    ins('IND', 'INDIA');
    ins('MDG', 'MADAGASCAR');
    ins('NZL', 'NEW ZEALAND');
    -- Can't find the ISO code for Wales for some reason !
    ins('GBR', 'UNITED KINGDOM');
    ins('USA', 'UNITED STATES OF AMERICA');
    commit;

end;
/

prompt Deployment completed.
spool off

On first execution, this script would run pretty much in the same way as the original :

Subsequent runs, however, would be a little smoother…

Whilst it did the job, Debbie felt that this approach had some shortcomings.

For one thing, table creation statements could get quite lengthy and complex so having them in-line as literals could get a bit fiddly.
For another, it was necessary to anticipate which exceptions you would run into and handle them accordingly. This was not always straightforward.
For example, trying to add an existing column to a table would result in :

ORA-01430: column being added already exists in table

However, renaming a column that already exists would give you :

ORA-00957: duplicate column name

On top of that, this approach required the re-coding of the exception handlers every time you wrote a new master release script.
It was for this reason that you would find exceptions that were declared but not used in a script ( as there are in this one).
Additionally, there was the temptation to throw all of the DDL into a single script to allow re-use of the exception declarations. This was likely to lead to a script which would quickly become quite large.

Debbie thought that there was an alternative that, with a bit of preparation, would require her to do rather less typing in the long run…

Using the Data Dictionary

To start with, Debbie created a package containing functions to check for the existing state of objects :

create or replace package ddl_checks
    authid current_user
as

    function object_exists( 
        i_name in user_objects.object_name%type,
        i_type in user_objects.object_type%type)
        return boolean;

    function column_exists(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean;

    function column_is_nullable(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean;

    function constraint_exists(
        i_table user_constraints.table_name%type,
        i_cons_name user_constraints.constraint_name%type,
        i_cons_type user_constraints.constraint_type%type)
        return boolean;
end ddl_checks;
/

create or replace package body ddl_checks
as

    -- PRIVATE package members
    function is_valid_type( i_type in user_objects.object_type%type)
        return boolean deterministic
    --
    -- Returns true if the object type is one that would be included in user_objects
    -- and can be used as the direct subject of a CREATE statement
    --
    is
    begin
        return upper(i_type) in (
            'TABLE', 'SEQUENCE', 'SYNONYM', 'INDEX', 'MATERIALIZED VIEW', 'VIEW',
            'FUNCTION', 'PROCEDURE', 'TRIGGER',
            'PACKAGE', 'PACKAGE BODY',
            'TYPE', 'TYPE BODY');
    end is_valid_type;

    -- PUBLIC package members
    function object_exists( 
        i_name in user_objects.object_name%type,
        i_type in user_objects.object_type%type)
        return boolean
    is
        dummy pls_integer;
        e_invalid_type exception;
    begin
        if not is_valid_type( i_type) then
            raise e_invalid_type;
        end if;
        select null into dummy 
        from user_objects
        where object_name = upper( i_name)
        and object_type = upper( i_type);

        return true;
    exception 
        when no_data_found then
            return false;
        when e_invalid_type then
            raise_application_error(-20900, 'Cannot verify the existence of this type of object');
    end object_exists;

    function column_exists(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean
    is
        dummy pls_integer;
    begin
        select null into dummy
        from user_tab_columns
        where table_name = upper(i_table)
        and column_name = upper(i_column);

        return true;
    exception when no_data_found then
        return false;
    end column_exists;

    function column_is_nullable(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean
    is
        l_nullable user_tab_columns.nullable%type;
    begin 
        select nullable into l_nullable  
        from user_tab_columns 
        where table_name = upper(i_table)
        and column_name = upper( i_column);

        return( l_nullable = 'Y');
    end column_is_nullable;

    function constraint_exists(
        i_table user_constraints.table_name%type,
        i_cons_name user_constraints.constraint_name%type,
        i_cons_type user_constraints.constraint_type%type)
        return boolean
    is
        dummy pls_integer;
        e_invalid_type exception;
    begin
        if upper(i_cons_type) not in ('P', 'U', 'R', 'C', 'V') then
            -- valid constraint types in 11g are :
            -- P(rimary Key)
            -- U(nique Key)
            -- R(eferrential Integrity Constraint or Foreign Key)
            -- C(heck constraint)
            -- V(iew - usually a check option)
            raise e_invalid_type;
        end if;

        select null into dummy
        from user_constraints
        where table_name = upper(i_table)
        and constraint_name = upper(i_cons_name)
        and constraint_type = upper(i_cons_type);

        return true;
    exception
        when no_data_found then
            return false;
        when e_invalid_type then
            raise_application_error( -20901, 'Not a valid constraint type value');
    end constraint_exists;

end ddl_checks;
/

Nothing too exotic here – Debbie simply used the data dictionary to work out the existence or state of database objects in the current schema.
The use of invoker’s rights ensures that these functions will only report on objects in the calling schema.
This fitted in with the deployment practices for this application where the application owner schema was used to run the deployment.
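
A quick smoke test of the package might look something like this :

set serveroutput on
begin
    if ddl_checks.object_exists('COUNTRIES', 'TABLE') then
        dbms_output.put_line('COUNTRIES is already there');
    else
        dbms_output.put_line('COUNTRIES has not been created yet');
    end if;
end;
/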

Now, Debbie may have been tempted at this point to make use of some of SQLcl’s nifty new features. Unfortunately, the Vogons had put the kybosh on that particular option, so she just had to do her best with good old SQL*Plus…

set serverout on
var command varchar2(4000)
set verify off

declare
    i_type varchar2(30) := '&1';
    i_name varchar2(30) := '&2';
    i_path varchar2(4000) := '&3';

begin    
    if ddl_checks.object_exists( i_name, i_type) then
        :command := 'prompt '||i_type||' '||i_name||' already exists - skipping';
    else
        :command := '@'||i_path;
    end if;   
end;
/
set heading off 
set feedback off 
set termout off

spool create_object.tmp
print :command
spool off

set termout on
set feedback on
@create_object.tmp

undef command

It’s probably worth pausing here to go through what this script is doing.
First, Debbie declared a SQL*Plus variable called command.

This variable is then populated in an anonymous PL/SQL block based on whether the object specified by the first two arguments passed in already exists. If it does then command will simply be set to output a message to this effect. Otherwise it will be set to call the script specified in the third argument.

The value of command is written to a file called create_object.tmp which is then executed.

So, to use this script for the COUNTRIES table, which has its DDL in the script ddl/tables/countries_tbl.sql, we can run…

@create_object.sql TABLE COUNTRIES ddl/tables/countries_tbl.sql

The first time this is run (i.e. when the table does not already exist), the output file – create_object.tmp – looks like this :

@ddl/tables/countries_tbl.sql

As a result, the script is executed and the table is created :


The second and subsequent runs produce a file containing…

prompt TABLE COUNTRIES already exists - skipping

which produces the output :

A similar script can then be used for column creation :

set serverout on
var command varchar2(4000)
set verify off
declare
    i_table varchar2(30) := '&1';
    i_column varchar2(30) := '&2';
    i_path varchar2(4000) := '&3';

begin    
    if ddl_checks.column_exists( i_table, i_column) then
        :command := 'prompt Column '||i_table||'.'||i_column||' already exists - skipping';
    else
        :command := '@'||i_path;
    end if;   
end;
/
set heading off 
set feedback off 
set termout off

spool add_column.tmp
print :command
spool off

set termout on
set feedback on
@add_column.tmp

undef command

To add the new COUNTRIES records, Debbie preferred the option of simply moving the merge statement into a package :

create or replace package manage_countries as
    procedure save_country( i_code countries.iso_code%type, i_name countries.country_name%type);
end manage_countries;
/

create or replace package body manage_countries as
    procedure save_country( i_code countries.iso_code%type, i_name countries.country_name%type)
    is 
    begin
        merge into countries
            using dual
            on( iso_code = i_code)
            when matched then update
                set country_name = i_name
            when not matched then 
                insert( iso_code, country_name)
                values( i_code, i_name);
    end save_country;
                
                
end manage_countries;
/

With these changes in place, her individual scripts could remain largely recognisable (and in some cases, unchanged) as simple DDL statements.
The scripts that she would have to change were those to do with constraints…

The Primary Key :

declare
    msg varchar2(4000);
begin
    if ddl_checks.constraint_exists( 'countries', 'coun_pk', 'P') then
        msg := 'PK already exists - skipping';
    else
        execute immediate 'alter table countries add constraint coun_pk primary key (iso_code)';
        msg := 'PK Constraint added';
    end if;
    dbms_output.put_line(msg);
end;
/

The Foreign Key :

declare
    msg varchar2(4000);
begin
    if ddl_checks.constraint_exists( 'deliveries', 'del_coun_fk', 'R') then
        msg := 'FK already exists - skipping';
    else
        execute immediate 
            'alter table deliveries add constraint del_coun_fk foreign key (coun_iso_code) references countries(iso_code)';
        msg := 'FK Constraint added';
    end if;
    dbms_output.put_line(msg);
end;
/

The Not Null constraint :

declare
    msg varchar2(4000);
begin
    if ddl_checks.column_is_nullable( 'deliveries', 'coun_iso_code') then
        execute immediate 'alter table deliveries modify coun_iso_code not null';
        msg := 'Column made mandatory';
    else 
        msg := 'Column is already Not Null - skipping';
    end if;
    dbms_output.put_line(msg);
end;
/

The check constraint :

declare
    msg varchar2(4000);
begin
    if ddl_checks.constraint_exists( 'deliveries', 'del_nf_chk', 'C') then
        msg := 'Check constraint already exists - skipping';
    else
        execute immediate 
            q'[alter table deliveries add constraint del_nf_chk check (naughty_flag in ('Y', 'N'))]';
        msg := 'Check constraint added';
    end if;
    dbms_output.put_line(msg);
end;
/

…and the insert script :

begin
    manage_countries.save_country('ATA', 'ANTARCTICA');
    manage_countries.save_country('CAN', 'CANADA');
    manage_countries.save_country('COL', 'COLOMBIA');
    manage_countries.save_country('FIN', 'FINLAND');
    manage_countries.save_country('DEU', 'GERMANY');
    manage_countries.save_country('IND', 'INDIA');
    manage_countries.save_country('MDG', 'MADAGASCAR');
    manage_countries.save_country('NZL', 'NEW ZEALAND');
    manage_countries.save_country('GBR', 'UNITED KINGDOM');
    manage_countries.save_country('USA', 'UNITED STATES OF AMERICA');

    commit;
end;
/

With these changes in place, Debbie was now able to run the following master release script, confident that any errors reported would be genuine…

prompt Creating new tables

@create_object.sql TABLE COUNTRIES ddl/tables/countries_tbl.sql
@create_object.sql Table Deliveries ddl/tables/deliveries_tbl.sql

prompt Creating a sequence

@create_object.sql sequence del_id_seq ddl/sequences/del_id_seq.sql

prompt Adding PK to a table

@ddl/table_alter/coun_pk.sql

prompt Renaming a column 

@add_column.sql DELIVERIES coun_iso_code ddl/table_alter/rename_del_coun_code.sql

prompt Adding Foreign Key

@ddl/table_alter/del_coun_fk.sql

prompt adding a Not Null constraint

@ddl/table_alter/del_iso_code_nn.sql

prompt adding new column 
@add_column.sql deliveries Naughty_Flag ddl/table_alter/naughty_flag.sql

prompt creating check constraint

@ddl/table_alter/del_nf_chk.sql

prompt adding virtual column

@add_column.sql Deliveries ACTIVE_FLAG ddl/table_alter/del_active_virt.sql

prompt creating view

@ddl/views/deliveries_vw.sql

prompt Creating package

@ddl/packages/manage_countries.pks
@ddl/packages/manage_countries.pkb

prompt creating COUNTRIES records...

@static_data/add_countries.sql

prompt Deployment completed.

The initial run looked like this :

Subsequent runs worked as expected :

“Well”, thought Debbie, “I think I’ve really earned my Christmas Bonus this year !”

NOTE : This story is fictional. Any resemblance to any Debbies, living or Welsh, is purely intentional.

I’ll drink to that !

VirtualBox – configuring a Host-Only Network

Sat, 2018-11-17 08:34

I’m currently indulging in the pastime that’s sweeping the country – trying not to think about Brexit.
It’s a craze that’s even spread as far as our political elite. In their case, it manifests itself in slightly different ways.
On the one hand, there are those who are refusing to accept any solution offered to maintain a “soft” border on the island of Ireland. As far as I can tell, they haven’t managed to offer any practical solution that they would accept as that would involve thinking about Brexit.
On the other hand there are those who are pushing for a new referendum because, apparently, some politicians lied when campaigning. Maybe someone was “Putin” ’em up to it ?

For my part, as I don’t quite have the space for a bunker at the bottom of my garden, I’ve decided to hide out in a world of make-believe…well, Virtual Machines at any rate.

I want to set up a CentOS Virtual Machine (VM) that I can then use to clone environments hosting the various software stacks that I may want to play with.
I’d like to be able to connect to these VMs directly from my host OS, just like a real-world server. However, I’d also like to be able to connect the VM to the outside world occasionally so I can run package updates via yum.
The specific steps I’m going to go through are :

  • Install CentOS7 into a Virtualbox VM
  • Setup Host Only Network in VirtualBox
  • Create a Network Interface on the Guest to use the Host Only Network
  • Assign a static IP address to the Guest

The software I’m using for this is :

Before we get cracking, it’s probably a good idea to have a quick look at…

VirtualBox Networking

VirtualBox supports several networking modes, of which Network Address Translation (NAT) is the default.
Whilst this is perfectly fine for allowing internet access directly from the Guest, it does have some limitations, including the inability to connect via ssh from the Host machine.

To overcome this, we can setup a Host Only Network.
The catch here is that, using this networking mode, the Guest will not be able to see the internet so updates via yum will be problematic.
Fortunately, VirtualBox allows you to configure multiple Network Cards (NICs) on a VM so it’s possible to toggle between the two modes depending on what’s required.
This is the approach I’m going to take here.
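
If you prefer the command line for this sort of toggling, vboxmanage can do it from the host. A sketch – assuming the VM is registered as “CentOS7_HO1” (as mine is later on) and that the Host Only network has the default name “vboxnet0” :

# with the VM powered off - attach NIC 1 to NAT and NIC 2 to the Host Only network
vboxmanage modifyvm "CentOS7_HO1" --nic1 nat
vboxmanage modifyvm "CentOS7_HO1" --nic2 hostonly --hostonlyadapter2 vboxnet0

# while the VM is running - plug/unplug the virtual network cables as required
vboxmanage controlvm "CentOS7_HO1" setlinkstate1 on
vboxmanage controlvm "CentOS7_HO1" setlinkstate2 off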

Incidentally, the VirtualBox documentation includes a chapter on Virtual Networking which you may find useful.

Right, onwards…

Installing CentOS7 into a VirtualBox VM

Essentially, I’ve followed these steps to do the basic installation. However there are some changes around configuring the Guest Additions, which I’ll come onto shortly.

Installation Options

When you first fire-up CentOS in the VM, the installer kicks in.
At this point, I’ve selected Server with GUI as the Base Environment along with the Development Tools add-on :

The next step is to rename the server to something a bit more memorable. Now, I’ve known data centres where server names followed a theme – they could all be artists or even race horses.
In this case I’ve decided to call this server “thor” – as it’s the only one you’re going to see in this article, you’ll be left to ponder whether I’m following a high-brow intellectual theme ( Norse Gods), or whether I’m just using characters from the Marvel Comic Universe.
We also need to make sure that networking is enabled by ensuring the default NIC ( enp0s3 in this case) is on :

No, it’s not pronounced “phwoar”…even in Welsh.

Note that, at this point, the domain name “virtualbox” is arbitrary.

After the installation is complete, you’ll need to restart the machine and accept the license.
Before we go any further we should now see that networking is enabled and that we have access to the internet :

Getting sudo access

In order to keep things simple, I’m going to give my user sudo privileges. In CentOS, these are derived from the wheel group. So, in a Terminal:

su root
usermod -aG wheel mike

NOTE – you’ll need to logout and log back in (or simply restart the VM) for this change to take effect.

Once the server has restarted, you can check that things have worked as expected :

groups mike
mike : mike wheel
Update packages with yum

However recently (or otherwise) you downloaded your CentOS iso, it’s always a good idea to update the packages before we get too much further :

sudo yum update
Installing Guest Additions

There’s a whole chapter on Guest Additions in the documentation.
For this VM, I’m installing Guest Additions 5.1.38. This is done in the usual way – i.e.
With the Guest running, go to the VirtualBox Devices Menu and select Insert Guest Additions CD Image.
When prompted, hit the Run button.

For my part, I’m installing Guest Additions to take advantage of the bi-directional clipboard and the ability to cut and paste between Guest and Host.
If you’re tempted to go further and try to maximise the viewport for the GUI, a word of warning – enabling 3D Acceleration for the VM’s display caused my VM to fail to start.
This may simply be an issue with the versions of CentOS/Virtualbox/Guest Additions that I’m using, but I thought I should mention it, just in case.

Now we’ve completed the initial setup of the CentOS VM, the next step is to…

Create a Host Only Network

In VirtualBox itself, go to the File menu and select Preferences.
Then choose the Network icon and go to the Host-only Networks tab.
Click on the Add icon on the right-hand side and create a new network :

Click on the screwdriver icon and you should be able to see details of the new network, including its starting IP address :

Initially, we’re going to use DHCP to confirm that our configuration has worked. Therefore, we need to go to the DHCP Server tab and check Enable Server.

Note that, in this example, I’ve set the Lower and Upper Address bounds manually.
Of course, using DHCP means that an IP address will be allocated to the VM each time it starts. We’ll come onto how to configure a fixed IP address in a bit. For now though, we just want to make sure everything is working.

Add a Host Only NIC to the VM

For our VM to use our new network, we need to add a Host Only Network Interface to it.

To do this, we need to open the Network settings for that VM in Virtualbox and click on the Adapter 2 tab.
Make sure that Enable Network Adapter is checked
Then set Attached to to Host Only Adapter and the Name to that of the network we’ve just created :

If we now start the VM again, we can see that we have a second NIC, which is now connected :


However, we no longer have access to the internet from within the VM :

We can, however, connect to the running VM via ssh.
First of all, we need to determine the IP address that’s been allocated to the VM by the DHCP server. We can do this from the host using :

vboxmanage guestproperty enumerate CentOS7_HO1 |grep VirtualBox/GuestInfo/Net/0/V4/IP
Name: /VirtualBox/GuestInfo/Net/0/V4/IP, value: 192.168.57.100, timestamp: 1541956114954604000, flags: 

Now, using this IP address, we can connect from the host …
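
For example, from a Terminal on the host (using the DHCP-assigned address above) :

ssh mike@192.168.57.100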

If we look in the VM itself, we can see that there is now a second Network Interface – enp0s8

We want to make this the default NIC so we need to go to the settings…

…and set it to Connect Automatically

We then need to uncheck the Connect Automatically checkbox for the original NIC ( enp0s3 in my case), so that it does not connect unless we want it to.
Now, when the VM starts up, it will be using the Host Only Network by default.

Right, we’ve got the Host Only Network up and running but finding out the IP address for the VM every time we start it up is going to be a bit of messing about.
Fortunately we can dispense with that by simply…

Assigning a Static IP Address

To be on the safe side, I had the VM powered down when I started these steps.

First, we need to go back to the Host Only Network Settings in Virtualbox for our network, go to the DHCP tab and uncheck the enable server option :

Next we need to fire up the VM and create a config file for the NIC we’re using for the Host Only Network (enp0s8) :

sudo nano /etc/sysconfig/network-scripts/ifcfg-enp0s8

The file should look like this. Well, actually it’s probably more accurate to say that my file looks like this :

TYPE=ETHERNET
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.57.123
PREFIX=24
GATEWAY=192.168.57.254
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_PRIVACY=no

NOTE :

  • DEVICE is the name of the Host-only NIC we created
  • IPADDR is the static IP address you want to assign to the VM
  • GATEWAY is the upper IP address in the range where IPADDR is located (as far as I can tell)

Restart networking on the VM …

sudo systemctl restart network

…and confirm that the static IP address is now being used…

ip address show dev enp0s8

…in my case returns…

3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:5b:02:a1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.57.123/24 brd 192.168.57.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::ad98:f0f4:9406:5348/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

At this point I’ve chosen to re-start the VM completely to allow it to pick up the new networking changes.
In fact, as I’m feeling brave, I’ve started the VM headless.

Anyway, we can now ping the new static IP address, and connect to the server using it :

I don’t want to have to remember the IP address every time I want to connect to the server so I’ve simply added an entry for it in the hosts file on the Host :

sudo nano /etc/hosts

At the end of the file I’ve added the lines…

#VirtualBox Host Only Network hosts
192.168.57.123 thor.virtualbox

I can now use the alias I’ve set to hit the VM :
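
…for example :

# ssh using the alias from the hosts file, rather than the raw IP
ssh mike@thor.virtualbox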

Using the Host-Only network should be fine for me for the most part. However, if I do want to update the packages on the CentOS VM, all I need to do is to switch to the NAT network card and I’ll be able to run an update with no issues.

Well, that’s taken my mind off things for a bit, now I just need to find something else to distract myself until the end of March when Brexit will (might) finally be over.

New Dog, Old Tricks – how to save yourself some typing with sed

Sun, 2018-11-04 10:07

We have a new addition to our household –

Teddy


Cute and fluffy he may be, but he’s got to earn his keep. He can start making himself useful by helping me with this post.

It begins one Friday afternoon when an urgent request lands on my desk with a large splat.

The requirement is that some csv files be uploaded into the Oracle 11g Database serving the UAT environment to facilitate some testing.
There are around 20 files, each with a slightly different set of attributes.
The files are currently sitting on the Red Hat Linux Server hosting the database.
I have sufficient OS permissions on the server to move them to a directory that has a corresponding database object in the UAT instance.
Nevertheless, the thought of having to knock out 20-odd external tables to read these files might leave me feeling a bit like this…


Fortunately, a certain Lee E. McMahon had the foresight to predict the potential risk to my weekend and wrote the Stream Editor (sed) program.

The File

The file I’m using as an example is here on the Database Server :

and its contents are :

employee_id,first_name,last_name,email,salary,commission%
145,"John","Russell","JRUSSEL",14000,0.4
146,"Karen","Partners","KPARTNER",13500,0.3
147,"Alberto","Errazuriz","AERRAZUR",12000,0.3
148,"Gerald","Cambrault","GCAMBRAU",11000,0.3
149,"Eleni","Zlotkey","EZLOTKEY",10500,0.2
150,"Peter","Tucker","PTUCKER",10000,0.3
151,"David","Bernstein","DBERNSTE",9500,0.25
152,"Peter","Hall","PHALL",9000,0.25
153,"Christopher","Olsen","COLSEN",8000,0.2
154,"Nanette","Cambrault","NCAMBRAU",7500,0.2
155,"Oliver","Tuvault","OTUVAULT",7000,0.15
156,"Janette","King","JKING",10000,0.35
157,"Patrick","Sully","PSULLY",9500,0.35
158,"Allan","McEwen","AMCEWEN",9000,0.35
159,"Lindsey","Smith","LSMITH",8000,0.3
160,"Louise","Doran","LDORAN",7500,0.3
161,"Sarath","Sewall","SSEWALL",7000,0.25
162,"Clara","Vishney","CVISHNEY",10500,0.25
163,"Danielle","Greene","DGREENE",9500,0.15
164,"Mattea","Marvins","MMARVINS",7200,0.1
165,"David","Lee","DLEE",6800,0.1
166,"Sundar","Ande","SANDE",6400,0.1
167,"Amit","Banda","ABANDA",6200,0.1
168,"Lisa","Ozer","LOZER",11500,0.25
169,"Harrison","Bloom","HBLOOM",10000,0.2
170,"Tayler","Fox","TFOX",9600,0.2
171,"William","Smith","WSMITH",7400,0.15
172,"Elizabeth","Bates","EBATES",7300,0.15
173,"Sundita","Kumar","SKUMAR",6100,0.1
174,"Ellen","Abel","EABEL",11000,0.3
175,"Alyssa","Hutton","AHUTTON",8800,0.25
176,"Jonathon","Taylor","JTAYLOR",8600,0.2
177,"Jack","Livingston","JLIVINGS",8400,0.2
179,"Charles","Johnson","CJOHNSON",6200,0.1

The database object for this directory is :

select directory_name
from dba_directories
where directory_path = '/u01/app/oracle/myfiles'
/

DIRECTORY_NAME
------------------------------
MYFILES

Building the External Table DDL

As you can see from the file, external table columns can be taken from the header record, with a couple of tweaks :

  1. all columns will be defined as varchar2(4000)
  2. the “%” in “commission%” needs to be replaced with “_pct” to make the column name legal in Oracle

First up then, we get sed to replace each of the commas in the header row…

head -1 emps.dat|sed s/,/" varchar2(4000),"/g

employee_id varchar2(4000),first_name varchar2(4000),last_name varchar2(4000),email varchar2(4000),salary varchar2(4000),commission%

The s command means “substitute string1 with string2”.
The trailing g flag means – apply this to every occurrence of string1, not just the first.

We can use the same technique to replace the “%” signs…

$ head -1 emps.dat|sed s/%/_pct/g
employee_id,first_name,last_name,email,salary,commission_pct

This means that we now have the basis for a simple shell script to do the External Table creation legwork for us…

#!/bin/sh
# Script to generate an external table based on the .dat file supplied as $1
baseFname=`basename -s .dat $1`
tsuff=_xt
tname=$baseFname$tsuff
fname=$tname.sql
echo "create table $tname (" >$fname
echo -n `head -1 $1`|sed s/,/" varchar2(4000),\n"/g|sed s/%/_pct/g >>$fname
echo " varchar2(4000))">>$fname
echo "organization external ( type oracle_loader">>$fname
echo "default directory MYFILES access parameters (">>$fname
echo -e "\t records delimited by newline">>$fname
echo -e "\t\t badfile '$baseFname.bad'">>$fname
echo -e "\t\t logfile '$baseFname.log'">>$fname           
echo -e "\t\t skip 1">>$fname
echo -e "\t\t fields terminated by ',' (">>$fname
echo -ne "\t\t\t">>$fname
echo -n `head -1 $1`|sed s/,/" char(4000),\n"/g|sed s/%/_pct/g >>$fname
echo -e " char(4000)">>$fname
echo -e "\t\t)">>$fname
echo -e "\t ) location ( '$1' )">>$fname
echo -e ") reject limit unlimited;">>$fname

Running this for our file…

. ./gentxt.sh emps.dat

… gives us a complete External Table definition in a file called emps_xt.sql …

create table emps_xt (
employee_id varchar2(4000),
first_name varchar2(4000),
last_name varchar2(4000),
email varchar2(4000),
salary varchar2(4000),
commission_pct varchar2(4000))
organization external ( type oracle_loader
default directory MYFILES access parameters (
	 records delimited by newline
		 badfile 'emps.bad'
		 logfile 'emps.log'
		 skip 1
		 fields terminated by ',' (
			employee_id char(4000),
first_name char(4000),
last_name char(4000),
email char(4000),
salary char(4000),
commission_pct char(4000)
		)
	 ) location ( 'emps.dat' )
) reject limit unlimited;

OK, the formatting could use some work. However, the SQL itself is valid…

SQL> @emps_xt.sql

Table created.

With the external table in place, we can now upload the data from the file…
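
The actual load step will vary with the target, but a minimal sketch – assuming a hypothetical EMPS staging table – is simply :

-- hypothetical staging table built straight from the external table
create table emps as
    select *
    from emps_xt
/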

Hours of drudgery have been avoided, which is just as well because someone gets a bit grumpy when they don’t get their walkies !

Cocktails and Traffic Cones – party time with DVDs and Blu-Rays in Ubuntu

Wed, 2018-08-29 14:37

This title may evoke images of a rumbustious night out filled with exotic drinks and highjinks followed by a morning waking up in possession of a traffic cone, the acquisition of which has somehow escaped the wreckage of your short-term memory.
If this is the case, you may be a tiny bit disappointed. This is all about how to play and rip DVDs and Blu-rays on Ubuntu.
Whilst that may not sound like quite as much fun, it’s less to leave you with a raging hangover. It should however, enable you to enjoy your video on your OS of choice.
What cocktails and traffic cones have to do with all of this will become apparent shortly.

What I’m going to cover here is :

  • How to Decode and Play DVDs using VLC
  • How to Convert DVD and Blu-ray files to mp4 video using Handbrake
  • How to Transcode DVD and Blu-ray discs to Matroska (mkv) format using MakeMKV

This should give you all of the steps required to watch and – if required – copy movies, tv shows etc from an optical disc.

First of all though…

The Legal Disclaimer
The legality of ripping copyrighted material differs across jurisdictions. You may want to check the situation where you are before you follow any of the steps detailed in this article.

Whilst we’re on the subject of disclaimers…

The Taste Disclaimer
The subject matter at hand means that there is a strong temptation to include quotes and (possibly) oblique references to movies here and there. Of course I wouldn’t dream of stooping so low just to get cheap laughs…much.

Oh, one more thing…

Efficacy disclaimer – The steps described here will work for most discs. The rare instances where this is not the case do not seem to follow any discernible pattern.
For example, the same steps that persuade a dark comedy to present you with a Marmalade Sandwich (in mp4 format) may cause a loveable cartoon bear to fix you with a stare that’s harder than a coffin nail.

Moving swiftly on…

Required Libraries

In order to read DVDs and Blu-rays there are a number of packages that you’ll need. As I don’t know which desktop you’re running (I’m currently on Unity on Ubuntu 16.04 LTS), I’ll do this bit in the Terminal (the CLI, not the Tom Hanks film)…

To start with, you should find that most of the packages we need are there already…

apt list libdvd* libaacs* libbluray* --installed

…should confirm that the following packages are installed :

There are a few more packages required which we can acquire in two stages. First of all…

sudo apt-get install libdvd-pkg

…which produces the following output


[sudo] password for mike: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  autoconf automake autopoint autotools-dev debhelper dh-autoreconf dh-strip-nondeterminism libfile-stripnondeterminism-perl libltdl-dev libmail-sendmail-perl libsigsegv2
  libsys-hostname-long-perl libtool m4 po-debconf
Suggested packages:
  autoconf-archive gnu-standards autoconf-doc dh-make libtool-doc gfortran | fortran95-compiler gcj-jdk libmail-box-perl
The following NEW packages will be installed
  autoconf automake autopoint autotools-dev debhelper dh-autoreconf dh-strip-nondeterminism libdvd-pkg libfile-stripnondeterminism-perl libltdl-dev libmail-sendmail-perl libsigsegv2
  libsys-hostname-long-perl libtool m4 po-debconf
0 to upgrade, 16 to newly install, 0 to remove and 496 not to upgrade.
Need to get 2,897 kB of archives.
After this operation, 8,428 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y

Enter Y to continue.

Next up :

sudo apt-get install libbluray-bdj

…initially the output will be something like :

The following additional packages will be installed:
  ca-certificates-java default-jre-headless java-common libasm4-java openjdk-8-jre-headless
Suggested packages:
  default-jre fonts-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho fonts-wqy-microhei fonts-wqy-zenhei fonts-indic
The following NEW packages will be installed
  ca-certificates-java default-jre-headless java-common libasm4-java libbluray-bdj openjdk-8-jre-headless
0 to upgrade, 6 to newly install, 0 to remove and 496 not to upgrade.
Need to get 27.8 MB of archives.
After this operation, 101 MB of additional disk space will be used.
Do you want to continue? [Y/n] 

This time, hitting Y will cause a number of things to happen. Apart from a Java Runtime being installed ( if one is not already present on the system), you are likely to find yourself presented with this screen :

Say Yes ( hit Enter) and you’ll then get :

Once again, say Yes (hit Enter).

Now that’s all done, if we re-run our initial check…

apt list libdvd* libaacs* libbluray* --installed

we should now get :

Before we leave the Command Line (for now, at least) there’s one more step required to facilitate reading Blu-rays…

mkdir -p ~/.config/aacs/
cd ~/.config/aacs/ && wget http://vlc-bluray.whoknowsmy.name/files/KEYDB.cfg

After all that, it’s probably a good idea to check that you can now play any discs that you’re planning to rip.
Whilst Totem is default tool for this on my desktop, in the wider Linux world – and pretty much everywhere else for that matter – the best app for playing video is undoubtedly…

VLC

The Video Lan Media Player (VLC) is available across multiple platforms and will pretty much play anything. As a clincher, its versions are named after Discworld characters. I’ve used Weatherwax (2.2.2) here.
Oh, and its icon is a traffic cone, which goes some way to explaining the title of this article.

To see if you already have VLC installed, you can run :

which vlc

If this returns something like…

/usr/bin/vlc

…then you’re good to go. Otherwise, you can either install from the Terminal…

sudo apt-get install vlc

…or via the Software centre

Either way, you can now use VLC to play your DVDs.

With a DVD disc in the drive, start VLC and go to the Media menu and select Open disc…

Playing a Blu-Ray requires some slightly different settings when opening the disc in VLC :

When playing blu-rays with VLC there are a couple of points to note –

  1. don’t launch VLC by opening the disc directly from the file manager, because VLC will assume it’s a DVD. Instead, open VLC first and then open the disc from the VLC menu
  2. make sure that “No disc menus” is checked when you open the disc in VLC

If all you need is a means of playing discs on your Ubuntu machine, then VLC and the aforementioned packages do the job.
If however, you’ve ever opened a DVD case in anticipation of an hour-and-a-half of explosions and mayhem, only to find a Disney Musical, and reflected that high spirits are just no substitute for 800 rounds-a-minute, you’ll be interested in finding a way to store all your movies in one place on file.

Fortunately, as well as playing video from disc, VLC allows you to see which Title holds the movie (on DVDs at least) :


Whilst this is usually fairly obvious, there are discs that contain several titles of the same or similar length.

Using VLC enables you to determine which track you need to rip.

All we need now is a cocktail tool to do the ripping …

Handbrake

Of all the ripping tools available in Linux, Handbrake appears to be the most well-supported and long-lived.
Whilst it is included in the Ubuntu repositories, the documentation on the Handbrake Website suggests that sourcing the package from there may not be the best approach.
If you want to follow this advice, but persist with the package route, you can add the Ubuntu handbrake repository to your system…

 
sudo add-apt-repository ppa:stebbins/handbrake-releases 
sudo apt-get update

The screenshots here are taken from version 1.1.0 (64-bit version) :

Handbrake is a converter – it will take your source video and convert it to another format. In this case, we’re going to convert to MP4.
The MP4 file you end up with is likely to be a fraction of the size of the source video file(s) you start with.
The act of conversion is rather CPU-intensive. As a result, you will find that the time taken to rip a movie will be as dependent on the speed of the available hardware as it is on the settings you choose.

The other point to note is that prolonged high CPU activity can cause your machine to run quite hot. Therefore, I’ve found that it’s a good idea to :

  • Ensure that your machine is sitting somewhere that is well-ventilated
  • Allow the machine to cool down between prolonged bouts of ripping

If you want to get an idea of the internal temperature of your machine, you could use the sensors command :
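
…for example (this assumes the lm-sensors package is installed) :

# refresh the temperature readings every five seconds
watch -n 5 sensors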

By now, you’ll probably have noticed Handbrake’s distinctive icon. This should give you a hint that, even with an extremely fast processor, transcoding is an activity that you can kick off and leave to run for a while.

When you start Handbrake, the first decision you are presented with is which preset to use.
These presets provide default settings for how you want to rip your movie.

Not being particularly fussy about such things, I usually select Very Fast 1080p30 because it makes the transcoding process…well…a bit faster than the default (Fast 1080p30).

Once you’ve made your selection, simply close the Presets Window.
You then need to click the Open Source button to select the location of the source you want to transcode. If there is an optical disc available, Handbrake should default to this.
Simply click Open

The disc will then be scanned (which can take a while) before Handbrake presents you with its choice of which title to encode :

…which you can then change, if you wish, using the drop-down.

Once you have chosen which title, you may want to have a look at the Subtitles tab.
By default, Handbrake will only attempt to rip “forced” subtitles. These are subtitles for parts of the movie that aren’t in the movie’s main language. If the movie you are working on does not include these, you can save yourself one encode pass (and a fair bit of time) by removing them with the ‘X’ button.

Once done, you can return to the Summary Tab and hit the Start Encoding button

If you do select subtitles, Handbrake will do two encoding passes. The first appears to be for the subtitles and has no appreciable impact on system resource usage.

When the second pass starts, however, you’ll observe a spike in the CPU usage…

For most discs, this approach should work perfectly well.
But what can you do when a disc decides to do an impression of a barnacle covered in bioluminescent algae ?

MakeMKV

MakeMKV is a transcoder – i.e. it converts video into MKV format.
MakeMKV has also been in Beta since at least 2008.

Usually, I’m a bit reluctant to risk beta programs on my machine. On this occasion however, I’ve decided that I just need to chill out and Let It Go…

Also, whilst Blu-Ray processing will be an additional cost feature when (if) the tool ever moves to a “production” version, it is currently free.

The installation route I followed was :

sudo add-apt-repository ppa:heyarje/makemkv-beta
sudo apt-get update
sudo apt-get install makemkv-oss makemkv-bin

Note that you may occasionally be prompted for a license key when starting this tool. If so, you can find it on this MakeMKV official forum post.

Fire up MakeMKV and it will automatically scan any optical drives.
Once the tool recognises that there is a disc in the drive, click on the drive graphic to open the disc :

MakeMKV will then scan the disc and then present you with a list of titles.
You will need to check/uncheck the appropriate titles to copy…

For DVDs this should simply correspond to the title selected by VLC (see above).
For Blu-Rays, things are slightly more complex.

If you check the right-hand pane for each Title, it will show the duration. You’re looking for one that is the approximate runtime of the film.
If there are multiple titles that fit the bill then check the number of chapters as well. This should also match the number of chapters in the Title auto-selected by VLC.
Note If the Title does not contain any chapters then the Chapters row will be omitted from the Title details pane.

Once you’re happy with your selection – uncheck the checkboxes for all of the other titles.
Expand your chosen title and make sure that you’re happy with the sub-menu selections.

Once all that’s done, hit the Save button.

By default the tool will save the file to a newly created sub-directory under $HOME/Videos. It will prompt you before creating this. In my case, I just say yes at this point.
Whilst the transcoding is not resource intensive, it also does not tend to be that fast. You may be twiddling your thumbs for a bit…

Current size is shown as Output size

MakeMKV’s demand on system resources is modest…

…and you will eventually end up with an mkv file…although it’s quite big (over 20GB in this case).

The good news is that you can now point handbrake at said file…

Converting mkv to mp4 using Handbrake

Now we’ve got our rather large .mkv file, we can turn it into a more reasonably sized mp4 file by using Handbrake.
Simply open Handbrake, and select Open Source as before.
This time however, instead of using a disc directly, navigate to the .mkv file and select it.

Handbrake should then work as before, down to the resource intensive conversion. Ultimately however, you will be left with an mp4 file which is a rather more manageable size. For movies originally copied from Blu-Ray it’s usually no more than 2GB, depending on the Handbrake settings you’ve chosen.

Wrap-up

Hopefully, the tools and techniques covered here will work for most (if not all) of the discs you want to view or copy on your Ubuntu device.
Once you have your videos on file, you may find it useful to use some kind of media management software, such as Plex.

There are a number of articles around that may also be of use in your continuing Ubuntu video adventures…

As for me, I’m off in search of some cookie robots.

Read Only Access for providing backend support for an Oracle Application

Wed, 2018-07-25 15:59

The World Cup is finally over and “It’s Coming Home !”
For quite a long time, we English laboured under the illusion that “it” was football.
Fortunately for Scots everywhere, “It” turned out to be the World Cup which, like so many international sporting competitions, was conceived in France.

Another area that is often subject to flawed assumptions is the set of privileges required to give someone read-only access so they can support an Oracle application.
So, for any passing auditors who may be wondering why “read only” access to an Oracle application sometimes means Write, or even Execute on certain objects…

The Application

We’re using the standard HR sample application provided with any Oracle database. For the purposes of this post, we’ve added a couple of enhancements.
The application has use of a directory object called HR_FILES :

create directory hr_files as '/u01/app/oracle/hr_files'
/

grant read, write on directory hr_files to hr
/

There is an External Table which is used for an ETL process…


create table countries_xt
(
    iso_code varchar2(2),
    country_name varchar2(100)
)
    organization external
    (
        type oracle_loader
        default directory hr_files
        access parameters
        (
            records delimited by newline
            badfile 'countries.bad'
            logfile 'countries.log'
            skip 1
            fields terminated by ','
            (
                iso_code char(2),
                country_name char(100)
            )
        )
            location('countries.csv')
    )
reject limit unlimited
/

In addition to its standard grants, HR also has create any context :

grant create any context to hr
/

…which means the application can use contexts in a view.

The context package created for this is :

create or replace package hr_deptno_ctx as
    procedure set_ctx_val( i_dept_id departments.department_id%type);
    function get_ctx_val return departments.department_id%type;
end hr_deptno_ctx;
/

create or replace package body hr_deptno_ctx as

    procedure set_ctx_val( i_dept_id departments.department_id%type) 
    is
    begin
        dbms_session.set_context('hr_deptno', 'department_id', i_dept_id);
    end set_ctx_val;
    
    function get_ctx_val 
        return departments.department_id%type 
    is
    begin   
        return sys_context('hr_deptno', 'department_id');
    end get_ctx_val;
end hr_deptno_ctx;
/

The context itself is created as follows :

create context hr_deptno using hr_deptno_ctx
/

…and used in a view…

create or replace view emp_restricted as
    select *
    from employees
    where department_id = hr_deptno_ctx.get_ctx_val
/    

Remember, what we want to do here is give “Read Only” access to this application to a database user. Let’s start with something simple…

Creating a Read Only Role

To begin with, let’s simply create a role called HR_READONLY and grant select on all HR tables and views to that role :

create role hr_readonly
/

grant select on hr.countries to hr_readonly;
grant select on hr.countries_xt to hr_readonly;
grant select on hr.departments to hr_readonly;
grant select on hr.employees to hr_readonly;
grant select on hr.jobs to hr_readonly;
grant select on hr.job_history to hr_readonly;
grant select on hr.locations to hr_readonly;
grant select on hr.regions to hr_readonly;
grant select on hr.emp_details_view to hr_readonly;
grant select on hr.emp_restricted to hr_readonly;

Now we can simply grant this role to our read only user …

set verify off
accept pwd prompt 'Enter password for new user MIKE_RO : ' hide
create user mike_ro identified by &pwd;
grant create session, hr_readonly to mike_ro
/

…and this is a really short post…

Selecting from External Tables

Let’s just connect as MIKE_RO and confirm that all is well…

select region_id, region_name
from hr.regions
order by 1
/

REGION_ID REGION_NAME              
--------- -------------------------
        1 Europe                   
        2 Americas                 
        3 Asia                     
        4 Middle East and Africa  
        

…see, no problem. Let’s try the new external table…

…so, it looks like our read only user will need READ access on the directory. No biggie, it’s still “READ” only…

grant read on directory hr_files to mike_ro
/

It’s when we have this privilege and then attempt to access the external table again that things get interesting…

select *
from hr.countries_xt
/

Error starting at line : 1 in command -
select *
from hr.countries_xt
Error report -
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04074: no write access to directory object HR_FILES

The problem here is that the act of selecting from an external table will cause one or more files to be written (logfile, badfile, discardfile). Therefore, WRITE permissions are required on the directories to which these files will be written. In our case, the files are all written to HR_FILES so…

grant write on directory hr_files to mike_ro
/

…means that we can now select from the table :

select *
from hr.countries_xt
/

ISO_CODE                  COUNTRY_NAME             
------------------------- -------------------------
FR                        FRANCE                   
HR                        CROATIA                  
BE                        BELGIUM                  
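
Incidentally, if granting WRITE on the directory seems a step too far, the table owner can suppress these files altogether. A sketch of the idea – this assumes the record and field settings from the original DDL above, re-stated with NOBADFILE, NOLOGFILE and NODISCARDFILE added :

alter table countries_xt
    access parameters (
        records delimited by newline
        nobadfile nologfile nodiscardfile
        fields terminated by ','
        (
            iso_code char(2),
            country_name char(100)
        )
    )
/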

NOTE – it’s always an outstandingly good idea to check the privileges a user already has on a Directory object before granting more.
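
Directory grants are listed in DBA_TAB_PRIVS, with the directory name appearing as the TABLE_NAME, so a quick check might look like :

select grantee, privilege
from dba_tab_privs
where table_name = 'HR_FILES'
/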

“That’s not a problem”, I hear you say, “after all, our read only user won’t have EXECUTE permissions on anything.” Well…

Changing context values in a session

Now let’s have a look at our new view. Connected as HR, we can see that the context value must be set for any rows to be returned…

select hr.hr_deptno_ctx.get_ctx_val 
from dual
/

GET_CTX_VAL
-----------


select count(*) 
from hr.emp_restricted
/

 COUNT(*)
----------
         0
         
exec hr.hr_deptno_ctx.set_ctx_val(60)

PL/SQL procedure successfully completed.

select count(*)
from hr.emp_restricted
/

  COUNT(*)
----------
         5

So, in order for our Read-Only account to be able to “read” this view, it will require execute privileges on a package.
Now, you might wonder why we can’t simply grant execute on DBMS_SESSION, which is the package called by HR_DEPTNO_CTX to set the context value.
The answer can be found in the Oracle docs for the SET_CONTEXT procedure in DBMS_SESSION which state :

“The caller of SET_CONTEXT must be in the calling stack of a procedure that has been associated to the context namespace through a CREATE CONTEXT statement. ”

Therefore, our read only user needs to be granted execute on the HR package itself :

grant execute on hr.hr_deptno_ctx to mike_ro
/
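
With that grant in place, MIKE_RO can set the context for his session and then read the view, reprising the earlier example :

exec hr.hr_deptno_ctx.set_ctx_val(60)

select count(*)
from hr.emp_restricted
/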

Viewing stored source code

The next requirement for our read only user is to be able to see the source code that’s actually in the data dictionary (as opposed to, say, in a source control repository somewhere).
Yes, I know that your Source Control Repo master/trunk/main branch should be a faithful copy of your production code. However, bitter experience to the contrary leaves me reluctant to make this assumption. It’s much safer to see the actual code that’s being executed, rather than what it probably is.

Unfortunately, as things stand, we do not even have access to DBA_SOURCE.

At this point though, we can give our auditor a moment’s respite : we only need to grant the SELECT_CATALOG_ROLE role.

grant select_catalog_role to mike_ro
/

This now enables our Read Only account to view the source for HR’s objects…

set lines 130
set heading off
set feedback off
select text
from dba_source
where owner = 'HR'
and name = 'SECURE_DML'
and type = 'PROCEDURE'
order by line
/


PROCEDURE secure_dml
IS
BEGIN
  IF TO_CHAR (SYSDATE, 'HH24:MI') NOT BETWEEN '08:00' AND '18:00'
        OR TO_CHAR (SYSDATE, 'DY') IN ('SAT', 'SUN') THEN
	RAISE_APPLICATION_ERROR (-20205, 
		'You may only make changes during normal office hours');  
  END IF;
END secure_dml;

An additional benefit of this role is that it gives us access to the dynamic performance views, which in turn allows us to do some performance investigation.
For example, we can now run the Session Monitor in SQL Developer.
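We can also query V$SESSION directly – a quick, illustrative example :

select sid, serial#, username, status
from v$session
where username is not null
/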

Conclusion

As we’ve demonstrated, the definition of Read Only access to an application running on an Oracle database is usually dependent on the features being used in that application.
Hopefully our imaginary auditor now has some understanding of this and won’t get their SOX in a twist when they find out that the developers providing Level 3 support have these privileges.

You have chosen not to trust… – Citrix Receiver and SSL error 61 on Ubuntu

Fri, 2018-06-08 14:01

After months of trouble-free operation, Citrix Receiver decided to wreak some havoc one morning last week.
Connecting to work (using Firefox on Ubuntu and Citrix Receiver for Linux 13.8) was trouble free as usual.
However, when I then tried to select a PC to remote into, Citrix informed me that …

“You have chosen not to trust Entrust Root Certification Authority – G2. SSL error 61”

At that point, I reflected that what I knew about Citrix and SSL certificates would fit on the back of a fag packet.
After some intensive “research” it should now fit into a short blog post…

Citrix Receiver for Linux has a bug…again

A quick internet search led me to the Citrix Support site.
Their advice (perhaps understandably) was to upgrade Citrix Receiver to the latest version.
After some fiddling around – steps for installing on Ubuntu can be found here if you’re interested – I had the latest version.
Now, I’m not sure whether it’s just an unfortunate coincidence or whether I’ve upset the good people at Citrix, but it seems that whenever I install the latest version, there is a Linux specific bug.
So, after about half an hour of messing about, I was back where I started with the same error and the same version of Citrix Receiver.

Attempting to connect via Chrome gave exactly the same result.

Re-installing my certificates

Re-reading the error message, I noticed that it was a specific certificate that was the problem.
Running a search, I was able to confirm that the said certificate is available from Entrust.

Once I’d downloaded the certificate in question, it was simply a matter of putting it where Citrix Receiver could see it.
So…

sudo cp entrust_g2_ca.cer /opt/Citrix/ICAClient/keystore/cacerts/.

Magically, Citrix Receiver was happy again and I was able to connect.

Some points to note for next time

A colleague of mine had the same issue. He is running Debian.
His solution was to :

– delete the files in the Citrix Receiver certs directory :

/opt/Citrix/ICAClient/keystore/cacerts/

– create symlinks in that directory pointing to the certificates in

/etc/ssl/certs

If you’re reading this because you have a similar problem and the first solution doesn’t work, then perhaps this approach is worth a try (back up the certificate files before deleting them, though !)
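In shell terms, his approach amounts to something like the following sketch (the paths assume the standard ICAClient install location used above) :

# back up the bundled Citrix certificates first
sudo cp -r /opt/Citrix/ICAClient/keystore/cacerts ~/citrix_cacerts_backup

# remove them...
sudo rm /opt/Citrix/ICAClient/keystore/cacerts/*

# ...and symlink the system certificate store in their place
sudo ln -s /etc/ssl/certs/* /opt/Citrix/ICAClient/keystore/cacerts/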
I’m still not sure of the root cause of this issue, although I suspect it may be something to do with browser updates.
On the plus side I’ve avoided having to drag myself into the office…for now.
