Development
Free Oracle Cloud: 15. The request could not be mapped to any database
At some point you might face the following message: "The request could not be mapped to any database":
Oracle monitors usage on your Always Free account, and whenever it finds there has been no activity for 7 days, it stops your database automatically. The data in the database is preserved, but the database won't be accessible anymore.
To fix the issue, log in to your Oracle Cloud Account and go to your Autonomous Database:
You will see the database is in a stopped state. Click the Start button:
The state will change to Starting...
And after a minute it becomes available again:
The above behavior is described in the end-user documentation:
- After being inactive for 7 days, the database will be stopped automatically, preserving its stored data. Inactivity measurements leading up to 7 days are based on database connections. Successfully making a SQL*Net or HTTPS connection resets these measurements to zero.
- A database that is automatically or manually stopped and stays inactive for 90 days, cumulative, may be reclaimed and permanently deleted. Inactivity measurements leading up to 90 days are based on the database being inactive or in the stopped state. Starting a stopped database resets these measurements to zero. Start an Always Free Autonomous Database by clicking the Start button on the Oracle Cloud Infrastructure console. Start a stopped Always Free Autonomous Database before 90 days to avoid losing access to its data.
But this week some people complained that although they had activity, their database was stopped anyway. I witnessed the same behavior in my account, so I reached out to Oracle and they confirmed that their code to identify inactivity is not properly accounting for APEX/ORDS usage. They are already working on a fix, which they hope to apply very soon. I will update this post when I get confirmation the fix is in the data centers.
Native Oracle DB JSON functionality as alternative for using cursor() in AOP (and APEX_JSON)
In the case of APEX Office Print (AOP), we made it super simple to communicate with the AOP server from the database through our PL/SQL API. You just have to enter a SQL statement and the AOP PL/SQL API, which uses APEX_JSON behind the scenes, generates the necessary JSON that the AOP Server understands.
Here's an example of the Order data in JSON: a customer with multiple orders and multiple order lines:
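The sample below sketches the shape of that JSON; the structure follows the queries further down, but the customer and product values themselves are illustrative:

```json
[
  {
    "filename": "file1",
    "data": [
      {
        "cust_first_name": "John",
        "cust_last_name": "Dulles",
        "cust_city": "Sterling",
        "orders": [
          {
            "order_total": 2380,
            "order_name": "Order 1",
            "order_lines": [
              {
                "product_name": "Business Shirt",
                "quantity": 3,
                "unit_price": 50
              }
            ]
          }
        ]
      }
    ]
  }
]
```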
As we are living in the Oracle database, we have to generate this JSON. The data is coming from different tables and is hierarchical. In SQL you can create hierarchical data by using the cursor() syntax.
Here's an example of the SQL statement that you would typically use in AOP (the cursor highlighted in red):
select
  'file1' as "filename",
  cursor(
    select
      c.cust_first_name as "cust_first_name",
      c.cust_last_name as "cust_last_name",
      c.cust_city as "cust_city",
      cursor(
        select
          o.order_total as "order_total",
          'Order ' || rownum as "order_name",
          cursor(
            select
              p.product_name as "product_name",
              i.quantity as "quantity",
              i.unit_price as "unit_price"
            from demo_order_items i, demo_product_info p
            where o.order_id = i.order_id
            and i.product_id = p.product_id
          ) "order_lines"
        from demo_orders o
        where c.customer_id = o.customer_id
      ) "orders"
    from demo_customers c
    where customer_id = 1
  ) "data"
from dual
From AOP 19.3 onwards, the AOP PL/SQL API not only supports this cursor() syntax but also the native JSON functionality of the Oracle Database (version 12c and upwards).
The query above can also be written as the following using JSON support in the Oracle Database:
select
  json_arrayagg(
    json_object(
      'filename' value 'file1',
      'data' value (
        select
          json_arrayagg(
            json_object(
              'cust_first_name' value c.cust_first_name,
              'cust_last_name' value c.cust_last_name,
              'cust_city' value c.cust_city,
              'orders' value (
                select
                  json_arrayagg(
                    json_object(
                      'order_total' value o.order_total,
                      'order_name' value 'Order ' || rownum,
                      'order_lines' value (
                        select
                          json_arrayagg(
                            json_object(
                              'product_name' value p.product_name,
                              'quantity' value i.quantity,
                              'unit_price' value i.unit_price
                            )
                          returning clob)
                        from demo_order_items i, demo_product_info p
                        where o.order_id = i.order_id
                        and i.product_id = p.product_id
                      )
                    )
                  returning clob)
                from demo_orders o
                where o.customer_id = c.customer_id
              )
            )
          returning clob)
        from demo_customers c
        where c.customer_id = 1
      )
    )
  returning clob) as aop_json
from dual
You have to get used to this syntax and think a bit differently. Unlike the cursor syntax, where you define the column first and give it an alias, with the JSON functions you define the JSON object and attributes first and then map them to the correct columns.
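To see that difference in a minimal form, here is the same one-level query in both styles (using the demo_customers table from the examples above):

```sql
-- cursor() style: the column comes first, its alias becomes the JSON attribute
select cursor(
         select c.cust_city as "cust_city"
         from demo_customers c
         where c.customer_id = 1
       ) "data"
from dual;

-- native JSON style: the attribute name comes first, then the column it maps to
select json_object('cust_city' value c.cust_city) as data
from demo_customers c
where c.customer_id = 1;
```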
I find the cursor syntax really elegant; especially in combination with APEX_JSON, it's a really cool solution to generate the JSON you need. But I guess it's a personal choice which one you prefer, and I must admit, the more I use the native JSON way, the more I like it. If performance is important, you most likely want to use native database functionality as much as possible, but I go into more detail on that further in this post. Lino also found an issue with the cursor syntax in Oracle Database 19c, so if you are on that database release you want to look at the support document.
Before I move on with my test case, if you need more info on JSON in the database: Carsten did a nice blog post about parsing JSON in APEX, and although it's about parsing JSON and this blog post is more about generating JSON, the conclusions are similar. You can read more about APEX_JSON and the native JSON database functions in Tim's write-up on Oracle-Base.
As I was interested in the performance of both implementations, I ran a few test cases. There are different ways to test performance, e.g. dbms_profiler, Method R Workbench, tracing, timing the results, ... Below I use Tom Kyte's script to compare two PL/SQL implementations. The interesting thing about the script is that it not only compares timings but also latches, which give you an idea of how hard the database has to work. You can download it from AskTom under the Resources section:
Here's my test script:
declare
  l_sql             clob;
  l_return          blob;
  l_output_filename varchar2(100);
  l_runs            number(5) := 1;
begin
  runStats_pkg.rs_start;

  -- sql example with cursor
  for i in 1..l_runs
  loop
    l_output_filename := 'cursor';
    l_sql := q'[
      select
        'file1' as "filename",
        cursor(
          select
            c.cust_first_name as "cust_first_name",
            c.cust_last_name as "cust_last_name",
            c.cust_city as "cust_city"
          from demo_customers c
          where c.customer_id = 1
        ) as "data"
      from dual
    ]';
    l_return := aop_api_pkg.plsql_call_to_aop (
                  p_data_type        => aop_api_pkg.c_source_type_sql,
                  p_data_source      => l_sql,
                  p_template_type    => aop_api_pkg.c_source_type_aop_template,
                  p_output_type      => 'docx',
                  p_output_filename  => l_output_filename,
                  p_aop_remote_debug => aop_api_pkg.c_debug_local);
  end loop;

  runStats_pkg.rs_middle;

  -- sql example with native JSON database functionality
  for i in 1..l_runs
  loop
    l_output_filename := 'native_json';
    l_sql := q'[
      select
        json_arrayagg(
          json_object(
            'filename' value 'file1',
            'data' value (
              select
                json_arrayagg(
                  json_object(
                    'cust_first_name' value c.cust_first_name,
                    'cust_last_name' value c.cust_last_name,
                    'cust_city' value c.cust_city
                  )
                )
              from demo_customers c
              where c.customer_id = 1
            )
          )
        ) as aop_json
      from dual
    ]';
    l_return := aop_api_pkg.plsql_call_to_aop (
                  p_data_type        => aop_api_pkg.c_source_type_sql,
                  p_data_source      => l_sql,
                  p_template_type    => aop_api_pkg.c_source_type_aop_template,
                  p_output_type      => 'docx',
                  p_output_filename  => l_output_filename,
                  p_aop_remote_debug => aop_api_pkg.c_debug_local);
  end loop;

  runStats_pkg.rs_stop;
end;
/
I ran the script (with different l_runs settings) a few times on my 18c database, and with the above use case on my system, the native JSON implementation consistently outperformed the cursor (and APEX_JSON) implementation.
Run1 ran in 3 cpu hsecs
Run2 ran in 2 cpu hsecs
run 1 ran in 150% of the time
Name Run1 Run2 Diff
STAT...HSC Heap Segment Block 40 41 1
STAT...Heap Segment Array Inse 40 41 1
STAT...Elapsed Time 4 3 -1
STAT...CPU used by this sessio 4 3 -1
STAT...redo entries 40 41 1
STAT...non-idle wait time 0 1 1
LATCH.simulator hash latch 27 26 -1
STAT...non-idle wait count 13 12 -1
STAT...consistent gets examina 41 43 2
LATCH.redo allocation 1 3 2
STAT...active txn count during 21 23 2
STAT...cleanout - number of kt 21 23 2
LATCH.transaction allocation 1 3 2
LATCH.In memory undo latch 1 3 2
LATCH.JS Sh mem access 1 3 2
STAT...consistent gets examina 41 43 2
LATCH.keiut hash table modific 3 0 -3
STAT...calls to kcmgcs 64 69 5
STAT...dirty buffers inspected 6 0 -6
STAT...workarea executions - o 2 12 10
STAT...free buffer requested 71 52 -19
STAT...lob writes unaligned 80 60 -20
STAT...lob writes 80 60 -20
STAT...sorts (rows) 0 20 20
STAT...execute count 91 71 -20
STAT...sorts (memory) 0 20 20
LATCH.active service list 0 25 25
STAT...consistent gets 183 156 -27
STAT...consistent gets from ca 183 156 -27
STAT...consistent gets pin (fa 142 113 -29
STAT...consistent gets pin 142 113 -29
STAT...lob reads 160 130 -30
LATCH.JS queue state obj latch 0 42 42
LATCH.object queue header oper 151 103 -48
STAT...workarea memory allocat 66 -6 -72
STAT...db block changes 431 358 -73
STAT...consistent changes 390 315 -75
LATCH.parameter table manageme 80 0 -80
STAT...undo change vector size 8,748 8,832 84
LATCH.enqueue hash chains 1 88 87
STAT...parse count (total) 100 10 -90
STAT...session cursor cache hi 171 71 -100
STAT...opened cursors cumulati 171 71 -100
STAT...free buffer inspected 126 0 -126
STAT...calls to get snapshot s 470 330 -140
STAT...db block gets from cach 958 744 -214
STAT...hot buffers moved to he 220 0 -220
STAT...redo size 12,016 12,248 232
STAT...db block gets 1,039 806 -233
STAT...db block gets from cach 1,029 796 -233
STAT...session logical reads 1,222 962 -260
STAT...file io wait time 5,865 6,279 414
STAT...recursive calls 561 131 -430
LATCH.cache buffers chains 3,224 2,521 -703
STAT...session uga memory 196,456 0 -196,456
STAT...session pga memory 1,572,864 0 -1,572,864
STAT...logical read bytes from 9,928,704 7,798,784 -2,129,920
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
3,853 3,180 -673 121.16%
There are many possible variations of this test: using bind variables, etc. It seems "logical" that a native DB implementation performs better than a combination of PL/SQL (APEX_JSON) and SQL (cursor), but I always recommend you run the test in your own environment. What is true today might be different tomorrow, and a lot comes into play, so if there's one thing I learned from Tom Kyte, it's don't take things for granted, but test in your unique situation.
So, in real life using AOP, will you see a big difference? It depends on the complexity of your SQL statement and data, how many times you call a report, etc., but my guess is that in most cases there won't be much of a difference in user experience.
A simple test would be to do "set timing on" and compare the implementations:
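In SQLcl or SQL*Plus that could look like this (a sketch; the full statements are the ones shown earlier in this post):

```sql
set timing on

-- cursor() / APEX_JSON version
select 'file1' as "filename", cursor( ... ) "data" from dual;

-- native JSON version
select json_arrayagg( json_object( ... ) returning clob) as aop_json from dual;

set timing off
```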
Or if you are using AOP on an Oracle APEX page, you can run your APEX page in Debug mode and you will see exactly how long the generation of the JSON took for the data part:
Happy JSON'ing :)
Free Oracle Cloud: 14. Your Oracle Cloud Free Trial has expired (but FREE still running)
Today I got an email that my Oracle Cloud account had expired. While I have an Always FREE Oracle Cloud account, when I signed up I also got some extra credits that lasted for a month. Those credits are no longer valid.
When you log in to your Oracle Cloud Dashboard you will get a notification on top too, but nothing to worry about.
It has some consequences though: on the menu, some options are grayed out. The one I actually use is Email Delivery, which seems to be grayed out too, although normally you should be able to send 1,000 emails per month. So maybe grayed out also means not full service.
When I checked it out, it said it's part of the paid plan. I remember some discussions at Oracle Open World where they recommended upgrading to a paid account; as long as you only use the Always FREE services, you are not charged.
So I decided to upgrade to a Paid account: Pay As You Go:
You have to provide a credit card, but that was a bit of an issue for me. Apparently, Safari is not really working well with this screen, so I switched to Chrome. The next hiccup I had was when I added my AMEX card... it said it was an invalid card.
Then I used my VISA card and that seemed to work well:
Click the Start Paid Account:
Finally, it will say your payment method will be reviewed and after that you are live.
It wasn't immediately clear to me that I had to wait for the confirmation email, but when I went to Payment Method again, I saw the review was still in progress:
And a few minutes later I got the email that my account was upgraded:
When you look at your Oracle Cloud Dashboard, there's a cost calculator, so you see how much you have to pay. As long as I use the Always FREE components, I expect the amount to stay 0 :)
But the nice thing now is that you have access to all of Oracle Cloud again (e.g. Email Delivery).
OGB Appreciation Day: add an error in a PL/SQL Process to the inline notification in Oracle APEX
Before I give my tip on how to add an error message from your PL/SQL code in your Page Process to a notification message in Oracle APEX, I want to start by thanking some people.
What keeps me going are a few things:
- The innovations of technology and more specifically the Oracle Database, ORDS, and Oracle APEX. I want to thank all the developers and the people behind those products. They allow me to help other people with the tools they create and keep on learning about the new features that are released.
- I want to thank the fantastic #orclapex (Oracle APEX) and Groundbreakers community. I believe we are a great example of how people help and support each other and are motivated to bring the technology further. Over time I got to know a lot of people, many I consider now friends.
- I want to thank you because you read this, show your appreciation and push me forward to share more. I'm passionate about the technology I use. I love helping people with my skill set of developing software and while I learn, share my knowledge through this blog.
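And now the tip itself: in a PL/SQL page process you can add an error to the inline notification with the APEX_ERROR API. A minimal sketch (the condition and message are placeholders):

```sql
begin
  -- your page process logic here
  if 1 = 1 then  -- placeholder for your real error condition
    apex_error.add_error(
      p_message          => 'Something went wrong in this process.',
      p_display_location => apex_error.c_inline_in_notification);
  end if;
end;
```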
Free Oracle Cloud: 13. Final things to take away
By now we have seen how you can set up the different components from the Always Free Oracle Cloud.
During Oracle Open World I talked to the people behind the Always Free Oracle Cloud, and they told me that when your account is inactive for a specified amount of time (I forgot if it's 5 days, a week, or more), your instance is backed up to the Object Storage. You can see it as a VM being put in standby or halted and saved to disk. When you need it again, it can be restored, but that takes time and can be annoying when you don't know this is what is happening.
If you have a production app running in the Free Oracle Cloud, be sure people use your app at least once inside the window Oracle foresees. Maybe in the future, Oracle could provide a setting where we as developers can specify the (in-)activity window.
As a reminder, here's what the Always Free tier includes:
- 2 Autonomous Databases, each with 1 OCPU and 20 GB storage
- 2 Compute virtual machines, each with 1/8 OCPU and 1 GB memory
- Storage: 2 Block Volumes, 100 GB total. 10 GB Object Storage. 10 GB Archive Storage.
- Additional Services: Load Balancer, 1 instance, 10 Mbps bandwidth. Monitoring, 500 million ingestion data points, 1 billion retrieval data points. Notifications, 1 million delivery options per month, 1,000 emails sent per month. Outbound Data Transfer, 10 TB per month.
Free Oracle Cloud: 12. Create a 2nd Compute Instance and a Load Balancer
In my blog post Create a VM Instance (Compute Cloud) we created a VM instance in the Free Oracle Cloud. The cool thing is that you get two VMs for free. In this post, we will set up the other always free compute instance.
Just like when we created our first instance, hit the Create a VM instance:
Give your instance a name, and instead of just hitting the Create button, this time click Show Shape, Network and Storage Options first:
Clicking the Create button will show that your instance is being provisioned.
When you go back to the overview you should see both of your Always Free Compute instances:
Clicking on the name, you will get the details. This screenshot shows what it looks like when you don't specify a public IP address.
To access that machine, as it doesn't have a public IP, I connected to my first instance and from there, as I am on the subnet, I can connect to the Private IP Address:
An alternative to pointing a URL directly at your VM instance is to front it with a Load Balancer.
Which brings us to the Load Balancer topic. With the Always Free Oracle Cloud, we also get a Load Balancer for free. There are different use cases for using a Load Balancer, but here are my own reasons why I have used a Load Balancer before:
- Distribute the traffic automatically over different machines. For example, when you use our APEX Office Print (AOP) Cloud you actually hit our load balancer; behind the load balancer we have two to five different machines. It's not only to handle the large number of prints we get, it also makes our lives easier when we want to upgrade without downtime. We upgrade one clone instance, and when done, new machines are brought online and old ones are shut down. We patch our own service with zero downtime.
- The Load Balancer has the SSL certificate and handles the HTTPS requests while the backend servers have HTTP.
- On a Load Balancer, you have integrated health checks, so you can be warned when things go wrong, even when there's only one server behind the Load Balancer.
So let's get started setting up a Load Balancer in the Oracle Cloud:
Click on Networking > Load Balancers:
Click the Create Load Balancer button:
It will ask for a name and a type. For the Always Free instance, use Micro with Maximum Total Bandwidth.
By default Small is selected, so don't forget to change it:
Next you want to add a Backend to this Load Balancer, so click the Add Backends button:
In the pop-up you can select the instances you want to put behind this Load Balancer:
Further down the screen, you can select a Health Check Policy:
In the next step, you can upload the SSL certificate, in case you want the Load Balancer to be accessible through HTTPS. You can also choose to just configure the Load Balancer for HTTP (which I don't recommend):
Hit the Create Load Balancer and you will get an overview that the Load Balancer is being created:
Instead of putting the IP Address of your instance directly in the DNS of your domain name, you put the IP Address of the Load Balancer in.
A Load Balancer can do much more: you can have different rules, SSL tunneling, etc. You can read more about that in the online documentation.
Hopefully, you now know how to set up a second compute instance and have an idea of what a Load Balancer can do for you.
We are almost done with this series... but you definitely want to read the next blog post, which is the last one where I give some important information to keep your Always Free instance running.
Free Oracle Cloud: 11. Sending Emails with APEX_MAIL on ATP
In this post, we will configure the Oracle Cloud to support our instances, databases and Oracle APEX to send out emails. In my blog post 5. Setup APEX in ATP and create first APEX app, I initially said you can't use APEX_MAIL in APEX on ATP, but I was wrong, so a few days after my post I updated it to point you to the documentation with the configuration steps you have to do to make it work.
The reason I thought you couldn't use APEX_MAIL was that during my tests, sending emails failed. I hadn't read the documentation ;) In this post, I will share how I got it to work after all.
The first thing you have to do is create an SMTP Credential for your user. You do that by logging in to your Oracle Cloud account, going to Identity > Users and selecting your user:

Click the SMTP Credentials in the left menu:
Hit the Generate SMTP Credentials button, and give it a name:
On the next screen, you will see the USERNAME and PASSWORD. Take note of these, as you won't be able to see them again. Next, connect to your ATP database and set the SMTP parameters in APEX with APEX_INSTANCE_ADMIN:
BEGIN
APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_HOST_ADDRESS', 'smtp.us-ashburn-1.oraclecloud.com');
APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_USERNAME', 'ocid1.user.oc1..xxxxxxxxxxx@ocid1.tenancy.oc1..xxxxxxx');
APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_PASSWORD', 'Fxxxx');
COMMIT;
END;
/
Here is a screenshot when running:
Log in to Oracle APEX, go to SQL Workshop and try to send out emails:
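For example, a minimal test block (the email addresses are placeholders):

```sql
begin
  apex_mail.send(
    p_from => 'you@yourdomain.com',
    p_to   => 'someone@example.com',
    p_subj => 'Test mail from ATP',
    p_body => 'If you can read this, APEX_MAIL works!');
  apex_mail.push_queue;
end;
```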
It says statement processed, but when you query the APEX_MAIL_QUEUE, you will see it's still stuck with an error ORA-29278: SMTP transient error: 471 Authorization failed:
There's one more step you have to do: specify the email addresses you want to allow to send emails from. Go in your Oracle Cloud Account Dashboard to Email Delivery, click the Email Approved Senders link and hit the Create Approved Sender button.

In the overview you will see all allowed email addresses:
When we try to send again and check the APEX_MAIL_LOG, we see the emails are effectively sent:
That's it, you can now send emails out of your APEX apps :)
We are almost done with the series. In the next post we will create a second compute instance and set up a Load Balancer.
Free Oracle Cloud: 10. Running SQLcl and Datapump from the Compute Instance (VM) to ATP
In this post, we will install and use SQLcl and Datapump from the Compute Instance (VM) connecting to our Oracle Database in the Autonomous Transaction Processing (ATP) Cloud.
Although most of the time I use SQL Developer to connect to the database, I find it important to be able to use command-line tools too, as they are what you can automate, and they are really fast.
In the previous post, we installed the command line tools of the Oracle Cloud on our own machine, but for the Oracle Tools, I prefer to install them on our Compute Instance in the Cloud. Especially when we want to automate something, it's easier to do this from another machine in the cloud. It also makes it easier to follow as we only have to focus on how to install the Oracle Tools on Linux.
Oracle Instant Client
In order to connect to an Oracle database from a machine, we will use the Oracle Instant Client software. You can download the software for the different operating systems, but as our VM is running Oracle Linux we can install it with just a few commands:
First, configure yum so it knows where to find Oracle software:
yum install oracle-release-el7
Next, we can search for the Oracle Instant Client version we need:
yum search oracle-instant
Then install the version you need; the tools package also contains Data Pump and SQL*Loader:
yum install oracle-instantclient18.5-tools.x86_64
As a last step we set some environment variables:
export PATH=/usr/lib/oracle/18.5/client64/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib/oracle/18.5/client64/lib
export TNS_ADMIN=/usr/lib/oracle/18.5/client64/lib/network/admin
Before we move on to installing SQLcl, make sure you still have the credentials (wallet) file we used when connecting with SQL Developer to our database. Just like with SQL Developer, we also need this with SQLcl to connect to our database. As a reminder here's the screenshot I'm talking about:
Upload both the SQLcl and Credentials zip file to the Compute Instance (VM):
ssh -i .ssh/oraclecloud opc@132.145.215.55
yum install java
ln -s /opt/sqlcl/bin/sql /usr/lib/oracle/18.5/client64/bin/sql
sql admin@dbdimi_high
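Pieced together, the steps on the VM could look like this (the zip file names and target locations are assumptions; the IP address is the one from earlier in this series):

```shell
# from your own machine: upload SQLcl and the wallet to the VM
scp -i .ssh/oraclecloud sqlcl.zip wallet_dbdimi.zip opc@132.145.215.55:~

# on the VM: install java, unzip SQLcl and extract the wallet into TNS_ADMIN
ssh -i .ssh/oraclecloud opc@132.145.215.55
sudo yum install java
sudo unzip sqlcl.zip -d /opt
sudo unzip wallet_dbdimi.zip -d /usr/lib/oracle/18.5/client64/lib/network/admin

# make the sql command available on the PATH and connect
sudo ln -s /opt/sqlcl/bin/sql /usr/lib/oracle/18.5/client64/bin/sql
sql admin@dbdimi_high
```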
There we go... we can connect from our VM to our ATP database.
The next thing we want to do is export the data from our ATP database. We will use Datapump that came with the installation of the tools.
Run the command to export the schema CLOUD:
expdp admin@dbdimi_high \
exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link \
data_options=group_partition_table_data \
parallel=1 \
schemas=cloud \
dumpfile=export%u.dmp
So where did this export go? To the default DATA_PUMP_DIR directory, which we don't have direct access to... but we can list the files in that directory with:
SELECT * FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');
Remember my previous blog post about the Object Storage, in which we set up a Backups bucket?
Oracle allows you to connect your Object Storage to your ATP database and that is exactly what we will do further on :)
We will use the same user we created earlier for CLI. In order to connect to ATP we need to set up an Auth Token. Go to the User Details of cliUser and click the Auth Tokens:
Click the Generate Token button:
There's the token... you only see it once, so make sure to copy it:
Next, connect to your ATP database and run the script to add the credentials to the ATP database:
begin
dbms_cloud.create_credential(
credential_name => 'DEF_CRED_NAME'
, username => 'cliUser'
, password => 'Frx}R9lD0O}dIgZRGs{:'
);
end;
/
Now that the DBMS_CLOUD package has credentials, we can do other calls with this package.
To add the Datapump export files to the Object Storage, we can use the PUT_OBJECT procedure.
I created a small script to take all the files from the DATA_PUMP_DIR and put them in the backups Bucket in the Object Storage:
begin
for r in (select object_name, bytes
from dbms_cloud.list_files('DATA_PUMP_DIR'))
loop
dbms_cloud.put_object(credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/id9u4qbhnjxj/b/backups/o/'||r.object_name,
directory_name => 'DATA_PUMP_DIR',
file_name => r.object_name);
end loop;
end;
/
And when we check our bucket, we see the Datapump export files! Yay!
We also want to export our Oracle APEX apps. In some projects, I use the APEXExport utility, but now we will use SQLcl to export our APEX app 101:
apex export 101
In real life I typically create a few scripts which I can run one-by-one or combined in a general backup script. The script will export the Oracle schemas, the APEX apps and save the files to another location, in our case the Object Storage.
vi make_backup.sh
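The content of such a script is up to you; a minimal sketch (the schema, connect string, app id and file names are assumptions based on the examples above) could be:

```shell
#!/bin/bash
# export the CLOUD schema with Data Pump (connects through the wallet)
expdp admin@dbdimi_high schemas=cloud dumpfile=export%u.dmp

# export APEX application 101 with SQLcl
sql admin@dbdimi_high <<EOF
apex export 101
exit
EOF

# copy the dump files to the Object Storage by running the
# DBMS_CLOUD.PUT_OBJECT loop shown earlier, e.g. saved as put_objects.sql
sql admin@dbdimi_high @put_objects.sql
```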
You can schedule this script with crontab, for example, every day at 2AM:
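The crontab entry could look like this (the paths are an assumption):

```
0 2 * * * /home/opc/make_backup.sh >> /home/opc/make_backup.log 2>&1
```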
The above is just an example of what you can do to automate your backups. You have to decide how frequently you want to do those backups.
If you want to move your existing Oracle database and APEX apps to the Oracle Cloud, the steps are similar to above. You upload your Datapump export file to your Object Storage. Next, run the Data Pump Import with the dump file parameter set to the list of file URLs on your Cloud Object Storage and the credential parameter set to the name of the credential you created earlier. For example:
impdp admin/password@dbdimi_high \
directory=data_pump_dir \
credential=def_cred_name \
dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/adwc/b/adwc_user/o/export%u.dmp \
parallel=1 \
partition_options=merge \
transform=segment_attributes:n \
transform=dwcs_cvt_iots:y transform=constraint_use_default_index:y \
exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link
Free Oracle Cloud: 9. Setup Object Storage and use for File Share and Backups
In this post, we will look into how we can store and share files: with ourselves, with others, but also with the ATP database we set up earlier. For this purpose, the Oracle Cloud provides Object Storage, and we get 20 GB for free forever. This storage is also used when you want to make backups of the database and use Datapump to export and import data. In the previous blog post, when you followed the installation of the on-premises version of AOP, you actually connected to my Object Storage, as the AOP zip file is on it.
Now that we know the purpose of this Object Storage, let's get started setting it up.
Log in to the Oracle Cloud and navigate in the menu to Object Storage:
You arrive at the screen where you have to pick a compartment in the dropdown on the left. Compartments are used to organize your resources.
After selecting my compartment "dimi (root)", we get an overview of Buckets and we can create a new one by clicking on the Create Bucket button:
In the Create Bucket screen, you enter a name and the type of storage you want to add in that bucket.
First, we create an archive bucket, in which to store backups of our database:
In the next screen we create a standard bucket to share files with others:
We can now see we have two buckets: apexofficeprint (standard storage) and backups (archive storage). Note the 3 dots on the right of the bucket, which give you the ability to view details or perform some other actions:
As I wanted to share the AOP zip file in my object storage, in the menu I click the Edit Visibility link to make the bucket accessible for everybody (public).
Next, we will add a file to the bucket. Click the View Bucket Details:
We get the details of the Bucket and see an overview of the Objects in this Bucket:
Click the Upload Objects button:
Drag a file in the Drop files area and hit the Upload Objects button:
We arrive back in the overview screen:
Just like in the Buckets overview, in the Objects Overview next to the objects you have the 3 dots on the right to perform actions on the object:
Click the View Object Details and you find next to some info, the URL where your object is accessible from:
So to conclude: Object Storage consists of Buckets, which live in a certain Compartment, and a Bucket consists of Objects. (Object Storage => Compartments => Buckets => Objects)
Above we used the Oracle Cloud website to work with our Object Storage, but I'm also interested in doing this from the command line. For example, we automated our build process for when we make a new release of APEX Office Print. In the final step, we want to upload the new zip file to the cloud. When we can do everything from the command line, we can script and automate it.
You can follow the documentation on how to install the Command Line Interface (CLI).
I did the following on OSX (from Terminal) to install CLI:
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
The install script asked a few questions (where to install etc.) but I used all defaults and at the end, all was installed fine and I got the following screen:
At the end, I moved everything from the bin directory into my home directory, so it's easier to call the CLI:
mv bin/* .
Next, configure CLI. We have to let the CLI know who we are. This is done by creating a config file and specifying the user CLI can connect as. Here we go:
./oci setup keys
Go to the Oracle Cloud website and set up a User based on the keys you created with CLI.
Go to Identity - Users in the menu and click the Create User button:
Enter a new user; as I wanted to create a user specific to CLI, I called it cliUser. Note that the email address you provide needs to be unique. I initially used the same email, but that didn't work.
Now we will need to add the key to this user. Click the View User Details button in the action menu of the user:
Click the Add Public Key button:
And copy the content of the ~/.oci/oci_api_key_public.pem file:
Next, we want to give this user Administrator privileges. To do so, go to Groups:
Click the Add User to Group button:
Select Administrators and hit the Add button:
Now, this user is all set to work with the Command Line Interface (CLI):
We will now configure the CLI to connect as the cliUser we just created.
In your terminal, run the following in your home directory:
./oci setup config
For the user OCID, go to Identity > Users and open the details of the user. Next to OCID, click the Copy link:
The tenancy OCID you find in Administration > Tenancy Details:

The region you find when you scroll down to the Regions section of the Tenancy screen; yours will be highlighted in green.
The final question of the wizard asks if you want to create a new RSA key; answer No and point to the key file (and not the directory, like I first did). This is how it looks after answering the questions:
cat .oci/config
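For reference, the config file looks roughly like this. All values below are placeholders; your OCIDs, fingerprint, key path and region come from the steps above:

```ini
[DEFAULT]
user=ocid1.user.oc1..<your-user-ocid>
fingerprint=11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00
key_file=/Users/you/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your-tenancy-ocid>
region=us-ashburn-1
```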
Before we can run the command, we have to know the OCID of the Compartment. Here are the steps to get it. Go to Identity > Compartments:
In the Details you find the OCID:
Now that we have everything we need, we can run the command to create a bucket called clibucket:
./oci os bucket create -c ocid1.tenancy.oc1..aaaaaaaakmf6mlauyaqmkkcikiuu2ckmklhffxf2weheu3qtfnsvcuzfuiuq --name clibucket
On success, we get JSON back with the details of the bucket; on failure, the error details come back in JSON format too.
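The JSON output lends itself to scripting. Here's a small sketch of pulling a single field out of such a response; the response below is shortened and hypothetical (field names follow the shape of OCI Object Storage responses):

```shell
# Shortened, hypothetical response from "oci os bucket create":
response='{"data": {"name": "clibucket", "namespace": "id9u4qbhnjxj"}}'

# Extract the bucket name; jq is nicer if you have it (jq -r '.data.name'),
# but plain sed works everywhere:
echo "$response" | sed -n 's/.*"name": "\([^"]*\)".*/\1/p'
```

This prints just the bucket name, which you can feed into follow-up commands.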
Just to make sure the bucket is there, go to the Oracle Cloud website and check if you see the bucket we created with CLI:
To add an object to the bucket, you can do:
./oci os object put -bn clibucket --file test1.txt
Other commands I use more frequently:
# Get a list of the objects:
./oci os object list -bn clibucket
# Download a file called test1.txt and save it on your system as test2.txt
./oci os object get -bn clibucket --file test2.txt --name test1.txt
A useful resource to know which commands the CLI understands is in this documentation.
I did the above as an administrator, but if you want to read more about restricted use, you can read about that in Lawrence Gabriel's blog post.
I believe you now have a good understanding of Object Storage and how to work with it through the website or through the CLI. One of the reasons I mentioned was to use the Object Storage as a place for backups... let's look into that now.
Go to your Autonomous Database, select your database and go into the details. In the Resources section you find a Backups link:
Normally backups are automatically taken in ATP, but you can also create a manual backup:
When you create a manual backup, you have to specify the Object Storage bucket in which you want the backup to be saved. But when I tried to create a manual backup, it told me this option is not available in the Always Free Tier. It also says that restore is not possible, so I'm not sure what happens when you want to restore an automatically taken backup... for now, I'm not relying on those backups; in my next blog post I will tell you what I do for backups.
In the next blog post of this series, I will walk you through using the Object Storage with the ATP Oracle Database to export and import data.
Free Oracle Cloud: 8. Setup APEX Office Print (AOP) to export to PDF, Excel, Word, Powerpoint, HTML and Text
In the previous posts we set up our Always Free Oracle Cloud machine and an Autonomous Database with Oracle Application Express (APEX). In this post I want to show you how to get started with the popular printing and reporting engine, APEX Office Print (AOP). The AOP software makes it super easy to export your data into a nice looking PDF, a custom Excel file, a fancy Powerpoint or another output format of your choice, just the way you want it.
AOP is being used by many customers, even Oracle internally, to export their data in the format they want. The data can come from the database, a REST or GraphQL web service, or even components like the Interactive Report/Grid from Oracle APEX. Although AOP works with any technology, it is most known in the Oracle APEX community as it's the easiest and most integrated print engine for Oracle APEX. You create a template in DOCX, XLSX, PPTX, HTML or TEXT, specify the data source, and tell AOP in which format you want the output to be (PDF, Excel, Word, Powerpoint, HTML, Text) and AOP will do the rest! You can find more information in this presentation about AOP.
Christina Moore of Storm Petrel wrote me the following a few days ago: "We have a client in one of our systems who generates a 1,888 page invoice monthly (about 2,000 pages). The most recent invoice was $1.3M USD and took 384MB. AOP handles it brilliantly. Well done. I can’t email it to you for confidentiality reasons, but know it has multiple sections that are merged with your tool too." I love feedback on the usage of AOP and am amazed how creative people are when developing with AOP!
I use AOP in every project because exporting/printing is a requirement sooner or later and an essential part of my Oracle APEX apps. So I thought to write how to use this in the Oracle Cloud :)
Enter your email and hit Signup:
You will receive an email. Push the Confirm your email address button:
The browser will open where you can set a password for your account:
After hitting the Set Password button, you are logged in automatically and will see a Getting Started wizard:
Follow the wizard and you are all set! It should take less than 15 minutes :)
In short this is what the wizard will tell you:
- Download the AOP software and unzip the file
- Go to APEX > SQL Workshop > SQL Scripts > Upload and Run the file aop_db_pkg.sql which you find in the db folder. This will install the AOP PL/SQL API.
- Go to APEX > Your APP > Shared Components > Plug-ins and Import the APEX Plug-ins you find in the apex folder.
- Go to APEX > Your APP > Shared Components > Component Settings > APEX Office Print (AOP) and enter your API Key which you find in the Dashboard on the AOP site:
The Component Settings in your APEX app:
When you look closely at the previous screenshot of the Component Settings, look at the AOP URL.
The URL specifies where the AOP Server is running, which the AOP APEX Plug-in and AOP PL/SQL API communicate with. By default this is set to the AOP Cloud, so you don't have to setup an AOP Server in your own environment.
Although the AOP Cloud is really convenient, as it's maintained and supported by the APEX Office Print team, some customers prefer to run the AOP Server on their own machine, especially when data can't leave the datacenter.
So if you read on, I will walk you through setting up the AOP Server on your own Compute VM in the Oracle Cloud. Just be sure you have already installed the AOP Sample Application, plug-ins and Database Objects, if needed, as instructed in the Getting Started section above.
From a Terminal connect to your Oracle Cloud VM:
ssh -i ssh_key opc@public_ip
The first thing we do is change to the root user; as we want to install some supporting objects for AOP, it will be easier to do with the root user. Alternatively, you can prefix every command with sudo.
We logged in as the OPC user, to become the ROOT user we do:
sudo su
Unlike other reporting engines, the AOP software consists of only a couple of files and is installed in no time. We will download the software into the tmp folder on our machine and unpack it in /opt/aop:
cd /tmp
wget https://objectstorage.us-ashburn-1.oraclecloud.com/n/id9u4qbhnjxj/b/apexofficeprint/o/aop_free_oracle_cloud.zip
unzip aop_free_oracle_cloud.zip -d /opt/aop
That's it!! The AOP Server is installed!
To support PDF output, AOP relies on a 3rd party converter like MS Office or LibreOffice. Here are the steps to install LibreOffice:
yum install java-1.8.0-openjdk.x86_64
yum install cups.x86_64
wget http://ftp.rz.tu-bs.de/pub/mirror/tdf/tdf-pub/libreoffice/stable/6.2.7/rpm/x86_64/LibreOffice_6.2.7_Linux_x86-64_rpm.tar.gz
tar -xvf LibreOffice_6.2.7_Linux_x86-64_rpm.tar.gz
cd /tmp/LibreOffice_6.2.7.1_Linux_x86-64_rpm/RPMS/
yum localinstall *.rpm
ln -s /opt/libreoffice6.2/program/soffice /usr/sbin/soffice
LibreOffice is installed. To see if everything is fine you can run "soffice --version" and you should see something like this:
AOP comes with a built-in web server. When you start AOP you can define the port where AOP will listen to incoming requests. The default port is 8010. We will need to tell Linux this port can handle HTTP and HTTPS requests.
semanage port -a -t http_port_t -p tcp 8010
To start AOP on the default port do:
cd /
./opt/aop/v19.2.3/server/APEXOfficePrintLinux64 --enable_printlog &
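Starting the server with & ties it to your terminal session, so it may stop when you log out. A more robust alternative is a systemd service. This is only a sketch: the unit file name (/etc/systemd/system/aop.service) and the chosen options are my assumptions; the binary path matches the install location above:

```ini
# /etc/systemd/system/aop.service (hypothetical unit file name)
[Unit]
Description=APEX Office Print (AOP) Server
After=network.target

[Service]
ExecStart=/opt/aop/v19.2.3/server/APEXOfficePrintLinux64 --enable_printlog
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving the file, `systemctl daemon-reload` followed by `systemctl enable --now aop` would start AOP now and at every boot.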
You should see something like this:
Yay!! AOP is running.
AOP comes with a cool Web Editor. We will make this Editor available on our domain at dgielis.com/aop/. In order to do that, we will adapt Nginx to also act as a reverse proxy for the AOP Web Editor.
Here we go;
vi /etc/nginx/conf.d/dgielis.com.conf
And add following section:
location /aop/ {
proxy_pass http://127.0.0.1:8010/;
}
The server part of the config file becomes:
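For reference, the combined server block would look something like this sketch (domain and document root as in the web server post; only the new /aop/ location is added):

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name dgielis.com www.dgielis.com;

    root  /usr/share/nginx/html/dgielis.com;
    index index.html;

    # New: proxy the AOP Web Editor running on port 8010
    location /aop/ {
        proxy_pass http://127.0.0.1:8010/;
    }
}
```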
We need to reload Nginx:
nginx -s reload
And now when we go in a browser to dgielis.com/aop/ we see the AOP Web Editor:
You can now, for example, load a sample by clicking the "Load sample" button and select PDF.
Scroll down a bit lower and click the Process button and a PDF is being generated :)
The Web Editor is built in React.js and you can drag-drop your template and add some data to test the features of AOP. There's also a Logging tab (toggle between Editor and Logging), so you can see incoming requests, results and debug output in case of errors.
Now if we want to tell our Oracle APEX apps to use our own AOP Server, the only thing we have to do is change the AOP URL.
In your Oracle APEX app, go to Shared Components > Component Settings > APEX Office Print (AOP) and change the AOP URL to the URL of your own Compute VM:
I hope you now have enough knowledge to please your customers with nice looking PDFs, Excel files and other documents in the format they want.
In the next post we will add an Object Storage to our Always Free Oracle Cloud Plan so we have a place to store files and backups.
Free Oracle Cloud: 7. Setup a web server on the Virtual Machine
In this blog post we will configure a web server on our Compute VM Instance. This allows us to host some websites and have a custom URL for our Oracle APEX instance and applications.
Let's start with connecting to our VM:
From a Terminal connect to your Oracle Cloud VM:
ssh -i ssh_key opc@public_ip
The first thing we do is change to the root user; as we want to install a web server, it will be easier to do with the root user. Alternatively, you can prefix every command with sudo.
We logged in as the OPC user, to become the ROOT user we do:
sudo su
Although it doesn't really have anything to do with setting up a web server, I do want to share this... The first thing I always like to do on a new machine is update the system, so all the latest software is being used. To do this, run the following command:
yum update
It will take some time the first time, but after a couple of minutes you should see that all packages were updated:
So the purpose of this post is to install a web server so that when we type in a certain domain, it arrives at our machine. As web server, I typically choose between Apache and Nginx. Which one to pick is a hard question... if you search Google for "Apache vs Nginx" you can start reading ;) Since last year I've used Nginx for all my new systems; before that I always used Apache.
The following steps show how to install the Nginx web server and run it:
yum install nginx
Now we need to start the web server:
systemctl start nginx
To see if Nginx is successfully running, do:
systemctl status nginx
You should see something like:
The next thing we have to do is open the firewall on the Linux box, so incoming connections are allowed. The first line will open HTTP, the second HTTPS and then we reload the firewall:
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload
Opening the firewall on the Linux box is not enough. Oracle added some extra security layers around the VM (Compute Instance). We have to allow HTTP and HTTPS access to our machine in the Oracle Firewall too.
Click the Virtual Cloud Network link:
Click the Security Lists link:
And add two Ingress Rules, one for HTTP and one for HTTPS:
As Source add 0.0.0.0/0 so everything can connect to it, and as destination port you specify 80 the first time (for HTTP) and 443 the second time (for HTTPS):
Once both Ingress Rules are added, your list looks like this:
Now you can navigate in a browser to your Public IP and you should see:
Now that we have a web server running and it's accessible through the IP address, we know things are working. Most of the time however you don't want to access your server through an IP address, rather you want people to use a domain name. To access my Free Oracle Cloud server for example I want to use the dgielis.com domain.
The first step is to specify, at your domain provider, the IP address of your Oracle Cloud VM (Compute) Instance as the A record. I typically also set up some CNAME records so any sub-domain will work too. For example, I could point apex.dgielis.com to the Oracle APEX Builder.
Now that the domain points to our VM, we have to make sure our web server listens to this domain and knows what to do. We will need to configure Nginx for this dgielis.com domain.
Here are the steps to do this (do this in your Terminal which is connected to your VM):
vi /etc/nginx/conf.d/dgielis.com.conf
# Add following to the file (change dgielis.com by your domain):
server {
listen 80;
listen [::]:80;
server_name dgielis.com www.dgielis.com;
root /usr/share/nginx/html/dgielis.com;
index index.html;
try_files $uri /index.html;
}
# Create a directory where your website resides:
mkdir /usr/share/nginx/html/dgielis.com
# Add an index.html file to the directory
# Quickest way is vi or cp an index.html in this folder
# Or develop your site locally first and when ready upload with scp
# scp -i .ssh/oraclecloud /Users/dgielis/site.zip opc@132.145.215.55:/tmp
# to test Nginx configuration
nginx -t
# to restart Nginx
nginx -s reload
# note: in case nginx doesn't restart, kill the nginx process and try to restart again
ps -ef | grep nginx
kill ~pid~
When you go in your browser to your domain name, it should show your website!
This website runs over HTTP, but these days it's recommended to use HTTPS for your sites. Let's set up HTTPS for our website by using LetsEncrypt, a free service for SSL certificates.
First we have to install some supporting packages:
yum install certbot python2-certbot-nginx # not necessary
yum install python27-python-pip
scl enable python27 bash
pip install certbot
pip install setuptools --upgrade
pip install certbot-nginx
Once the supporting packages are there, we can run certbot to setup the SSL certificates for our domain:
certbot --nginx
After completion of the wizard, you should see something like below:
During the Certbot wizard which configures LetsEncrypt, it asks if you want to redirect all HTTP access to HTTPS and I would answer Yes here.
HTTPS is better for Google, better for security, so no reason not to do it :)
If we want to use our Nginx web server as reverse proxy for our APEX environment we can do that by adapting our /etc/nginx/conf.d/dgielis.com.conf file (see the location sections):
vi /etc/nginx/conf.d/dgielis.com.conf
Add following to the server:
location /ords/ {
proxy_pass your_apex_url/ords/;
proxy_set_header Origin "" ;
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
}
location /i/ {
proxy_pass your_apex_url/i/;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The final configuration file looks like this:
There's one other configuration change I would suggest you do straight away: increase the maximum request body size (client_max_body_size) in Nginx. Add the directive to nginx.conf:
vi /etc/nginx/nginx.conf
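The directive is client_max_body_size and it goes in the http block; the value below is just an example, pick what fits your uploads:

```nginx
http {
    # ... existing settings stay as-is ...
    client_max_body_size 50M;
}
```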
Test and reload the configuration of Nginx:
nginx -t
nginx -s reload
When going in a browser to https://dgielis.com/ords/ we arrive at Oracle APEX:
There's a lot of optimisation we can do on the reverse proxy. To gain performance we can put the images folder of APEX on the Nginx server, as Nginx is super fast in transferring static files. We can add more redirects, for example that apex.dgielis.com goes to the APEX builder and app.dgielis.com goes to one of our custom APEX apps. As this post is already long, I'm not including that in here.
Once Oracle provides vanity URLs, we won't need to do the above, and the URL will point directly to ATP.
Update 26-SEP-2019 (thanks Kris Rice): Note that setting the Origin would negate any CORS info that ORDS is enforcing. That could be a security issue for some people. Oracle is looking into the ability to have your ORDS running on your own Compute VM (the webserver we just setup), which would solve the issue. The vanity URLs would not have the CORS issue either.
In the next post we will use this server to add an on-premises version of APEX Office Print (AOP), so we have everything we need to export data from the database in the format we want, for example in Excel and PDF.
Free Oracle Cloud: 6. Create a VM Instance (Compute Cloud)
In this post we will setup a virtual machine with Oracle Linux. If you need a machine in the Cloud, this service is for you... and you can't beat the price, it's FREE!
So why would you need a machine in the Cloud? For anything that you do on your own machine, you can do on your machine in the cloud. You can work with co-workers on the same app, test applications, host some PHP app or Wordpress blog or you name it. You have a full Oracle Linux machine where you can do whatever you want.
So let's get started setting up this machine, but first let's do one step on our own machine, as it's a prerequisite to get started: create an SSH key pair. In the link you find how to create a key pair on Windows, Linux and OSX. You can see your SSH key pair as a more secure way to access your machine in the cloud than a username/password.
As I'm on OSX here's the command I use:
ssh-keygen -t rsa -N "" -b 2048 -C "oraclecloud2019" -f oraclecloud2019
This will generate two files: the private key (oraclecloud2019) and the public key (oraclecloud2019.pub).
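If you want to see the effect of the command without touching your real keys, you can run the same flags against a throwaway file in a temporary directory (the demo file name is arbitrary):

```shell
# Generate a throwaway key pair in a temp directory, same flags as above:
tmp=$(mktemp -d)
ssh-keygen -t rsa -N "" -b 2048 -C "demo2019" -f "$tmp/demokey" -q

# Two files appear: the private key and its .pub counterpart
ls "$tmp"
```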
Now you are all set to setup your VM instance in the Oracle Cloud.
When you login to your Oracle Cloud account, select the Create a VM instance:
Give your instance a name:
When you scroll down, you see the section where you have to upload the SSH public key file, the one ending in .pub:
Hit the Create button and you are done... You will see the orange icon on the left stating it's being provisioned:
Once the provisioning is done the icon on the left will turn green and it's ready to be used:
In case you don't want to wait for the provisioning, you can always come back later by clicking the left top hamburger icon, and in the Compute section click on Instances:
You will get an overview of your instances, see the status and go to the details from there:
Now if you want to connect to your VM in the Oracle Cloud you can use Putty (Windows) or SSH (Linux, OSX). The command looks like this:
ssh -i ssh_key opc@public_ip
The OPC user is the one you use to connect to an Oracle Compute (VM) Instance.
In the following screenshot you see me connecting to my Oracle Cloud VM (for security reasons I used another ssh key; normally it would have been oraclecloud2019, but as I exposed that in my screenshot, I set up another one):
There you go, now you have a full system running in the Cloud. In the next post we will setup a webserver on this machine and configure a custom domain.
Free Oracle Cloud: 5. Setup APEX in ATP and create first APEX app
Login to your Oracle Cloud Account. Once in, navigate on the left hamburger icon to Autonomous Transaction Processing and select the Autonomous Database. In the overview screen click the Service Console:
Make sure you allow pop-ups as it opens a new tab.
You will get a graphical overview, in here you want to click on the Development link.
There you find the link to Oracle APEX:
When you click the link the first time, it might take a bit of time as APEX is being configured behind the scenes. You first enter your cloud account password.
As it recognises you are logging in the first time, it will prompt you if you want to create a new Workspace. A Workspace is a space where your Oracle APEX applications live.
Click the button and it will prompt to create the Workspace.
Hit the Create Workspace button and you are good to go :)
As you might not have clicked the Create Workspace button the first time, I also want to show you how to create a new Workspace later on.
When you click the Oracle APEX link in the Development dashboard again, it recognises Oracle APEX is already set up and prompts you for the workspace you want to log in to. You want to log in to the INTERNAL workspace with the ADMIN user and your cloud account password (the same credentials as with SQL Developer Web).
You will arrive in the Instance Administration screen. Compared to the on-premises version of Oracle APEX, the functionality is a bit more limited as the Oracle Autonomous Database is pre-configured and managed by Oracle. So you can't tinker, for example, with the Security settings or provisioning of workspaces.
The first thing you want to do is create a new Workspace, a place where you will build your Oracle APEX app. In the Manage Workspaces menu, click on Create Workspace:
A modal dialog will appear where you can enter a new or existing database user, password and workspace name. In the following screenshot I want Oracle APEX to create a new database user called CLOUD linked to the workspace CLOUD.
Once that is provisioned, you can sign out of the INTERNAL workspace and sign in to the newly created workspace. As workspace name you enter the one you created above, the username is the database user and the password is the one you provided when creating the workspace.
And voila... we are in Oracle APEX and we can start to create our APEX applications.
I'm not adding all the screenshots to create a first Oracle APEX app in this blog post, but if you are new to APEX and want to see step by step how to start, check out the Oracle APEX Tutorial on YouTube by Caleb Curry. In the video he walks you through creating an APEX app from a file.
The same applies if you want to use APEX Office Print in the free Oracle Autonomous Database Cloud to export data in the format you want (PDF, Excel, Word, Powerpoint, HTML, ...): use the HTTPS URL https://api.apexofficeprint.com or the HTTPS URL of your own environment.
Update on 26-SEP-2019: you can actually use APEX_MAIL, but you first have to configure the mail server as specified in the documentation.
Free Oracle Cloud: 4. Connecting with SQL Developer Web to ATP
Login to your Oracle Cloud Account. Once in, navigate on the left hamburger icon to Autonomous Transaction Processing and select the Autonomous Database. In the overview screen click the Service Console:
Make sure you allow pop-ups as it opens a new tab.
You will get a graphical overview, in here you want to click on the Development link.
There you find the link to SQL Developer Web:
You can login with the same credentials as in SQL Developer (Desktop). Username: admin and your cloud account password.
Then you arrive at SQL Developer Web. The first time you login you get an introduction tour so you get used to the interface, but it's very similar to the SQL Developer Desktop version.
Free Oracle Cloud: 3. Connecting with SQL Developer (Desktop) to ATP
A modal dialog appears to download the credentials (wallet). Click the Download button:
The modal dialog changes and you need to enter a password and hit download:
Now we will make a new connection in SQL Developer. Right click on the Oracle Connections and click the New Connection... link:
Enter a name of your choice. In the User Info section, enter admin as Username and, as password, the password you entered when you set up your database (previous blog post).
Select Cloud Wallet as the Connection Type. A details section will open where you can select, as Configuration File, the file you downloaded before.
So now you are all set to play with your database. You can setup new users, create tables, load and query data etc.
Free Oracle Cloud: 2. Setup Autonomous Transaction Cloud (ATP)
Give the database a name and select the type of workload you want to create. In most Oracle APEX projects I believe Transaction Processing is the one to go with:
Scrolling down on the page will show the Always Free switch, make sure to enable that :)
Further down you will need to provide some credentials that you will use to login later on.
Hit the Create Autonomous Database button and you will receive a detail screen that the provisioning has started:
Once it's available you can click on it and see all details:
In the next post we will connect to our new Free Oracle Autonomous Database.
Free Oracle Cloud: 1. Sign-up for Oracle Cloud Free
To get started, go to https://www.oracle.com/cloud/free/ and click the Start for free button:
Follow the wizard to sign up for the Oracle Cloud Program:
When you enter your Account details, it's important to point out that your Home Region is really important. It's only in your Home Region that you can get the Free Oracle Cloud service.
I would select the Home Region based on where most of your visitors/customers and yourself are based. Also provide correct information about yourself and a valid phone number.
You will get a notification on your phone to continue:
Provide a password:
Enter your payment information. It's important to add a valid credit card in order to get the Free Oracle Cloud; if you don't, you end up in Trial mode.
Your credit card is not charged (well, 1 USD to test the card), but it's important to have the credit card on file, as Oracle then sees your account as a Paid account and it will open up more functionality :)
I setup a couple of Free Oracle Cloud services and in most instances the account is created within a minute, and you get redirected to the sign in form:
When signing in you will see the Oracle Cloud dashboard. Note that in the following screenshot, the Free Oracle Cloud Tier is not ready yet.
You will receive a welcome email.
And a few minutes later you will receive a message you are all set, and your FREE Oracle Cloud account should be ready to be selected.
But sometimes the account is not created straight away and you need to wait a bit. In a few cases I had to wait a few hours for the email, but in the end it always worked for me. It might be because I used this service when it was just released and maybe some datacenters weren't 100% ready yet.
In the next post we will setup a free Autonomous Database (ATP).
Best and Cheapest Oracle APEX hosting: Free Oracle Cloud
The first time was when I first saw and worked with the Oracle Database, the second time when I saw HTMLDB (now APEX) and today, again, with the announcement of the Free Autonomous Oracle Cloud.
The Free Oracle Tier is a hardcore smackdown on any other cloud offering which includes a virtual machine, database, and development environment!
The Free Oracle Cloud was announced by Andy Mendelsohn at Oracle Open World 2019.
You get a free Oracle Autonomous Database, 2 Virtual Machines (Compute), Storage and other services:
There's nothing like this Oracle offering on the market. The best database in the world, running on amazing hardware and all of this for free. This is not just for APEX Developers, this is for anybody who wants a datastore or virtual machine in the cloud without wanting to manage things themselves.
Jon, from JMJ Cloud, wrote RIP Exadata Express, where do I run my RAD stack now?; the answer today, for me, is the Free Oracle Autonomous Database Cloud. For many of my customers I also used the Exadata Express service before, as it was ideal to start developing new Oracle APEX apps. The Free Oracle Autonomous Database Cloud is a better version of what Exadata Express ever was. I would always start my development in this new free Oracle Cloud offering, as it allows you to scale up while you grow.
I really believe everybody should sign-up for this service. As with every new service, the beginning might have some hiccups, but to me, it's worth starting with this service straight away.
I signed up for the Free Oracle Cloud in multiple regions, and love it so far.
So I thought to write a series of blog posts on how you can get started:
1. Sign-up for Free Oracle Cloud
2. Setup Autonomous Transaction Cloud (ATP)
3. Connecting with SQL Developer (Desktop) to ATP
4. Connecting with SQL Developer Web to ATP
5. Setup APEX in ATP and create the first APEX app
6. Create a VM Instance (Compute Cloud)
7. Setup a Webserver on the Compute Instance (VM)
8. Setup APEX Office Print (AOP) to export to PDF, Excel, Word, Powerpoint, HTML and Text
9. Setup Object Storage and use for File Share and Backups
10. Running SQLcl and Datapump from the Compute Instance (VM) to ATP
11. Sending Emails with APEX_MAIL on ATP
12. Create a 2nd Compute Instance and a Load Balancer
13. Final things to take away
14. Your Oracle Cloud Free Trial has expired (but FREE still running)
See APEX Debug info in PL/SQL and SQL Developer
With our APEX Office Print (AOP) PL/SQL API and APEX Plug-in we did the same: we use APEX_DEBUG and Logger behind the scenes to allow you to see what is going on. But when I tried to view the APEX debug messages in SQL Developer, I didn't see any. Christian Neumueller of the APEX Dev team gave me the answer: APEX debug buffers its output to reduce the I/O overhead. Buffering is disabled for LEVEL9, but the other levels only write:
- after 1000 records
- at the end of request processing
- when you detach the session
This explained perfectly what was happening for me.
Here's a quick example when you want to debug the AOP PL/SQL API from PL/SQL and SQL Developer. The code downloads an Interactive Report of page 200 to Excel and stores it in a table.
declare
  l_return          blob;
  l_output_filename varchar2(100) := 'output';
begin
  apex_session.create_session(p_app_id=>498, p_page_id=>200, p_username=>'DIMI');
  apex_debug.enable(p_level => apex_debug.c_log_level_info);
  -- for more details, use: c_log_level_app_trace
  apex_debug.message(p_message => 'Debug enabled.');
  l_return := aop_api_pkg.plsql_call_to_aop (
                p_data_type       => aop_api_pkg.c_source_type_rpt,
                p_data_source     => 'report1',
                p_template_type   => aop_api_pkg.c_source_type_apex,
                p_template_source => 'aop_template_ir_customers.xlsx',
                p_output_type     => aop_api_pkg.c_excel_xlsx,
                p_output_filename => l_output_filename,
                p_aop_url         => apex_app_setting.get_value('AOP_URL'),
                p_api_key         => apex_app_setting.get_value('AOP_API_KEY'),
                p_app_id          => 498,
                p_page_id         => 200);
  insert into aop_output (output_blob, filename)
  values (l_return, l_output_filename);
  commit;
  dbms_output.put_line('To view debug messages:');
  dbms_output.put_line('select * from apex_debug_messages where session_id = '
    || apex_util.get_session_state('APP_SESSION') || ' order by message_timestamp');
  apex_session.detach;
end;
Running the SQL statement to view the debug messages:
select * from apex_debug_messages where session_id = 16458652970080 order by message_timestamp
Et voila... the APEX debug info is available straight away :)
Workarounds for JPPD with view and table(kokbf$), xmltable or json_table functions
You may know that table() (kokbf$ collection functions), xmltable and json_table functions block Join-Predicate Push-Down (JPPD).
Simple example:
create table xtest(a, b, c) as
  select mod(level,1000), level, rpad('x',100,'x')
  from dual
  connect by level<=1e4
/
create index itest on xtest(a)
/
create or replace view vtest as
  select a, count(b) cnt
  from xtest
  group by a
/
call dbms_stats.gather_table_stats(user,'xtest');
/
select distinct v.*
from table(sys.odcinumberlist(1,2,3)) c, vtest v
where v.a = c.column_value;

Plan hash value: 699667151

-------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |       |     1 |    19 |    80   (4)| 00:00:01 |
|   1 |  HASH UNIQUE                            |       |     1 |    19 |    80   (4)| 00:00:01 |
|*  2 |   HASH JOIN                             |       |     1 |    19 |    79   (3)| 00:00:01 |
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|       |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |    VIEW                                 | VTEST |  1000 | 17000 |    49   (3)| 00:00:01 |
|   5 |     HASH GROUP BY                       |       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   6 |      TABLE ACCESS FULL                  | XTEST | 10000 | 80000 |    48   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V"."A"=VALUE(KOKBF$))
select/*+ cardinality(c 1) use_nl(v) push_pred(v) */ *
from json_table('{"a":[1,2,3]}', '$.a[*]' COLUMNS (a int PATH '$')) c
    ,vtest v
where c.a = v.a;

Plan hash value: 664523328

--------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |     1 |    28 |    78   (2)| 00:00:01 |
|   1 |  NESTED LOOPS          |       |     1 |    28 |    78   (2)| 00:00:01 |
|   2 |   JSONTABLE EVALUATION |       |       |       |            |          |
|*  3 |   VIEW                 | VTEST |     1 |    26 |    49   (3)| 00:00:01 |
|   4 |    SORT GROUP BY       |       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | XTEST | 10000 | 80000 |    48   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("V"."A"="P"."A")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  SEL$F534CA49 / V@SEL$1
           U -  push_pred(v)
select/*+ leading(c v) cardinality(c 1) use_nl(v) push_pred(v) */ v.*
from xmltable('(1,3)' columns a int path '.') c
    ,vtest v
where c.a = v.a(+);

Plan hash value: 564839666

------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                       |     1 |    28 |    78   (2)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER                |                       |     1 |    28 |    78   (2)| 00:00:01 |
|   2 |   COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |     1 |     2 |    29   (0)| 00:00:01 |
|*  3 |   VIEW                             | VTEST                 |     1 |    26 |    49   (3)| 00:00:01 |
|   4 |    SORT GROUP BY                   |                       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   5 |     TABLE ACCESS FULL              | XTEST                 | 10000 | 80000 |    48   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("V"."A"(+)=CAST(TO_NUMBER(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(VALUE(KOKBF$),0,0,54525952,0),
       50,1,2)) AS int ))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  SEL$6722A2F6 / V@SEL$1
           U -  push_pred(v)
And compare with this:
create global temporary table temp_collection(a number);

insert into temp_collection
select * from table(sys.odcinumberlist(1,2,3));

select/*+ cardinality(c 1) no_merge(v) */
   distinct v.*
from temp_collection c, vtest v
where v.a = c.a;

Plan hash value: 3561835411

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |     1 |    26 |    41   (3)| 00:00:01 |
|   1 |  HASH UNIQUE                             |                 |     1 |    26 |    41   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                           |                 |     1 |    26 |    40   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL                     | TEMP_COLLECTION |     1 |    13 |    29   (0)| 00:00:01 |
|   4 |    VIEW PUSHED PREDICATE                 | VTEST           |     1 |    13 |    11   (0)| 00:00:01 |
|*  5 |     FILTER                               |                 |       |       |            |          |
|   6 |      SORT AGGREGATE                      |                 |     1 |     8 |            |          |
|   7 |       TABLE ACCESS BY INDEX ROWID BATCHED| XTEST           |    10 |    80 |    11   (0)| 00:00:01 |
|*  8 |        INDEX RANGE SCAN                  | ITEST           |    10 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter(COUNT(*)>0)
   8 - access("A"="C"."A")
You can see that JPPD works fine with global temporary tables, so the obvious first workaround is to avoid joining such functions with complex views.
But for simple queries like these you have two other easy options:
1. You can avoid the need for JPPD and get optimal plans through CVM (complex view merging) by simply rewriting the query using IN or EXISTS:
select *
from vtest v
where v.a in (select/*+ cardinality(c 1) */ c.*
              from table(sys.odcinumberlist(1,2,3)) c);

Plan hash value: 1474391442

---------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT                     |       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                            |       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS                           |       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE                           |       |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|       |     1 |     2 |    29   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                      | ITEST |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID            | XTEST |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"=VALUE(KOKBF$))
select *
from vtest t
where t.a in (select/*+ cardinality(v 1) */ v.a
              from json_table('{"a":[1,2,3]}', '$.a[*]' COLUMNS (a int PATH '$')) v);

Plan hash value: 2910004067

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT         |       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                |       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS               |       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE               |       |       |       |            |          |
|   5 |      JSONTABLE EVALUATION     |       |       |       |            |          |
|*  6 |     INDEX RANGE SCAN          | ITEST |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID| XTEST |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"="P"."A")

select v.*
from vtest v
where exists(select/*+ cardinality(c 1) */ 1
             from xmltable('(1,3)' columns a int path '.') c
             where c.a = v.a);

Plan hash value: 1646016183

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT                 |                       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                        |                       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS                       |                       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE                       |                       |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |     1 |     2 |    29   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | ITEST                 |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID        | XTEST                 |    10 |    80 |    11   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"=CAST(TO_NUMBER(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(VALUE(KOKBF$),0,0,54525952,0),50,1,2)) AS int ))
2. You can avoid JPPD using lateral():
select/*+ cardinality(c 1) no_merge(lat) */
   distinct lat.*
from table(sys.odcinumberlist(1,2,3)) c,
     lateral(select * from vtest v where v.a = c.column_value) lat;

Plan hash value: 18036714

------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                 |    10 |   190 |    41   (3)| 00:00:01 |
|   1 |  HASH UNIQUE                            |                 |    10 |   190 |    41   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                          |                 |    10 |   190 |    40   (0)| 00:00:01 |
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |    VIEW                                 | VW_LAT_4DB60E85 |    10 |   170 |    11   (0)| 00:00:01 |
|   5 |     SORT GROUP BY                       |                 |    10 |    80 |    11   (0)| 00:00:01 |
|   6 |      TABLE ACCESS BY INDEX ROWID BATCHED| XTEST           |    10 |    80 |    11   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN                  | ITEST           |    10 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"=VALUE(KOKBF$))
Let’s look at a slightly more complex query:
create table xtest1(id primary key, a) as
  select level, level from dual connect by level<=1000;

create table xtest2(a, b, c) as
  select mod(level,1000), level, rpad('x',100,'x')
  from dual
  connect by level<=1e4
/
create index itest2 on xtest2(a)
/
create or replace view vtest2 as
  select a, count(b) cnt
  from xtest2
  group by a
/
select v.*
from xtest1 t1, vtest2 v
where t1.id in (select/*+ cardinality(c 1) */ * from table(sys.odcinumberlist(1,2,3)) c)
  and v.a = t1.a;

Plan hash value: 4293766070

------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |              |     1 |    36 |    80   (3)| 00:00:01 |
|*  1 |  HASH JOIN                                 |              |     1 |    36 |    80   (3)| 00:00:01 |
|   2 |   JOIN FILTER CREATE                       | :BF0000      |     1 |    10 |    31   (4)| 00:00:01 |
|   3 |    NESTED LOOPS                            |              |     1 |    10 |    31   (4)| 00:00:01 |
|   4 |     NESTED LOOPS                           |              |     1 |    10 |    31   (4)| 00:00:01 |
|   5 |      SORT UNIQUE                           |              |     1 |     2 |    29   (0)| 00:00:01 |
|   6 |       COLLECTION ITERATOR CONSTRUCTOR FETCH|              |     1 |     2 |    29   (0)| 00:00:01 |
|*  7 |      INDEX UNIQUE SCAN                     | SYS_C0026365 |     1 |       |     0   (0)| 00:00:01 |
|   8 |     TABLE ACCESS BY INDEX ROWID            | XTEST1       |     1 |     8 |     1   (0)| 00:00:01 |
|   9 |   VIEW                                     | VTEST2       |  1000 | 26000 |    49   (3)| 00:00:01 |
|  10 |    HASH GROUP BY                           |              |  1000 |  8000 |    49   (3)| 00:00:01 |
|  11 |     JOIN FILTER USE                        | :BF0000      | 10000 | 80000 |    48   (0)| 00:00:01 |
|* 12 |      TABLE ACCESS FULL                     | XTEST2       | 10000 | 80000 |    48   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("V"."A"="T1"."A")
   7 - access("T1"."ID"=VALUE(KOKBF$))
  12 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"A"))
As you see, CVM can’t help in this case, but we can use lateral():
select/*+ no_merge(lat) */ lat.*
from xtest1 t1,
     lateral(select * from vtest2 v where v.a = t1.a) lat
where t1.id in (select/*+ cardinality(c 1) */ * from table(sys.odcinumberlist(1,2,3)) c);

Plan hash value: 1798023704

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |    10 |   360 |    42   (3)| 00:00:01 |
|   1 |  NESTED LOOPS                            |                 |    10 |   360 |    42   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                           |                 |     1 |    10 |    31   (4)| 00:00:01 |
|   3 |    SORT UNIQUE                           |                 |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID           | XTEST1          |     1 |     8 |     1   (0)| 00:00:01 |
|*  6 |     INDEX UNIQUE SCAN                    | SYS_C0026365    |     1 |       |     0   (0)| 00:00:01 |
|   7 |   VIEW                                   | VW_LAT_A18161FF |    10 |   260 |    11   (0)| 00:00:01 |
|   8 |    SORT GROUP BY                         |                 |    10 |    80 |    11   (0)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID BATCHED  | XTEST2          |    10 |    80 |    11   (0)| 00:00:01 |
|* 10 |      INDEX RANGE SCAN                    | ITEST2          |    10 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("T1"."ID"=VALUE(KOKBF$))
  10 - access("A"="T1"."A")
There is also another workaround using the undocumented "precompute_subquery" hint:
select v.*
from xtest1 t1, vtest2 v
where t1.id in (select/*+ precompute_subquery */ * from table(sys.odcinumberlist(1,2,3)) c)
  and v.a = t1.a;

Plan hash value: 1964829099

--------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |              |    30 |   480 |    37   (3)| 00:00:01 |
|   1 |  HASH GROUP BY                  |              |    30 |   480 |    37   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                  |              |    30 |   480 |    36   (0)| 00:00:01 |
|   3 |    NESTED LOOPS                 |              |    30 |   480 |    36   (0)| 00:00:01 |
|   4 |     INLIST ITERATOR             |              |       |       |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID| XTEST1       |     3 |    24 |     3   (0)| 00:00:01 |
|*  6 |       INDEX UNIQUE SCAN         | SYS_C0026365 |     3 |       |     2   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN            | ITEST2       |    10 |       |     1   (0)| 00:00:01 |
|   8 |    TABLE ACCESS BY INDEX ROWID  | XTEST2       |    10 |    80 |    11   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("T1"."ID"=1 OR "T1"."ID"=2 OR "T1"."ID"=3)
   7 - access("A"="T1"."A")
It can help even in the most difficult cases, for example when you can't rewrite the query (in that case you can create a SQL patch or SQL profile containing "precompute_subquery"), but I wouldn't recommend it, since "precompute_subquery" is undocumented, works only with simple collections, and is limited to 1000 values.
I'd suggest using the lateral() workaround, since it's the most reliable and very simple.
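For reference, attaching a hint to a statement you cannot edit can be done with a SQL patch. Below is a minimal sketch using dbms_sqldiag.create_sql_patch (a documented function in recent releases; the sql_id value and patch name are placeholders, and the exact parameter list varies by version, so check the DBMS_SQLDIAG documentation for your release):

```sql
-- Hedged sketch: attach the hint to an existing statement without changing its text.
-- '&sql_id' is a placeholder for the sql_id of the problem statement.
declare
  l_patch_name varchar2(128);
begin
  l_patch_name := dbms_sqldiag.create_sql_patch(
    sql_id    => '&sql_id',
    hint_text => 'precompute_subquery',
    name      => 'jppd_precompute_patch');
  dbms_output.put_line('created patch: ' || l_patch_name);
end;
/
```

Since the hint must land on the subquery's query block, in practice you may need to qualify it with the query block name from dbms_xplan.display_cursor(format => '+alias').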