Feed aggregator

Partitioning -- 8 : Reference Partitioning

Hemant K Chitale - 7 hours 43 min ago
Like Interval Partitioning, Reference Partitioning is another enhancement introduced in 11g.

Reference Partitioning allows you to use a Referential Integrity Constraint to equi-partition a "Child" Table with a "Parent" Table.

Here is a quick demonstration :

SQL> l
1 create table orders
2 (order_id number primary key,
3 order_date date not null,
4 customer_id number)
5 partition by range (order_date)
6 (partition P_2017 values less than (to_date('01-JAN-2018','DD-MON-YYYY')),
7 partition P_2018 values less than (to_date('01-JAN-2019','DD-MON-YYYY'))
8* )
SQL> /

Table created.

SQL>
SQL> l
1 create table order_lines
2 (line_unique_id number primary key,
3 order_id number not null,
4 order_line_id number,
5 product_id number,
6 product_quantity number,
7 constraint order_lines_fk foreign key (order_id)
8 references orders(order_id)
9 )
10* partition by reference (order_lines_fk)
SQL> /

Table created.

SQL>
SQL> col high_value format a28 trunc
SQL> col table_name format a16
SQL> col partition_name format a8
SQL> select table_name, partition_name, high_value
2 from user_tab_partitions
3 where table_name in ('ORDERS','ORDER_LINES')
4 order by table_name, partition_position
5 /

TABLE_NAME PARTITIO HIGH_VALUE
---------------- -------- ----------------------------
ORDERS P_2017 TO_DATE(' 2018-01-01 00:00:0
ORDERS P_2018 TO_DATE(' 2019-01-01 00:00:0
ORDER_LINES P_2017
ORDER_LINES P_2018

SQL>


Notice the "automatically" created Partitions for the ORDER_LINES ("Child") Table that match those for the ORDERS ("Parent") Table.
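
To see the equi-partitioning in action, a quick additional test (not part of the demonstration above) could be to insert a parent row and a matching child row, and then query the child partition directly:

insert into orders values (1, to_date('15-MAR-2018','DD-MON-YYYY'), 101);
insert into order_lines values (11, 1, 1, 500, 2);
commit;

-- the child row should land in ORDER_LINES partition P_2018,
-- the same partition name as its parent row in ORDERS
select * from order_lines partition (P_2018);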

.
.
.

Categories: DBA Blogs

Recovering from failed patch on virtualized ODA

Yann Neuhaus - 11 hours 9 min ago

When a patch fails on a virtualized Oracle Database Appliance (ODA), the ODA is often left unusable because Linux and OAKD have been patched to the new release while Grid Infrastructure is still on the old version. OAKD cannot be restarted in default mode because in this mode it checks the active Grid Infrastructure version, which fails because of the old version. Grid Infrastructure cannot be started either, because OAKD controls access to the shared hardware on the ODA, and if OAKD is not running, the shared hardware cannot be accessed.

One way to resolve this problem is to reimage the ODA, which is time consuming and means that all databases and VMs have to be restored.

A workaround for this chicken-and-egg problem (I cannot guarantee that it is supported), as a last attempt before reimaging the ODA, could be to start OAKD in non-cluster mode. This poorly documented mode does not check the active Grid Infrastructure version but still gives access to the shared hardware. Additional VMs cannot be started because there is no master OAKD. In this mode, manual patching/upgrading of Grid Infrastructure is possible.

Non-cluster mode can be entered as follows (on every ODA node):


cp /opt/oracle/oak/install/oakdrun /opt/oracle/oak/install/oakdrun_orig
echo "non-cluster" > /opt/oracle/oak/install/oakdrun
cd /etc/init.d
./init.oak start

[root@xx init.d]# ps -ef | grep oakd
root 49697 49658 11 11:05 ? 00:00:02 /opt/oracle/oak/bin/oakd -non-cluster
root 50511 42821 0 11:05 pts/0 00:00:00 grep oakd

Now Grid Infrastructure patching or upgrade can be done.

If only the ODA_BASE VM exists and the timeframe for manual patching/upgrading is too short, another option is to start Grid Infrastructure on one ODA node and then start the services. Patching or reimaging then has to be done in the next suitable timeframe.
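
A minimal sketch of such a manual start, assuming a typical Grid Infrastructure home and placeholder database/service names (adjust everything to your environment):

# as root, on the node where Grid Infrastructure should run
export GRID_HOME=/u01/app/12.1.0.2/grid     # placeholder path
$GRID_HOME/bin/crsctl start crs             # start the clusterware stack
$GRID_HOME/bin/crsctl check crs             # verify CRS, CSS and EVM are online

# as oracle, start databases and services
srvctl start database -d MYDB               # MYDB is a placeholder
srvctl start service -d MYDB -s MYSERVICE   # MYSERVICE is a placeholder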

Once Grid Infrastructure is running on the new version, you can try to start OAKD in default mode again:


echo "start" > /opt/oracle/oak/install/oakdrun
cd /etc/init.d
./init.oak start

[root@xx init.d]# ps -ef | grep oakd
root 30187 30117 13 10:18 ? 00:00:02 /opt/oracle/oak/bin/oakd foreground
root 31902 7569 0 10:18 pts/1 00:00:00 grep oakd

Manual patching/upgrading of other components may have to be done afterwards.

After patching/upgrading, the ODA should be checked with:


oakcli show version -detail
oakcli validate -a

The article Recovering from failed patch on virtualized ODA appeared first on Blog dbi services.

AWS: Networking - Virtual Private Cloud

Dietrich Schroff - Sun, 2018-11-11 14:06
After changing my AWS plans from Docker to Kubernetes, I decided to put the AWS services inside a VPC (Virtual Private Cloud).
With this decision my AWS services are not reachable from the internet - only my laptop can access them ;-)
Here are the official pictures from AWS:



Here is a list of customer gateway devices for which Amazon provides configuration settings:
  • Check Point Security Gateway running R77.10 (or later) software
  • Cisco ASA running Cisco ASA 8.2 (or later) software
  • Cisco IOS running Cisco IOS 12.4 (or later) software
  • Dell SonicWALL running SonicOS 5.9 (or later) software
  • Fortinet Fortigate 40+ Series running FortiOS 4.0 (or later) software
  • Juniper J-Series running JunOS 9.5 (or later) software
  • Juniper SRX running JunOS 11.0 (or later) software
  • Juniper SSG running ScreenOS 6.1, or 6.2 (or later) software
  • Juniper ISG running ScreenOS 6.1, or 6.2 (or later) software
  • Netgate pfSense running OS 2.2.5 (or later) software.
  • Palo Alto Networks PANOS 4.1.2 (or later) software
  • Yamaha RT107e, RTX1200, RTX1210, RTX1500, RTX3000 and SRT100 routers
  • Microsoft Windows Server 2008 R2 (or later) software
  • Microsoft Windows Server 2012 R2 (or later) software
  • Zyxel Zywall Series 4.20 (or later) software for statically routed VPN connections, or 4.30 (or later) software for dynamically routed VPN connections
The following requirements have to be met:
  • IKE Security Association (required to exchange keys used to establish the IPsec security association)
  • IPsec Security Association (handles the tunnel's encryption, authentication, and so on)
  • Tunnel interface (receives traffic going to and from the tunnel)
  • BGP peering (exchanges routes between the customer gateway and the virtual private gateway), optional, for devices that use BGP
I do not own any of these devices, but I hope that the Linux laptop can be configured as a customer gateway with appropriate IPsec settings.
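
For reference, the console steps that follow can also be scripted with the AWS CLI. This is only a rough sketch; all resource IDs and CIDR blocks below are placeholders:

# create the VPC and a subnet (CIDRs are placeholders)
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-11111111 --cidr-block 10.0.1.0/24

# create a virtual private gateway and attach it to the VPC
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-22222222 --vpc-id vpc-11111111

# route from the VPC to the local network (home LAN CIDR is a placeholder)
aws ec2 create-route --route-table-id rtb-33333333 \
    --destination-cidr-block 192.168.178.0/24 --gateway-id vgw-22222222

# define the customer gateway (your public IP) and the VPN connection
aws ec2 create-customer-gateway --type ipsec.1 --public-ip 198.51.100.10 --bgp-asn 65000
aws ec2 create-vpn-connection --type ipsec.1 \
    --customer-gateway-id cgw-44444444 --vpn-gateway-id vgw-22222222 \
    --options '{"StaticRoutesOnly":true}'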

So let's configure the VPC at AWS:


 And create a subnet for this vpc:



After that you have to add a virtual private gateway:




and attach it to your vpc:



You have to add a route from the VPC to your local network:


Then create a vpn connection:





 Then download the configuration:
and hurray: AWS provides a strongswan configuration:
After I downloaded the file and followed the instructions provided there, I was able to connect and the AWS dashboard showed that the connection is up:


and on my local machine:
root@zerberus:~/AWS# ipsec status
Security Associations (1 up, 0 connecting):
     Tunnel1[1]: ESTABLISHED 3 seconds ago, 192.168.178.60[XX.YY.YY.XX8]...34.246.243.178[34.246.243.178]
     Tunnel1{1}:  INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: cb84b8e5_i 488e669b_o
     Tunnel1{1}:   0.0.0.0/0 === 0.0.0.0/0
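
For completeness, the strongswan setup that AWS provides boils down to a connection definition in /etc/ipsec.conf roughly like the following. This is a simplified sketch from memory, not the exact downloaded template; the real pre-shared keys, IPs and proposals come from the downloaded file:

# /etc/ipsec.conf (sketch; values are placeholders)
conn Tunnel1
    auto=start
    keyexchange=ikev1
    type=tunnel
    left=%defaultroute                 # local (customer gateway) side
    leftid=198.51.100.10               # your public IP, as configured in AWS
    leftsubnet=0.0.0.0/0
    right=34.246.243.178               # AWS virtual private gateway endpoint
    rightsubnet=0.0.0.0/0
    authby=psk                         # pre-shared key goes into /etc/ipsec.secrets
    ike=aes128-sha1-modp1024
    esp=aes128-sha1-modp1024
    dpdaction=restart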

Docker and AWS: Is there really an AND? Moving to Kubernetes

Dietrich Schroff - Sun, 2018-11-11 10:49
After my first steps into AWS I did not find a way to run docker-swarm on AWS without installing the software on my own (take a look here). At least you have to add task definitions to your Dockerfiles to let them run on ECS.
This is not really bad, but the idea was to move to a cloud provider and just run the microservices inside the cloud without caring about the infrastructure (some people call this IaaS or PaaS ;-) ).
But with ECS I am not convinced that a cluster orchestrator like docker-swarm is included. Today everyone talks about Kubernetes as the cluster orchestrator. Last year, as I read Kubernetes: Up & Running, there was a subchapter with:

But this has changed!
Amazon offers EKS:


So the next step is to get a microservice running with Kubernetes on AWS.
Doing the Docker setup once again, only on servers running as EC2 compute nodes instead of my local VirtualBox, is not interesting.

Last remark: EKS uses Kubernetes, which orchestrates Docker - so there is an AND for "Docker and AWS". AWS just does not provide orchestration with docker-swarm, which was the orchestrator I used.
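
Once an EKS cluster exists and kubectl is configured against it, getting a first microservice running could look roughly like this (image name and ports are placeholders):

# create a deployment from an existing container image (placeholder name)
kubectl create deployment my-microservice --image=registry.example.com/my-microservice:1.0

# expose it through an AWS load balancer
kubectl expose deployment my-microservice --type=LoadBalancer --port=80 --target-port=8080

# watch for the external address of the service
kubectl get service my-microservice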


AWS: Docker and AWS - creating my first EC2 compute node

Dietrich Schroff - Sat, 2018-11-10 22:11
My first idea after all my experiences with Docker was to run my Docker application on AWS.
After registration I searched for Docker inside AWS and I only found this:
Hmm. This does not look like what I expected. I thought that I just had to upload my Docker image and then I could select on which AWS compute nodes I want to run this image.

But let's give it a try.

First step is to install Docker on an Amazon Linux instance:

The starting point is https://console.aws.amazon.com/ec2/


On the left side of the top bar you have to choose the region where your EC2 instance should be launched:

Then:

I chose the one with "the repositories include docker, PHP, MySQL, PostgreSQL, ..."








 You have to download the keypair. Otherwise you will not be able to connect to your machine!
And after a short time:


And here we go:
schroff@zerberus:~/AWS$ chmod 400 181111-first-aws-keypair.pem
schroff@zerberus:~/AWS$ ssh -i 181111-first-aws-keypair.pem ec2-user@ec2-35-180-192-27.eu-west-3.compute.amazonaws.com

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2018.03-release-notes/

14 package(s) needed for security, out of 30 available
Run "sudo yum update" to apply all updates.

[ec2-user@ip-172-31-47-127 ~]$
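
From here, the usual next steps on the Amazon Linux AMI (following the AWS documentation; shown as a sketch) would be:

# apply the pending security updates
sudo yum update -y

# install and start Docker, and allow ec2-user to use it without sudo
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user

# log out and back in, then verify
docker info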


Moving data across DB link when one database uses pass-through configuration

Tom Kyte - Sat, 2018-11-10 12:26
There is a source with the settings below: NLS_LANGUAGE AMERICAN NLS_TERRITORY CIS NLS_CURRENCY ?. NLS_ISO_CURRENCY CIS NLS_NUMERIC_CHARACTERS , NLS_CALENDAR GREGORIAN NLS_DATE_FORMAT DD.MM.RR NLS_DATE_LANGUAGE AMERICAN NLS_CHARACTERSET W...
Categories: DBA Blogs

Amazon Web Services: A Start into AWS

Dietrich Schroff - Sat, 2018-11-10 06:23
After spending a lot of time with Docker / docker swarm, I decided to see how this all works by using AWS.

First step is to do the registration (only registered users have access to the AWS documentation!):

The start is https://aws.amazon.com/








An index only scan in PostgreSQL is not always index only

Yann Neuhaus - Sat, 2018-11-10 05:19

PostgreSQL supports index only scans since version 9.2, which was released in September 2012. The purpose of an index only scan is to fetch all the required values entirely from the index without visiting the table (the heap) at all. Of course that can speed up a query: avoiding the heap means reading less data, and reading less data is obviously faster than reading more data. So index only scans are a good thing, but unfortunately an index only scan does not always mean that the heap is not touched.

As always, let's start by creating a sample table and populating it with some data:

postgres=# create table t1 ( a int, b int, c int );
CREATE TABLE
postgres=# insert into t1 select a.*,a.*,a.* from generate_series(1,1000000) a;
INSERT 0 1000000
postgres=# \d+ t1
                                    Table "public.t1"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
--------+---------+-----------+----------+---------+---------+--------------+-------------
 a      | integer |           |          |         | plain   |              | 
 b      | integer |           |          |         | plain   |              | 
 c      | integer |           |          |         | plain   |              | 

Without any index a query like the following one needs to read the whole table for getting the result:

postgres=# explain (analyze,buffers,costs off) select a from t1 where b = 5;
                                 QUERY PLAN                                 
----------------------------------------------------------------------------
 Gather (actual time=2.187..158.023 rows=1 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   Buffers: shared hit=5406
   ->  Parallel Seq Scan on t1 (actual time=68.645..119.828 rows=0 loops=3)
         Filter: (b = 5)
         Rows Removed by Filter: 333333
         Buffers: shared hit=5406
 Planning time: 0.209 ms
 Execution time: 158.079 ms
(10 rows)

In this case PostgreSQL decides to do a parallel sequential scan which is fine. The only other option would be to do a serial sequential scan as we do not have any indexes on that table. What people usually do in such cases is to create an index like this one:

postgres=# create index i1 on t1(b);
CREATE INDEX
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | integer |           |          | 
 c      | integer |           |          | 
Indexes:
    "i1" btree (b)

Having that index in place PostgreSQL can use it to return the results faster:

postgres=# explain (analyze,buffers,costs off) select a from t1 where b = 5;
                             QUERY PLAN                              
---------------------------------------------------------------------
 Index Scan using i1 on t1 (actual time=0.035..0.037 rows=1 loops=1)
   Index Cond: (b = 5)
   Buffers: shared hit=4
 Planning time: 0.174 ms
 Execution time: 0.081 ms
(5 rows)

As you can see above the index is used but PostgreSQL will still have to visit the heap for getting the value of “a”. We can improve that even further by creating an index that contains all the information we need to satisfy the query:

postgres=# create index i2 on t1 (b,a);
CREATE INDEX
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | integer |           |          | 
 c      | integer |           |          | 
Indexes:
    "i1" btree (b)
    "i2" btree (b, a)

What will happen now is, that PostgreSQL will switch to an index only scan:

postgres=# explain (analyze,buffers,costs off) select a from t1 where b = 5;
                                QUERY PLAN                                
--------------------------------------------------------------------------
 Index Only Scan using i2 on t1 (actual time=0.111..0.113 rows=1 loops=1)
   Index Cond: (b = 5)
   Heap Fetches: 1
   Buffers: shared hit=1 read=3
 Planning time: 0.515 ms
 Execution time: 0.161 ms
(6 rows)

But: There is still a fetch from the heap. Why is that? To answer that, let's list the files on disk for that table:

postgres=# select pg_relation_filepath('t1');
 pg_relation_filepath 
----------------------
 base/34013/34082
(1 row)

postgres=# \! ls -l $PGDATA/base/34013/34082*
-rw-------. 1 postgres postgres 44285952 Nov  8 04:51 /u02/pgdata/10/PG103/base/34013/34082
-rw-------. 1 postgres postgres    32768 Nov  8 04:46 /u02/pgdata/10/PG103/base/34013/34082_fsm

… and here we go: The table has a free space map but the visibility map is not yet there. Without the visibility map PostgreSQL can not know if all the rows in that page are visible to all current transactions and therefore has to visit the heap to get that information. As soon as we create the visibility map:

postgres=# vacuum t1;
VACUUM
postgres=# \! ls -l $PGDATA/base/34013/34082*
-rw-------. 1 postgres postgres 44285952 Nov  8 04:51 /u02/pgdata/10/PG103/base/34013/34082
-rw-------. 1 postgres postgres    32768 Nov  8 04:46 /u02/pgdata/10/PG103/base/34013/34082_fsm
-rw-------. 1 postgres postgres     8192 Nov  8 07:18 /u02/pgdata/10/PG103/base/34013/34082_vm
postgres=# explain (analyze,buffers,costs off) select a from t1 where b = 5;
                                QUERY PLAN                                
--------------------------------------------------------------------------
 Index Only Scan using i2 on t1 (actual time=0.052..0.054 rows=1 loops=1)
   Index Cond: (b = 5)
   Heap Fetches: 0
   Buffers: shared hit=4
 Planning time: 0.446 ms
 Execution time: 0.106 ms
(6 rows)

… the fetch from the heap is gone and we have a real index only scan (although the visibility map is always scanned). To demonstrate that in more detail let's get the physical location of the row we want to read:

postgres=# select ctid,* from t1 where b=5;
 ctid  | a | b | c 
-------+---+---+---
 (0,5) | 5 | 5 | 5
(1 row)

Now we know that the row is in block 0 and it is the 5th row in that block. Let’s check, for that block, if all rows are visible to all current transactions:

postgres=# create extension pg_visibility;
CREATE EXTENSION
postgres=# select pg_visibility_map('t1'::regclass, 0);
 pg_visibility_map 
-------------------
 (t,f)
(1 row)

Yes, they are (the first “t”, which is true, means all visible). What happens when we update the row in a second session?

postgres=# update t1 set a=8 where b=5;
UPDATE 1

Do we still get a “true” when we ask if all rows in that block are visible to all transactions?

postgres=# select pg_visibility_map('t1'::regclass, 0);
 pg_visibility_map 
-------------------
 (f,f)
(1 row)

No, and that means two things: First of all a modification to a page clears the bit in the visibility map. The second consequence is, that our index only scan will need to visit the heap again:

postgres=# explain (analyze,buffers,costs off) select a from t1 where b = 5;
                                QUERY PLAN                                
--------------------------------------------------------------------------
 Index Only Scan using i2 on t1 (actual time=0.263..0.267 rows=1 loops=1)
   Index Cond: (b = 5)
   Heap Fetches: 2
   Buffers: shared hit=6 dirtied=3
 Planning time: 0.205 ms
 Execution time: 0.328 ms
(6 rows)

The question now is: Why two heap fetches? First of all every update in PostgreSQL creates a new row:

postgres=# select ctid,* from t1 where b=5;
   ctid    | a | b | c 
-----------+---+---+---
 (5405,76) | 8 | 5 | 5
(1 row)

Our row is now in a new block (and even if it were in the same block, it would be at another location within the block), and that of course also affects the index entry which points to that row. The index still points to the old version of the row, and from there a pointer leads to the current version, which means two heap fetches (when you update a column that is not part of the index, that is called a HOT update; more on that in another post). For the next execution we see one heap fetch again:

postgres=# explain (analyze,buffers,costs off) select a from t1 where b = 5;
                                QUERY PLAN                                
--------------------------------------------------------------------------
 Index Only Scan using i2 on t1 (actual time=0.022..0.023 rows=1 loops=1)
   Index Cond: (b = 5)
   Heap Fetches: 1
   Buffers: shared hit=5
 Planning time: 0.093 ms
 Execution time: 0.047 ms
(6 rows)

Not sure why only one, at the moment, but I’ll update this blog once I have more information.
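
As a side note, pg_stat_user_tables shows how many of the updates on a table were HOT updates; a small sketch to observe the difference between updating an indexed and a non-indexed column:

-- updating c (not part of any index) can be a HOT update,
-- updating a or b (both indexed) cannot
update t1 set c = 42 where b = 5;

select relname, n_tup_upd, n_tup_hot_upd
from pg_stat_user_tables
where relname = 't1';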

What you need to remember is that an index only scan is not always index only. Depending on how many modifications are happening on that table, it might well be that PostgreSQL must visit the heap quite a few times, which of course slows things down. For tables where most of the blocks are static, an index only scan is great.
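
A rough way to estimate how well index only scans can work for a table is to compare relallvisible (the number of pages marked all-visible in the visibility map) with relpages in pg_class; these counters are only maintained by VACUUM/ANALYZE, so treat this as an approximation:

select relname,
       relpages,
       relallvisible,
       round(100.0 * relallvisible / greatest(relpages, 1), 2) as pct_all_visible
from pg_class
where relname = 't1';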

The article An index only scan in PostgreSQL is not always index only appeared first on Blog dbi services.

Oracle Sequence and EXPDP/Historical Data

Tom Kyte - Fri, 2018-11-09 18:06
Hello, Ask TOM Team. We are designing a new database and there are lots of tables with identity column. The sequence used by each table is a named sequence (we are not using the system-generated sequence). The default values of these tables is lik...
Categories: DBA Blogs

Function to gather the stats on the latest partition of a table

Tom Kyte - Fri, 2018-11-09 18:06
I have a function - ANALYZE_TABLE as below <code>create or replace FUNCTION ANALYZE_TABLE (P_TAB_NAME IN VARCHAR2) RETURN NUMBER IS PRAGMA AUTONOMOUS_TRANSACTION; BEGIN DBMS_STATS.GATHER_TABLE_STATS(USER,P_TAB_NAME); RETURN 0; END;</cod...
Categories: DBA Blogs

Not exist without subquery

Tom Kyte - Fri, 2018-11-09 18:06
I have 4 tables <code>table item_values (item_key, item_value) table filter_model (filter_key) table item_includes (include_key, filter_key_fk, item_key, include_value) table item_excludes (exclude_key, filter_key_fk, item_k...
Categories: DBA Blogs

sum() at each level in hierarchial query

Tom Kyte - Fri, 2018-11-09 18:06
consider the following query select level,sys_connect_by_path(last_name,'\') name,salary from employees start with last_name='King' connect by prior employee_id=manager_id is it possible to get the result like name sum_of_all_l...
Categories: DBA Blogs

Date in comparison with string

Tom Kyte - Fri, 2018-11-09 18:06
Hi Tom, I have a table named RTBS_SUBS_EVENT and there is a date column named expiry_date. expiry_date is a partitioned column too. a column named ID and it's indexed. NLS_DATE_FORMAT of the database is 'DD-MON-RR'. I run this query and th...
Categories: DBA Blogs

Join elimination

Tom Kyte - Fri, 2018-11-09 18:06
Hello All, While making some tests using LiveSQL, which is Oracle 18c, I found a strange behavior regarding join elimination. That is, join elimination DOES happen even if the constraints are DEFERRED ! My LiveSQL test script can be found he...
Categories: DBA Blogs

Is there any point backing up flashback 'flb' files?

Tom Kyte - Fri, 2018-11-09 18:06
Hi, Is there any point/benefit in backing up flb files to tape using an OS utility? Or, to put it another way: would backing up flb files just be a waste of tape? Thanks
Categories: DBA Blogs

SQL Pass Summit 2018: SQL Server 2019 features coming soon!

Yann Neuhaus - Fri, 2018-11-09 15:13

It’s the 20th anniversary of the event and the keynote today was amazing (see the blog from Christophe) for different reasons but the more important is that is every time better people. Not only DBA’s but also people using their application connected to the database.
Today, one of my focus will be the next version of SQL Server 2019 with the session of Bob Ward, Asad Khan & Amit Banerjee from Microsoft.
I already tested some new features and wrote about it, but it’s really good to see a session about SQL Server 2019 and to discover new features that can help us.
Before writing about the session, I want to mention the few-minutes demo from the keynote done by Bob Ward & Connor Cunningham, and I will share it with you because today it is not available in the CTP; it will be in a future release of SQL Server. The problem: you have a huge workload and tempdb is under heavy pressure, and you notice in tempdb that only the sys tables are the cause. What can we do? We do not have access to these tables, they are internal… And Connor comes with an update of SQL Server and it is fixed! AMAZING… it's really a dream for us!!! Connor explained that the SQL Server development team developed a solution using in-memory technology and put the sys tables in-memory to avoid this problem. This is illustrated in the picture as "tempdb: It Just Runs Faster" and is part of Intelligent Query Processing, like Memory Grant Feedback or Adaptive Joins already available in SQL Server 2017 in the Adaptive Query Processing family.
The name is also changing from Adaptive to Intelligent! :lol:
After this future new feature, let’s focus on the Roadmap for SQL Server: Inside SQL Server 2019.
I will not write about features already in the CTP, but about features coming soon.
On the security side, I'm very curious to test "Static Data Masking".
We will be able to mask data from users with admin access.
It can be useful when doing a copy from prod to dev for developers having the db_owner role… they won't see the data in dev! Perfect for us!
On the High Availability topic, they announced that system databases can finally be part of Availability Groups. You cannot imagine the joy that this announcement brings to all of us!
The other news is Accelerated Database Recovery. This new feature helps a rollback of a large transaction return to the previous state much faster. I was in a session explaining how it works; it was very interesting and I'm excited to test it on-premises.
During the demo at the keynote, Bob used a new function that I find very useful: sys.fn_PageResCracker, used together with sys.dm_db_page_info, to troubleshoot page resources.
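
Based on the documentation of these new objects, the usage pattern looks roughly like this (a sketch; requires SQL Server 2019):

-- for every running request that is waiting on a page resource,
-- crack the page_resource value and look up details about that page
SELECT r.session_id,
       r.wait_type,
       pi.database_id,
       pi.file_id,
       pi.page_id,
       pi.object_id,
       pi.page_type_desc
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.fn_PageResCracker(r.page_resource) AS prc
CROSS APPLY sys.dm_db_page_info(prc.db_id, prc.file_id, prc.page_id, 'DETAILED') AS pi
WHERE r.page_resource IS NOT NULL;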

And to finish, the last slide of the presentation shows the next direction of SQL Server.
The dbi services SQL Server team will be happy to test all these new features and blog about them!

The article SQL Pass Summit 2018: SQL Server 2019 features coming soon! appeared first on Blog dbi services.

How do I add an image to a page in Oracle APEX?

Joel Kallman - Fri, 2018-11-09 14:04
Preface:  Many blog posts about Oracle APEX seem to cover a new feature or something rather obtuse & complex.  The sophisticated blog posts have little relevance to someone who is completely new to APEX and Oracle Database.  Having worked with APEX since 1999, it's always difficult for me to put myself in the shoes of a brand new student of APEX.  When I interact with people new to APEX, I try to look at APEX through their eyes, and listen to what they find confusing.  Sometimes, it's completely obvious to me and, at the same time, completely vexing to them.  Thus, this will begin a series of blog posts on how to perform very common tasks in APEX.  It will be boring for the experienced user, but hopefully helpful to the APEX developer in training.  And separately, we will also strive to make APEX simpler and easier to understand, so that blog posts like this one become unnecessary in the future.



Gerald Venzl, a highly respected Oracle Database product manager, was recently participating in a hackathon with a team of people, and they were using Oracle APEX as part of their solution.  They produced a QR code for the URL to their app, they saved this QR code image to a local file, and they wanted to include the image on a page in their APEX application.  As Gerald stated, it took more than 30 minutes for this learned and competent group of people to figure out how to do this.  This is not a criticism of Gerald and his colleagues, it's a criticism of APEX.  Gerald and his team were expecting a simple upload to a page item of type Image and they would be done, right?  Well, not so simple.

This blog post is not intended to cover the case where you have images in BLOB columns in a database table (we'll cover that in the future), or the case where the image can already be referenced via a URL.  I am presenting the simple case of you having an image on the file system on your local computer, and you want to reference it in your APEX page.

From a high-level, the steps are:
  1. Upload the file as a static application file.
  2. Using the HTML IMG tag, reference the static application file in your application.


Now, for the more detailed explanation.

Before you can reference an image in your APEX application, it has to be in a location which can be "served" by your Web server.  With APEX, you have access to the back-end Oracle Database, but you don't have access to the Web server file system.  Fortunately, there is an easy way to upload the image into the Oracle Database and have it served by your Web server when used within an APEX application.  These are called Static Application Files.

The steps are really quite simple:
  1. Edit your application in the Application Builder
  2. Click Shared Components
  3. Click Static Application Files
  4. Click Upload File
  5. Choose the file from your local computer
  6. Click the Upload button




In the picture above, this is the list of Static Application Files in the APEX Application Builder, after uploading file Three_Stooges.jpg.  Under the Reference column, there is the string #APP_IMAGES#Three_Stooges.jpg.  This is how you can reference the file when used in an HTML context in your APEX application.  This reference means nothing when used outside of an APEX application.  When someone is running your application, the APEX engine will replace the substitution string #APP_IMAGES# with an absolute URL reference to an embedded REST endpoint which will return the stored image.

To reference this image in your application, you just need to include a reference to it from an HTML IMG tag on your APEX page.  This image reference would have to be entered into the attributes of your APEX application wherever it's appropriate to embed HTML.  This includes the headers and footers of page regions, templates, and even the Source attribute of certain region types.  An easy way to get started is to reference an image in a region of type Static Content.

As an example, edit a page in Page Designer.  In the gallery at the bottom of the page, drag a region of type Static Content onto your page.  In the Source attribute of this region, enter the HTML string to reference the image:

<img src="#APP_IMAGES#Three_Stooges.jpg">

Obviously, reference your file and not the name of the file in my example (Three_Stooges.jpg).  Save your page and run.  That's all there is to it!






One last point.  If you do have access to the web server file system, and you wish to include static file references in your application, then definitely put them on your web server file system and simply serve them from there.  It will always be more efficient to have these served by a web server instead of having to fetch them out of the database.
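
For example, with an Apache front end you could map a URL path to a directory on the web server and reference the image through that path instead of #APP_IMAGES# (a hypothetical sketch; paths are placeholders):

# httpd.conf (sketch) - serve files under /static/ straight from the file system
Alias /static/ "/var/www/apex-static/"
<Directory "/var/www/apex-static/">
    Require all granted
</Directory>

The IMG tag would then simply be <img src="/static/Three_Stooges.jpg">.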

Leveraging Snippets to Create Wiki Pages in Oracle Developer Cloud

Shay Shmeltzer - Fri, 2018-11-09 12:53

Snippets are a feature of Oracle Developer Cloud Service that gives you a place to store reusable pieces of code as part of your project. These are the type of code snippets that you don't want as part of your core Git repository, but that you still find useful. Snippets can be your own private ones or shared among your team.

One nice usage for code snippets is the ability to quickly include them in a wiki page. This allows you, for example, to create a template of a wiki page and then quickly apply it to a new page that you create. Using the correct markup for your wiki page format (Confluence in the example in the video), you can create a collection of templates - for example, a template for a feature page, a template for a meeting-minutes page, etc. Then your team members can quickly create pages that conform to these templates.
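
As an illustration, a meeting-minutes template stored as a snippet could look something like this in Confluence wiki markup (a hypothetical example, not the one from the video):

h1. Meeting Minutes - <topic>
h2. Attendees
* Name 1
* Name 2
h2. Agenda
# Item 1
# Item 2
h2. Decisions
|| Decision || Owner || Due date ||
| ... | ... | ... |
h2. Action Items
* Action 1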

In the video below I show you how to leverage this combination step by step.

Categories: Development

Pass Summit – dbi visit day 4 (2)

Yann Neuhaus - Fri, 2018-11-09 12:38
Power BI 5 top custom visuals

The development of Power BI itself is fast. But the means to represent data evolve fast too: there are constantly new Power BI custom visuals developed by the community and Microsoft.
You can develop your own custom visuals and publish them on the marketplace for your organization or for the whole community.
Tip: check whether the visuals are certified, meaning they have been tested against the different Power BI capabilities.
You can find the gallery of custom visuals here: https://Appsource.microsoft.com/en-us/marketplace

Here are 5 different ones presented during the PASS Summit session:

1- SandDance
Used to display every data element of your data set on an XY axis. It also lets you zoom into the data to isolate a plotted area and create scenes to show your data with animations.


2- Synoptic Panel
Connects areas in a custom image with attributes in your data model, using the designer at Https://Synoptic.design to define the areas in your image that you will map to your data set.


3- Infographic designer
Customize the representation of your data to your specific need or business, even with your own images.


4- Timeline Storyteller
Multiple representations and layouts of data based on a time axis, which you can organize in order to tell a story with your data.


5- PowerApps
Integrates Line of Business applications into Power BI. This can also be used to have writebacks in Power BI.


Development of your PowerApp must be done on the website Https://Powerapps.com if you integrate it using Power BI Desktop. You will have a different experience using the service version: there, including the PowerApps visual will allow you to create the PowerApp automatically with the fields of your data set.

Again an interesting data analytics day… stay tuned.

The article Pass Summit – dbi visit day 4 (2) appeared first on Blog dbi services.
