DBA Blogs

Output of DBMS_JOB.SUBMIT

Tom Kyte - 12 hours 8 min ago
Hi Tom, I am running a procedure via dbms_job.submit, but it fails immediately. I added dbms_output calls to print the error message. How and where can I see that message? Thanks in advance for your great support...
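A minimal way to investigate, sketched under the assumption that you can query the job views: DBMS_OUTPUT from a background job is discarded, so the error never shows up there; the failure count is visible in USER_JOBS, and the actual ORA- error goes to the alert log and trace files. Converting the job to DBMS_SCHEDULER records the error per run:

<code>
-- failure count and broken flag for jobs submitted via DBMS_JOB
SELECT job, what, failures, broken, last_date, next_date
FROM   user_jobs;

-- with DBMS_SCHEDULER the error of each run is queryable
-- (MY_JOB and MY_PROC are hypothetical names):
BEGIN
   DBMS_SCHEDULER.create_job(
      job_name   => 'MY_JOB',
      job_type   => 'STORED_PROCEDURE',
      job_action => 'MY_PROC',
      enabled    => TRUE);
END;
/

SELECT job_name, status, error#, additional_info
FROM   user_scheduler_job_run_details
ORDER  BY log_date DESC;
</code>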
Categories: DBA Blogs

I cannot append more than 32 KB of data to a CLOB

Tom Kyte - 12 hours 8 min ago
Hi, I am using a CLOB in a stored procedure and appending data from a temporary table into it. I have the following PL/SQL:

<code>
DECLARE
   TYPE sturecord IS RECORD (stid NUMBER, Regid NUMBER);
   TYPE stutype IS TABLE OF sturecord;
   stutable stutype;
   stid     CLOB := NULL;
   Regid    CLOB := NULL;
BEGIN
   SELECT DISTINCT (st.stu_id), rg.Reg_id
   BULK COLLECT INTO stutable
   FROM Student_details st, student_registration rg
   WHERE st.stu_id = rg.stu_id;

   FOR i IN stutable.FIRST .. stutable.LAST LOOP
      dbms_output.put_line(i);
      stid  := stid  || stutable(i).stid  || ',';
      Regid := Regid || stutable(i).Regid || ',';
   END LOOP;
END;
</code>

At line 22 I get the error "ORA-06502: PL/SQL: numeric or value error ORA-06512: at line 22". The temporary table has more than 44,000 records, roughly 3 MB in total, while the CLOB data type has a maximum length of 4 GB. Please help me resolve this error. Thanks
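The CLOB itself can hold 4 GB, so one commonly suggested approach, sketched below for one of the two lists and reusing the question's table names, is to build chunks in a VARCHAR2 buffer and flush them into the CLOB with DBMS_LOB.WRITEAPPEND before the buffer can approach the 32K PL/SQL VARCHAR2 limit; this also avoids repeated CLOB concatenation, which is slow:

<code>
DECLARE
   buf  VARCHAR2(32000);
   stid CLOB;
BEGIN
   DBMS_LOB.createtemporary(stid, TRUE);
   FOR r IN (SELECT DISTINCT st.stu_id
             FROM   Student_details st, student_registration rg
             WHERE  st.stu_id = rg.stu_id)
   LOOP
      -- flush the buffer into the CLOB before it can overflow
      IF LENGTH(buf) > 30000 THEN
         DBMS_LOB.writeappend(stid, LENGTH(buf), buf);
         buf := NULL;
      END IF;
      buf := buf || r.stu_id || ',';
   END LOOP;
   IF buf IS NOT NULL THEN
      DBMS_LOB.writeappend(stid, LENGTH(buf), buf);
   END IF;
END;
/
</code>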
Categories: DBA Blogs

deterministic package function

Tom Kyte - Tue, 2021-06-15 12:46
How can I set a package function to be deterministic? It is easy for a standalone function, but I couldn't find the right syntax for package functions.
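For reference, the DETERMINISTIC keyword goes on the function declaration in the package specification, repeated in the body so spec and body stay consistent; a minimal sketch:

<code>
CREATE OR REPLACE PACKAGE my_pkg AS
   FUNCTION double_it (p_in NUMBER) RETURN NUMBER DETERMINISTIC;
END my_pkg;
/

CREATE OR REPLACE PACKAGE BODY my_pkg AS
   FUNCTION double_it (p_in NUMBER) RETURN NUMBER DETERMINISTIC
   IS
   BEGIN
      RETURN p_in * 2;
   END double_it;
END my_pkg;
/
</code>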
Categories: DBA Blogs

How to read parameter values passed from java while calling a stored procedure.

Tom Kyte - Mon, 2021-06-14 18:26
Hi Tom, our web app uses a Java class to call stored procedures, and user inputs are passed as parameter values in those calls. Is there a way in Oracle to find out which parameter values were passed, by querying some dictionary view or the like? I only need the parameters when I encounter an error/exception, so I don't want to add parameter logging inside all my procedures. Please let me know if this is possible. Thanks...
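One supported way to see the values without touching the procedures, sketched here with hypothetical session identifiers, is to trace the application session with bind capture enabled; the parameter values then appear next to each call in the trace file (V$SQL_BIND_CAPTURE can sometimes help as well, but its capture is sampled and limited):

<code>
-- find SID/SERIAL# of the app session in v$session first
BEGIN
   DBMS_MONITOR.session_trace_enable(
      session_id => 123,    -- hypothetical SID
      serial_num => 456,    -- hypothetical SERIAL#
      waits      => FALSE,
      binds      => TRUE);  -- write bind values to the trace file
END;
/

-- reproduce the error, then switch tracing off again:
BEGIN
   DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 456);
END;
/
</code>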
Categories: DBA Blogs

Combining multiple rows to get a single one

Tom Kyte - Mon, 2021-06-14 18:26
Source table:
<code>
----------------------------------------
| Employee Name | department | Emp Id  |
----------------------------------------
| Sam           | Sales      | 101     |
| Sam           | Finance    | 101     |
| Dirk          | marketing  | 102     |
| Dirk          | Research   | 102     |
----------------------------------------
</code>
Output needed:
<code>
------------------------------------------------------
| Employee Name | Emp Id | department1 | department2 |
------------------------------------------------------
| Sam           | 101    | Sales       | Finance     |
| Dirk          | 102    | marketing   | Research    |
------------------------------------------------------
</code>
Can you kindly help with which functions or query I should use to get the output above?
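One way to get there, sketched against a hypothetical table emp and assuming at most two departments per employee, is ROW_NUMBER plus PIVOT (choose the ORDER BY that defines which department counts as the first):

<code>
SELECT *
FROM  (SELECT employee_name, emp_id, department,
              ROW_NUMBER() OVER (PARTITION BY emp_id ORDER BY department) AS rn
       FROM   emp)
PIVOT (MAX(department) FOR rn IN (1 AS department1, 2 AS department2));
</code>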
Categories: DBA Blogs

Inject variable value

Tom Kyte - Fri, 2021-06-11 17:06
Hi guys. I have a piece of code like this:
<code>
declare
   myvar  number := 4;
   v_stmt varchar2(4000);
begin
   select rule into v_stmt from rules where rule_id = 1;
   -- v_stmt -> 'begin :a := my_function(my_var, 1), 1); end;'
   execute immediate v_stmt;
end;
/
</code>
Is there a way to inject the value of myvar when executing v_stmt?
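If the rule text can be stored with bind placeholders for every input, EXECUTE IMMEDIATE ... USING will inject the value; a sketch assuming the rule is rewritten as 'begin :a := my_function(:b, 1); end;':

<code>
declare
   myvar    number := 4;
   v_result number;
   v_stmt   varchar2(4000);
begin
   select rule into v_stmt from rules where rule_id = 1;
   -- rule text assumed to be: 'begin :a := my_function(:b, 1); end;'
   execute immediate v_stmt using out v_result, in myvar;
   dbms_output.put_line(v_result);
end;
/
</code>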
Categories: DBA Blogs

SQL Loader header row attribute cannot be applied to detail rows in event of multiple header/detail transactions

Tom Kyte - Fri, 2021-06-11 17:06
How can SQL*Loader apply a value from a header row to its related detail rows when the file contains multiple header/detail transactions? The issue is that a header row attribute cannot be carried down to its own detail rows once there is more than one header. It does work for a single header: the header value is evaluated once, at the very first header row, and applied to all detail rows, which is not what should happen with multiple headers. I need each header's attribute value stamped onto its own child detail rows. Here is an example. For each <b>H - Header</b> record attribute value <b>1001</b>, <b>1002</b> and <b>1003</b>, I need to stamp the value onto each respective detail record while loading via SQL*Loader.
<code>
H ABC 1001
D XYZ 89.90
D XYZ 89.91
D XYZ 89.92
H ABC 1002
D XYZ 89.90
D XYZ 89.91
D XYZ 89.92
H ABC 1003
D XYZ 89.90
D XYZ 89.91
D XYZ 89.92
</code>
The expected result in the database table after the load completes is as follows, which does not happen. Any suggestions!
<code>
H ABC 1001
D XYZ 89.90 1001
D XYZ 89.91 1001
D XYZ 89.92 1001
H ABC 1002
D XYZ 89.90 1002
D XYZ 89.91 1002
D XYZ 89.92 1002
H ABC 1003
D XYZ 89.90 1003
D XYZ 89.91 1003
D XYZ 89.92 1003
</code>
Thank you.
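SQL*Loader evaluates each input record independently, so it cannot remember the current header on its own. One workaround, a sketch with hypothetical table and column names rather than a definitive recipe, is to load all lines into a staging table with a sequence column preserving file order (for example via SEQUENCE(COUNT) in the control file) and then carry the last seen header value down with LAST_VALUE ... IGNORE NULLS:

<code>
INSERT INTO detail_target (item, amt, hdr_val)
SELECT item, amt, hdr_val
FROM  (SELECT rec_type, item, amt,
              LAST_VALUE(CASE WHEN rec_type = 'H' THEN attr END IGNORE NULLS)
                 OVER (ORDER BY seq) AS hdr_val
       FROM   stage_lines)
WHERE rec_type = 'D';
</code>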
Categories: DBA Blogs

Impact of proper column precision for analytical queries

The Oracle Instructor - Fri, 2021-06-11 04:33

Does it matter if your data warehouse tables have columns with much higher precision than needed? Probably: Yes.

But how do you know whether the precision of your columns is larger than the stored values require? In Exasol, we have introduced the function MIN_SCALE to find out. I’m working on an Exasol 7 New Features course at the moment, and this article is kind of a sneak preview.

If there’s an impact, it will of course show only with huge numbers of rows. It would be nice to have a table generator that produces large test tables, and another Exasol 7 feature helps with exactly that: the new clause VALUES BETWEEN.


CREATE OR REPLACE TABLE scaletest.t1 AS
SELECT CAST(1.123456789 AS DECIMAL(18,17)) AS testcol
FROM VALUES BETWEEN 1 AND 1e9;

This generates a table with 1000 million rows in only 30 seconds of runtime on my modest VirtualBox VM. Obviously, the scale of the column is too large for the values stored there. But if it weren’t that obvious, here’s how I can find out:

SELECT MAX(a) FROM (SELECT MIN_SCALE(testcol) As a FROM scaletest.t1);

This comes back with the output 9 after 20 seconds of runtime, telling me that the scale actually required by the stored values is at most 9. For comparison, I’ll create a second table with only the required scale:


CREATE OR REPLACE TABLE scaletest.t2 AS
SELECT CAST(1.123456789 AS DECIMAL(10,9)) AS testcol
FROM VALUES BETWEEN 1 AND 1e9;

So does it really matter? Is there a runtime difference for analytical queries?

SELECT COUNT(*),MAX(testcol) FROM t1; -- 16 secs runtime
SELECT COUNT(*),MAX(testcol) FROM t2; -- 7 secs runtime

My little experiment shows that the query running on the column with the appropriate scale is twice as fast as the one running on the oversized column!

In other words, it is beneficial to adjust the column definition to the scale the stored values actually need, with statements like this:

ALTER TABLE t1 MODIFY (testcol DECIMAL(10,9));

After that change, the runtime goes down to 7 seconds as well for the first statement.

I was curious whether that effect also shows on other databases, so I prepared a similar test case for an Oracle database: the same tables, but with only 100 million rows. It just takes too long to export tables with 1000 million rows to Oracle, using VMs on my notebook. And don’t even think about generating 1000-million-row tables in Oracle with the CONNECT BY LEVEL method; that will take forever, or more likely break with an out-of-memory error.
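A common workaround for the CONNECT BY LEVEL memory problem, sketched here as a suggestion rather than the method used for this test, is to cross-join several smaller row generators so that no single level has to materialize millions of rows:

CREATE TABLE t1 AS
SELECT CAST(1.123456789 AS NUMBER(18,17)) AS testcol
FROM  (SELECT level FROM dual CONNECT BY level <= 1000),
      (SELECT level FROM dual CONNECT BY level <= 1000),
      (SELECT level FROM dual CONNECT BY level <= 100);  -- 1000 x 1000 x 100 = 100 million rows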

The effect also shows with 100-million-row tables on Oracle: 5 seconds of runtime with the oversized precision and about 3 seconds with the appropriately scaled column.

Conclusion: yes, it is indeed sensible to define table columns according to the actual requirements of the values stored in them; it makes a difference, performance-wise.

Categories: DBA Blogs

Cannot alter Oracle sequence start with

Tom Kyte - Thu, 2021-06-10 22:46
I have tried altering the START WITH value of an Oracle sequence, but I get the error [<b>ORA-02283: cannot alter starting sequence number</b>]. I tried to find out why Oracle does not allow this, but I could not find an appropriate answer. So my question is: why does Oracle not let you change a sequence's START WITH value? (PS: I am hoping there is a really solid technical reason behind this.) Thanks in advance!
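For context, START WITH is only meaningful at creation time, which is why ALTER rejects it; the usual workaround, sketched here, is to jump the sequence with a temporary INCREMENT BY change (recent releases also offer ALTER SEQUENCE ... RESTART):

<code>
-- assume the sequence last returned 100 and should continue from 500
ALTER SEQUENCE my_seq INCREMENT BY 400;
SELECT my_seq.NEXTVAL FROM dual;   -- returns 500
ALTER SEQUENCE my_seq INCREMENT BY 1;
</code>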
Categories: DBA Blogs

INSERT data in TIMESTAMP column having year less than 4713 i.e. timestamp like '01/01/8999 BC'

Tom Kyte - Thu, 2021-06-10 22:46
I want to insert data into a TIMESTAMP column with a year earlier than -4713, i.e. a timestamp like '01/01/8999 BC', so the year here is -8999. When I try to insert it, I get an error like 'Year should be between -4713 and 9999'. But for some tables I have years beyond that limit as well, like -8999 and -78888, so this limit makes the TIMESTAMP data type unusable for them. How then can I insert a year of -8999 into a TIMESTAMP column?
Categories: DBA Blogs

Specified tablespace for IOT

Tom Kyte - Thu, 2021-06-10 22:46
I have two tablespaces named USERS and INDX, respectively. The default tablespace for the current user is USERS. I created an IOT table named tb_zxp. Since an IOT has no separate data segment to assign a tablespace to, I'd like to put the whole index of tb_zxp on tablespace INDX.
<code>
create table tb_zxp (
   customer_id integer,
   store_id    integer,
   trans_date  date,
   amt         number,
   goods_name  varchar2(20),
   rate        number(8,1),
   quantity    integer,
   constraint pk_zxp primary key (customer_id, store_id, trans_date))
organization index
including amt overflow tablespace indx;

insert into tb_zxp values (11,21,date '2021-04-10',500,'Cocacola',2,250);
insert into tb_zxp values (11,25,date '2021-04-11',760,'Tower',3.8,200);
insert into tb_zxp values (24,9,date '2021-05-11',5200,'Washing machine',5200,1);
commit;
</code>
However, this query shows the index was still placed in the default tablespace USERS:
<code>
select tablespace_name from user_extents
where segment_name in (select 'TB_ZXP' c from dual
                       union
                       select index_name from user_indexes where table_name='TB_ZXP');

TABLESPACE_NAME
------------------------------------------------------------------------------------------
USERS
</code>
Then I removed the INCLUDING ... OVERFLOW clause from the table creation statement and tried again:
<code>
create table tb_zxp (
   customer_id integer,
   store_id    integer,
   trans_date  date,
   amt         number,
   goods_name  varchar2(20),
   rate        number(8,1),
   quantity    integer,
   constraint pk_zxp primary key (customer_id, store_id, trans_date))
organization index tablespace indx;

insert into tb_zxp values (11,21,date '2021-04-10',500,'Cocacola',2,250);
commit;
</code>
This time, the index lands in tablespace INDX as expected:
<code>
select tablespace_name from user_extents
where segment_name in (select 'TB_ZXP' c from dual
                       union
                       select index_name from user_indexes where table_name='TB_ZXP');

TABLESPACE_NAME
------------------------------------------------------------------------------------------
INDX
</code>
Could any guru kindly explain why removing INCLUDING ... OVERFLOW gives the desired result?
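A likely explanation: in the first statement the TABLESPACE clause binds to the OVERFLOW segment, not to the index segment, so the primary-key index silently defaults to USERS. Each segment can be placed explicitly; a sketch with the same table definition:

<code>
create table tb_zxp (
   customer_id integer,
   store_id    integer,
   trans_date  date,
   amt         number,
   goods_name  varchar2(20),
   rate        number(8,1),
   quantity    integer,
   constraint pk_zxp primary key (customer_id, store_id, trans_date))
organization index
tablespace indx             -- index segment explicitly on INDX
including amt
overflow tablespace users;  -- overflow segment explicitly on USERS
</code>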
Categories: DBA Blogs

Using analytical functions for time period grouping

Tom Kyte - Thu, 2021-06-10 22:46
Hi Tom, I have a table like below:
<code>
GRP,SUBGRP,START_Y,END_Y
122,...
123,A,2010,2011
123,A,2011,2012
123,A,2012,2013
123,A,2013,2014
123,B,2014,2015
123,B,2015,2016
123,B,2016,2017
123,A,2017,2018
123,A,2018,2019
123,A,2019,2020
124,...
</code>
I would like to find the start and end of all intervals in this table, like so:
<code>
GRP,SUBGRP,MIN,MAX
122,...
123,A,2010,2014
123,B,2014,2017
123,A,2017,2020
124,...
</code>
A simple GROUP BY shows the results over the complete time period, but not over the different intervals:
<code>
GRP,SUBGRP,MIN,MAX
122,...
123,A,2010,2020
123,B,2014,2017
124,...
</code>
I think it should be possible with analytic functions, but I can't get it.
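This is the classic gaps-and-islands problem; a sketch of the usual two-ROW_NUMBER (Tabibitosan) approach, assuming the table is named t with the columns shown:

<code>
SELECT grp, subgrp, MIN(start_y) AS min_y, MAX(end_y) AS max_y
FROM  (SELECT grp, subgrp, start_y, end_y,
              ROW_NUMBER() OVER (PARTITION BY grp         ORDER BY start_y)
            - ROW_NUMBER() OVER (PARTITION BY grp, subgrp ORDER BY start_y) AS island
       FROM   t)
GROUP BY grp, subgrp, island
ORDER BY grp, MIN(start_y);
</code>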
Categories: DBA Blogs

Cloud Vanity: A Weekly Carnival of AWS, Azure, GCP, and More - Edition 5

Pakistan's First Oracle Blog - Thu, 2021-06-10 21:09

Welcome to the next edition of the weekly Cloud Vanity. As usual, this edition casts light on multiple cloud providers and what's happening in their sphere. From the mega players to the small fish in the ocean, it covers it all. Enjoy!!!

AWS:

Reducing risk is the fundamental reason organizations invest in cybersecurity. The threat landscape grows and evolves, creating the need for a proactive, continual approach to building and protecting your security posture. Even with expanding budgets, the number of organizations reporting serious cyber incidents and data breaches is rising.

Streaming data presents a unique set of design and architectural challenges for developers. By definition, streaming data is not bounded, having no clear beginning or end. It can be generated by millions of separate producers, such as Internet of Things (IoT) devices or mobile applications. Additionally, streaming data applications must frequently process and analyze this data with minimal latency.

This post presents a solution using AWS Systems Manager State Manager that automates the process of keeping RDS instances in a start or stop state.

Over the last few years, Machine Learning (ML) has proven its worth in helping organizations increase efficiency and foster innovation. 

GCP:

In recent years, the grocery industry has had to shift to facilitate a wider variety of checkout journeys for customers. This has meant ensuring a richer transaction mix, including mobile shopping, online shopping, in-store checkout, cashierless checkout or any combination thereof like buy online, pickup in store (BOPIS).  

At Google I/O this year, we introduced Vertex AI to bring together all our ML offerings into a single environment that lets you build and manage the lifecycle of ML projects. 

Dataflow pipelines and Pub/Sub are the perfect services for this. All we need to do is write our components on top of the Apache Beam sdk, and they’ll have the benefit of distributed, resilient and scalable compute.

In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers. And as well you should! It’s completely reasonable to use the capabilities from multiple cloud providers to achieve your desired business outcomes. 

Azure:

Generators at datacenters, most often powered by petroleum-based diesel, play a key role in delivering reliable backup power. Each of these generators runs for no more than a few hours a year at our datacenter sites, most often for routine maintenance or for backup power during a grid outage.

5 reasons to attend the Azure Hybrid and Multicloud Digital Event

For over three years, I have had the privilege of leading the SAP solutions on Azure business at Microsoft and of partnering with outstanding leaders at SAP and with many of our global partners to ensure that our joint customers run one of their most critical business assets safely and reliably in the cloud. 

There are many factors that can affect critical environment (CE) infrastructure availability—the reliability of the infrastructure building blocks, the controls during the datacenter construction stage, effective health monitoring and event detection schemes, a robust maintenance program, and operational excellence to ensure that every action is taken with careful consideration of related risk implications.

Others:

Anyone who has even a passing interest in cryptocurrency has probably heard the word ‘blockchain’ branded about. And no doubt many of those who know the term also know that blockchain technology is behind Bitcoin and many other cryptocurrencies.

Alibaba Cloud Log Service (SLS) cooperates with RDS to launch the RDS SQL audit function, which delivers RDS SQL audit logs to SLS in real time. SLS provides real-time query, visual analysis, alarm, and other functionalities.

How AI Automation is Making a First-of-its-Kind, Crewless Transoceanic Ship Possible

Enterprise organizations have faced a compendium of challenges, but today it seems like the focus is on three things: speed, speed, and more speed. It is all about time to value and application velocity—getting applications delivered and then staying agile to evolve the application as needs arise.

Like many DevOps principles, shift-left once had specific meaning that has become more generalized over time. Shift-left is commonly associated with application testing – automating application tests and integrating them into earlier phases of the application lifecycle where issues can be identified and remediated earlier (and often more quickly and cheaply).

Categories: DBA Blogs

How to Identify the MINVALUE AND MAXVALUE SCN in the flashback versions query

Tom Kyte - Thu, 2021-06-10 04:26
Hi Tom, I am facing an issue with a flashback versions query, described below.
<code>
-- Query 1:
select versions_xid XID, versions_startscn START_SCN, versions_endscn END_SCN
from employees
VERSIONS BETWEEN TIMESTAMP
   TO_TIMESTAMP('2021-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS')
   AND TO_TIMESTAMP('2021-06-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS')
where employee_id = 'xyz';
</code>
The above query returns 2 records:
<code>
XID              START_SCN    END_SCN
0B0017008F7B0300 39280796004  39282671828  [INSERT]
2D001B0016420000 39282671828  (null)       [UPDATE]
</code>
But when I pass the versions_startscn value from the first query's result into the filter condition of a second query, I get 0 records back instead of 1:
<code>
-- Query 2:
select versions_xid XID, versions_startscn START_SCN, versions_endscn END_SCN
from employees
VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
where versions_endscn = '39282671828';
</code>
The above query returns 0 records. Is there a way to identify the MINVALUE and MAXVALUE used in the second query? In what cases does the MINVALUE get set?
Categories: DBA Blogs

v$session query returns garbled characters

Tom Kyte - Thu, 2021-06-10 04:26
Hi, I'm using Oracle 10.2.0.1 for studying. I queried v$session using PL/SQL Developer from a Windows PC client, but the results contain garbled characters, as follows:
<code>
SQL> select osuser from v$session;

OSUSER
--------
SYSTEM
????
SYSTEM
abc
??????????
</code>
Then I ran the same command on the DB server, but got the same results. Here is the character set information:
<code>
SQL> select userenv('language') from dual;

USERENV('LANGUAGE')
----------------------------------------------------
SIMPLIFIED CHINESE_CHINA.ZHS16GBK

SQL> select * from v$nls_parameters;

PARAMETER                VALUE
------------------------ ------------------------------
NLS_LANGUAGE             SIMPLIFIED CHINESE
NLS_TERRITORY            CHINA
NLS_CURRENCY             ?
NLS_ISO_CURRENCY         CHINA
NLS_NUMERIC_CHARACTERS   .,
NLS_CALENDAR             GREGORIAN
NLS_DATE_FORMAT          DD-MON-RR
NLS_DATE_LANGUAGE        SIMPLIFIED CHINESE
NLS_CHARACTERSET         ZHS16GBK
NLS_SORT                 BINARY
NLS_TIME_FORMAT          HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT  DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY        ?
NLS_NCHAR_CHARACTERSET   UTF8
NLS_COMP                 BINARY
NLS_LENGTH_SEMANTICS     BYTE
NLS_NCHAR_CONV_EXCP      FALSE

19 rows selected
</code>
I have also set the environment variable on my Windows OS: NLS_LANG=SIMPLIFIED CHINESE_CHINA.ZHS16GBK. What's more, I tested the database as follows:
<code>
SQL> select '??' from dual;

'??'
----
??
</code>
Could you please help me understand why I get garbled values such as '????' when querying osuser from v$session? Thanks a lot.
Categories: DBA Blogs

MultiThreaded Extproc Agent - max_sessions, max_task_threads and max_dispatchers parameters

Tom Kyte - Thu, 2021-06-10 04:26
Hello Team, thanks for all the good work AskTOM is doing. Could you please help us better understand the max_sessions, max_task_threads and max_dispatchers configuration parameters of the multithreaded extproc agent, and share tips on how to determine optimum values for these parameters? We have multiple clients on multiple DB servers; each server has different hardware capacity and a different load. We understand that the final optimum values will depend on hardware configuration and the number of external procedure calls, but we are trying to arrive at an initial configuration that can then be fine-tuned based on the actual situation. Thanks, AB
Categories: DBA Blogs

Users Privileges

Tom Kyte - Tue, 2021-06-08 15:46
Hello, I am facing a problem, and it goes like this: we have a schema named CML that holds common objects (tables, views, procedures, packages, etc.) used by the team. I would like to know what grants I should give to my user (eliasr) so it can create tables, views, procedures and packages in the CML schema. I also want to be able to change existing packages and procedures.
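For reference, creating and altering objects in another schema requires the corresponding ANY system privileges; a sketch of the grants covering the list above (note that ANY privileges apply to every schema in the database, so many sites prefer deploying as the schema owner or via a controlled deployment account):

<code>
GRANT CREATE ANY TABLE     TO eliasr;
GRANT CREATE ANY VIEW      TO eliasr;
GRANT CREATE ANY PROCEDURE TO eliasr;  -- covers procedures, functions and packages
GRANT ALTER  ANY PROCEDURE TO eliasr;  -- recompile existing procedures/packages
</code>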
Categories: DBA Blogs

PDB lock down profiles.

Tom Kyte - Tue, 2021-06-08 15:46
Team, is it not possible to specify multiple values in a lockdown profile? Kindly advise.
<code>
c##sys@ORA12CR2> drop lockdown profile p1;

Lockdown Profile dropped.

c##sys@ORA12CR2> create lockdown profile p1;

Lockdown Profile created.

c##sys@ORA12CR2> alter lockdown profile p1 disable statement = ('ALTER SESSION') clause=('SET')
  2  option=('CURSOR_SHARING')
  3  value=('FORCE','SIMILAR');
alter lockdown profile p1 disable statement = ('ALTER SESSION') clause=('SET')
*
ERROR at line 1:
ORA-65206: invalid value specified
ORA-00922: missing or invalid option

c##sys@ORA12CR2> alter lockdown profile p1 disable statement = ('ALTER SESSION') clause=('SET')
  2  option=('CURSOR_SHARING')
  3  value=('FORCE');

Lockdown Profile altered.

c##sys@ORA12CR2>
</code>
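If VALUE accepts only a single entry per statement, a workaround consistent with the transcript above (a sketch, not verified against every release) is to disable each value with its own ALTER:

<code>
alter lockdown profile p1 disable statement = ('ALTER SESSION')
  clause=('SET') option=('CURSOR_SHARING') value=('FORCE');

alter lockdown profile p1 disable statement = ('ALTER SESSION')
  clause=('SET') option=('CURSOR_SHARING') value=('SIMILAR');
</code>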
Categories: DBA Blogs

Oracle apex - changing page authorization scheme on custom form

Tom Kyte - Tue, 2021-06-08 15:46
Hi, is it possible to change a page's authorization scheme (or other page properties) in Oracle APEX from a custom-built form? I need it for my application's admin panel. I know which table it is, apex_200200.wwv_flow_list_items, but on the cloud it is forbidden to edit the WWV-prefixed tables of the apex_200200 user. I tried to find an API package to do it but can't find one. On the cloud, the ADMIN user doesn't have SELECT privilege on that table, so this query does not work:
<code>
select * from apex_200200.wwv_list_items;
[Error] Execution (1: 27): ORA-01031: insufficient privileges
</code>
I tried to use this package but get no results:
<code>
begin
   WWV_FLOW_API.update_page(p_id=>124, p_flow_id=>122, p_required_role=>'7.59043067032354E15');
end;
</code>
Is there any way to do this through some other package in the database?
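Editing the WWV_ tables directly is unsupported; an alternative that stays within the supported surface, sketched with a hypothetical access table maintained from the admin panel, is to keep the page's authorization scheme fixed and make its logic data-driven, e.g. a scheme of type 'Exists SQL query returning at least one row':

<code>
select 1
from   page_access          -- hypothetical table managed by the admin panel
where  app_id  = :APP_ID
and    page_id = :APP_PAGE_ID;
</code>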
Categories: DBA Blogs

“User created” PDB max before licensing multi tenant

Tom Kyte - Tue, 2021-06-08 15:46
Hey guys, regarding https://blogs.oracle.com/database/oracle-database-19c-up-to-3-pdbs-per-cdb-without-licensing-multitenant-v2 and the documentation it refers to: I'm wondering if you can clarify how the 3-PDB limit (before additional licensing) works with application containers. I've tried setting max_pdbs=3 and then creating an application container; when I create PDBs within that application container, the 3rd one fails with an error (max PDBs exceeded). The documentation isn't clear (in my opinion at least) on what a user-created PDB is in this type of setup, so I'm not sure whether the error is a bug or whether it's enforcing things appropriately. Thanks!
<code>
alter system set max_pdbs=3 scope=both sid='*';

ALTER SESSION SET container = CDB$ROOT;
CREATE PLUGGABLE DATABASE App_Con AS APPLICATION CONTAINER ADMIN USER app_admin IDENTIFIED BY <pass>;
ALTER PLUGGABLE DATABASE App_Con OPEN instances=all;

ALTER SESSION SET container = App_Con;
CREATE PLUGGABLE DATABASE PDB1 ADMIN USER pdb_admin IDENTIFIED BY <pass>;
ALTER PLUGGABLE DATABASE PDB1 OPEN instances=all;
CREATE PLUGGABLE DATABASE PDB2 ADMIN USER pdb_admin IDENTIFIED BY <pass>;
ALTER PLUGGABLE DATABASE PDB2 OPEN instances=all;

--below fails
CREATE PLUGGABLE DATABASE PDB3 ADMIN USER pdb_admin IDENTIFIED BY <pass>;
ALTER PLUGGABLE DATABASE PDB3 OPEN instances=all;
</code>
Categories: DBA Blogs
