Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 13 hours 23 min ago

Understanding reused values for sql_id, address, hash_value, plan_hash_value in v$sqlarea

Thu, 2021-01-14 12:46
good evening, I have a SQL statement with the following information in v$sqlarea: <code>select sql_id, address, hash_value, plan_hash_value from v$sqlarea where sql_text = <string to identify my query> sql_id |address |hash_value|plan_hash_value cv65zdurrtfus|00000000FCAA9560|2944187224|3149222761</code> I remove this object from the shared pool with the following command, because I want to recompute the execution plan for my SQL statement: <code>exec sys.dbms_shared_pool.purge('00000000FCAA9560,2944187224','c');</code> I redo my previous select on v$sqlarea and it returns 0 rows, so I'm happy with that. Then I execute my original SQL, and finally I redo my select on v$sqlarea; it returns one row with exactly the same values: <code>sql_id |address |hash_value|plan_hash_value cv65zdurrtfus|00000000FCAA9560|2944187224|3149222761</code> I was wondering how identical ids were generated; I was expecting new values, even though in the end I get the expected result. Thanks for your feedback. Simon
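A short sketch of why this is expected: sql_id and hash_value are deterministic hashes computed from the statement text itself, so re-parsing the identical text after a purge necessarily reproduces them. The purge only forces a new hard parse (and thus a freshly computed plan); it does not generate new identifiers. The tag comment below is illustrative.

```sql
-- Run any statement with a recognizable tag:
select /* demo_tag */ count(*) from dual;

-- Look it up:
select sql_id, address, hash_value, plan_hash_value
from   v$sqlarea
where  sql_text like 'select /* demo_tag */%';

-- After purging and re-running the identical text, sql_id and hash_value
-- will match, because they are derived from the text. plan_hash_value
-- repeats whenever the optimizer arrives at the same plan. ADDRESS is the
-- one value that could change, but it can also repeat if the rebuilt
-- cursor lands at the same shared-pool location.
```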
Categories: DBA Blogs

Get date filter from a table in Oracle?

Thu, 2021-01-14 12:46
I would like to know how to take a date column from one table and use it as a date filter for another, large-volume table. I have the following query that currently uses a hard-coded date in the filter criteria, and it completes in about 10 to 15 minutes. <code>select a,b,datec, sum(c) from table1 where datec = date '2021-01-12' group by a,b,datec</code> I'm trying to replace the hard-coded date with a date from another table called table2. It's a small table with 1600 rows that returns the latest cycle completion date (one value), which is typically today's date minus one day, except on holidays when the cycle doesn't run. table1 is a view and it returns millions of rows. I tried the following queries to get the date value into the filter condition: <code>select a,b,datec, sum(c) from table1 t1, table2 t2 where t1.datec = t2.pdate and t2.prcnm = 'TC' group by a,b,datec select a,b,datec, sum(c) from table1 t1 inner join table2 t2 on datec = t2.pdate and t2.prcnm = 'TC' group by a,b,datec select a,b,datec, sum(c) from table1 t1 where t1.datec = (SELECT t2.date FROM table2 t2 WHERE prcnm = 'TC') group by a,b,datec</code> I also tried this hint: <code>select a,b,datec, sum(c) from table1 t1 where t1.datec = (SELECT /*+ PRECOMPUTE_SUBQUERY */ t2.date FROM table2 t2 WHERE prcnm = 'TC') group by a,b,datec</code> These queries take too long and eventually fail with the error "parallel query server died unexpectedly". I am not even able to get 10 rows back when I use the date from table2. I confirmed that table2 returns only one date, not multiple dates. Can you please help me understand why the query works with a hard-coded date but not with a date taken from another table? Thank you.
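One workaround sketch, using the table and column names from the question (untested against the real schema): fetch the single driving date into a bind variable first, then pass it to the big query, so the optimizer sees a single known value much like the hard-coded literal did.

```sql
-- SQL*Plus / SQLcl sketch: resolve the date first, then bind it.
variable v_date varchar2(10)

begin
  select to_char(t2.pdate, 'YYYY-MM-DD')
  into   :v_date
  from   table2 t2
  where  t2.prcnm = 'TC';
end;
/

select a, b, datec, sum(c)
from   table1
where  datec = to_date(:v_date, 'YYYY-MM-DD')
group  by a, b, datec;
```

The underlying issue is usually that joining the small table prevents partition pruning or changes the parallel distribution plan against the view; comparing the two execution plans would confirm that.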
Categories: DBA Blogs

Requirements to set up an Oracle Directory for WRITE access

Thu, 2021-01-14 12:46
We have several existing Oracle Directories set up to allow reading CSV files that work fine, and a couple of them also work for writing new files. I have been trying to add a new Directory definition pointing to a different path and cannot get it to work. I am in a corporate environment where I don't have access to the System accounts, cannot see the instance startup file, and don't have direct access to the Linux operating system, so I don't know what setup was done for the previous Directories. One of the existing Directories that works for both read and write is defined as: <code>CREATE OR REPLACE DIRECTORY RED AS '/red/dev';</code> For the above directory, the following test code works fine to create an output file: <code>DECLARE v_file UTL_FILE.FILE_TYPE; BEGIN v_file := UTL_FILE.FOPEN(location => 'RED', filename => 'test.csv', open_mode => 'w', max_linesize => 32767); UTL_FILE.PUT_LINE(v_file, 'A,123'); UTL_FILE.FCLOSE(v_file); END; </code> I want to write some files to a subdirectory under the above path, and have found that Oracle will only allow WRITE to a named Oracle directory for security reasons. The new Directory I want to create is defined as: <code>CREATE OR REPLACE DIRECTORY RED_OUTPUT AS '/red/dev/OUTPUT';</code> But changing the code above to use RED_OUTPUT as the "location" results in "ORA-29283: invalid file operation: cannot open file". The '/red/dev/OUTPUT' directory exists on the external NAS filesystem and appears to have the same permissions as the parent '/red/dev' directory (as best I can tell by using Windows Explorer to look at the directory security properties). I have read various posts online claiming things like the Oracle instance must be restarted after defining a new Oracle Directory, or that every path specified by an Oracle Directory must have a separate mount point on the Oracle Linux server, but I don't have easy access to do those things.
The RED_OUTPUT directory can currently be used to READ an existing file if I copy one to that location using Windows Explorer. What is likely the issue preventing WRITE to this new RED_OUTPUT directory, and are any of those additional steps (restart, mount point, etc.) actually necessary to make this work?
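A diagnostic sketch. No instance restart is needed after CREATE DIRECTORY, and no separate mount point is required; the usual causes of ORA-29283 on write are a missing database-level WRITE grant, or the OS-level path not being writable by the operating-system user that runs the database processes (typically "oracle") rather than by your Windows user. NAS exports often map that user differently than Windows Explorer suggests.

```sql
-- 1. Confirm the grants on the new directory (run as a privileged user):
select grantee, privilege
from   all_tab_privs
where  table_name = 'RED_OUTPUT';

-- 2. Grant WRITE if it is missing:
-- GRANT READ, WRITE ON DIRECTORY red_output TO your_user;

-- 3. If the grants match the working RED directory, the remaining suspects
--    are OS-side: the oracle OS user lacking write permission on
--    /red/dev/OUTPUT, or the NFS export being read-only / root-squashed
--    for that user. That has to be checked by whoever owns the NAS.
```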
Categories: DBA Blogs

Is there a view that a DBA can query to find out if "ORA-02393: exceeded call limit on CPU usage"

Thu, 2021-01-14 12:46
Greetings, I've seen that when the "cpu_per_call" limit is reached, ORA-02393 is returned to SQL*Plus. Is there a view that a DBA can query to find out when "ORA-02393: exceeded call limit on CPU usage" occurs for applications using the database, since it isn't written to the alert log? Thanks, John
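I am not aware of a standard view that records ORA-02393 occurrences, but one hedged approach is a database-level AFTER SERVERERROR trigger that logs the error as it happens. The table and trigger names below are illustrative, and you should verify in a test system that resource-limit errors actually fire this trigger in your release.

```sql
create table err_log (
  logged_at timestamp default systimestamp,
  username  varchar2(128),
  err_code  number
);

create or replace trigger trg_log_cpu_limit
after servererror on database
declare
  pragma autonomous_transaction;   -- log independently of the failed call
begin
  if is_servererror(2393) then
    insert into err_log (username, err_code)
    values (sys_context('USERENV', 'SESSION_USER'), 2393);
    commit;
  end if;
end;
/
```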
Categories: DBA Blogs

DIFFERENCE BETWEEN ANALYZE AND DBMS_STATS

Thu, 2021-01-14 12:46
What is the difference between ANALYZE and DBMS_STATS?
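In short: ANALYZE is deprecated for gathering optimizer statistics; DBMS_STATS is the supported mechanism and can gather richer statistics (including in parallel and at schema level). A minimal sketch, with an illustrative table name:

```sql
-- Old, deprecated for optimizer statistics:
analyze table emp compute statistics;

-- Supported approach:
begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'EMP',
    cascade    => true,                       -- include indexes
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
end;
/

-- ANALYZE remains valid only for non-optimizer uses such as
-- VALIDATE STRUCTURE and LIST CHAINED ROWS.
```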
Categories: DBA Blogs

How to pass a parameter to a GET Handler in APEX?

Thu, 2021-01-14 12:46
Hello, I created a PL/SQL function that returns a list of open balances as a table result, where all amounts are converted to the currency provided as an input parameter: <code>function my_pkg.my_func (pi_currency in NUMBER default NULL) return amount_tab pipelined; </code> I created an Oracle REST Data Service with only a GET handler: <code>select * from table(my_pkg.my_func(:to_currency)) ;</code> I tested it with Advanced REST Client and it works as expected with an additional header for the to_currency parameter. In APEX I declared a REST Data Source for the above REST service, then built an APEX page with an IG region based on that REST source, and it works well as long as I don't try to provide a parameter, i.e. while to_currency is null. When I try to set <b>{"to_currency":"USD"}</b> in the External Filter attribute, the application crashes. I googled the problem but found nothing. Is there any other standard way to pass a non-column parameter to the GET handler in APEX, or should I write my own procedure to call the REST service, e.g. using APEX_EXEC? Thank you and best regards, Alex
Categories: DBA Blogs

blob to clob on ORDS Handler Definition

Thu, 2021-01-14 12:46
Hi! I'm trying to send a POST request with this JSON: <code> { "id": 12344444, "email": "ppppoddddddppp@gmail.com", "first_name": "", "last_name": "", "billing": { "first_name": "22222", "last_name": "", "company": "", "address_1": "", "address_2": "", "city": "", "postcode": "", "country": "", "state": "", "email": "", "phone": "" } } </code> I'm trying to use apex_json to extract information like "company", which is inside "billing". I read the following guide: https://oracle-base.com/articles/misc/apex_json-package-generate-and-parse-json-documents-in-oracle#parsing-json and it works, but not inside an ORDS Handler Definition. I'm trying to use the following code, but it does not insert the data, even though it returns "201": <code> DECLARE l_json_payload clob; l_blob_body blob := :body; l_dest_offset integer := 1; l_src_offset integer := 1; l_lang_context integer := dbms_lob.default_lang_ctx; l_warning PLS_INTEGER := DBMS_LOB.warn_inconvertible_char; BEGIN if dbms_lob.getlength(l_blob_body) = 0 then :status_code := 400; --error :errmsg := 'Json is empty'; return; end if; dbms_lob.createTemporary(lob_loc => l_json_payload ,cache => false); dbms_lob.converttoclob( dest_lob => l_json_payload ,src_blob => l_blob_body ,amount => dbms_lob.lobmaxsize ,dest_offset => l_dest_offset ,src_offset => l_src_offset ,blob_csid => dbms_lob.default_csid ,lang_context => l_lang_context ,warning => l_warning); APEX_JSON.parse(l_json_payload); INSERT INTO ACCOUNTS ( wp_id , name , email , f_name , l_name , wp_role , wp_username , woo_is_paying_customer , woo_billing_first_name ) VALUES ( :id, :first_name || ' ' || :last_name, :email, :first_name, :last_name, :role, :username, decode(:is_paying_customer,'false', 'N', 'Y'), APEX_JSON.get_varchar2(p_path => 'billing.first_name') ); :status_code := 201; --created EXCEPTION WHEN OTHERS THEN :status_code := 400; --error :errmsg := SQLERRM; END; </code> Update: after testing, the problem is in this line: <code> l_blob_body blob := :body; </code> When I add it, nothing is inserted into the database. Update 2: after more testing I realized that it is not possible to combine :body with other bind values, so APEX_JSON.get_varchar2(p_path => 'billing.first_name') should be used for everything instead. So the problem was solved.
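For readers hitting the same issue, a condensed sketch of the resolution: the :body (and :body_text) implicit parameter can only be read once and should not be mixed with the automatic per-attribute binds, so read the body once and parse every value from it. Recent ORDS versions also expose the payload directly as a CLOB via :body_text, which avoids the manual blob-to-clob conversion; the column list below is abbreviated for illustration.

```sql
declare
  l_json clob := :body_text;   -- CLOB payload; read it exactly once
begin
  apex_json.parse(l_json);

  insert into accounts (wp_id, email, woo_billing_first_name)
  values (apex_json.get_number  (p_path => 'id'),
          apex_json.get_varchar2(p_path => 'email'),
          apex_json.get_varchar2(p_path => 'billing.first_name'));

  :status_code := 201;  -- created
end;
```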
Categories: DBA Blogs

MATERIALIZED VIEW Performance Issue!

Thu, 2021-01-14 12:46
I have created an MV on a UAT server. The MV query connects to PROD over a database link, with SELECT-only rights on the remote tables, which have around 1 million (10 lakh) rows each; after aggregation the query output is only 139-150 rows. The query alone, without the MV, takes 60 seconds, but when I run <code>CREATE MATERIALIZED VIEW NOCOMPRESS NOLOGGING BUILD IMMEDIATE USING INDEX REFRESH FORCE ON DEMAND NEXT null USING DEFAULT LOCAL ROLLBACK SEGMENT USING ENFORCED CONSTRAINTS DISABLE QUERY REWRITE as "query"</code> the MV creation takes one hour, and after that each refresh takes 20-30 minutes. That is not acceptable, because this data feeds a dashboard that expects at most a 3-minute delay. I don't have privileges to check anything on the PROD DB, but on UAT I have sufficient access. I have tried many options without success, so please help me find a solution, or at least the reason, so I can explain it to the relevant team. When the MV refreshes, the explain plan shows "INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO abc". What I have tried: 1. CREATE TABLE AS with the same query: it took less than a minute. 2. A plain INSERT statement also works fine, taking about the same time. 3. Refreshing the MV with atomic_refresh=>false, which didn't help. Please let me know if you need any more information. Note: my MV query uses about 4 PROD tables over a db link from UAT. The PROD server has a separate user that has been granted select on these tables: <code>select count(*) from abc@prod; --800000 select count(*) from abc1@prod; --700000 select count(*) from abc2@prod; --200000</code>
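For reference, a sketch of the non-atomic complete refresh mentioned in point 3 (the MV name is illustrative). A non-atomic refresh truncates and reloads with a direct-path insert instead of the default atomic delete + insert, which is usually much faster:

```sql
begin
  dbms_mview.refresh(
    list           => 'MY_MV',   -- illustrative name
    method         => 'C',       -- complete refresh
    atomic_refresh => false);    -- truncate + direct-path insert
end;
/
```

If even this stays slow while CTAS is fast, the next step is comparing the actual refresh insert's execution plan with the CTAS plan (e.g. via DBMS_XPLAN.DISPLAY_CURSOR on the UAT side), since the distributed plan chosen for the refresh insert may differ from the one used by the standalone query.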
Categories: DBA Blogs

Oracle update statement

Fri, 2020-12-11 22:06
I have a table with 4 columns, and I pass old and new values for these 4 columns as inputs to update them, as shown below. _________________________________________ <code>create table t (column1 VARCHAR2(10), column2 VARCHAR2(10), column3 VARCHAR2(10), column4 VARCHAR2(10)); insert into t values ('as','af','gh','kl'); insert into t values ('ss','sf','sh','sl'); insert into t values ('s1','ssf','sfh','sasfl'); procedure test1( oldcolumn1 IN VARCHAR2, oldcolumn2 IN VARCHAR2, oldcolumn3 IN VARCHAR2, oldcolumn4 IN VARCHAR2, newcolumn1 IN VARCHAR2, newcolumn2 IN VARCHAR2, newcolumn3 IN VARCHAR2, newcolumn4 IN VARCHAR2)</code> ________________________________________ The requirement is to update these 4 columns with the new values where the actual column values are equal to the oldcolumn inputs. But there is no guarantee that the user passes all 4 column values every time. If they pass only the oldcolumn1 value, I need to update only column1. If they pass only the oldcolumn4 value, I need to update only column4. If they pass oldcolumn1 and oldcolumn4, I need to update column1 and column4. Can you please suggest a way to write the update statement in such a case?
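A sketch of the usual pattern for this requirement: keep each column's current value when no new value is supplied, and match only on the old values that were supplied. Table name t follows the setup above; parameter names follow the question.

```sql
create or replace procedure test1(
  oldcolumn1 in varchar2, oldcolumn2 in varchar2,
  oldcolumn3 in varchar2, oldcolumn4 in varchar2,
  newcolumn1 in varchar2, newcolumn2 in varchar2,
  newcolumn3 in varchar2, newcolumn4 in varchar2)
is
begin
  update t
  set    column1 = nvl(newcolumn1, column1),
         column2 = nvl(newcolumn2, column2),
         column3 = nvl(newcolumn3, column3),
         column4 = nvl(newcolumn4, column4)
  where  (oldcolumn1 is null or column1 = oldcolumn1)
  and    (oldcolumn2 is null or column2 = oldcolumn2)
  and    (oldcolumn3 is null or column3 = oldcolumn3)
  and    (oldcolumn4 is null or column4 = oldcolumn4);
end;
/
```

One caveat: NVL cannot deliberately set a column to NULL; if that is needed, a sentinel value or separate flags per column are required.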
Categories: DBA Blogs

using of binding variable in ( IN clause ) with null condition check

Fri, 2020-12-11 22:06
Hi Tom, I am struggling with a simple but tricky bit of SQL logic. Requirement: I want to call SQL from a shell script, passing a character variable as input which could also be null. <code>my sql :- select * from scott.emp where ename in (&1) &1 possible values :- 1. 'JOHN' 2. 'JOHN','ROCKY' 3. Null (nothing entered on user input) </code> While input values 1 and 2 work fine, when NULL is entered the query should not apply the WHERE condition and should display all records from the emp table, i.e. behave like 1=1. I am having trouble handling NULL in the "IN" clause and bypassing the where clause. Please help.
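One sketch of a way around this, assuming the shell script can pass the names as a single comma-separated string (e.g. `JOHN,ROCKY` or empty) in one bind/substitution rather than a pre-quoted list. A NULL input then short-circuits the filter, and a non-null input is split into rows:

```sql
select *
from   scott.emp
where  :list is null
   or  ename in (select trim(regexp_substr(:list, '[^,]+', 1, level))
                 from   dual
                 connect by level <= regexp_count(:list, '[^,]+'));
```

The key point is that a quoted list substituted directly into `IN (&1)` cannot be tested for NULL as a whole, so reshaping the input into one plain delimited string makes both cases expressible in a single statement.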
Categories: DBA Blogs

Does Instant Client 19 work with Oracle Database 12c?

Fri, 2020-12-11 03:46
We would like to know if Instant Client 19 is compatible with Oracle Database 12c.
Categories: DBA Blogs

Want to delete data from apex mail queue

Fri, 2020-12-11 03:46
Hi, how can I delete the data from apex_mail_queue using dbms_scheduler? I am creating the job like this: <code>begin dbms_scheduler.create_job ( job_name => 'xyz', job_type => 'STORED_PROCEDURE', job_action => 'rescheduler', start_date => SYSTIMESTAMP, repeat_interval => 'FREQ=MINUTELY; INTERVAL=30', enabled => TRUE, auto_drop => FALSE); END;</code> and the procedure is: <code>CREATE OR REPLACE PROCEDURE rescheduler AS BEGIN insert into dummy_mail select * from apex_mail_queue; delete from apex_mail_queue; END;</code> The data is inserted into the dummy_mail table but not deleted from apex_mail_queue. Please help me with this.
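One commonly suggested approach (untested here, and worth verifying against your APEX version): APEX_MAIL_QUEUE is a secured APEX view, and DML against it from a plain scheduler session may silently affect nothing unless the APEX workspace context is established first. A sketch, with an illustrative workspace name:

```sql
create or replace procedure rescheduler as
begin
  -- Establish the APEX workspace context before touching the queue view.
  for ws in (select workspace_id
             from   apex_workspaces
             where  workspace = 'MY_WORKSPACE') loop
    apex_util.set_security_group_id(p_security_group_id => ws.workspace_id);
  end loop;

  insert into dummy_mail select * from apex_mail_queue;
  delete from apex_mail_queue;
  commit;
end;
/
```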
Categories: DBA Blogs

PL/SQL Java Mail

Fri, 2020-12-11 03:46
We are required to use TLS for outgoing emails from our applications. We currently use a PL/SQL JavaMail implementation to send emails from the database (typically alerts generated by the application). We were told that to use TLS we need to update the JavaMail library to at least 1.6, as TLS 1.2 is supported by JavaMail 1.6 and higher. The JavaMail jar file shipped with our databases is version 1.4 or lower. How do I update the mail.jar or javax.mail.jar file in my database home? Thanks, BC
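A sketch of the usual mechanism: Java classes loaded into the database are replaced with the dropjava/loadjava tools from the database Oracle home, rather than by swapping jar files on disk. The jar file name and schema below are illustrative; test this on a non-production database first, and confirm any dependent PL/SQL call specs still resolve afterwards.

```shell
# Remove the old classes, then load and resolve the new JavaMail jar:
dropjava -user mailuser/password -verbose mail.jar
loadjava -user mailuser/password -resolve -verbose javax.mail-1.6.2.jar
```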
Categories: DBA Blogs

Email Through Oracle.

Fri, 2020-12-11 03:46
Hi Tom, how are you? I have written a procedure for sending email through Oracle. The code works fine, and I am able to send mail with the CC option. I would like some further enhancements to it: 1. I am unable to send BCC (I need the BCC option as well). 2. If I give a wrong email address that does not exist, the procedure should raise an error. 3. I need a sent confirmation: if abc sends mail to xyz, then after sending, abc should get a delivery receipt. 4. I need a read confirmation as well: when xyz reads the email, abc should get a read receipt. Please go through the code and give me your feedback. <code>CREATE OR REPLACE PROCEDURE mail ( sender IN VARCHAR2, recipient IN VARCHAR2, ccrecipient IN VARCHAR2, subject IN VARCHAR2, message IN VARCHAR2 ) IS crlf VARCHAR2(2):= UTL_TCP.CRLF; connection utl_smtp.connection; mailhost VARCHAR2(30) := 'MyMailHost.com'; header VARCHAR2(1000); BEGIN -- Start the connection. connection := utl_smtp.open_connection(mailhost,25); header:= 'Date: '||TO_CHAR(SYSDATE,'dd Mon yy hh24:mi:ss')||crlf|| 'From: '||sender||''||crlf|| 'Subject: '||subject||crlf|| 'To: '||recipient||crlf|| 'CC: '||ccrecipient; -- Handshake with the SMTP server utl_smtp.helo(connection, mailhost); utl_smtp.mail(connection, sender); utl_smtp.rcpt(connection, recipient); utl_smtp.rcpt(connection, ccrecipient); utl_smtp.open_data(connection); -- Write the header utl_smtp.write_data(connection, header); utl_smtp.write_data(connection, crlf ||message); utl_smtp.close_data(connection); utl_smtp.quit(connection); EXCEPTION WHEN UTL_SMTP.INVALID_OPERATION THEN dbms_output.put_line(' Invalid Operation in SMTP transaction.'); WHEN UTL_SMTP.TRANSIENT_ERROR THEN dbms_output.put_line(' Temporary problems with sending email - try again later.'); WHEN UTL_SMTP.PERMANENT_ERROR THEN dbms_output.put_line(' Errors in code for SMTP transaction.'); END;</code> <code>SQL> execute mail(' "Parag" ','xyz.com','abc.com','Thanks Tom' , 'Tom is my last hope for any Oracle Problem...'); PL/SQL procedure successfully completed.</code> Tom: Thanks a lot. Regards, Parag
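A hedged sketch addressing the four requests against the procedure above (fragment, not a complete rewrite; variable names follow the question):

```sql
-- 1. BCC: a BCC recipient is simply an extra RCPT TO command that is
--    NOT listed in the message headers, so other recipients never see it:
utl_smtp.rcpt(connection, bccrecipient);
--    ...while the header string deliberately omits any 'BCC:' line.

-- 2. Bad addresses: utl_smtp.rcpt raises an exception only if the SMTP
--    server rejects the address at submission time; a syntactically valid
--    but non-existent mailbox is typically reported later by a bounce
--    message, not by the procedure.

-- 3/4. Delivery and read receipts are requested with extra headers, e.g.
--    'Disposition-Notification-To: '||sender||crlf
--    but honoring them is entirely up to the receiving mail server/client,
--    so they cannot be guaranteed from PL/SQL.
```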
Categories: DBA Blogs

JSON Value - Oracle PL/SQL : Multiple Fields

Wed, 2020-12-09 15:06
I have a CLOB column with the below sample entry: <code> "relist":[{"name":"XYZ","action":["Manager","Specific User List"],"flag":false}] </code> When I try to get name or flag using JSON_VALUE I am able to, since each has a single value, but I also want to get the value of action. If I try <code> select JSON_VALUE(JSON_CONTENT,'$.action')JSON_CONTENT from test </code> I get NULL. I read that JSON_VALUE only supports a single scalar value. Is there any workaround to get both values of action?
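A sketch of two workarounds (assuming the fragment is wrapped in a JSON object, i.e. `{"relist":[...]}`, and using the table/column names from the question): JSON_VALUE returns a single scalar, so an array needs JSON_QUERY (the whole array as one string) or JSON_TABLE (one row per element).

```sql
-- One row per action element, via NESTED PATH (12c+):
select jt.name, jt.flag, jt.action_item
from   test t,
       json_table(t.json_content, '$.relist[*]'
         columns (
           name varchar2(50) path '$.name',
           flag varchar2(5)  path '$.flag',
           nested path '$.action[*]'
             columns (action_item varchar2(100) path '$')
         )) jt;

-- Or the array as a single JSON string:
select json_query(json_content, '$.relist[0].action') from test;
```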
Categories: DBA Blogs

Finding records having columns which are blank in a table

Wed, 2020-12-09 15:06
I have a table with 100 columns. I need to retrieve the records that have blank data in any of the columns, without listing all the column names in the query. Is that possible?
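A sketch, assuming "blank" means NULL (Oracle treats an empty VARCHAR2 string as NULL anyway): generate the predicate from the data dictionary instead of typing 100 column names. The table name is illustrative.

```sql
declare
  l_where varchar2(32767);
  l_cnt   number;
begin
  -- Build: "COL1" is null or "COL2" is null or ...
  for c in (select column_name
            from   user_tab_columns
            where  table_name = 'BIG_TABLE') loop
    l_where := l_where ||
      case when l_where is not null then ' or ' end ||
      '"' || c.column_name || '" is null';
  end loop;

  execute immediate
    'select count(*) from big_table where ' || l_where into l_cnt;
  dbms_output.put_line('rows with at least one blank column: ' || l_cnt);
end;
/
```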
Categories: DBA Blogs

When does the report subscribed to mail runs

Wed, 2020-12-09 15:06
I have subscribed an interactive report to be sent by email. 1. At what time is the report sent? That is, if I configure the report period as 1-Dec-2020 10:00 AM to 1-Jan-2021, at what time will the reports arrive? 2. How do I subscribe a report for the first Monday of each month? Can you please help with this.
Categories: DBA Blogs

Autotrace vs Oracle Trace: Showing different results for Disk Reads for LOB Segment with NOCACHE

Wed, 2020-12-09 15:06
Hi, I am seeing different outcomes when testing the NOCACHE option of SECUREFILE LOB Segments and analysing SELECT performance using Autotrace vs Trace/TKPROF. <b>Test Setup</b> <code>CREATE TABLE js_poc.clob_test_sf_nocache ( id NUMBER, json_data CLOB CHECK(json_data IS JSON) ) LOB(json_data) STORE AS SECUREFILE(NOCACHE);</code> <code>INSERT INTO js_poc.clob_test_sf_nocache SELECT level,'{"key":"This is a long string of text, repeat"}' FROM dual CONNECT BY level <= 1000;</code> <b>Test 1: Select 500 rows with SET AUTOTRACE TRACEONLY</b> <code>sqlplus js_poc SET AUTOTRACE TRACEONLY SELECT json_data FROM clob_test_sf_nocache WHERE rownum <= 500; </code> Output: <code>Statistics ---------------------------------------------------------- 31 recursive calls 7 db block gets 1014 consistent gets 1005 physical reads 1004 redo size 371998 bytes sent via SQL*Net to client 148424 bytes received via SQL*Net from client 1002 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 500 rows processed</code> and subsequent runs of the SELECT show similar output: <code>Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 1003 consistent gets 1000 physical reads 0 redo size 371998 bytes sent via SQL*Net to client 148424 bytes received via SQL*Net from client 1002 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 500 rows processed</code> each <b>SELECT </b>showing <b>1000 physical reads</b> for every run, which is what I expect when the LOB is configured for <b>NOCACHE</b> However, when I execute the same SELECT statement and trace it/tkprof it, i.e. <code>ALTER SESSION SET SQL_TRACE = TRUE; SELECT json_data FROM clob_test_sf_nocache WHERE rownum <= 500; ALTER SESSION SET SQL_TRACE = FALSE;</code> the TKPROF file always shows 0 for Disk Reads, e.g. 
<code>call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 0.00 0.00 0 2 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 501 0.01 0.01 0 503 0 500 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 503 0.01 0.01 0 505 0 500</code> If you could explain why this is happening or what my misunderstanding is for the 2 metrics from Autotrace/SQL Trace, that would be great. Thank you Andrew
Categories: DBA Blogs

oracle 12c R2 - rman - ora-00907

Wed, 2020-12-09 15:06
hello, I am trying to duplicate a database and I have an issue. I use target and auxiliary connections: <code>rman target / auxiliary sys/$DSTPWD@$DSTDB connected to target database: prod (DBID=1119311471) connected to auxiliary database: test (not mounted) RMAN> run { allocate channel ch1 type disk; allocate auxiliary channel aux1 type disk; SET UNTIL TIME "TO_DATE('2020-12-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS')"; DUPLICATE TARGET DATABASE TO "test"; }2> 3> 4> 5> 6> 7> allocated channel: ch1 channel ch1: SID=630 device type=DISK allocated channel: aux1 channel aux1: SID=316 device type=DISK executing command: SET until clause Starting Duplicate Db at 2020-12-02 12:17:40 released channel: ch1 released channel: aux1 RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================================================== RMAN-03002: failure of Duplicate Db command at 12/02/2020 12:17:40 RMAN-05501: aborting duplication of target database ORA-00907: missing right parenthesis</code> I tried: <code>SET UNTIL TIME "TO_DATE('01/12/2020 10:00', 'dd/mm/yyyy hh24:mi')";</code> but got the same error. I also tried setting NLS_DATE_FORMAT inside the run section: <code>sql 'alter session set NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"';</code> but the problem is the same. What am I doing wrong? If I use the run statement without SET UNTIL TIME, it works, but with "set until time ..." I get ORA-00907. I can't duplicate the database to a point in time with the UNTIL TIME parameter. Thanks, Krzysztof
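One sketch worth trying: the UNTIL TIME expression is evaluated in the RMAN client's NLS environment, and an ALTER SESSION inside the run block does not affect it, so set NLS_DATE_FORMAT (and NLS_LANG) in the shell before starting RMAN and pass a plain date string. The NLS_LANG value below is illustrative; match it to your database character set.

```shell
export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
export NLS_LANG='AMERICAN_AMERICA.AL32UTF8'   # illustrative

rman target / auxiliary sys/$DSTPWD@$DSTDB <<EOF
run {
  allocate channel ch1 type disk;
  allocate auxiliary channel aux1 type disk;
  set until time '2020-12-01 10:00:00';
  duplicate target database to "test";
}
EOF
```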
Categories: DBA Blogs

The doubt of the long_waits in the V$BACKUP_ASYNC_IO

Tue, 2020-12-08 02:26
Why are there always many non-zero LONG_WAITS values in V$BACKUP_ASYNC_IO, which indicate waiting for blocking O/S I/O to complete? Also, what is the difference here between blocking I/O and synchronous I/O? <code> IO_COUNT READY SHORT_WAITS LONG_WAITS FILENAME TO_CHAR(CLOSE_TIME, EPS ---------- ---------- ----------- ---------- -------------------------------------------------------- ------------------- ---------- 27 26 0 1 +DATA/cpemsdb/data_d-cpemsdb_ts-h_tzxx_s1_fno-389 11/27/2020 23:46:53 394202 27 25 0 2 +DATA/cpemsdb/data_d-cpemsdb_ts-data_m_fno-198 11/27/2020 23:46:51 389805 7 7 0 0 +DATA/cpemsdb/data_d-cpemsdb_ts-bbxx_fno-38 11/27/2020 23:46:33 2097152 7682 7104 0 578 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_s_fno-676 11/27/2020 23:46:25 4145188 7682 6905 0 777 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_p_fno-612 11/27/2020 23:46:21 4141990 7682 7165 0 517 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-515 11/27/2020 23:42:24 4266524 7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_a_fno-401 11/27/2020 23:42:18 4265394 7682 7167 0 515 +DATA/cpemsdb/data_d-cpemsdb_ts-data_p_fno-316 11/27/2020 23:41:51 4276720 7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-data_out_fno-260 11/27/2020 23:41:25 4288106 7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-data_man_fno-203 11/27/2020 23:40:57 4301850 7682 7047 0 635 +DATA/cpemsdb/data_d-cpemsdb_ts-data_a_fno-43 11/27/2020 23:40:14 4322052 82537 76975 0 5562 11/27/2020 22:22:57 37828972 7682 7137 0 545 +DATA/cpemsdb/data_d-cpemsdb_ts-data_out_fno-781 11/27/2020 22:22:57 11422785 5122 4773 0 349 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-439 11/27/2020 22:16:12 8910721 4098 3821 0 277 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-504 11/27/2020 22:10:47 8271483 4098 3821 0 277 +DATA/cpemsdb/data_d-cpemsdb_ts-data_arc_fno-115 11/27/2020 22:10:45 8263525 7682 6859 0 823 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-780 11/27/2020 21:44:07 28658590 82537 76630 0 5907 11/27/2020 21:44:07 44843779 5122 4746 0 376 
+DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-438 11/27/2020 21:40:36 23650701 4098 3812 0 286 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-498 11/27/2020 21:39:24 20773723 4098 3813 0 285 +DATA/cpemsdb/data_d-cpemsdb_ts-data_arc_fno-97 11/27/2020 21:39:22 20698638 27 25 0 2 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_yxbz_fno-689 11/27/2020 21:36:17 14979657 27 26 0 1 +DATA/cpemsdb/data_d-cpemsdb_ts-data_ics_fno-193 11/27/2020 21:36:15 20971520 27 25 0 2 +DATA/cpemsdb/data_d-cpemsdb_ts-hgqjd_fno-384 11/27/2020 21:36:15 20971520 7682 7168 0 514 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_s_fno-674 11/27/2020 21:36:11 5206442 ...
Categories: DBA Blogs
