Feed aggregator

Troubleshooting Agent Communications

Mark Vakoc - Fri, 2007-11-16 11:54
The managed home agents communicate with the management console using secure JMX connections. Once started, the agent connects to the console, performs some registration tasks, and appears automatically in the management dashboard. This article provides insight into the communication process and steps to troubleshoot the communications.

Agent Communication
The management console is configured with a JMX port used to establish communication with the management agents (both the managed home agent and the embedded agent contained within the server products). This port is specified during the installation wizard and defaults to 14501.

When the agent is installed, a configuration file containing the name of the management console and the JMX port to use is created. This is the file install_location/config/agent.properties.

management.server.name=myserver.mydomain.com
management.server.port=14501

The name of the server may be a short machine name, a fully qualified domain name, or an IP address, depending on how the Java process running the management console was able to resolve the host name.

The agent will use this information to attempt to connect to the management console. If unsuccessful, for example if the console isn't running, the agent will continually re-attempt the connection. This activity is recorded in the agent's log files. Look in the log file e1agent_0.log (the file numbered zero is always the most current log) located within the install_location/logs directory of the managed home. In the log file you will see something like the following; in this case the management server is denlcmwn5.mlab.jdedwards.com and the JMX port is 18501:

Nov 7, 2007 10:35:37 AM com.jdedwards.mgmt.agent.E1Agent$ManagementServerDaemonThread run
FINER: Attempting to connect to service:jmx:jmxmp://denlcmwn5.mlab.jdedwards.com:18501

If the connection could not be established a corresponding error will appear shortly in the log file.

Tip #1
On the machine where the agent was installed, attempt to ping the management console using the server name configured in the agent.properties file. If the ping is not successful you may either change the agent.properties file (for example, to not use a fully qualified name) or modify the hosts file (e.g. /etc/hosts on UNIX) as necessary. Either way, restart the agent after making any changes.

Tip #2
If the ping was successful you can attempt to telnet to the management console using the same port, for example 'telnet denlcmwn5.mlab.jdedwards.com 18501'. If the connection is successful you know the agent is able to establish a connection to the console. If this fails there may be firewall or other networking issues preventing the connection that need to be resolved. Note: on a successful connection you may ignore any content displayed; it will not be human readable.
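
Both checks can be scripted; here is a minimal sketch assuming a UNIX shell, using the example host and port from this article (substitute the values from your own agent.properties):

HOST=denlcmwn5.mlab.jdedwards.com    # management.server.name
PORT=18501                           # management.server.port

# Name resolution and basic reachability first
ping -c 3 $HOST || echo "ping failed: check DNS or the hosts file"

# Then reachability of the JMX port itself (rules out firewalls);
# any garbage printed on a successful connect can be ignored
telnet $HOST $PORT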

Once that connection has been made the management console will assign a TCP/IP port that the agent should use to listen for incoming connections. The agent will pass in its machine name and install location, and the console will provide the next unused TCP/IP port starting at the 'Management Agent Starting Port', which was also configured during the installation wizard and defaults to 14502.

In the managed home agent's log you will see the port that was assigned; in this case it was 18607:

Nov 7, 2007 10:35:42 AM com.jdedwards.mgmt.agent.Server startListener
INFO: Starting the management agent listener on port '18607'.
Nov 7, 2007 10:35:42 AM com.jdedwards.mgmt.agent.Server startListener
FINE: Attempting to start the local management agent listener on port 18607
Nov 7, 2007 10:35:42 AM com.jdedwards.mgmt.agent.Server startListener
FINE: Succesfully started the management agent listener on port 18607

If the operation was successful, as shown above, you may continue to the next step. If there are errors indicating the listener could not be started, make sure no other program is using that same port. If one is, you may change the 'Management Agent Starting Port' to something else in the management console (select the 'Management Agents' link in the Quick Links). Do not change the 'Management Server JMX Port' setting.

If everything has been successful so far, we will now focus our attention on the logs for the management console. You may view these logs using the console application itself. Navigate to the managed home for the management console (the managed home that contains the 'home' instance). On the bottom of that page select the log file home_0.log. The log should contain an entry indicating the initial connection (hence a port of -1) from the managed agent:

FINER: Received heartbeat from the remote management agent on denlcmlx2 listening on port -1 of type 2 in managed home /home/oracleas/oasagent

Next you will see an entry about the calculated port discussed above.

FINER: Determining the port the remote agent 'denlcmlx2' should start listening on.
FINER: Assigning the port 18607 to the remote agent 'denlcmlx2'.

Followed by a "heartbeat" request from that agent:

FINER: Received heartbeat from the remote management agent on denlcmlx2 listening on port 18,607 of type 2 in managed home /home/oracleas/oasagent

Finally the console will attempt to connect to the remote agent on the port assigned. If successful you will see something like:

FINE: Attemping to establish a connection from the management console to the remote agent 'denlcmlx2' on port 18607.
FINE: Successfully established a connection from the management console to the remote agent 'denlcmlx2' on port 18607 with connection id 'jmxmp://10.139.163.53:3213 32330841.

This completes the communication negotiation process and the managed home will soon appear in the dashboard.

Tip #3
If there are errors indicating the connection was not successful follow similar steps as above to troubleshoot the issue:
  1. On the management console machine, ping the server using the name reported in the log files (in this case denlcmlx2). If not successful, ensure the network/DNS configuration is correct. Add an entry to the hosts file (\windows\system32\drivers\etc\hosts on Windows) for the machine name if necessary.
  2. If the ping was successful, telnet from the management console to the specified name and port, for example 'telnet denlcmlx2 18607'. If that connection was not successful, ensure that there are no firewall or other networking issues preventing the connection.
Nearly all problems with managed agents not appearing in the management dashboard are caused by networking and host name resolution issues, or by firewalls in place between the server running the management console and the remote machine.

Tip #4
On iSeries platforms, if there are errors in the logs indicating crypto- or encryption-related problems (the connection between agents is fully encrypted), this usually indicates that the required JDK (1.5) is not present.



Sizing the UNDO TABLESPACE for Automatic Undo management

Madan Mohan - Thu, 2007-11-15 21:43
Sizing an UNDO tablespace requires three pieces of data.

- (UR) UNDO_RETENTION in seconds
- (UPS) Number of undo data blocks generated per second
- (DBS) Database block size in bytes (DB_BLOCK_SIZE); the (DBS * 24) term is a small overhead that varies with extent and file size

UndoSpace = [UR * (UPS * DBS)] + (DBS * 24)

Two of these can be obtained from the initialization file: UNDO_RETENTION and DB_BLOCK_SIZE.
The third piece of the formula requires a query against the database: the number of undo blocks generated per second can be acquired from V$UNDOSTAT.


The following formula calculates the total number of blocks generated and divides it by the amount of time monitored, in seconds:

SQL>SELECT (SUM(undoblks))/ SUM ((end_time - begin_time) * 86400) FROM v$undostat;

Columns END_TIME and BEGIN_TIME are DATE data types. When DATE values are subtracted, the result is in days; to convert days to seconds, multiply by 86400, the number of seconds in a day.

The result of the query returns the number of undo blocks per second. This value needs to be multiplied by the size of an undo block, which is the same size as the database block defined in DB_BLOCK_SIZE.
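
As a worked example with hypothetical values (an UNDO_RETENTION of 900 seconds, 100 undo blocks generated per second, and an 8 KB block size):

UndoSpace = [900 * (100 * 8192)] + (8192 * 24)
          = 737,280,000 + 196,608
          = 737,476,608 bytes (roughly 703 MB)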


The following query calculates the number of bytes needed:

SQL> SELECT (UR * (UPS * DBS)) + (DBS * 24) AS "Bytes"
FROM (SELECT value AS UR FROM v$parameter WHERE name = 'undo_retention'),
     (SELECT (SUM(undoblks)/SUM(((end_time - begin_time)*86400))) AS UPS FROM v$undostat),
     (SELECT block_size AS DBS FROM dba_tablespaces WHERE tablespace_name =
        (SELECT value FROM v$parameter WHERE name = 'undo_tablespace'));
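
The query above sizes for the average undo generation rate over the monitored period. To size for the busiest interval instead (a variant, not part of the formula above), the peak rate can be taken from V$UNDOSTAT:

SQL> SELECT MAX(undoblks/((end_time - begin_time) * 86400)) AS "Peak UPS"
FROM v$undostat;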

Some issues with Oracle views as ActiveRecord source

Raimonds Simanovskis - Thu, 2007-11-15 16:00

I am using Ruby on Rails to publish data from an existing “legacy” application on an Oracle database which already has an existing complex data model. I am defining additional database views on the existing legacy data, to which I grant select rights to the Rails schema. And I am using Rails conventions when defining these views – view names as pluralized / underscored versions of Rails model names, an ID column as the primary key, and %_ID columns as foreign keys.

Typically this works quite well and I can use Rails find methods to automatically generate SQL SELECTs from these views. But for some legacy data I got a problem with the Oracle NUMBER type mapping to the Ruby integer type.

Rails standard convention for database NUMBER type mapping is the following:

  • NUMBER with precision and scale 0 or no scale (e.g. NUMBER(10)) is mapped to :integer
  • NUMBER with precision and a non-zero scale (e.g. NUMBER(10,2)) is mapped to :decimal
  • NUMBER without precision and scale (just NUMBER) is mapped to :decimal

If primary keys and foreign keys in legacy tables are defined as e.g. NUMBER(10) then everything is OK and they will be mapped to :integer in Rails. But if primary keys or foreign keys in legacy tables are defined as plain NUMBER then they will be mapped to :decimal in Rails.

And what happens if e.g. a primary key is mapped to :decimal in Rails? Then, for example, customer.id is equal to “123.0” and you get ugly URLs like “/customers/123.0”.

One workaround is to use customer.id.to_i everywhere, but it is quite annoying. Therefore I created a patch for the Oracle adapter (tested with Rails 1.2.3) which always sets the data type to :integer for all primary keys (column name ID) and foreign keys (column names like %_ID). This also includes the date columns patch that I wrote about previously.

module ActiveRecord::ConnectionAdapters
  class OracleColumn
    def simplified_type(field_type)
      return :boolean if OracleAdapter.emulate_booleans && field_type == 'NUMBER(1)'
      case self.name
        # RSI: treat columns which end with 'date' as ruby date columns
        when /date$/i then :date
        # RSI: removed 'date' from regex
        when /time/i then :datetime
        # RSI: treat id columns (primary key) as integer
        when /^id$/i then :integer
        # RSI: treat _id columns (foreign key) as integer
        when /_id$/i then :integer
        else super
      end
    end
  end

  # RSI: patch to change selected results NUMBER to integer for primary and foreign keys
  class OracleAdapter
    def select(sql, name = nil)
      cursor = execute(sql, name)
      cols = cursor.get_col_names.map { |x| oracle_downcase(x) }
      rows = []

      while row = cursor.fetch
        hash = Hash.new

        cols.each_with_index do |col, i|
          hash[col] =
            case row[i]
            when OCI8::LOB
              name == 'Writable Large Object' ? row[i]: row[i].read
            when OraDate
              (row[i].hour == 0 and row[i].minute == 0 and row[i].second == 0) ?
              row[i].to_date : row[i].to_time
            else row[i]
            end unless col == 'raw_rnum_'
          # RSI: patch - convert to integer if column is ID or ends with _ID
          hash[col] = hash[col].to_i if (col =~ /^id$/i || col =~ /_id$/i) && hash[col]
        end

        rows << hash
      end

      rows
    ensure
      cursor.close if cursor
    end
  end  
end
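
With the patch applied, ID columns come back as Ruby integers rather than BigDecimal values. A minimal usage sketch (the Customer model and its backing view are hypothetical):

# Customer is backed by a CUSTOMERS view whose ID column is a plain NUMBER
customer = Customer.find(:first)
customer.id                    # => 123 (an integer, not BigDecimal "123.0")
"/customers/#{customer.id}"    # => "/customers/123"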

I have not yet verified this with Rails 2.0. I will probably collect all my Oracle adapter patches and release them as a plugin. Is anybody interested in this?

Categories: Development

2nd day at sys-con conference

Rakesh Saha - Thu, 2007-11-15 03:36

Enabling, Disabling and Changing the password protection on the RDBMS and Tools (8.0.6) listeners in Oracle Applications 11.5.x

Madan Mohan - Thu, 2007-11-15 01:38
Pre-requisites
The steps in this guide should be performed after applying the following AutoConfig patches:-

3453499 (11i.ADX.F)
5225940 (Post ADX.F Fixes)
5107107 - AUTOCONFIG ENGINE & CONFIG TOOLS ROLLUP PATCH N or higher



How to enable/disable/change the password

To enable, disable or change the password, the script addlnctl.pl must be used.

DB Tier: $ORACLE_HOME/appsutil/bin/addlnctl.pl
Apps Tier: $AD_TOP/bin/addlnctl.pl

This has the following syntax:-

Valid arguments for addlnctl.pl:
help : get usage information
contextfile : provide Applications or DB Tier context file name

Source the Applications (APPSORA.env) or RDBMS ($CONTEXT_NAME.env) environment file, then run one of the following commands.

Example 1: To enable listener password

addlnctl.pl contextfile=$CONTEXT_FILE enablepassword

Example 2: To disable listener password

addlnctl.pl contextfile=$CONTEXT_FILE disablepassword

Example 3: To change existing listener password

addlnctl.pl contextfile=$CONTEXT_FILE changepassword

Further information:
Running the script will change the context file variable s_enable_listener_password to "ON" or "OFF", and will update or remove the PASSWORDS_PROD entry (the encrypted password value) in listener.ora.

Example

PASSWORDS_PROD=BC73ED1DD01AC862
If the listener is not running, the script will not start it.
If the listener is currently running, the script will stop and restart it.

Controlling a password-protected listener: once the password is enabled, the following methods can be used to stop/start the listener:-


Apps Tier:

Use script $COMMON_TOP/admin/scripts/$CONTEXT_NAME/adalnctl.sh

Examples:-
$COMMON_TOP/admin/scripts/$CONTEXT_NAME/adalnctl.sh start
$COMMON_TOP/admin/scripts/$CONTEXT_NAME/adalnctl.sh stop

This script will check the listener.ora for the encrypted password, and use it to stop/start the listener.

DB Tier:

Method A: Use script $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/addlnctl.sh

Examples:-

$ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/addlnctl.sh start PROD
$ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/addlnctl.sh stop PROD

This script will check the listener.ora for the encrypted password, and use it to stop/start the listener.

Method B: Use command line options in the Listener Control Utility, LSNRCTL.
Start the Listener Control Utility

$ lsnrctl

Enter the listener name
LSNRCTL > set current_listener PROD

Enter the password for the Listener, either on the command line or at the prompt.

Examples:

LSNRCTL > set password 654FA907952B15B

or:

LSNRCTL > set password
Password: apps

To determine the Status of the listener:
LSNRCTL > status

To Stop the listener:
LSNRCTL > stop

To start the listener:
LSNRCTL > start

To exit:
LSNRCTL > exit
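
For scripted use, the same sequence can be driven non-interactively with a here-document; a hedged sketch using the example listener name and password from above:

lsnrctl <<EOF
set current_listener PROD
set password apps
stop
EOF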

Oracle VM

Renaps' Blog - Wed, 2007-11-14 15:09

Oracle VM is now available for download.

You can read the news here.

Oracle VM Homepage.


Categories: DBA Blogs

Openworld Session Room Change

Jo Davis - Tue, 2007-11-13 19:30
For those who don't check the morning OpenWorld newspaper, my session tomorrow has changed rooms. It is now in....

Saloon 12 & 13 at the Marriott

Apparently Diana & I needed an upgrade as we had too many registrations for the other room? How cool is that? And for those who care - yes, I will be talking about actual case studies in the presentation not just the theory :)

Have a great day
Jo

Going to OOW 2007

Vlad Sadilovskiy - Fri, 2007-11-09 14:35

Took an entire week of PTO and I’m going to California for the first time. I’m going to Oracle Open World for the first time as well. I’ve heard some good words about it all. And I’m going there with my family too.

The tricky part is finding enough time to enjoy both – the event and the place. So next week I will probably run fewer tests and help fewer customers. But I’m sure I’ll catch up later with all that new energy.


Ingersoll Rand presenting Flow Manufacturing at Open World

Chris Grillone - Fri, 2007-11-09 13:28
Valerie Dubois and I will be co-presenting with Maggie Park from Ingersoll Rand at Open World. Maggie's presentation title is "Implementing Flow Manufacturing at Ingersoll Rand".

Session ID: S292704
Session Title: Leveraging Flow Manufacturing with Your Enterprisewide Lean Initiative
Track: Manufacturing; Automotive; High Tech; Industrial Manufacturing; Life Sciences
Room: Nob Hill CD
Date: 2007-11-14
Start Time: 11:15

Please visit us at the "Meet The Experts" Flow Manufacturing demo on Wednesday afternoon starting at 2:30 PM, Space 2 in Moscone West. Barcode scanning, Kanban and RFID are featured in this demo.

Sharing is caring

Fadi Hasweh - Wed, 2007-11-07 05:08
It has been a while since my last post. Today I will post about some Oracle books that can be viewed online using http://books.google.com: go there and search for 11i or for “oracle e-business suite” + altasen for the workflow and sysadmin books. Of course, not all the books are available online for free, only some parts, but it will give you a good hint about a book before you buy it.

Good for Google and good for us, and sharing is caring

Fadi
My next post will be technical so keep visiting.

Understanding the Server Manager Downloads

Mark Vakoc - Tue, 2007-11-06 20:49
Server Manager is mastered along the standard tools release schedule. That means each tools release, maintenance release, and update will include downloads for Server Manager. For each tools release there will be two downloads.

If you are installing SM for the first time I would recommend obtaining the latest installer. This will be a large (1 GB) download that is used to perform the initial installation.

If you have already installed SM you may download a significantly smaller update (around 30-50 MB) and apply that to your existing SM installation (of course using SM to perform the update).

For example, installing 8.97.0.0 and then applying the 8.97.0.1 update will be functionally identical to initially installing the 8.97.0.1 release. You may also go backwards, if desired, to an earlier release.

Remember you may always use the latest SM release even if you are managing earlier tools releases of the E1 components. Using the latest SM release ensures you have all the latest bug fixes and functionality available.

Visit us at OpenWorld '07

Mark Vakoc - Tue, 2007-11-06 20:47
Some of my colleagues and I will be at OpenWorld this year. Visit us at the EnterpriseOne Tools and Technology booths at the DEMOgrounds and attend the Server Manager session on Thursday morning. See you there!

Here we go....

Mark Vakoc - Tue, 2007-11-06 20:16
If you can read this it means we have gone GA with the 8.97 tools release.  Download, install, and go crazy with Server Manager.  And stay tuned for more tips, tricks, and detailed information about Server Manager.

OCR Mirroring

Fairlie Rego - Tue, 2007-11-06 05:10
In my last blog I talked about adding mirrors to your voting disk. Here I will document how to add mirrors to your OCR on a clustered file system such as VxFS.

In the existing configuration there is only one OCR. The location of your OCR is specified in /var/opt/oracle/ocr.loc (on Solaris); this location will differ depending on your port.

You need to add the OCR mirror while CRS is up, or else you get an error message like the one below:

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/ocrconfig -replace ocrmirror /u02/oracrs/ocr_file_02
PROT-1: Failed to initialize ocrconfig

For more details on why you received the error, check the latest ocrconfig log file under $ORA_CRS_HOME/log/<node>/client:

root@bart # cat ocrconfig_1742.log
Oracle Database 10g CRS Release 10.2.0.1.0 Production Copyright 1996, 2005 Oracle. All rights reserved.
2007-10-27 12:32:19.130: [ OCRCONF][1]ocrconfig starts...
2007-10-27 12:32:20.933: [ OCRCONF][1]Failure in initializing ocr in DEFAULT level. error:[PROC-32: Cluster Ready Services on the local node is not running Messaging error [9]]
2007-10-27 12:32:20.933: [ OCRCONF][1]Exiting [status=failed]...

From the above error message it is clear that CRS needs to be up.

So after starting up CRS

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/ocrconfig -replace ocrmirror /u02/oracrs/ocr_file_02
PROT-16: Internal Error

But even though I get an error, it appears to have created the mirror:

root@bart # cat /var/opt/oracle/ocr.loc
#Device/file getting replaced by device
/u02/oracrs/ocr_file_02
ocrconfig_loc=/u02/oracrs/ocr.dbf
ocrmirrorconfig_loc=/u02/oracrs/ocr_file_02
local_only=falseroot@bart

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 24932
Available space (kbytes) : 237188
ID : 1909893349
Device/File Name : /u02/oracrs/ocr.dbf
Device/File integrity check succeeded
Device/File Name : /u02/oracrs/ocr_file_02
Device/File integrity check succeeded

Now let us replace the existing OCR with a new file:

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/ocrconfig -replace ocr /u02/oracrs/ocr_file_01
PROT-16: Internal Error

root@bart # cat /var/opt/oracle/ocr.loc
#Device/file /u02/oracrs/ocr.dbf getting replaced by device
/u02/oracrs/ocr_file_01
ocrconfig_loc=/u02/oracrs/ocr_file_01
ocrmirrorconfig_loc=/u02/oracrs/ocr_file_02
local_only=falseroot@bart

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 24932
Available space (kbytes) : 237188
ID : 1909893349
Device/File Name : /u02/oracrs/ocr_file_01
Device/File integrity check succeeded
Device/File Name : /u02/oracrs/ocr_file_02
Device/File integrity check succeeded

Cluster registry integrity check succeeded

At this point you can shut down and start up CRS and see if all is fine. One point to note: you should check that the ocr.loc files on all remote nodes are updated with the new location of the OCR file. In my testing the CRS stack on the remote node was down, and in such cases the ocr.loc on the remote node does not get updated.
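
A quick, hedged way to verify this from one node (node names are hypothetical; adjust for your cluster):

for node in bart lisa; do
  echo "== $node =="
  ssh $node cat /var/opt/oracle/ocr.loc
done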

When the cluster is running, if the OCR mirror is removed or corrupted, the cluster continues running, and all CRS commands including ocrcheck can still be run if you comment out the following line in /var/opt/oracle/ocr.loc:
ocrmirrorconfig_loc=/u02/oracrs/ocr_file_02

If you lose the primary OCR, the cluster keeps running, but ocrcheck fails and CRS cannot be stopped even if you comment out the ocrconfig_loc line, even though the mirror is available.

root@bart # cat /var/opt/oracle/ocr.loc
#Device/file /u02/oracrs/ocr_file_01 getting replaced by device /u02/oracrs/ocr_file_01
#ocrconfig_loc=/u02/oracrs/ocr_file_01
ocrmirrorconfig_loc=/u02/oracrs/ocr_file_02

Things are fine if I replace the OCR using:
root@bart # /u01/app/oracle/product/10.1.0/crs/bin/ocrconfig -replace ocr /u02/oracrs/ocr_file_01
PROT-16: Internal Error

Changing DB 32-Bit to 64-Bit

Madan Mohan - Tue, 2007-11-06 02:56
INSTALLING THE 64BIT RELEASE

1. Ensure that there is ample free space for the 64-bit release installation; at least 3 GB is recommended.

2. Start the Installer GUI.


3. On the File Locations screen, create a new name and path for the 64-bit Oracle installation under Destination.

A typical entry would be

Name: orahome920_64b
Path: /u01/app/oracle/product/9.2.0-64b

4. Proceed with the installation. Stop at the Configuration Assistants screen.

5. Install the latest 64bit patch set under the new oracle installation.


CHANGING THE WORD-SIZE OF YOUR CURRENT RELEASE

The instructions in this section guide you through changing the word-size of
your current release (switching from 32-bit software to 64-bit software or vice versa).

Complete the following steps to change the word-size of your current release:

1. Start SQL*Plus.

2. Connect to the database instance AS SYSDBA.

3. Run SHUTDOWN IMMEDIATE on the database:
SQL> SHUTDOWN IMMEDIATE

4. Perform a full offline backup of the data using the available backup mechanism, e.g. BCV or UNIX file copy.

5. If the initialization parameter files (e.g. initSID.ora, spfileSID.ora, listener.ora, sqlnet.ora) reside within the old ORACLE_HOME, then copy them to the corresponding location of the new 64-bit ORACLE_HOME. If the parameter files are symbolic links to another location, then the symbolic links have to be created in the new ORACLE_HOME.

Example:
Old $ORACLE_HOME/dbs
initSID.ora->/u01/app/oracle/admin/lss/initSID.ora
spfileSID.ora->/u01/app/oracle/admin/lss/spfileSID.ora
The same links have to be created in new $ORACLE_HOME/dbs.
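
A hedged sketch of recreating those links, using the example paths above:

cd /u01/app/oracle/product/9.2.0-64b/dbs
ln -s /u01/app/oracle/admin/lss/initSID.ora initSID.ora
ln -s /u01/app/oracle/admin/lss/spfileSID.ora spfileSID.ora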


6. Change your active Oracle instance environment to point at the new 64-bit ORACLE_HOME, e.g.:
a)
Edit /var/opt/oracle/oratab if using dbhome/oraenv to set the environment.

lss:/u01/app/oracle/product/9.2.0:Y

i) set it to the new 64bit Oracle Home path

lss:/u01/app/oracle/product/9.2.0-64b:Y

b)
Change the essential environment settings, e.g. $ORACLE_HOME and $LD_LIBRARY_PATH, to use the new 64-bit Oracle home path if hardcoded.

export ORACLE_HOME=/u01/app/oracle/product/9.2.0

Change to

export ORACLE_HOME=/u01/app/oracle/product/9.2.0-64b

7. Set AQ_TM_PROCESSES=0 if it is not already.

a) If using initSID.ora to start the instance, then add it to the init file.

b) If using spfileSID.ora to start the instance, then the database can be started up and the parameter set by running the command below.

SQL> ALTER SYSTEM SET aq_tm_processes=0 SCOPE=SPFILE;

c) Shut down the database again.

9. Set _system_trig_enabled = false.

a) If using initSID.ora to start the instance, then add it to the init file.
b) If using spfileSID.ora to start the instance, then the database can be started up and the parameter set by running the command below.

SQL> ALTER SYSTEM SET "_system_trig_enabled"=FALSE SCOPE=SPFILE;

c) Shut down the database again.


The parameter should be set to FALSE for scripts that perform dictionary operations, as the objects on which the triggers depend may become invalid or be dropped, causing the triggers to fail and thus preventing the scripts from running successfully.

10. When migrating from a 32-bit Oracle version to a 64-bit Oracle version, Oracle recommends doubling the size of parameters such as the following (an illustrative example follows the list):

SHARED_POOL_SIZE
SHARED_POOL_RESERVED_SIZE
LARGE_POOL_SIZE
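
For illustration only (the values are hypothetical; derive yours by doubling the current 32-bit settings), the change could be made in the spfile like this:

SQL> ALTER SYSTEM SET shared_pool_size = 300M SCOPE=SPFILE;
SQL> ALTER SYSTEM SET shared_pool_reserved_size = 30M SCOPE=SPFILE;
SQL> ALTER SYSTEM SET large_pool_size = 64M SCOPE=SPFILE;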

11. At a system prompt, change to the new 64bit ORACLE_HOME/rdbms/admin
directory.

12. Start SQL*Plus.

13. Connect to the database instance AS SYSDBA.

14. Run STARTUP RESTRICT:

SQL> STARTUP RESTRICT

15. Run the following script:

SQL> @?/rdbms/admin/catalog.sql

16. Check that package DBMS_STANDARD is valid.

17. If invalid, run the command below:

SQL> alter package dbms_standard compile;

18. Run the following script:

SQL> @?/rdbms/admin/catproc.sql

19. Set the system to spool results to a log file for later verification of
success.

SQL> SPOOL /tmp/catoutw.log

20. Run utlirp.sql:

SQL> @?/rdbms/admin/utlirp.sql

The utlirp.sql script recompiles existing PL/SQL modules in the format required by the new database. This script first alters certain dictionary tables. Then, it reloads packages STANDARD and DBMS_STANDARD, which are necessary for using PL/SQL. Finally, it triggers a recompile of all PL/SQL modules, such as packages, procedures, types, and so on.

21. Turn off the spooling of script results to the log file:

SQL> SPOOL OFF

Then, check the spool file and verify that the packages and procedures compiled successfully. You named the spool file in Step 19; the suggested name was catoutw.log. Correct any problems you find in this file.


22. Run ALTER SYSTEM DISABLE RESTRICTED SESSION:

SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;

23. Remove the parameter aq_tm_processes or set value back to the original value.

a) If using initSID.ora to start the instance, then remove or edit it in the init file after shutting down the database.

b) If using spfileSID.ora to start the instance, then the parameter can be changed by running the command below.

SQL> ALTER SYSTEM SET aq_tm_processes=<original_value> SCOPE=SPFILE;

To remove the parameter:

SQL> ALTER SYSTEM RESET aq_tm_processes SCOPE=SPFILE SID='*';

24. Remove the parameter _system_trig_enabled = FALSE.

a) If using initSID.ora to start the instance, then remove it from the init file after shutting down the database.

b) If using spfileSID.ora to start the instance, then the parameter can be removed by running the command below.

SQL> ALTER SYSTEM RESET "_system_trig_enabled" SCOPE=SPFILE SID='*';

25. The word-size of your database is now changed. The database can be shut down and reopened for normal use.

Openworld San Francisco

Jo Davis - Mon, 2007-11-05 20:57
Yes! The rumour is true! I will be in San Francisco, co-presenting about Internet Expenses! Come to the session.... or catch me for a coffee... definitely the place to be next week!

For those of you who are preregistering here is the info:
Session ID: S291212
Date: Wednesday 14th November
Time: 3pm - 4pm (What are the chances everyone will fall asleep?)
Where: Marriott Salon 3

And for those attendees who feel the urge to contact me (can't think why - perhaps that coffee?):


Join Me at Oracle OpenWorld Connect!

At what level should I version my source code?

Susan Duncan - Mon, 2007-11-05 09:11
This is a question that comes up regularly when I talk to development teams. And I believe that the answer should always be - at the top level.

If you are using JDeveloper this means at the application level. We adopt this in our own development here and it's my top tip for best practice. Some might want to argue that if their application is broken into a number of different projects then why not version those individually?

One problem with this approach could be cross-project dependencies in your application. Imagine that you work in one project on a day-to-day basis but there is a dependency on libraries held in another project (let's call it a 'library' project). You might check out both projects at the beginning of the development cycle but assume that the 'library' project was complete, and so you would just work on and update your 'working' project. At some later stage, you could run into problems when you want to check in your 'working' project and realise that updates have been made to the 'library' project.

On a practical level, if you are using JDeveloper, you should not only follow my advice and version at the application level, but also always ensure that your .jws configuration file is included in source control. This isn't a problem if you're using JDeveloper's integrated SVN commands: when you select an application you automatically get the .jws file included.

But what if not all of your development team are using JDeveloper? Perhaps you prefer to use Tortoise for your SVN interactions. Or perhaps you have some team members that use Eclipse or another IDE as well as JDeveloper? In this case it is important to ensure that you still place both your .jws and your .jpr files under source control.

Why? Because JDeveloper's SVN code looks at the state of those files to decide which menu options it makes available. For instance, say an initial import of an application to SVN had been done using Tortoise and the .jws file had not been included. I then check this out into a working copy in JDeveloper and make some changes. When I try to commit or update this working copy using JDeveloper I would expect to get the menu options 'Commit Working Copy' or 'Update Working Copy'. In fact, I would only see 'Import Files', as the .jws file would not be part of the versioned application, and so JDeveloper offers the option to place the application under version control.

I've come across this a few times recently. The workaround is to go back to the repository and add the .jws file to version control. This brings the IDE back into kilter and allows you to continue using JDeveloper's integrated SVN support.
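
A hedged sketch of that workaround from the command line (the file name is hypothetical):

cd my_working_copy
svn status MyApp.jws     # '?' means the file is not under version control
svn add MyApp.jws
svn commit -m "Place the JDeveloper application file under version control"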

Recursive Calls

Vlad Sadilovskiy - Sun, 2007-11-04 23:26

Here is one interesting subject – recursive calls and performance monitoring instrumentation. These are just a few observations on how the session tracing facility, Statspack, AWR and Tkprof view recursive calls. We want to observe and clarify how Tkprof, Statspack and AWR calculate top level and recursive call statistics.

Here is the definition of a recursive call, as per Metalink Note 41634.1 and the 10g Tkprof manual:

“Recursive Calls
—————

Sometimes to execute a SQL statement issued by a user, Oracle must issue additional statements. Such statements are called ‘recursive calls’ or ‘recursive SQL statements’. For example, if you insert a row into a table that does not have enough space to hold that row, Oracle makes recursive calls to allocate the space dynamically. Recursive calls are also generated when data dictionary information is not available in the data dictionary cache and must be retrieved from disk.

If recursive calls occur while the SQL trace facility is enabled, TKPROF produces statistics for the recursive SQL statements and clearly marks them as recursive SQL statements in the output file. Note that the statistics for a recursive SQL statement are included in the listing for that statement, not in the listing for the SQL statement that caused the recursive call. So when you are calculating the total resources required to process a SQL statement, you should consider the statistics for that statement as well as those for recursive calls caused by that statement.”

Recursive calls are triggered by top level calls that govern processing of statements directly issued by users (or, more precisely, issued by a program that communicates directly with Oracle on behalf of its end-users or scripts). Tkprof is supposed to report base statistics (excluding the statistics of its recursive calls) for a call at any level.

For the purpose of the following discussion, let me add the following statistics definitions, taken from the Statistics Descriptions that can be viewed either in a Statspack or AWR report or directly using the v$sysstat view:

“CPU used by this session – Amount of CPU time (in 10s of milliseconds) used by a session from the time a user call starts until it ends. If a user call completes within 10 milliseconds, the start and end user-call time are the same for purposes of this statistic, and 0 milliseconds are added.
recursive cpu usage – Total CPU time used by non-user calls (recursive calls). Subtract this value from “CPU used by this session” to determine how much CPU time was used by the user calls.”

The documentation says that the “CPU used by this session” statistic includes CPU used by recursive calls.

Let’s run a test, take a look at this from a practical perspective, and see how well a 10.2.0.3 DB conforms to its documentation. I’ve chosen the following code for this testing:

create or replace function recursive_function(leveln number, maxlevel number) return number
as
    l_level number := leveln;
    c number := 0;
begin

    loop
        exit when c > 4000000000;
        c := c + 1;
    end loop;

    if l_level+1 <= maxlevel then
        execute immediate 'select /*+ query at level '||(l_level+1)||' */ recursive_function('||(l_level+1)||','||maxlevel||') from dual' into c;
    end if;
   
    return c;
end;
/

begin
    perfstat.statspack.snap(i_ucomment=>'start at '||to_char(sysdate,'yy/mm/dd hh24:mi:ss'));
    DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ('ALL');
end;
/

alter session set events '10046 trace name context forever, level 8';              

select /*+ top */ recursive_function(0,3) from dual;

begin
    perfstat.statspack.snap(i_ucomment=>'end at '||to_char(sysdate,'yy/mm/dd hh24:mi:ss'));
    DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ('ALL');
end;
/

alter session set events '10046 trace name context off';

It triggers a chain of recursive calls with a maximum level of 3. This code generates a quasi-equal amount of pure CPU load at each level and should allow identifying resource consumption at any level quite easily.

Raw Trace:

PARSING IN CURSOR #53 len=51 dep=0 uid=57 oct=3 lid=57 tim=1166023881658166 hv=1409094977 ad='6cf1aba8'
select /*+ top */ recursive_function(0,3) from dual
END OF STMT
PARSE #53:c=0,e=163,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1166023881658162
EXEC #53:c=0,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1166023881658276
WAIT #53: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=56115 tim=1166023881658319
WAIT #53: nam='SQL*Net message from client' ela= 44597 driver id=1413697536 #bytes=1 p3=0 obj#=56115 tim=1166023881702983
*** 2007-11-02 09:05:46.334
=====================
PARSING IN CURSOR #13 len=64 dep=1 uid=57 oct=3 lid=57 tim=1166024166342227 hv=1557986045 ad='72a95828'
select /*+ query at level 1 */ recursive_function(1,3) from dual
END OF STMT
PARSE #13:c=2999,e=2789,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,tim=1166024166342221
EXEC #13:c=0,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1166024166342394
*** 2007-11-02 09:10:59.564
=====================
PARSING IN CURSOR #55 len=64 dep=2 uid=57 oct=3 lid=57 tim=1166024472231315 hv=4273745374 ad='6e06c2e8'
select /*+ query at level 2 */ recursive_function(2,3) from dual
END OF STMT
PARSE #55:c=3000,e=2617,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=1,tim=1166024472231309
EXEC #55:c=0,e=38,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=1,tim=1166024472231491
*** 2007-11-02 09:15:50.950
=====================
PARSING IN CURSOR #14 len=64 dep=3 uid=57 oct=3 lid=57 tim=1166024756787955 hv=1817767248 ad='6e2b2ec8'
select /*+ query at level 3 */ recursive_function(3,3) from dual
END OF STMT
PARSE #14:c=2000,e=2633,p=0,cr=0,cu=0,mis=1,r=0,dep=3,og=1,tim=1166024756787948
EXEC #14:c=0,e=65,p=0,cr=0,cu=0,mis=0,r=0,dep=3,og=1,tim=1166024756788155
*** 2007-11-02 09:20:42.250
FETCH #14:c=291241724,e=284472442,p=0,cr=0,cu=0,mis=0,r=1,dep=3,og=1,tim=1166025041260627
STAT#14 id=1 cnt=1 pid=0 pos=1 obj=0 op='FASTDUAL  (cr=0 pr=0 pw=0 time=5 us)'
FETCH #55:c=582567435,e=569029386,p=0,cr=0,cu=0,mis=0,r=1,dep=2,og=1,tim=1166025041260907
STAT#55 id=1 cnt=1 pid=0 pos=1 obj=0 op='FASTDUAL  (cr=0 pr=0 pw=0 time=3 us)'
FETCH #13:c=895731828,e=874918635,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=1,tim=1166025041261058
WAIT #53: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=56115 tim=1166025041261123
FETCH #53:c=1187120530,e=1159558108,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,tim=1166025041261160
WAIT #53: nam='SQL*Net message from client' ela= 54812 driver id=1413697536 #bytes=1 p3=0 obj#=56115 tim=1166025041316063
STAT#53 id=1 cnt=1 pid=0 pos=1 obj=0 op='FASTDUAL  (cr=0 pr=0 pw=0 time=5 us)'

In the 10g raw trace, time is given in microseconds. So you can see how the CPU consumption and elapsed time are reported in the trace. Each level is reported as its own CPU load plus the CPU load from the call sub-tree (all descendants):

CPUL3 (291241724)  = CPUL3.base + CPUL4(0: there are no further descendants)
CPUL2 (582567435)  = CPUL2.base + CPUL3(291241724)
CPUL1 (895731828)  = CPUL1.base + CPUL2(582567435)
CPUL0 (1187120530) = CPUL0.base + CPUL1(895731828)

or,

CPUL3.base = CPUL3 = 291241724 = 291 s
CPUL2.base = CPUL2 (582567435) - CPUL3(291241724) = 291325711 = 291 s
CPUL1.base = CPUL1 (895731828) - CPUL2(582567435) = 313164393 = 313 s
CPUL0.base = CPUL0 (1187120530) - CPUL1(895731828) = 291388702 = 291 s

Tkprof:

select /*+ top */ recursive_function(0,3)
from
 dual

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1   1187.12    1159.55          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3   1187.12    1159.55          0          0          0           1

Parsing user id: 57

================================================================================
select /*+ query at level 1 */ recursive_function(1,3)
from
 dual

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    313.16     305.88          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    313.16     305.88          0          0          0           1

Parsing user id: 57     (recursive depth: 1)

================================================================================
select /*+ query at level 2 */ recursive_function(2,3)
from
 dual

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    291.32     284.55          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    291.32     284.55          0          0          0           1

Parsing user id: 57     (recursive depth: 2)

================================================================================
select /*+ query at level 3 */ recursive_function(3,3)
from
 dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    291.24     284.47          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    291.24     284.47          0          0          0           1

Parsing user id: 57     (recursive depth: 3)

I always thought Tkprof doesn’t report net statistics for calls, but now I see how wrong I was. From this test it is obvious that Tkprof reports base times for recursive calls at all levels, and only for the top level call are the CPU and elapsed time defined as gross load including all levels: CPUL0 = CPUL0.base + CPUL1.base + CPUL2.base + CPUL3.base = 291 + 313 + 291 + 291 = 1187 s. My memory might be playing tricks with me, but I recall I saw a discussion on Jonathan Lewis’s blog about this Tkprof deficiency. Maybe someone will remind me where it was.

AWR:

SQL ordered by Elapsed Time

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
---------- ---------- ------------ ---------- ------- -------------
     1,160      1,160            1     1159.6    99.8 1rcptud9zu5a1
Module: SQL Developer
select /*+ top */ recursive_function(0,3) from dual

       875        875            1      874.9    75.3 9pgc4u1fdtyrx
Module: SQL Developer
select /*+ query at level 1 */ recursive_function(1,3) from dual

       569        569            1      569.0    49.0 0v0qmk7zbsbfy
Module: SQL Developer
select /*+ query at level 2 */ recursive_function(2,3) from dual

       284        284            1      284.5    24.5 2zfr66tq5jvah
Module: SQL Developer
select /*+ query at level 3 */ recursive_function(3,3) from dual


Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time                                          1,162         100.0
control file parallel write             396           1      1    0.0 System I/O
...


Time Model Statistics

Statistic Name                                       Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time                              1,162.0        100.0
DB CPU                                                1,161.9        100.0
PL/SQL execution elapsed time                         1,159.6         99.8
...
DB time                                               1,162.1          N/A
background elapsed time                                   2.9          N/A
background cpu time                                       1.5          N/A


Instance Activity Stats

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session                    118,971          100.0      13,219.0
CPU used when call started                  118,946           99.9      13,216.2
recursive cpu usage                          89,804           75.5       9,978.2

I rearranged it a little bit for better viewing.

First of all, the top section shows that the times reported at each level contain the times of their descendants (i.e. as in the raw trace). Well, no wonder; there is probably not enough information to track call dependency and subtract statistics of children from a parent. Anyhow, it reports gross load for each recursive level and the top level call. Anyone analyzing AWR reports should note the statement “Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.” that precedes this section. Apparently it is actually true not only for PL/SQL code, but for any type of code. This can be seen in the time model statistics where the sql execute elapsed time roughly equals the PL/SQL execution elapsed time.

Also, comparing this section to the raw trace or Tkprof output shows that in the AWR report the CPU time matches exactly the elapsed time, although they were not the same in the raw trace. This would seem to be just a reporting “feature”, if not for the information from the further sections. CPU time in Top 5 Timed Events and DB CPU in Time Model Statistics are roughly about 1,160 s. Ignoring a small difference, the same time is spent by the top level statement. Only the Instance Activity Stats section shows 1,189.7 s, which almost matches the raw trace CPU time consumed by all subject statements, with a small offset of 2.58 s. Further, the recursive cpu usage of 898 s matches the CPU time of the recursive calls starting at level 1, with a difference of 2.3 s. Both deltas are for the most part time spent in Statspack subroutines that were captured by AWR.

Statspack:

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
CPU time                                                     1,190          99.9
control file parallel write                        397           1      1     .0
...

In Statspack, the Time Model Statistics and Instance Activity Stats are precisely equal to the corresponding sections in the AWR report. SQL statistics weren’t captured for our statements because they didn’t generate logical or physical reads, nor did they generate other activity that Statspack considers significant (i.e. exceeding its capture thresholds). However, in the Top 5 Timed Events you can see that the CPU time is the figure of the CPU used by this session statistic in the Instance Activity Stats, so it comes from the v$sysstat view.

This behavior is different from the AWR algorithm, which rather seems to take the CPU time in Top 5 Timed Events from v$sys_time_model or directly from their common source. If v$sys_time_model and v$sysstat have different sources of information, AWR is not bound to the figures of the latter when reporting Top 5 Timed Events. This perhaps holds an explanation for the AWR oddity seen by Jonathan Lewis. It would be interesting to find the relevant clues in the reports or in this particular system.
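
A hedged way to compare the two sources on a running system (note the units: v$sys_time_model reports microseconds, while the v$sysstat CPU statistics are in centiseconds):

SQL> SELECT stat_name, value/1000000 AS seconds
FROM v$sys_time_model
WHERE stat_name IN ('DB CPU', 'DB time');

SQL> SELECT name, value/100 AS seconds
FROM v$sysstat
WHERE name IN ('CPU used by this session', 'recursive cpu usage');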

