Feed aggregator

Publishing Gadgets

Oracle EPM Smart Space - Wed, 2008-01-23 11:24

In the world of gadgets and widgets, getting a new gadget is as simple as downloading and installing one from a gadget site, message board, or blog. This is great for making a framework pervasive, but it would wreak havoc on the complex IT communities that implement EPM or BI solutions. Security is an obvious concern, but maintaining and understanding what users have installed also becomes impossible in a scenario like this. When designing Smart Space we often argued this topic, wanting to keep the idea of gadgets open and pervasive while maintaining control for customers. This is how the idea of gadget publishing came to be.

Gadget publishing gives administrators the ability to control which gadgets their users can download and install. It also allows administrators to recommend gadgets to users, giving them some assurance about what gadgets are installed on user desktops. Gadget publishing uses the same technology we use to deploy and maintain the Smart Space client (often referred to as the framework): ClickOnce. Going into detail on ClickOnce would warrant a separate post, so I will keep it short for now. ClickOnce provides a safe and secure environment for users to download and install gadgets while providing automatic updates. The gadgets are also digitally signed with a certificate, so users can be assured of their safety when installing.

I am a simple guy, so I will put this in simple terms. When Smart Space is installed in an environment, the administrator essentially runs a 'store' where they maintain an inventory of gadgets and other components. Users can go to this store and download from its inventory with confidence.

Here is how it is done. The administrator will first have to install the Smart Space Administration Utility. This utility will probably be installed by only one or two people, and it does require administrative privileges to install. The installation files for it are laid down when the Smart Space server is installed and can usually be accessed from the following URL: http://[ServerName]:17080/SmartSpace/Installations/AdminUtility/Setup.msi. (For installation and setup help, check out the Administrator's Guide for Smart Space.) Once it is installed and configured, you should be able to open the Administration Utility and go to the Sign & Publish tab.

From this tab you can do the following: publish or remove the framework (Smart Space client), publish or remove gadgets, download already-published gadgets, sign gadgets, and so on. Most of the time you will just sign and publish gadgets, but if this is a first-time installation you will probably sign and publish both the framework and the gadgets. In either case the process is the same, but it is key to make sure the framework is properly published; otherwise users will not be able to get any gadgets. The utility should be fairly self-explanatory, and there is plenty of help in the Administrator's Guide, but I will run through the steps for publishing a gadget.

  • From the Sign & Publish tab click Add.
  • Browse for a framework or gadget file to publish. (.SmartSpaceGadget or .SmartSpaceFramework)

    These files are compressed files with all the bits needed for the Click Once installations and deployments. An administrator will find the framework and default gadgets in the directory where they installed the Administration Utility (usually C:\Program Files\Oracle EPM\Oracle EPM Smart Space Administration Utility).
  • Once the file is on the list, verify that the action dropdown says Publish.
  • Click Apply.

This will move the gadget file to the server and place all the bits in the correct location so that users will see it as available to them in the Smart Space palette on their client.

To summarize, gadgets can be a great end-user tool, but in the world of enterprise software there needs to be some level of control.

Categories: Development

LOG_ARCHIVE_DEST_n

Ayyappa Yelburgi - Tue, 2008-01-22 21:28
Syntax:

LOG_ARCHIVE_DEST_[1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10] =
  "null_string" |
  ((SERVICE=service | LOCATION=location)
    [AFFIRM | NOAFFIRM]
    [ALTERNATE=destination | NOALTERNATE]
    [ARCH | LGWR]
    [DELAY[=minutes] | NODELAY]
    [DEPENDENCY=destination | NODEPENDENCY]
    [MANDATORY | OPTIONAL]
    [MAX_FAILURE=count | NOMAX_FAILURE]
    [QUOTA_SIZE=blocks | NOQUOTA_SIZE]
    [QUOTA_USED=blocks | NOQUOTA_USED] […]
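As a concrete illustration of the syntax above, a minimal sketch might define a mandatory local destination and an optional remote standby destination (the archive path and the TNS service name stby are hypothetical):

```sql
-- local archiving is mandatory; shipping to the standby is best-effort
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/arch MANDATORY' SCOPE=both;
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stby LGWR AFFIRM OPTIONAL' SCOPE=both;
```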


Data Guard 9i Log Transportation on RAC

Ayyappa Yelburgi - Tue, 2008-01-22 21:23
PURPOSE: This article gives an overview of how to create a Data Guard configuration on Real Application Clusters (RAC). The configuration you can find here is for a physical standby database. SCOPE & APPLICATION: You can see which initialisation parameters you have to use / change and how the log transport is organized in a RAC Data Guard environment. NOTE: The Data Guard […]


Creating a Data Guard Configuration

Ayyappa Yelburgi - Tue, 2008-01-22 21:14
1) Ensure the Primary database is in ARCHIVELOG mode:

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /export/home/oracle/temp/oracle/arch
Oldest online log sequence     7
Current log sequence           9

SQL> alter database close;
Database altered.

SQL> alter database archivelog;
Database […]
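For reference, the excerpt above is heading toward the standard sequence for enabling ARCHIVELOG mode, which (sketched from memory, to be checked against the full article) continues roughly like this:

```sql
SQL> alter database close;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list
```

After the last step, archive log list should report "Archive Mode" instead of "No Archive Mode".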


Improvement of AUTO sampling statistics gathering feature in Oracle 11g

Oracle Optimizer Team - Tue, 2008-01-22 17:34
Optimizer statistics in Oracle are managed via a PL/SQL package, dbms_stats. It provides several PL/SQL procedures to gather statistics for a table, schema, or database. For example, gather_table_stats is used to gather statistics on a table. This procedure has an estimate_percent parameter, which specifies the sampling percentage for statistics gathering. Users can specify any number between 0 and 100 for this parameter. For example, if you have a table BIGT, you can specify a 1% sampling percentage as follows:

exec dbms_stats.gather_table_stats(null, 'BIGT', 
estimate_percent => 1);


It is not always easy for users to pick the appropriate sampling percentage. If the specified sampling percentage is too high, it can take longer to gather statistics. On the contrary, if the data is skewed and the specified sampling percentage is too low, the resulting statistics can be inaccurate.

For this reason, Oracle introduced the AUTO value for the estimate_percent parameter. For example, you can gather statistics on BIGT as follows:

exec dbms_stats.gather_table_stats(null, 'BIGT', 
estimate_percent => dbms_stats.auto_sample_size);


The advantage of using AUTO sample size over a fixed number is twofold. First, when AUTO sample size is specified, the system automatically determines the appropriate sampling percentage. Second, AUTO sample size is more flexible than a fixed sampling percentage: a fixed sampling percentage that was good at some point in time may no longer be appropriate after the data distribution in the table has changed. When the AUTO value is used, on the other hand, Oracle adjusts the sample size as the data distribution changes.

When AUTO is used, Oracle picks a sample size for which the statistics quality is good enough. However, this does not work very well in cases of extreme skew in the data. In Oracle 11g, we improved the behavior when the AUTO value is used. First, AUTO sampling now generates deterministic statistics. Second, and more importantly, AUTO sampling generates statistics that are almost as accurate as 100% sampling but takes much less time than 100% sampling. To illustrate these merits, we compare the performance of using a fixed sampling percentage, AUTO sample size in Oracle 10g, and AUTO sample size in Oracle 11g.

We used the standard TPC-D data generator to generate a Lineitem table. The Lineitem table is about 230G and contains 1.8 billion rows with 16 columns. The schema of the Lineitem table is as follows:

Column Name       Column Type
l_shipdate        date
l_orderkey        number
l_discount        number
l_extendedprice   number
l_suppkey         number
l_quantity        number
l_returnflag      varchar2
l_partkey         number
l_linestatus      varchar2
l_tax             number
l_commitdate      date
l_receiptdate     date
l_shipmode        varchar2
l_linenumber      number
l_shipinstruct    varchar2
l_comment         varchar2

Table 1 gives the elapsed time of gathering statistics on the Lineitem table by different sampling percentages.

Sampling Percentage            Elapsed Time (sec)
1% sampling                    797
100% sampling (Compute)        18772
Auto sampling in Oracle 10g    2935
Auto sampling in Oracle 11g    1908

Table 1: Statistics gathering time on 230G TPC-D Lineitem Table Using Different Estimate Percentages

We also compare the quality of the statistics gathered using different estimate percentages. Among all the statistics of a column, number of distinct values (NDV) is the one whose accuracy used to be an issue. We define the accuracy rate of NDV of a column as follows:


accuracy rate = 1 - (|estimated NDV - actual NDV|)/actual NDV.


The accuracy rate ranges from 0% to 100%. The higher the accuracy rate, the more accurate the gathered statistics. Since 100% sampling always leads to an accuracy rate of 100%, we do not report it. We focus on the columns that have at least one accuracy rate below 99.9% under the different estimate percentages. Table 2 shows the accuracy rates for these columns.
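As a worked example of the formula, suppose AUTO sampling estimated the NDV of orderkey as 441,000,000 against an actual NDV of 450,000,000 (the estimate here is a hypothetical value consistent with Table 2):

```sql
-- accuracy rate = 1 - |estimated NDV - actual NDV| / actual NDV
SELECT 1 - ABS(441000000 - 450000000) / 450000000 AS accuracy_rate
FROM dual;
-- 0.98, i.e. the 98.0% reported for orderkey
```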

Column Name     Actual NDV     Auto Sampling in Oracle 11g    1% Sampling
orderkey        450,000,000    98.0%                          50%
comment         181,122,127    98.60%                         4.60%
partkey         60,000,000     99.20%                         98.20%
suppkey         3,000,000      99.60%                         99.90%
extendedprice   3,791,320      99.60%                         94.30%

Table 2: Accuracy Rate of Gathering NDV LineItem Using Different Estimate Percentages

In short, "AUTO sampling in Oracle 11g" is roughly 10 times faster than 100% sampling but yields statistics of similar quality (accuracy rate close to 100%).


Categories: DBA Blogs, Development

Oracle Discoverer Query Prediction functionality and Performance

Aviad Elbaz - Tue, 2008-01-22 12:21

Lately we noticed that our Discoverer reports were running very slowly.
Actually, the problem wasn't the Discoverer report query itself but the query prediction, which for some reason took very long.

What is Query Prediction in Discoverer?

“Discoverer includes functionality to predict the time required to retrieve the information in a Discoverer query.
The query prediction appears before the query begins, enabling Discoverer users to decide whether or not to run the query.
This is a powerful facility that enables Discoverer users to control how long they wait for large reports.”
(from the Oracle documentation)

The query prediction phase is the elapsed time during which the following message appears at the bottom left of the Discoverer Desktop window: "Determining query time estimate".

For each report we tested, we found that query prediction took 30%-50% (!!!) of the report's total run time.

The next phase was to start a SQL trace on a Discoverer session to see what actually happens when a Discoverer report runs.

This is the relevant section from the SQL Trace:

SELECT QS_ID, QS_COST, QS_ACT_CPU_TIME,
       QS_ACT_ELAP_TIME, QS_EST_ELAP_TIME,
       QS_CREATED_DATE, QS_OBJECT_USE_KEY,
       QS_NUM_ROWS,QS_STATE
FROM [EUL_USER].EUL5_QPP_STATS WHERE  QS_COST IS NOT NULL
AND    QS_OBJECT_USE_KEY = :OBJECTUSEKEY
ORDER BY QS_CREATED_DATE DESC


As you can see, the query prediction functionality tries to retrieve statistics information from EUL5_QPP_STATS, and it takes 35 seconds (the total time for this report is 55 seconds).

Query prediction is based, among other things, on the query prediction statistics table, EUL5_QPP_STATS.
This table records query prediction statistics while Discoverer reports run.
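Before purging anything, it can help to see how much history has accumulated; a quick check like this works (a sketch; the column names are taken from the traced query above):

```sql
-- how many prediction rows exist, and how old is the oldest?
SELECT COUNT(*) AS total_rows,
       MIN(QS_CREATED_DATE) AS oldest_entry
FROM EUL5_QPP_STATS;
```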

It makes no sense for the query run-time estimation to take longer than the report's query itself…

Since the query prediction functionality is important to our users, we avoided disabling it (by setting QPPEnable to 0).
Furthermore, I found that we had statistics data in this table from 7 years ago…
There is no point in holding on to these statistics…

I tried to find information about purging EUL5_QPP_STATS and found this: "How to delete old query prediction statistics" in the Oracle® Business Intelligence Discoverer Administration Guide 10g Release 2 (10.1.2.1).

There is a SQL script at [ORACLE_HOME]\discoverer\util\eulstdel.sql that deletes all query prediction statistics created before a specified date.
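At its core, the purge performed by eulstdel.sql amounts to something like the following (a sketch, not the actual script; the 90-day cutoff is just an example):

```sql
-- delete query prediction statistics older than 90 days
DELETE FROM EUL5_QPP_STATS
 WHERE QS_CREATED_DATE < SYSDATE - 90;
COMMIT;
```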

Great!
I executed this script on my database, gave 90 days as the parameter, and it deleted 460,000 (out of 468,000) rows.
I ran a Discoverer report again, but the query prediction still took too long, the same as before.
I checked the explain plan and the cost of the above SQL, and they remained the same.
I tried gathering statistics on the EUL5_QPP_STATS table and rebuilding its indexes, but the cost became even higher… (from 103 to something like 800…).

I had no choice but to rebuild the EUL5_QPP_STATS table (by export, drop table, and import).
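An alternative I did not try might be to shrink the table in place: moving the segment resets the high-water mark that a plain DELETE leaves behind (the index name below is an assumption and would need to be checked in the EUL schema):

```sql
-- rebuild the segment below the old high-water mark
ALTER TABLE EUL5_QPP_STATS MOVE;
-- indexes become UNUSABLE after a MOVE and must be rebuilt
ALTER INDEX EUL5_QPP_STATS_PK REBUILD;  -- hypothetical index name
```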

After recreating the EUL5_QPP_STATS table I ran a Discoverer report again, and query prediction took insignificant time, almost nothing…  :-)

This is from the trace I took after:

SELECT QS_ID, QS_COST, QS_ACT_CPU_TIME,
       QS_ACT_ELAP_TIME, QS_EST_ELAP_TIME,
       QS_CREATED_DATE, QS_OBJECT_USE_KEY,
       QS_NUM_ROWS,QS_STATE
FROM [EUL_USER].EUL5_QPP_STATS WHERE  QS_COST IS NOT NULL
AND    QS_OBJECT_USE_KEY = :OBJECTUSEKEY
ORDER BY QS_CREATED_DATE DESC


The elapsed time for this SQL dropped to 0.05 seconds!! (it was 35 seconds before)


The SQL cost dropped from 103 to 31!

I checked this issue in Discoverer Desktop 10g (10.1.2.2), but it is relevant to the web tools (Discoverer Viewer and Discoverer Plus) as well, since the query prediction functionality exists in those tools just as in the client version.

You are welcome to leave a comment.

Aviad

Categories: APPS Blogs

Oracle SOA Suite 11g - Technology Preview

Peeyush Tugnawat - Tue, 2008-01-22 06:59

Get started with the new technology preview release of SOA Suite 11g!

Refer to this link for details:

Oracle SOA Suite 11g - Technology Preview

Regular Tasks Of a DBA....

Ayyappa Yelburgi - Sun, 2008-01-20 20:34
1. Regular monitoring of the free space in the database
2. Taking logical backups of important tables
3. Checking the locks on the database
4. Checking the long-running queries on your database
5. Analysing the performance of your database
6. Investigate wait statistics
7. Tablespace usage
8. Ensure connectivity to Oracle
9. Lock contention
10. Extent failure
11. Alert logs […]

Regular Tasks Of a DBA

Ayyappa Yelburgi - Sun, 2008-01-20 20:34
1. Regular monitoring of the free space in the database
2. Taking logical backups of important tables
3. Checking the locks on the database
4. Checking the long-running queries on your database
5. Analysing the performance of your database
6. Investigate wait statistics
7. Tablespace usage
8. Ensure connectivity to Oracle
9. Lock contention
10. Extent failure
11. Alert logs
12. Redo logs
13. Check if all the instances […]


R12 Suitability Matching functionality

RameshKumar Shanmugam - Sat, 2008-01-19 18:02
Suitability Matching is new functionality available in both Employee and Manager Self-Service.
To put it in plain language: this function helps employees find opportunities in the organization that suit their role and competencies. An employee can compare the various competencies that are desired and essential for a particular role.
In Manager Self-Service, this function helps managers find employees or applicants based on competency and role.


List of Options available in the Manager Self Service




In Manager Self-Service, let's take the example of searching for an employee, contingent worker, or applicant based on a competency profile and viewing the results in a graph to analyse the competencies for the job.

Start with the option 'Find Suitable People by Competency'

Add the list of competencies that you want to search for



Once you click the Continue button, you will see a list of people that match your competency requirements



Click the Graph People by Competency button, which helps you view a graph based on the minimum, maximum, and personal proficiency levels



Try this out!!!
Categories: APPS Blogs

Oracle’s clusterware real time priority oddity

Christian Bilien - Sat, 2008-01-19 12:16
The CSS processes use both the interconnect and the voting disks to monitor remote nodes. A node must be able to access strictly more than half of the voting disks at any time (this is the reason for the odd number of voting disks), which prevents split-brain. Let's just recall that split-brains are […]
Categories: DBA Blogs

Sending A Message - Intro

Oracle EPM Smart Space - Fri, 2008-01-18 09:57
One of the cool things about using Smart Space is that you immediately become a part of the community of Smart Space users. You get this for free because the Smart Space client has built-in messaging capabilities.

This opens up a wealth of opportunities for one to communicate with other Smart Space users in both traditional and non-traditional ways.

With Smart Space, you can chat with one or more users using the Smart Space Discussion dialog or send a toast notification to one or more users. (A toast is that small message box that appears and then disappears in the lower right hand corner of your screen.) For example, one could discuss a financial report with another Smart Space user by simply using the Discussion dialog to send the user a link to the desired report.

A non-traditional way messaging is used by the Smart Space system is in publishing a Smart Space desktop to one or more users. Publishing is a topic for another day, so I will not elaborate for now. But in a nutshell, if you think you have a great desktop full of gadgets and content, you can “publish” this desktop to other Smart Space users using the messaging infrastructure built into Smart Space. Those users would get a message that a new desktop has been published and be prompted to install it.

Another non-traditional use of messaging within Smart Space is to use the messaging API (both .NET and Java). I have successfully used the Smart Space Java messaging API to create an Essbase Custom Defined Function (CDF) that will send a message to any user from within an Essbase calculation script. This could be used to notify a set of users that a calculation has completed. It could also be used to send a message to a user that something in their data needs attention such as a variance or missing data.

The “Always On” nature of the Smart Space client is a fantastic benefit of having an analytic client installed on your desktop. With this messaging footprint, the sky is the limit when it comes to new possibilities for enhancing your BI/EPM experience.

Stay tuned for more on this subject…
Categories: Development

Websphere ND dmgr permission problems

Hampus Linden - Fri, 2008-01-18 05:23
Ran into a really weird problem with one of our WebSphere 6.1 Network Deployment setups yesterday, and as I couldn't find a single page about the problem on Google I thought I'd blog it.
I'm not much of a WebSphere admin but managed to fix it after a while.

The problem started when a datasource was updated and all of a sudden all node agents stopped trusting the deployment manager (dmgr). Syncing the nodes failed, and hence pretty much everything failed to restart/deploy. Running servers were fine though.
We got this message in the logs:
[1/17/08 13:30:52:020 GMT] 00000028 RoleBasedAuth A   SECJ0305I: 
The role-based authorization check failed for admin-authz operation
SSLAdmin:temporarilyDisableCertificateAuthentication:java.lang.Long.
The user UNAUTHENTICATED (unique ID: unauthenticated) was not granted
any of the following required roles: administrator.
My best guess is that the node agent configuration was corrupted in some way.
How do you fix the problem then?
Fairly easy, actually:
1. Stop all node agents that seem broken (that would probably be all of them!).
2. Go to the node agent's bin directory on the node (usually something like $WAS_HOME/profiles//bin/).
3. Manually sync the node with syncNode.sh, pointing to the SOAP connector port (default is 8879) on the dmgr server. For example:
./syncNode.sh dmgrhost 8879 -username websphere -password webfear
4. Start the node agent and verify that the logs are happy. Kick off a cell sync from the dmgr. You should see entries similar to this in the logs:
[17/01/08 16:14:59:872 GMT] 0000002f NodeSyncTask  A   ADMS0003I: 
The configuration synchronization completed successfully.

Oracle 11g Security Enhancements Part – 1

Virag Sharma - Thu, 2008-01-17 10:02
Oracle 11g Security Enhancements Part – 1

New parameters have been added to the Oracle Database 11g to enhance the default security of the database.

  • SEC_RETURN_SERVER_RELEASE_BANNER
  • SEC_PROTOCOL_ERROR_FURTHER_ACTION
  • SEC_PROTOCOL_ERROR_TRACE_ACTION
  • SEC_MAX_FAILED_LOGIN_ATTEMPTS
  • SEC_DISABLE_OLDER_ORACLE_RPCS*

These parameters are system wide and static.

  • Release of server information restriction

You can restrict the display of the database version banner to unauthenticated clients by setting the SEC_RETURN_SERVER_RELEASE_BANNER initialization parameter in the initsid.ora initialization parameter file to either TRUE or FALSE. By default, SEC_RETURN_SERVER_RELEASE_BANNER is set to FALSE.

SQL> show parameter SEC_RETURN_SERVER_RELEASE_BANNER

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
sec_return_server_release_banner     boolean     FALSE

When it is set to TRUE, the full banner is displayed. When the value is set to FALSE, a limited generic banner is displayed.

  • Protect against denial of Service (DoS) attacks

The two parameters shown below specify the actions to be taken when the database receives bad packets from a client, the assumption being that the bad packets are from a possibly malicious client. The SEC_PROTOCOL_ERROR_FURTHER_ACTION parameter specifies what action is taken with the client connection: continue, drop the connection, or delay accepting requests.

# Continue the connection even after bad packets

SEC_PROTOCOL_ERROR_FURTHER_ACTION = Continue

# Delay 4 seconds before the server accepts the next request from the client

SEC_PROTOCOL_ERROR_FURTHER_ACTION = Delay,4

# Drop the connection after 10 bad packets

SEC_PROTOCOL_ERROR_FURTHER_ACTION = Drop,10
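Since these parameters are static, a change made at runtime goes to the spfile and takes effect only after a restart; a sketch:

```sql
-- static parameter: SCOPE=SPFILE, effective after the next restart
ALTER SYSTEM SET sec_protocol_error_further_action = 'Drop,10' SCOPE=spfile;
```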

The other parameter, SEC_PROTOCOL_ERROR_TRACE_ACTION, specifies a monitoring action: NONE, TRACE (the default), LOG, or ALERT.

# Default. Creates trace files; useful for debugging purposes

SEC_PROTOCOL_ERROR_TRACE_ACTION = Trace

# Writes a short, one-line error message to the server trace file and alert log

SEC_PROTOCOL_ERROR_TRACE_ACTION = Alert

# Writes a short, one-line message to the server trace file

SEC_PROTOCOL_ERROR_TRACE_ACTION = Log

# Configures the server to ignore the bad packets and generate no trace files or messages

SEC_PROTOCOL_ERROR_TRACE_ACTION = None


SQL> show parameter SEC_PROTOCOL_ERROR_FURTHER_ACTION

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------
sec_protocol_error_further_action    string      CONTINUE

SQL> show parameter SEC_PROTOCOL_ERROR_TRACE_ACTION

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------
sec_protocol_error_trace_action      string      TRACE

  • Protect against intruder

If a password profile is not enabled, an intruder can make an unlimited number of authentication attempts with different user names and passwords, trying to gain access to the database.

A new initialization parameter, SEC_MAX_FAILED_LOGIN_ATTEMPTS, which has a default setting of 10, causes a connection to be dropped automatically after the specified number of attempts. This parameter is enforced even when the password profile is not enabled.

SQL> show parameter SEC_MAX_FAILED_LOGIN_ATTEMPTS

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
sec_max_failed_login_attempts        integer     10

The sqlnet.ora INBOUND_CONNECT_TIMEOUT parameter and the FAILED_LOGIN_ATTEMPTS password profile parameter also restrict failed logins, but the difference is that those two parameters apply only to valid user accounts.
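For comparison, the profile-based limit is configured per profile and applies only to valid accounts, e.g.:

```sql
-- lock a valid account after 10 failed logins (password profile level)
ALTER PROFILE DEFAULT LIMIT FAILED_LOGIN_ATTEMPTS 10;
```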


I recently came across the new parameter SEC_DISABLE_OLDER_ORACLE_RPCS in the Database Error Messages 11g Release 1 documentation:

ORA-03139: This OCI call has been disabled by the DBA
Cause: The SEC_DISABLE_OLDER_ORACLE_RPCS initialization parameter was enabled.
Action: Contact the Database Administrator

But when I searched the Oracle documentation I did not find any detail about it, and the parameter does not show up in the database either:

SQL> show parameter SEC_DISABLE_OLDER_ORACLE_RPCS

SQL>

Categories: DBA Blogs
