Feed aggregator

BEA mashup platform - Genesis

Rakesh Saha - Tue, 2008-01-08 17:43

Monitors in Server Manager

Mark Vakoc - Tue, 2008-01-08 17:42


Most of my posts thus far have been about installation, troubleshooting, and other server manager basics. Today begins a series of posts outlining the new or enhanced capabilities provided by SM.

Monitors are the mechanism by which administrators can be alerted through e-mail when an event of interest occurs. Much of this functionality is a direct carryover from that provided by the SAW SMC infrastructure in previous tools releases with some significant enhancements to boot.

As you may be aware by now, Server Manager is a complete replacement for SAW and SMC. Among the other benefits, such as deployment and configuration management, we wanted to enhance the functionality provided by the SAW application and make it easier to use.

While evaluating the SMC monitoring capabilities we identified the need to improve it in the following ways:

* Simplify the setup required to monitor servers and configure events of interest

* Enhance the monitored events to include some key items of interest, such as a user being unable to login to the E1 HTML Server

* Permit configuration of the hours in which alert e-mails should be sent, for sites with multiple administrators who are responsible for particular times of the week

* Maintain a history of past events and record the e-mail messages that were sent

We also changed the mechanism by which the events of interest are obtained. Beginning with 8.97 our server products contain an embedded variant of the management agent that provides Server Manager with runtime information about the servers. Using this mechanism to obtain events provides two primary benefits: many of the events are reported to SM immediately upon occurring, and events can be obtained from clustered or multi-JVM configurations for our web based products.

Currently monitoring is supported for our enterprise server and HTML server products only.

To get started select the monitors link from the quicklinks section. Note you must be signed into the management console as the jde_admin user, or another user that has been granted the 'monitorConfig' permission, to make changes to the monitoring configuration.

SMTP Configuration

The first step is to configure the SMTP mail server that will be used by Server Manager to send emails. Simply supply the mail server name, the TCP/IP port to use, and the sender email to use as the 'from' address. Some SMTP servers may require that the sender email be from the same domain the mail server is configured to use. Note: SMTP servers that require authentication to send emails are not currently supported.
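As a small illustration of the same-domain constraint mentioned above, here is a hypothetical helper (this is illustrative only, not Server Manager code; the function name and the host naming assumption are mine):

```ruby
# Hypothetical check (not Server Manager code): some SMTP servers only
# accept a 'from' address that belongs to the mail server's own domain.
def sender_domain_ok?(mail_server, from_address)
  # "mail.example.com" -> "example.com"; assumes host.domain.tld naming
  server_domain = mail_server.split(".", 2).last
  from_address.end_with?("@" + server_domain)
end

puts sender_domain_ok?("mail.example.com", "sm@example.com") # true
puts sender_domain_ok?("mail.example.com", "sm@other.org")   # false
```

If the check fails, some servers will silently drop or reject the message, which is exactly the situation the test email described below helps you catch.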

After making any changes you may supply an email address to test the settings. Server Manager will send an email to the supplied address to ensure the mail server configuration is correct.

Getting Started

The next step is to create a new monitor. You may have as many monitors as you wish. For example you may wish to create multiple monitors that listen for different events and each have different email recipients. Enter a name for the new monitor and select the 'Create' button. You will be redirected to a page used to configure the newly created monitor.

The first option in the general settings controls how often the monitor should poll for events. Some events are detected immediately; when they occur a notification is sent to the management console and then to each running monitor. If such an event is enabled for a particular monitor an email will be sent immediately. Other events are polled on a periodic basis; for example, checking the free disk space on an enterprise server occurs on this periodic poll. You can change the frequency with which the monitor will check for these events.

Checking for monitored events is a low impact activity. That said, if you have a large number of monitors it may be advisable to increase this interval from the default of 30 seconds.

Secondly you may configure whether this monitor should be automatically started when the management console application is started. Regardless of this selection an authorized user may start and stop monitors at any time using the previous page.

Instance Selection

The next step involves selecting the managed instances that this monitor should observe. Simply move the desired instances from the available options list to the selected options. Note that any changes made here on a running monitor will take effect immediately; the monitor need not be restarted.

Event Selection

Now that we have selected which managed instances to monitor, we need to select which events we wish to observe. You do so by simply selecting the events of interest in the next section of the page. Each event has a help box next to it describing what the event is and when it may occur.

Some events may have threshold values that allow you to define a limit that, once reached, will trigger an email notification. The example below shows the limits for simultaneous users.

Once a threshold limit is reached on an enabled event an email notification will be sent. Notifications will not be resent unless the threshold goes higher. Consider the simultaneous users event. If we set the threshold to 50 we would receive a notification once 50 users are on at the same time. If two users sign off and two new users sign on we are back at 50 simultaneous users. An email will not be sent; an email for 50 users has already been sent. If another user signs on, so we are at 51 sessions, an email will be sent; we have gone higher than the highest threshold previously reached.
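The high-water-mark rule above can be sketched in a few lines. This is an illustrative model only; the class and method names are invented, not Server Manager's actual implementation:

```ruby
# Illustrative sketch of the high-water-mark notification rule: an alert
# fires only when a reading is at or above the threshold AND higher than
# any value we have already alerted on. Not Server Manager's real code.
class ThresholdMonitor
  attr_reader :alerts

  def initialize(threshold)
    @threshold = threshold
    @high_water_mark = 0 # highest value already alerted on
    @alerts = []
  end

  def report(value)
    if value >= @threshold && value > @high_water_mark
      @high_water_mark = value
      @alerts << "alert: #{value} simultaneous users"
    end
  end
end

m = ThresholdMonitor.new(50)
[49, 50, 48, 50, 51].each { |v| m.report(v) }
# 50 fires once, the second 50 is suppressed, 51 fires again
puts m.alerts.length # 2
```

Returning to 50 after dipping below it sends nothing; only 51, a new high, triggers a second email.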

I won't go into what all the events are in this post; they are documented with online help within the application.

Notification Hours

For a particular monitor you may specify which hours in the day and which days of the week email notifications should be sent. This may be helpful for those who administer in shifts. Those interested in events on weekdays may be different from those interested in weekend events, for example.

When you create a new monitor the default will be to enable notifications for all hours of all days. You can change this by modifying the times for each day using 24 hour notation. To disable events for an entire day simply set the start time and end time to both be 00:00.
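The per-day window check, including the 00:00-to-00:00 disable convention, can be sketched like this (illustrative only; the function name is mine, not Server Manager's):

```ruby
# Illustrative sketch (not Server Manager's actual code) of the rule
# above: per-day windows in 24-hour "HH:MM" notation, where a window of
# 00:00 to 00:00 disables notifications for that entire day.
def within_window?(time, start_hhmm, end_hhmm)
  return false if start_hhmm == "00:00" && end_hhmm == "00:00"
  to_min = ->(s) { h, m = s.split(":").map(&:to_i); h * 60 + m }
  minutes = time.hour * 60 + time.min
  minutes >= to_min.call(start_hhmm) && minutes < to_min.call(end_hhmm)
end

t = Time.new(2008, 1, 8, 9, 30)          # 09:30 on the console machine
puts within_window?(t, "08:00", "17:00") # true: inside the shift
puts within_window?(t, "00:00", "00:00") # false: whole day disabled
```

Note the comparison uses the clock of the machine evaluating it, which matches the management console behaviour described below.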

The management console will use the clock and time zone information provided by the JVM on which it runs; that is, the times should be interpreted as the times known to the management console machine.

Email Recipients

Finally we specify the email recipients that should receive notifications. You may add as many recipients as you wish. Any changes made to this list will take effect immediately; you need not restart the monitor.

Emails are sent individually to each recipient defined for a monitor using the from address configured previously. The subject and content of the email will contain details of the event. The mail format is plain text and is suitable for email, pager, and SMS mailboxes.

If an email could not be sent for any reason the failure will be recorded in the monitor's history, as discussed below.

Monitor History

Server Manager maintains a history for each monitor. Each start of a monitor will be listed in the monitor history.

You may view the history of a particular monitor to see all the events that occurred and the emails sent by clicking the appropriate icon in the grid row.

Each event that occurred will be listed along with the same type of information that was contained in the email sent. A grid will contain a listing for each email recipient of the monitor, showing a successfully sent email, an email that wasn't sent because it was outside the configured notification hours, or an email that failed to send for some reason, such as an invalid recipient.

You may delete the monitor history if you no longer wish to view it. You may not delete the history for an actively running monitor.

Cloning Monitors

We have made it easy to clone an existing monitor. Simply select the corresponding icon in the 'Create Duplicate' column in the list of available monitors.

All the settings, selected managed instances, events, notification hours, and email recipients from the selected monitor will be copied to a new monitor definition. This makes setting up monitors for shifts much easier; the events and other setup need not be configured multiple times.


Hopefully you see that setting up and using monitors in Server Manager is much easier than with previous solutions, and that the added events make administering your E1 servers easier still. Dig in, play with monitors, and enjoy.

UPDATE: I think the issue with missing images has been resolved.

10 Scripts Every DBA Should Have

Ayyappa Yelburgi - Tue, 2008-01-08 06:06
I. Display the Current Archivelog Status:
ARCHIVE LOG LIST;

II. Creating a Control File Trace File:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

III. Tablespace Free Extents and Free Space:
column Tablespace_Name format A20
column Pct_Free format 999.99
select Tablespace_Name, Max_Blocks, Count_Blocks, Sum_Free_Blocks,
       100*Sum_Free_Blocks/Sum_Alloc_Blocks AS Pct_Free
from (select Tablespace_Name, SUM(Blocks)

10 Scripts Every DBA Should Have

Ayyu's Blog - Tue, 2008-01-08 06:06
Categories: DBA Blogs

Hail the Champions

Fadi Hasweh - Tue, 2008-01-08 01:55
I recently participated in a post from our famous blogger, the OCP Advisor; it is a nice blog that helps the Apps community with information about Apps certification.
I am trying to be active again.
You can check his post here.

Good luck with your certification

Fix for Rails 2.0 on Oracle with database session store

Raimonds Simanovskis - Mon, 2008-01-07 16:00

As I started to explore Rails 2.0 I tried to migrate to Rails 2.0 one application which is using Oracle as a database. Here are some initial tips for Rails 2.0 on Oracle that I found.

The Oracle adapter is no longer included in Rails 2.0, so you need to install it separately. It is also not yet available on gems.rubyforge.org, therefore you need to install it with:

sudo gem install activerecord-oracle-adapter --source http://gems.rubyonrails.org

The next issue that you will get is the error message “select_rows is an abstract method”. You can find more information about it in this ticket. As suggested there, I fixed this issue with the following Oracle adapter patch, which I call from the environment.rb file:

module ActiveRecord
  module ConnectionAdapters
    class OracleAdapter
      # Implement select_rows in terms of select, returning arrays of values
      def select_rows(sql, name = nil)
        result = select(sql, name)
        result.map { |v| v.values }
      end
    end
  end
end

And then I faced a very strange behaviour: my Rails application was not working with the database session store – no session data was saved. When I changed the session store to cookies then everything worked fine.

When I continued the investigation I found that for each new session a new row was created in the “sessions” table, but no session data was saved in the “data” column. The “data” column is a text field, which translates to the CLOB data type in Oracle, so it is not changed by the Oracle adapter's INSERT or UPDATE statements but by a special “write_lobs” after_save callback (this is done because Oracle limits literal constants in SQL statements to 4000 characters, which makes such an after_save hack necessary). And then I found that the class CGI::Session::ActiveRecordStore::Session (which is responsible for the database session store) does not have this write_lobs after_save filter. Why so?

As I understand it, in Rails 2.0 the ActiveRecord class definition sequence has changed: now the CGI::Session::ActiveRecordStore::Session class, which inherits from ActiveRecord::Base, is defined first, and only afterwards is the OracleAdapter loaded. The adapter then adds the write_lobs callback to ActiveRecord::Base, but at that point it does not add the callback to the already defined Session class. In Rails 1.2 the OracleAdapter was loaded together with ActiveRecord, before the Session class definition, so there was no such issue.
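The definition-order problem can be modelled generically. The sketch below is a toy (it does not use ActiveRecord and the names are invented); it only shows how a hook list that is copied at inheritance time misses callbacks registered on the parent afterwards:

```ruby
# Toy model (assumed for illustration, not ActiveRecord's real internals)
# of why a callback registered on the base class AFTER a subclass has
# been defined may never reach the subclass: the hook list is copied,
# not shared, when the subclass is created.
class Base
  @hooks = []
  class << self
    attr_accessor :hooks
  end

  def self.inherited(subclass)
    subclass.hooks = hooks.dup # snapshot, not a live reference
  end

  def self.after_save(name)
    hooks << name
  end
end

class Session < Base; end    # defined before the adapter loads

Base.after_save(:write_lobs) # the adapter registers its callback too late

puts Base.hooks.inspect      # [:write_lobs]
puts Session.hooks.inspect   # [] (Session missed the callback)
```

Under this model, re-declaring the callback directly on the subclass, as the patch below does, is the natural workaround.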

So currently I solved this issue with simple patch in environment.rb file:

class CGI::Session::ActiveRecordStore::Session
  after_save :write_lobs
end

Of course it would be nicer to force that OracleAdapter is loaded before CGI::Session::ActiveRecordStore::Session definition (when ActionPack is loaded). If somebody knows how to do that please write a comment :)

Categories: Development

Where has all my memory gone ?

Christian Bilien - Sun, 2008-01-06 14:09
A while ago, I came across an interesting case of memory starvation on an Oracle DB server running Solaris 8 that was for once not directly related to the SGA or the PGA. The problem showed up from a user perspective as temporary “hangs” that only seemed to happen at a specific time of the […]
Categories: DBA Blogs

Happy New Year 2008

Peter Khos - Fri, 2008-01-04 22:57
I hope that 2007 was good to you all, both in your professional and personal lives. 2007 was eventful, with lots of stuff happening. We are now almost just 2 years away from the 2010 Olympics (Feb 2010), and construction of the various venues and transportation systems is chugging along at full steam. In my own suburb, Richmond, we have the Canada Line (a light rail system linking the

Unconventional Oracle Installs, part One

Moans Nogood - Wed, 2008-01-02 18:24
You have to watch this:


We'll follow it up with a few other initiatives in order to help the big companies bring down the time spent to install Oracle from, say, 50 hours to one or two.

Perrow and Normal Accidents

Moans Nogood - Wed, 2008-01-02 18:21
While reading the book 'Deep Survival' (most kindly given to me at the UKOUG conference in Birmingham by Sir Graham Wood of Oracle after the fire in my house) I happened on a description on page 107 of a book called 'Normal Accidents' by a fellow named Perrow (get it? per row - a perfect name for database nerds).

Perrow's thesis is that in any tightly coupled system - in which unexpected interactions can happen - accidents WILL happen, and they're NORMAL.

Also, he states that technological steps taken to remedy this will just make matters worse.

Perrow and IT systems
I have freely translated Perrow's thoughts into the following:

IT systems are tightly coupled. A change - a patch, a new application, or an upgrade - to a layer in the stack can cause accidents to happen, because they generate unexpected interactions between the components of the system.

This is normal and expected behaviour, and any technological gear added to the technology stack in order to minimize this risk will make the system more complex and therefore more prone to new accidents.

For instance, I find that two of the most complexing things you can do to an IT system are clusters and SANs.

These impressive technologies are always added in order to make systems more available and guard against unexpected accidents.

Hence, they will, in and by themselves, guarantee other normal accidents to happen to the system.

Complexing and de-complexing IT systems
So you could say that it's a question of complexing or de-complexing IT systems.

I have found four situations that can complex IT systems (I'm being a bit ironic here):

1. To cover yourself (politics).
2. Exploration.
3. SMS decisions.
4. Architects.

1. Reason One: To cover yourself (politics)
You might want to complex systems in order to satisfy various parties that you depend on or who insist on buying certain things they've heard about at vendor gatherings:

"Yes, we've done everything humanly possible, including buying state-of-the-art technology from leading vendors and asking independent experts to verify our setup".

This is known as CYB (Cover Your Behind).

2. Reason Two: Exploration
Ah, the urge to explore unknown territories and boldly go where no man has ever gone before...

Because you can.

The heightened awareness thus enabled might be A Good Thing for your system and your customers.

It could also create situations that you and others find way too interesting.

Reason Two is often done by men, because we love to do stupid or dangerous things.

3. Reason Three: SMS decisions
A third reason for complexing IT systems could be pure ignorance in what is commonly referred to as Suit Meets Suit (SMS) decisions - where a person of power from the vendor side with no technical insight talks to a person of power from the customer side with no technical insight.

These SMS situations tend to cause considerable increases in the GNP of any country involved (just like road accidents and fires) because of all the - mostly unnecessary - work that follows.

The costs to humans, systems and users can be enormous. Economists tend to love it.

4. Reason Four: Architects
A fourth reason for complexing IT systems can be architects. Don't get me wrong: There are many good IT architects. The very best ones, though, tend not to call themselves architects.

One of my dear friends once stated that an architect is often a developer that can't be used as a developer any more. Very funny.

However, what I have witnessed myself is that the combination of getting further away from the technical reality and getting closer to the management levels (the C class, as it were) tend to make some architects less good at making architectural decisions after a while.

That's where the vendors get their chance of selling the latest and greatest and thus complexing new and upcoming systems.

Summary: The end of reasoning
Four reasons must be enough. There are probably more, but I cannot think of them right now.

Anyway, imagine what savings in costs and worries you can obtain by moving just a notch down that steep slope of complexity in your system.

You might be able to de-complex your system to a degree where it becomes
absolutely rock solid and enormously available.

That should be our goal in the years to come: To help our customers de-complex their systems, while of course trying everything we can to support those who chose to complex theirs.

Two new angles on tuning/optimising Oracle

Moans Nogood - Wed, 2008-01-02 18:00
Now and then some new angles and thoughts emerge in a field where a lot of people think there's not much new to be said.

Two examples:

1. James Morle told me a while ago, that he thinks all performance problems relate to skew, to latency, or to both. It's brilliant, I think. I hope James will one day write about it. He's a damn fine writer when he gets down to it.

2. This one from Dan Fink. Impressive piece, I think. Enjoy it.


When I emailed Dan and told him I admired his angle on this, he responded:

"I think it is a matter of keeping an open mind and knowing that you have friends and colleagues who are open to new ideas. Support is absolutely critical, even when you don't necessarily agree with what is being said. That keeps the flow of information open.

I shall never forget walking into a conference room. In big letters on one of the whiteboards were the words "THINK OUTSIDE THE BOX". For emphasis...someone had drawn a nice large box around them! "

I like that one :-)).

Using NFS partitions on AIX

Mark Vakoc - Wed, 2008-01-02 13:29
Unless you are running an E1 enterprise server on an NFS partition on the AIX platform, you can probably skip this posting.

Still here? Ok. This post outlines a potential problem with changing the tools release of an enterprise server when it is running on a NFS partition on AIX. It pertains to that combination only.

The AIX operating system has a feature that keeps shared libraries in memory even when the program that loads them terminates. Subsequent loads of that or any other program using the same library would be faster because the library is already in memory.

This behavior can cause some problems when the shared library is located on an NFS partition. Consider the case when Server Manager is performing a tools change for an enterprise server. The management agent will 1) stop the enterprise server, 2) delete the existing tools release, 3) extract and replace it with the new tools release.

So where's the problem? After stopping the enterprise server the E1 shared libraries may still be cached by AIX even though no active process is using them, and AIX maintains an open file handle to each cached library. On UNIX based platforms you are able to delete a file that is open by another process; although it immediately disappears from the file system directory listings, it is not actually removed until the last handle to the file is closed. This behavior is implemented within the filesystem.
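The unlink-while-open behaviour just described can be demonstrated in a few lines on a local UNIX filesystem (illustrative only; on an NFS mount the same delete instead leaves a .nfs placeholder file behind, as covered below):

```ruby
# Demonstration of UNIX unlink-while-open semantics on a local
# filesystem: deleting an open file removes its directory entry, but
# the data remains readable until the last handle is closed.
path = "unlink_demo.txt"
File.write(path, "still readable")

f = File.open(path)     # hold an open handle to the file
File.delete(path)       # the name vanishes from the directory...

puts File.exist?(path)  # false: no directory entry any more
puts f.read             # ...but the data survives until the handle closes
f.close
```

NFS cannot implement this server-side trick transparently, which is why it resorts to the rename described next.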

The remote nature of the NFS file system requires a special implementation. When an open file is deleted on an NFS partition it will appear as a .nfs#### file in the same directory, where #### is a randomly assigned number. This file cannot be removed directly; it will disappear as soon as the last process holding the originally deleted file closes its handle.

So what does this have to do with E1 and Server Manager? The second step of a tools release change involves deleting the existing tools release. The caching of the shared libraries, and thus the presence of the .nfs#### files in the $EVRHOME/system/lib directories, will prevent the removal of the system directory. This will cause the tools release change to fail, and the previous tools release will be restored. Even root cannot delete these .nfs files directly.

What can be done is to stop the enterprise server using Server Manager, then sign on as root and run the command 'slibclean'. This instructs AIX to unload/uncache any shared libraries that are no longer being used by an active process. You may then change the tools release using Server Manager without any issue.

Solaris Express on a Toshiba Satellite Pro A200

Hampus Linden - Tue, 2008-01-01 05:25
I bought myself one of those cheap laptops the other month. I needed a small machine for testing and since laptops are just as cheap (if not cheaper) as desktops these days I got a laptop.
The machine came with Vista but I wanted to triple boot Vista, Ubuntu and Solaris Express Community Edition.

  • Use diskmgmt.msc in Vista to shrink the partition the machine came with, Windows can do this natively so there is no need to use Partition Magic or similar tools. Create at least three new partitions. One for Solaris, one for Linux and one for Linux swap.
  • Secondly install Solaris: boot off the CD and go through the basic installer. The widescreen resolution worked out of the box (as usual). Do a full install; spending time "fixing" a smaller install is just annoying. Solaris will install its grub boot loader on both the MBR and the superblock (on the Solaris partition). It probably makes sense to leave a large slice unused so it can be used with ZFS after the installation is done.
  • Install Ubuntu. Nothing simpler than that.
  • Edit Ubuntu's grub menu config (/boot/grub/menu.lst) to include Solaris. Simply point it to the Solaris partition (hd0,2 for me). Add these lines at the end of the file.
    title Solaris
    root (hd0,2)

I had to install the gani NIC driver in Solaris to get the Ethernet card working and the Open Sound System sound card driver to get sound working.
The Atheros WiFi card is supposed to be supported but I couldn't get it to work, even after adding the pci device alias to the driver. I'll post an update if I get it to work.

Google - just another big, dumb, brutal organisation?

Moans Nogood - Mon, 2007-12-31 04:42
I found this article in The Economist interesting:


There's some truth there, I think. Google is buying stuff (like Blogger), is making pirate copies (sorry: clones) of other companies' software, and in general is trying to be as dominant and brutal as Microsoft, IBM, Oracle and the others. Yawn.

What the Hell happened to "Don't be evil"? Why did Google sell out to the Chinese horror regime?

They're just after the money and the happiness of shareholders. Boring stuff.


R12 Global Deployment functionality

RameshKumar Shanmugam - Sat, 2007-12-29 17:04
It is a common operational process in any industry to move people around: transferring employees on a temporary basis for a particular project or assignment, or transferring them permanently to a different country.

Though this functionality was available in 11i, HR professionals had to handle it manually: if the cross business group profile is enabled they can update the organization and location on the assignment form, or, alternatively, terminate the employee and hire them again in the new business group.

Now in R12 this has been made standard functionality in the Manager Self Service responsibility, under the 'Transfer' function:

Manager Self Service > Transfer

Select the employee you want to transfer and follow the wizard, which will take you through the complete process: new salary, new direct reports, new location, time card approver, work schedule, and so on. Finally you will reach a review summary page where you can review the changes and submit them for approval.

Note: if you are an Oracle Payroll customer you need to take the necessary actions for payroll taxation when changing the work location.
Try this out!!!
Categories: APPS Blogs

Oracle 11g NF Database Replay

Virag Sharma - Thu, 2007-12-27 22:10

Oracle 11g New Feature Database Replay

“Simulating production load is not possible”: you might have heard these words.

On one project, management had wanted for the last two years to migrate from a UNIX system to a Linux (RAC) system, but they were still testing because they were not sure whether the Linux boxes would be able to handle the load. They had put a lot of effort and time into load testing, functional testing and so on, but still could not gain confidence.

Using this 11g feature, they can gain that confidence and migrate to Linux knowing how their system will behave after the migration/upgrade.

As per the datasheet given on OTN:

Database Replay captures the workload of external clients at the database server level. Therefore, Database Replay can be used to assess the impact of any system change below the database tier, such as:

  • Database upgrades, patches, parameter, schema changes, etc.
  • Configuration changes such as conversion from a single instance to RAC etc.
  • Storage, network, interconnect changes
  • Operating system, hardware migrations, patches, upgrades, parameter changes

DB Replay does this by capturing a workload on the production system with negligible performance overhead (my observation is 2-5% more CPU usage) and replaying it on a test system with the exact timing, concurrency, and transaction characteristics of the original workload. This makes possible a complete assessment of the impact of the change, including undesired results such as new contention points or performance regressions. Extensive analysis and reporting (AWR, ADDM and DB Replay reports) is provided to help identify any potential problems, such as new errors encountered and performance divergences. The ability to accurately capture the production workload results in significant cost and time savings, since it completely eliminates the need to develop simulation workloads or scripts. As a result, realistic testing of even complex applications, which previously took several months using load simulation tools/scripts, can now be accomplished in at most a few days with Database Replay, and with minimal effort. Businesses can thus incur much lower costs and yet have a high degree of confidence in the overall success of the system change, significantly reducing production deployment risk.

Steps for Database Replay

  1. Workload Capture

Database calls are tracked and stored in binary files, called capture files, on the file system. These files contain all relevant information about each call needed for replay, such as SQL text, bind values, wall clock time, SCN, etc.

1) Back up the production database #

2) Add/remove filters (optional)
By default, all user sessions are recorded during workload capture. You can use workload filters to specify which user sessions to include in or exclude from the workload. Inclusion filters enable you to specify user sessions that will be captured in the workload. This is useful if you want to capture only a subset of the database workload.
For example, suppose we don't want to capture the load for the SCOTT user:

BEGIN
  DBMS_WORKLOAD_CAPTURE.add_filter (fname      => 'user_scott',
                                    fattribute => 'USER',
                                    fvalue     => 'SCOTT');
END;
/

Here the filter name is "user_scott" (a user-defined name).

3) Create a directory object; make sure enough space is available:

CREATE OR REPLACE DIRECTORY db_replay_dir
  AS '/u04/oraout/test/db-replay-capture';

Remember, in the case of Oracle RAC the directory must be on a shared disk, otherwise you will get the following error:

SQL> l
  1  BEGIN
  2  DBMS_WORKLOAD_CAPTURE.start_capture (name => 'capture_testing', dir => 'DB_REPLAY_DIR');
  3  END;

SQL> /
ERROR at line 1:
ORA-15505: cannot start workload capture because instance 2 encountered errors
while accessing directory "/u04/oraout/test/db-replay-capture"
ORA-06512: at "SYS.DBMS_WORKLOAD_CAPTURE", line 799
ORA-06512: at line 2

4) Capture the workload:

BEGIN
  DBMS_WORKLOAD_CAPTURE.start_capture (name => 'capture_testing', dir => 'DB_REPLAY_DIR', duration => NULL);
END;
/

duration => NULL means it will capture load until we stop it with the manual command shown below. Duration is an optional input specifying the capture duration in seconds; the default is NULL.

5) Finish the capture:

exec DBMS_WORKLOAD_CAPTURE.finish_capture;

# Take a backup of production before the load capture, so we can restore the database on the test environment and run the replay at the same SCN level, minimizing data divergence

Note, as per the Oracle datasheet:

The workload that has been captured on a given Oracle Database release and higher can also be replayed on an Oracle Database 11g release. So I think it simply means that a newer patch set will support the capture process. Does that mean the current patch set does not support load capture?

2. Workload Processing

Once the workload has been captured, the information in the capture files has to be processed, preferably on the test system because this is a very resource-intensive job. The processing transforms the captured data and creates all the metadata needed for replaying the workload.

exec DBMS_WORKLOAD_REPLAY.process_capture('DB_REPLAY_DIR');

  3. Workload Replay

1) Restore the database backup taken in step one to the test system and start the database

2) Initialize

BEGIN
  DBMS_WORKLOAD_REPLAY.initialize_replay (replay_name => 'TEST_REPLAY',
                                          replay_dir  => 'DB_REPLAY_DIR');
END;
/

3) Prepare

exec DBMS_WORKLOAD_REPLAY.prepare_replay(synchronization => TRUE)

4) Start clients

$ wrc mode=calibrate replaydir=/u03/oradata/test/db-replay-capture

Workload Replay Client: Release - Production on Wed Dec 26 00:31:41 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Report for Workload in: /u03/oradata/test/db-replay-capture

Consider using at least 1 clients divided among 1 CPU(s).

Workload Characteristics:
- max concurrency: 1 sessions
- total number of sessions: 7

- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE

$ wrc system/pass mode=replay replaydir=/u03/oradata/test/db-replay-capture

Workload Replay Client: Release - Production on Wed Dec 26 00:31:52 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.

Wait for the replay to start (00:31:52)

5) Start Replay

exec DBMS_WORKLOAD_REPLAY.start_replay;

$ wrc system/pass mode=replay replaydir=/u03/oradata/test/db-replay-capture

Workload Replay Client: Release - Production on Wed Dec 26 00:31:52 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.

Wait for the replay to start (00:31:52)
Replay started (00:33:32)
Replay finished (00:42:52)

  4. Analysis and Reporting

Generate AWR, ADDM and DB Replay reports and compare them with data gathered on production for the same time period during which the load was captured on the production database. For the Database Replay report run the following commands:

SQL> SELECT id, name FROM dba_workload_replays;

        ID NAME
---------- --------------------
         1 TEST_REPLAY

DECLARE
  v_report CLOB;
BEGIN
  v_report := DBMS_WORKLOAD_REPLAY.report(
    replay_id => 1,
    format    => 'TEXT');
  DBMS_OUTPUT.put_line(v_report);
END;
/

For sample report [ Click Here]

Chapter 22 Database Replay
Categories: DBA Blogs

Mobile phones, fats and backups

Moans Nogood - Thu, 2007-12-27 04:56
As with such things, life has been rather dull since the fire - relatively speaking. Fortunately, I had a wonderful thing happening to my mobile phone that brightened several of my days before Christmas.

It all started about half a year ago, when the menu button on my Nokia E60 stopped working. That's rather inconvenient, but I could still call people up and receive calls, so no big problem.

Then, one Saturday in December, the old team from the National Nurses' Dormitory in Copenhagen had our annual, traditional, Danish Christmas lunch in a place called Told & Snaps in Copenhagen. When the first dish was brought in - pickled herrings, of course - my dear friend Ole and I decided to see if soft butter on the keys could bring the menu button back to life. So we, uhm, buttered the keyboard - and it worked! The menu button worked again!

Flushed with success we decided to try and repair the problems I had with the microphone and loudspeaker in the E60. So we used fat (from a duck, I think) on the, eh, bottom of the phone. Didn't seem to have the desired effect. In fact, in the days that followed I had to shout louder and louder in order for people to hear me. It was getting silly - I had to be in the privacy of my car in order not to disturb the general population with my shouting.

But I could still send and receive SMS messages, so things were OK.

Then my wife Anette and I had dinner at restaurant Avanti and that's when Anette hit the oil lamp on the table so that the E60 became soaked in oil. The display looked like a lava lamp and the keys became very loooong and soooft to use.

During the night, while I was asleep, the E60 sadly expired.

That's when I discovered that my 800 contacts were residing inside the E60, not on the mini-SD-card. So I got a new phone from a friendly phone broker (an N-73, which seems to be a fine phone, by the way) but every call I received was wonderfully new and exciting since I didn't recognise any of the numbers.

Of course I didn't have any backup. I'm a man.

Then a miracle happened. Anette and some good Miracle folks managed to wake up the phone for a short while and unload my contacts. That was a good day.

So people have told me: This will teach you to remember to take a backup!

But look at it this way: I started carrying a mobile phone 25/8/370 back in 1991 and this was the first time I was in danger of losing everything. And I have never taken a backup.

Chances are it won't happen again anytime soon either, unless somebody steals it.

So I think I'll continue with my usual mobile phone backup strategy :-))).

Merry Christmas.


Oracle 11g Database Replay

Virag Sharma - Thu, 2007-12-27 01:32

If your database is currently running on 10g R2 and you want to upgrade it to 11g, you can take advantage of Database Replay. As per the datasheet on OTN, a workload captured on 10g R2 can be replayed on 11g.

So it simply means that before upgrading from 10g R2 to 11g you can take advantage of the Database Replay feature: capture the workload on the production 10g R2 database, copy the workload to a test system, upgrade the test system to 11g, replay the workload captured on production, and check how your system performs. This makes life easier, doesn't it?

Check the following links

Categories: DBA Blogs

Add / Delete a row from a SQL based Tabular Form (Static ID)

Duncan Mein - Tue, 2007-12-25 15:40
The current application I am working on involves a re-write of a 12-screen wizard that was written 18 months ago. Several of the screens make use of manually built tabular forms (SQL report regions) and collections to hold the values entered. Some of the screens in the wizard have multiple tabular forms on them as well.

Currently all tabular forms have 15 lines which cannot be added to or deleted from. In the new version, we removed this limit and allow the user to add as many rows as he / she needs. Furthermore, specific rows can now be removed from the tabular form. Since all entered data is written into collections, we wanted to avoid "line by line" processing, i.e. submitting the form each time, updating the collection and branching back to the page. By utilising some simple JavaScript and the "Static ID" attribute of the Reports Region, new in APEX 3.0, all requirements could be met.

The Static ID attribute of the reports region allows us to add our own (unique) ID to a report region. From there we can simply navigate down the DOM, clone a row in the form using cloneNode and append it to the table using appendChild.

The JavaScript will work even if you have multiple report regions on the same page providing each report region has a unique Static ID value.
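The cloneNode/appendChild approach can be sketched as below. This is a minimal, hypothetical version of the add/remove functions, not the exact code behind the "view" links; the function and element names are assumptions, and the datatable_ prefix matches the template change described in the steps that follow:

```javascript
// Minimal sketch (hypothetical names). Assumes the modified report template
// renders the tabular form as <table id="datatable_STATICID">.
function addRow(staticId) {
  var table = document.getElementById('datatable_' + staticId);
  var body = table.getElementsByTagName('tbody')[0];
  var rows = body.getElementsByTagName('tr');
  // cloneNode(true) makes a deep copy, including the row's input elements
  var newRow = rows[rows.length - 1].cloneNode(true);
  body.appendChild(newRow);
}

function removeRow(staticId, rowIndex) {
  var table = document.getElementById('datatable_' + staticId);
  var body = table.getElementsByTagName('tbody')[0];
  var rows = body.getElementsByTagName('tr');
  // keep at least one row so there is always something left to clone
  if (rows.length > 1) {
    body.removeChild(rows[rowIndex]);
  }
}
```

Because the functions take the Static ID as a parameter, the same script serves multiple report regions on one page, provided each has a unique Static ID.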

  • Create a new region on your page using the following details:
  1. Region Type: Report
  2. Report Implementation: SQL Report
  3. Title: Add Row to Report
  4. Region Template: Reports Region
  5. SQL Query: view
  6. Report Template: Standard
  • Add the following JavaScript to your Page Header: view
  • Copy the region template: Reports Region and name it Reports Region (Static ID)
  • Edit the region template: Reports Region (Static ID) and replace the Substitution String #REGION_ID# with #REGION_STATIC_ID# in the Definition section

  • Edit the region: Add Rows to Report and insert the value: REPORT1 into the Static ID textbox found in the Identification section. Note that the values entered into the Static ID textbox must be unique to the page if using multiple report regions where you are specifying a Static ID. Then change the template of the region to use the newly created Reports Region (Static ID) template

  • Copy the report template: Standard and name it Standard (Static ID)

  • Edit the report template: Standard (Static ID) and replace the text: id="#REGION_ID#" with: id="datatable_#REGION_ID#" in the Before Rows section.

  • Edit the report attributes and change the report template to use the newly created: Standard (Static ID)

  • Add a button to the page using the following details:
  1. Select a Region for the Button: Add Rows to Report
  2. Position: Create a button in a region position
  3. Button Name: ADD_ROW
  4. Label: Add Row
  5. Button Type: HTML Button
  6. Action: Redirect to URL without submitting page
  7. Target is a: URL
  8. URL Target: javascript:addRow('REPORT1');

Please note that REPORT1 refers to the Static ID of the region you want to add your row to.

  • Test your Add and Delete a row functionality

An example with all the source code can be seen here


Subscribe to Oracle FAQ aggregator