David Kurtz

This blog contains things about PeopleSoft that DBAs might find interesting.
Or then again they might not!
Non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

PeopleTools Table Reference Generator

Sun, 2019-06-09 15:46
Like many other PeopleSoft professionals, I spend a lot of time looking at the PeopleTools tables because they contain metadata about the PeopleSoft application. Much of the application is stored in PeopleTools tables. Some provide information about the data model. Many of my utility scripts reference the PeopleTools tables, and some of them update them too. Therefore, it is very helpful to be able to understand what is in these tables. In PeopleSoft for the Oracle DBA, I discussed some tables that are of regular interest. I included the tables that correspond to the database catalogue and those that are used during the PeopleSoft login procedure. The tables that are maintained by the process scheduler are valuable because they contain information about who ran which process, when, and how long it ran.
I am not the only person to have started to document the PeopleTools tables on their website or blog; most people have picked a few tables that are of particular interest. I wanted to produce a complete reference.  However, with over 4000 PeopleTools tables and views, it is simply not viable to document even a significant number of them manually.  So, I wrote some SQL and PL/SQL to dynamically generate a page for each PeopleTools table and view and put the generated pages on my website.  If you use Google to search for a PeopleTools table name you will probably find the reference page.
I have now revisited that code and made a number of changes and improvements.
  • I have used a later version of PeopleTools to generate the reference on my website.  The list of PeopleTools tables is no longer defined in PeopleTools by object security, so I have used an independent and somewhat broader definition: Table or View records that are either owned by PPT or whose SQLTABLENAME is the same as the record name.
  • There is much less reliance on static pages.  There are now only 3 such pages; everything else is generated.  Instead, additional data is loaded from static scripts into the PLAN_TABLE, which should always be present in an Oracle database and so doesn't have to be created.  It is a global temporary table, so there is no problem with debris being left behind or interference with other processes.  That data is then combined with data in the PeopleTools tables during the generation process.
  • The utility has been rewritten as a PL/SQL package that should be created in the PeopleSoft Owner ID schema (usually SYSADM).
  • The generated HTML is simply tidier, and more consistent.
The source is available on GitHub at https://github.com/davidkurtz/PTRef, so you can download and generate your own reference on your own current version of PeopleTools. An example of the generated output can be found on my website.
The idea of the original PeopleTools tables reference was that people could contribute additional descriptions and information that were not in the PeopleTools tables.  That can still happen, and indeed should be easier, by making changes to the scripts that load the additional data and uploading new versions to GitHub.


PS360 enhancement: Added report of DDL models

Fri, 2019-06-07 09:08
I have written several blogs and presentations recently about how and how not to collect statistics in PeopleSoft.
  • Managing Cost-Based Optimizer Statistics for PeopleSoft recommends
    • If you are going to continue to use DBMS_STATS in the DDL model then
      • Do not specify ESTIMATE_PERCENT because it disables the hash-based number-of-distinct-values calculation, forcing it to go back to the COUNT(DISTINCT ...) method that requires a sort, and may not produce accurate values because it only samples data.
      • Do not specify METHOD_OPT='FOR ALL INDEXED COLUMNS SIZE 1' because this will not collect histograms on indexed columns, and will not update column statistics on unindexed columns.
      • Do specify FORCE=>TRUE so that you can lock and delete statistics on temporary records.
    • However, the alternative is to use the GFCPSSTATS11 package to collect these statistics.  This package is controlled by a metadata table, so you can define statistics collection behaviour for specific records.
  • How Not to Collect Optimizer Statistics in an Application Engine Program
    • This blog explains why you should not explicitly code DBMS_STATS calls into Application Engine programs.
This has prompted me to add a new report to the PS360 utility that simply reports the various DDL models for Oracle.  Thus during a health check, I can see how statistics are collected during batch processes.

PS360 can be downloaded from https://github.com/davidkurtz/ps360.




PeopleSoft Administrator Podcast: #184 – nVision Performance

Wed, 2019-05-29 06:42
I recorded a second podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about nVision.
(10 May 2019) #184 – nVision Performance You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

PeopleSoft Administrator Podcast: #183 – Effective Performance Monitoring

Wed, 2019-05-08 14:09
I recently recorded a podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about instrumentation, monitoring the performance of PeopleSoft systems, and Performance Monitor.  There is also just a little about cursor sharing.
(3 May 2019) #183 – Effective Performance Monitoring You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

Effective PeopleSoft Performance Monitoring

Fri, 2019-03-08 12:00
This advice note describes how to configure PeopleSoft systems on Oracle so that performance metrics are collected that are useful for performance monitoring and subsequent performance tuning.
Contents
  • Oracle RDBMS Instrumentation
    • On-Line Component/Page Information
    • Application Engine Step Information
    • Cobol/nVision Instrumentation
    • nVision Layout Instrumentation
    • 2nd Database Connection Instrumentation
  • PeopleTools Performance Metrics
    • Cobol and Application Engine Batch Timings
    • PeopleSoft Performance Monitor
Summary of Recommendations
  • Set EnableAEMonitoring=1 in all Application Server and process scheduler domains in order to enable PeopleSoft instrumentation on-line, in the integration broker, and in Application Engine programs.
  • Implement the PSFTAPI package and trigger described below to set module and action at the start of all processes.
  • Implement Fine-Grained Audit policy and handler to instrument nVision processes if required.
  • Reduce AWR snapshot frequency from the default of 1 hour to every 15 minutes.
  • Enable Application Engine batch timings to at least database and preferably also file by setting AETrace=1152 in all Application Server and Process Scheduler domain configuration files.
  • Enable the Cobol statement timings report to be written to the log file by setting TraceSQL=128 in all Process Scheduler domain configuration files.
  • Performance Monitor is complex to set up and the delivered analytics are limited.  Nonetheless, the information that can be obtained from the data collected can be worth the effort of configuration to address on-line configuration and performance issues.
Oracle RDBMS Instrumentation
Oracle provides the dbms_application_info package to set certain attributes on the current database session.  These attributes are visible in some of the dynamic performance views, are picked up by Active Session History (ASH), and can also be seen in AWR reports, Enterprise Manager screens, SQL trace files, and other performance utilities such as EDB360.  The package was first documented in Oracle 7.3.3.
"Application developers can use the DBMS_APPLICATION_INFO package to record the name of the executing module or transaction in the database for use later when tracking the performance of various modules… System administrators can also use this information to track resource usage by module. When an application registers with the database, its name and actions are recorded in the V$SESSION and V$SQLAREA views. Your applications should set the name of the module and name of the action automatically each time a user enters that module. The module name could be the name of a … code segment in an … application. The action name should usually be the name or description of the current transaction within a module." Oracle 7 Tuning, release 7.3.3 ©Oracle 1997, Chapter 23 Registering Applications See also One of my Favourite Database Things: DBMS_APPLICATION_INFO
If module and action are consistently set to meaningful values as the application executes, it is then possible to determine from where SQL statements originated and how different parts of the application are performing.
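For example, a minimal sketch of instrumenting a batch procedure with dbms_application_info (the module and action names here are illustrative, not values PeopleSoft itself uses):
BEGIN
  -- register the module and initial action at the start of a code section
  dbms_application_info.set_module(module_name => 'MY_BATCH_PROGRAM'
                                  ,action_name => 'STEP1');
  -- ... do the work of step 1 ...
  dbms_application_info.set_action(action_name => 'STEP2');
  -- ... do the work of step 2 ...
END;
/
From that point on, every ASH sample and trace line for the session can be attributed to the named module and action.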
ASH is separately licenced as part the Diagnostics Pack, that is only available on Enterprise Edition of the database.  However, most PeopleSoft customers running on Oracle do so on Enterprise Edition and are licenced for the diagnostics pack.
PeopleSoft has set client_info since PeopleTools 7.53 so that the PeopleSoft operator ID can be associated with the database session, mainly to allow auditing to be done with database triggers.  However, this attribute is not persisted to the ASH data.  Fortunately, client_id is also set to the operator ID, and this is collected by ASH.
PeopleTools has set module and action since PeopleTools 8.50, though the exact values used have changed sometimes with the PeopleTools version.
A new application server/process scheduler domain parameter, EnableAEMonitoring, was introduced in PeopleTools 8.54 to control this behaviour; it is not enabled by default.  If monitoring is not enabled, module defaults to the program name and action remains blank.
Recommendation: Set EnableAEMonitoring=1 in all application server and process scheduler domains in order to enable PeopleSoft instrumentation on-line, in the integration broker, and in Application Engine programs.
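In the domain configuration files the parameter looks like this (I believe it sits in the PSTOOLS section, but check your own psappsrv.cfg/psprcs.cfg):
[PSTOOLS]
;Enable PeopleSoft module/action instrumentation
EnableAEMonitoring=1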
On-Line Component/Page Information
In the online application, module and action are set to the component and page name respectively.  In a search dialogue, Action is set to 'xyzzy'.
In the Integration Broker module and action are set to service name and queue name.
Application Engine Step Information
In Application Engine, module is set to a string that includes the name of the Application Engine main program that was called and the SESSIONIDNUM for the current process instance recorded on the process scheduler request table PSPRCSRQST.  For example: PSAE.PSPMCSOSUM.1448
Later on, it may be necessary to use regular expressions in SQL to process this string before profiling the ASH data.
Action is set to string concatenated from the Application Engine program, section, step name and step type.  For example: PSPMCSOSUM.GETCNT.CNT.P
The program name may be different from that shown in module if one AE program calls another.
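For example, a sketch of the kind of ASH profile this enables, grouping database time by Application Engine program and step (assumes the Diagnostics Pack licence; v$active_session_history samples each active session once per second):
SELECT regexp_substr(h.module,'[^.]+',1,2) ae_program
,      h.action ae_step
,      COUNT(*) ash_secs
FROM   v$active_session_history h
WHERE  h.module LIKE 'PSAE.%'
GROUP BY regexp_substr(h.module,'[^.]+',1,2), h.action
ORDER BY ash_secs DESC;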
Note: Application Engine still doesn't reset ACTION on DO SELECT steps.
Cobol/nVision Instrumentation
Cobol, nVision and SQR do not set module and action.  Instead, they can be set at the start of every program initiated by Process Scheduler using a trigger on PSPRCSRQST.  The first thing a process does when it is initiated is to set the run status on its scheduler request record to 7, indicating that it is processing.  This is visible in the Process Monitor component.  A trigger on this transition can set module and action for the session to the program and process instance number on that row of data.
This technique was used prior to PeopleSoft adding instrumentation to PeopleTools.  It can still be applied to all processes, including Application Engine, because any PeopleSoft instrumentation will simply overwrite the value set by the trigger.
Recommendation: Implement the PSFTAPI package and trigger described above in order to set module and action at the start of all processes.
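A minimal sketch of such a trigger follows (the trigger name is mine, and the full PSFTAPI package adds error handling and other facilities; this just shows the principle):
CREATE OR REPLACE TRIGGER sysadm.psft_set_module_action
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = 7 AND old.runstatus != 7)
BEGIN
  -- identify the session by process name and process instance number
  dbms_application_info.set_module(module_name => :new.prcsname
                                  ,action_name => 'PI='||:new.prcsinstance);
EXCEPTION
  WHEN OTHERS THEN NULL; -- instrumentation must never break the process scheduler
END;
/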
The same technique was also used prior to the introduction of ASH to enable Oracle SQL Trace if a particular run control was specified, and is still occasionally useful.
nVision Layout Instrumentation
One of the challenges of tuning and monitoring nVision is to be able to identify each report being run.  nVision reports always run under the same process names: NVSRUN for a single report, RPTBOOK for a report book of many reports, or DRILLDWN for an nVision drill-down query.  Knowing the process instance is useful because then we can look up the operator and run control ID.
However, it would also be useful to know the report ID being run.  When each individual nVision report starts it queries the runtime parameters from the PS_NVS_REPORT PeopleTools table.  There is no update, so it is not possible to capture this with a DML trigger.  Instead, it is possible to define a fine-grained audit policy on the query and set module within a PL/SQL handler package that is invoked by the fine-grained audit.
Recommendation: Implement Fine-Grained Audit policy and handler to instrument nVision processes if required.
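The following is a sketch of that approach; the handler name and the exact shape of the predicate from which the report ID is extracted are my assumptions:
CREATE OR REPLACE PROCEDURE sysadm.nvs_fga_handler
(p_schema VARCHAR2, p_object VARCHAR2, p_policy VARCHAR2) AS
  l_sql VARCHAR2(4000);
BEGIN
  l_sql := sys_context('userenv','current_sql'); -- only populated inside an FGA handler
  -- pick the literal report ID out of the audited query on PS_NVS_REPORT
  dbms_application_info.set_action(
    regexp_substr(l_sql,'REPORT_ID=''([^'']+)''',1,1,NULL,1));
EXCEPTION
  WHEN OTHERS THEN NULL; -- never let instrumentation break the report
END;
/
BEGIN
  dbms_fga.add_policy(object_schema   => 'SYSADM'
                     ,object_name     => 'PS_NVS_REPORT'
                     ,policy_name     => 'NVS_REPORT'
                     ,handler_schema  => 'SYSADM'
                     ,handler_module  => 'NVS_FGA_HANDLER'
                     ,statement_types => 'SELECT');
END;
/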
2nd Database Connection Information
PeopleSoft programs use a second database connection to increment sequence numbers in order to minimise row-level locking on such tables.  It is like an AUTONOMOUS_TRANSACTION in PL/SQL.  There is no PeopleSoft instrumentation on this session.  It is possible to use an AFTER LOGON trigger to set client_info, module and action.
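For example, a sketch of such a logon trigger; the default values it sets survive on the second connection precisely because nothing on that session ever overwrites them, whereas on the primary connection they are immediately replaced by the PeopleSoft instrumentation:
CREATE OR REPLACE TRIGGER sysadm.psft_logon_defaults
AFTER LOGON ON sysadm.schema
BEGIN
  dbms_application_info.set_module(module_name => 'PSFT'
                                  ,action_name => 'logon');
END;
/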
Oracle Automatic Workload Repository (AWR) Snapshots
PeopleSoft generates a lot of non-shareable SQL.
  • Dynamically generated SQL, often in PeopleCode, concatenates strings of SQL with bind variables, thus the bind variables become literals in the final SQL statement.  Statements with different literal values are considered to be different statements with different SQL_IDs.
  • Similarly, dynamic Cobol statements result in literal values in the SQL statement.
  • %BIND() variables in Application Engine will also become literal values in the SQL statement unless the ReUseStatement attribute is set on the AE step.  However, this attribute cannot be set on statements where dynamic fragments of code are introduced with %BIND(…,NOQUOTES).
  • Application Engine programs that use temporary records can use different non-shared instances of the record in different executions, and this also results in different statements with different SQL_IDs.
  • See also Performance Benefits of ReUse Statement Flag in Application Engine
Consequently, the library cache is not used particularly efficiently by PeopleSoft, and dynamic SQL statements are often quickly aged out of it.  AWR snapshots can only capture the SQL that is in the library cache at the time of the snapshot.  If the SQL statement, or at least a statement with the same force matching signature or plan hash value, cannot be found in the AWR, it cannot be identified or analysed.  Therefore, it is advantageous to increase the snapshot frequency on PeopleSoft systems.
Recommendation: Reduce AWR snapshot frequency from the default of 1 hour to every 15 minutes.  This change results in only a modest increase in processing overhead and AWR space, but it is worth it for the additional information that is captured.
This advice also applies to Statspack that may be used if you are not licenced for the Diagnostics Pack.
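The change itself is a single procedure call, where the interval is specified in minutes (Statspack users would instead adjust the interval of their snapshot job):
EXECUTE dbms_workload_repository.modify_snapshot_settings(interval => 15);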
PeopleTools Performance Metrics
Batch Timings
PeopleSoft Application Engine and Cobol programs can emit batch timings reports on successful completion.
Application Engine
Application Engine batch timings are controlled by the AETrace flag in the Process Scheduler domain configuration file and, for on-line AE programs, in the Application Server domain configuration files.
  • AETrace=128: the batch timings report is written to the AE trace file
  • AETrace=1024: batch timings are written to PS_BAT_TIMINGS% tables in the database
The overhead of batch timings is negligible while the program is running because it is accumulated in memory and only written to file or database when the process completes successfully.
Recommendation: Enable Application Engine batch timings to at least database and preferably also file by setting AETrace=1152 in all Application Server and Process Scheduler domain configuration files.
The trace setting in the process scheduler configuration can be overridden by setting process-specific command line parameter overrides in the process definition.  This is often done to set other trace settings, and it is also common to see these unintentionally left in place longer than necessary.  If trace is set in this way, it should always also set the batch timings flags.
See PeopleBooks -> Development Tools -> Application Engine -> Enabling Application Engine Tracing
Cobol
PeopleSoft Cobol programs can only write batch timings reports to file and not to the database.  This is controlled by a different parameter.
  • TraceSQL = 128: Enable Cobol statement timings report 
Recommendation: Enable the Cobol statement timings report to be written to the log file by setting TraceSQL=128 in all Process Scheduler domain configuration files.
This trace setting can also be overridden by setting process-specific command line parameter overrides in the process definition.  If trace is set in this way, it should always also set the batch timings flags.
PeopleSoft Performance Monitor
This provides information about the performance of the PIA, including response times for the online transactions.  Metrics are stored in a separate monitoring PeopleSoft system to minimize the effect of measurement intrusion.  It optionally samples the state of each web server, application server and process scheduler, collecting operating system and Tuxedo metrics.  It also has a PIA session trace capability.
The sampled data includes the number of busy application server processes and length of inbound Tuxedo service queues.  This data can be used to validate the sizing of the application servers.
Recommendation: Performance Monitor is complex to set up and the delivered analytics are limited.  Nonetheless, the information that can be obtained from the data collected can be worth the effort of configuration to address on-line configuration and performance issues.

How Not to Collect Optimizer Statistics in an Application Engine Program

Thu, 2018-09-06 05:24
I was told about a PeopleSoft customer experiencing an Oracle error when collecting statistics during an Application Engine program.
ORA-06533: Subscript beyond count 
ORA-06512: at "SYS.DBMS_STATS", line 36873
It is possibly a manifestation of a database bug. The workaround was not to use AUTO_SAMPLE_SIZE, so instead, this customer initially coded an explicit call to DBMS_STATS.GATHER_TABLE_STATS with a specific sample size.
This blog is not about the bug, but how to manage the workaround.
DO NOT TRY THIS AT HOME!

I think that there are a number of problems with this approach:
  1. Using a fixed sample size rather than AUTO_SAMPLE_SIZE should only be considered as a temporary workaround. The new hash-based number-of-distinct-values (NDV) algorithm in Oracle 12c only works with AUTO_SAMPLE_SIZE, and it produces more accurate statistics and runs faster because it saves a large sort operation. Coding a fixed sample size into an Application Engine requires a managed change to be made, tested and released into production, and then, when the underlying problem is resolved, the customisation needs to be removed by the same managed process.
  2. DBMS_STATS.GATHER_TABLE_STATS implies a commit. That can lead to problems that PeopleSoft avoids by only collecting statistics via the %UpdateStats macro and controlling when that macro does and does not execute.
Committing and Restart Checkpointing in Application Engine
If a restartable Application Engine program fails, it rolls back to the last commit point and can then be restarted from that point. Committing is managed by Application Engine at section and step levels, where the program state record is updated accordingly. If an error occurs in a step after the implicit commit in DBMS_STATS, it can result in the data in the application tables being inconsistent with the point from which the state record indicates the program can be restarted. The program may not restart, or it could conceivably execute but produce erroneous results.
Committing inside a do while loop, including in any other Application Engine program called from inside the loop, is suppressed at Application Engine section/step level, and therefore the execution of the %UpdateStats macro is also suppressed. Otherwise, you could get rogue ORA-01555 Snapshot Too Old errors. Suppression of %UpdateStats is reported in the Application Engine step trace.
-- 19:07:37.199 .(AEMINITEST.MAIN.LOOP) (Do Select)
%Select(recname) SELECT recname FROM psrecdefn WHERE recname = 'JOB'
/
-- Buffers:
-- 1) JOB
-- 19:07:37.204 Iteration 1 of .(AEMINITEST.MAIN.LOOP) (Do Select) loop

-- 19:07:37.208 .(AEMINITEST.MAIN.LOOP) (SQL)
RECSTATS PS_JOB LOW
/
-- 19:07:37.211 UpdateStats ignored - COMMIT required

-- 19:07:37.212 .(AEMINITEST.MAIN.LOOP) (Do Fetch)
Even a previously uncommitted SQL step can lead to %UpdateStats being suppressed.
-- 19:07:35.205 .(AEMINITEST.MAIN.Step01) (SQL)
UPDATE PS_AETEMPTBLMGR SET AE_DEDICATED = 1 WHERE 0 = 1
/
-- Row(s) affected: 0

-- 19:07:35.213 .(AEMINITEST.MAIN.Step02) (PeopleCode)

-- 19:07:35.220 .(AEMINITEST.MAIN.US1) (SQL)
RECSTATS PS_JOB LOW
/
-- 19:07:35.227 UpdateStats ignored - COMMIT required
If you code DBMS_STATS.GATHER_TABLE_STATS explicitly, Application Engine will not recognise the step as having committed. In the following example, you can see that the %UpdateStats on the last step has been suppressed because Application Engine does not recognise that the update in the first step has been committed by the call to DBMS_STATS.
-- 19:12:06.690 .(AEMINITEST.MAIN.Step01) (SQL)
UPDATE PS_AETEMPTBLMGR SET AE_DEDICATED = 1 WHERE 0 = 1
/
-- Row(s) affected: 0
-- 19:12:06.696 .(AEMINITEST.MAIN.Step02) (PeopleCode)
-- 19:12:06.700 .(AEMINITEST.MAIN.GTS) (SQL)
%Execute(/) BEGIN
dbms_stats.gather_table_stats('SYSADM','PS_JOB',estimate_percent=>1); END; /
/
-- Row(s) affected: 1

-- 19:12:09.871 .(AEMINITEST.MAIN.US1) (SQL)
RECSTATS PS_JOB LOW
/
-- 19:12:09.877 UpdateStats ignored - COMMIT required
/
Perhaps the safest form of this workaround would be to have the step with DBMS_STATS and the immediately preceding step explicitly commit, as in the following example.  I have also made the program restartable.  Now restart data is checkpointed, and the %UpdateStats macro executes at step US1.
Restart Data CheckPointed
/
COMMIT
/

-- 19:20:24.792 .(AEMINITEST.MAIN.GTS) (SQL)
%Execute(/) BEGIN
dbms_stats.gather_table_stats('SYSADM','PS_JOB',estimate_percent=>1); END; /
/

-- Row(s) affected: 1
/
/
Restart Data CheckPointed
/
COMMIT
/

-- 19:20:29.153 .(AEMINITEST.MAIN.US1) (SQL)
RECSTATS PS_JOB LOW
/
-- Row(s) affected: 1
/
However, you have to consider the state of the application data after an error, and whether you wish to restart or cancel the Application Engine, because you can no longer roll back.
Doing the Right Thing
I recommend that:
  • You should only ever collect stats in Application Engine with the %UpdateStats macro, which in turn executes the command in the DDL model.
  • From Oracle 11g both PeopleSoft statistics gathering DDL models should be the same and should ultimately call DBMS_STATS without any parameters other than the table name. The default value of ESTIMATE_PERCENT is AUTO_SAMPLE_SIZE.
There are two DDL models in PeopleSoft because %UpdateStats can be invoked with a second parameter to collect the statistics HIGH or LOW. This dates back to Oracle's ANALYZE command, which could either compute or estimate statistics (and other database platforms had similar options). Collecting optimizer statistics with ANALYZE has been deprecated for many years, but the command still has other valid uses.  It was superseded by DBMS_STATS in Oracle 8i (released in 1998).
Automatic sample size was introduced in Oracle 9i.  In Oracle 9i and 10g, it was usual to use automatic sample size in the high statistics gathering model and a small fixed sample size in the low model for use on very large tables.  The LOW parameter was specified on %Updatestats in Application Engine programs as necessary.
This approach became redundant from Oracle 11g with the introduction of table preferences. If you need to collect statistics with a specific rather than the automatic sample size, or to specify any other parameters, then a table preference should be defined. It applies whenever statistics are gathered on that table: during the statistics maintenance window and in any places where statistics are explicitly collected, rather than just a single call.
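For example, a sketch of defining such a preference for a hypothetical large table:
-- a 1% sample will now be used whenever statistics are gathered on this table
EXECUTE dbms_stats.set_table_prefs(ownname=>'SYSADM', tabname=>'PS_MY_BIG_TABLE', pname=>'ESTIMATE_PERCENT', pvalue=>'1');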
From Oracle 12c, as the new NDV algorithm only works with AUTO_SAMPLE_SIZE, you should always use the default unless you have an overarching reason to the contrary, and then you should use a table preference. This approach does not require any application code change because the preference is an attribute of a table in the database.
I recommend using the GFCPSSTATS11 package, calling it from the DDL model (see previous blog Managing Cost-Based Optimizer Statistics for PeopleSoft). The package also includes a mechanism to specify table preferences in metadata, so that they are automatically instantiated when a table is created or altered by Application Designer, and are applied to every instance of a temporary record.
gfcpsstats11.ps_stats(p_ownname=>[DBNAME], p_tabname=>[TBNAME]);
Alternatively, just call DBMS_STATS with no additional parameters (other than FORCE, in case you lock statistics on temporary tables) but then you must manage table preferences manually.
DBMS_STATS.GATHER_TABLE_STATS(ownname=>[DBNAME], tabname=>[TBNAME], force=>TRUE);
Further Reading

Managing Cost-Based Optimizer Statistics for PeopleSoft

Fri, 2018-06-29 06:05
I gave this presentation at the UKOUG PeopleSoft Roadshow 2018.

PeopleSoft presents some special challenges when it comes to collecting and maintaining the object statistics used by the cost-based optimizer.

I have previously written and blogged on this subject.  This presentation focuses exclusively on the Oracle database and draws together the various concepts into a single consistent picture.  It makes clear recommendations for Oracle 12c that will help you work with the cost-based optimizer, rather than continually fight against it.

It looks at collecting statistics for permanent and temporary working storage tables and considers some other factors that can affect optimizer statistics.

This presentation also discusses PSCBO_STATS, which is going to be shipped with PeopleTools, and compares and contrasts it with GFCPSSTATS11.

Application Engine in Process Scheduler: PSAESRV Server Process -v- Standalone PSAE executable

Sat, 2018-04-14 06:05
Whether to use the Application Engine server process (PSAESRV) in the process scheduler Tuxedo domain or the standalone PSAE executable is a frequently discussed point amongst PeopleSoft administrators.  Over the years, I have written various things on the subject.  I am going to draw them together in this blog, and restate Oracle’s now clear advice about when to use which option.

In PeopleTools 8.4, the Process Scheduler became a fully fledged Tuxedo domain.  The PSAESRV process was also introduced at this time.  It is a persistent process that handles both Application Engine and Optimization Engine requests.  Each server process creates persistent database connections.  A number of these server processes are started with the domain.   The PSAESRV process does not spawn like other Tuxedo server processes.  Instead, you must configure the number of server processes to match the maximum number of concurrent Application Engine process requests and concurrent Optimization Engine requests that the process scheduler can run.  The server was introduced to handle very short-lived Application Engine programs thus avoiding the overhead of instantiating a new process and new database sessions for each process request.  CRM typically uses Application Engine in this fashion, but generally, you do not see this in other PeopleSoft products.

Oracle has not always been clear about what they mean by a short-lived process.  It has been suggested that if Application Engine processes typically take less than 10-30 seconds, or if you run more than 1000 Application Engine process requests per hour (note 651970.1), you should use PSAESRV.
PeopleBooks advises you should use PSAESRV because it delivers improved system performance.  However, the PeopleTools Performance Guidelines Red Paper (Doc ID 747389.1) contradicts this somewhat.  Ultimately, if you have any doubts, you should test it each way and determine whether one way gives a clear advantage over the other.

Oracle Support Note "What will be the impact of disabling PSAESRV on the Process Scheduler (Doc ID 651970.1)" explains that if PSAESRV is disabled in the Tuxedo domain configuration, the Process Scheduler goes back to the legacy behaviour and spawns a stand-alone PSAE process for each Application Engine request.  “The Application Engine will take a bit longer to start, [the] time delay may be range from millisecond to seconds” depending on hardware and configuration.

The stand-alone process has several advantages.
  • At the end of the Application Engine program, it disconnects from the database and terminates, thus releasing the resources held by the process and the database session.  The persistent Application Engine server process, by contrast, has been reported to accumulate allocated memory over time.
  • If you are using Oracle database Global Temporary Tables in an Application Engine program, then you should not use PSAESRV because the tables are always created with ON COMMIT PRESERVE ROWS, so their contents are only released when the database session terminates.
  • If you set any session parameters within an Application Engine program run via PSAESRV, or enable database trace, then these settings will carry forward from one Application Engine program to the next unless you reset the parameter at the end of the program, or at the start of the next.  This is not a concern with standalone PSAE processes.
However, there is at least one case where you must use the server process:
  • If you are using Oracle Active Data Guard and wish to mark some Application Engine programs as read-only, then they must be run via the PSAESRV process.
Conclusion
  • The PeopleTools Performance Guidelines Red Paper (Doc ID 747389.1) sums it up very nicely: “PSAE is as good as PSAESRV for most practical purposes.  If you have an application engine job that runs longer than 10 seconds, PSAE is equivalent to PSAESRV.  PSAE has the added advantage of being recycled at the end of each application engine job, cleaning up any outstanding SQL cursors to the database that may have been left behind.  Because PSAE recycles after each use, PSAE does not have any possible memory leakage problem that may occupy the precious system memory.  In short, PSAE is a cleaner workhorse.”
  • I think it is reasonable to use PSAESRV in CRM.  For all other products, I recommend that PSAESRV should be disabled from all Process Schedulers.
    • If you do have some Application Engine processes that are both short-lived (i.e. less than 10 seconds) and run frequently, then consider creating other process schedulers with PSAESRV processes that are dedicated to running only those processes.  You can then move these processes to a new Process Scheduler category that only runs on these new Process Schedulers.
  • PSAESRV is configured by default, so if you don’t want to use it, and mostly you won’t, then you have to remember to disable it.

Resetting High Water Marks on On-line Temporary Table Instances

Tue, 2018-03-20 11:45
PeopleSoft has always used regular database tables for temporary working storage in batch processes.   Up to PeopleTools 7.x working storage tables were shared by all instances of a program.  That led to consistent read contention when multiple processes concurrently used the same table, and much higher high water marks that increased durations of full scans.
From PeopleTools 8, many copies of each temporary working storage table are created.  Application Engines that run on the Process Scheduler are allocated exclusive use of a particular copy of the table.  This avoids the inter-process contention.  They start by truncating each allocated table, which resets the high-water mark.
Some delivered processing uses batch programs that are run apparently synchronously from the PIA.  On-line edit and post in Financials General Ledger is a common example.  Up to PeopleTools 7, the application server would synchronously spawn a batch process and wait for it to complete.  From PeopleTools 8 the process is submitted to the process scheduler, and the PIA polls the scheduler tables waiting for the process to complete.  However, Application Engine can also be run within the component processor.  In Financials General Ledger, this can be chosen by setting an installation configuration option.  The truly on-line method can perform better because you are no longer waiting for the process scheduler to pick up the process request.  A separate Application Engine process is not spawned; instead, the Application Engine program is executed by the PSAPPSRV application server process.  One of the limitations is that the Application Engine program cannot commit.  Committing after steps or sections is suppressed, and the %TruncateTable macro generates a delete statement instead.  Therefore, on-line temporary table instances are never truncated by any process, and their high-water marks can be raised by processes that handle larger volumes of data.  This can impact subsequent processes that handle smaller data volumes but still have to full-scan working storage tables up to their high water marks.
Truncating On-line Temporary Table Instances
The answer is to implement a periodic process that truncates working storage tables, but only does so when the table is not currently being used by a process.  Every on-line Application Engine program is allocated a temporary table instance number, and it locks the corresponding row on the table PS_AEONLINEINST.  If it is allocated instance 1, it locks the row where CURTEMPINSTANCE is 1 and uses instance 1 of each temporary record that it needs.
Therefore the proposed truncate process must also lock the row on PS_AEONLINEINST that corresponds to each table that is to be truncated.  The truncate must be done in an autonomous transaction so that the implicit commit does not release that lock.  The lock can be released after the truncate completes.  Thus, the truncate process waits for any online process to complete before truncating a table with the same instance number, and no process can start while the truncate process is holding the lock.  However, each truncate will be very quick, and so each lock will only be held briefly, and it will have only a minimal effect on any online process that may be running at the time.  
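The core of the technique can be sketched as follows (the temporary record MY_TMP is hypothetical; my actual package iterates over all temporary records and checks extents first):
DECLARE
  PROCEDURE trunc_tab(p_table VARCHAR2) IS
    PRAGMA AUTONOMOUS_TRANSACTION; -- so the TRUNCATE's implicit commit cannot release the parent lock
  BEGIN
    EXECUTE IMMEDIATE 'TRUNCATE TABLE '||p_table;
  END;
  l_inst INTEGER;
BEGIN
  -- wait for, and then block, any on-line AE program allocated instance 1
  SELECT curtempinstance INTO l_inst
  FROM   ps_aeonlineinst
  WHERE  curtempinstance = 1
  FOR UPDATE;
  trunc_tab('PS_MY_TMP'||l_inst);
  COMMIT; -- release the lock as soon as possible
END;
/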

I have written a PL/SQL packaged procedure to perform this process for all temporary records.  It is available on GitHub as a part of my collection of miscellaneous PeopleSoft scripts.
Package Usage
Usually, the package will be run without any parameters. The default behaviour will be to truncate tables with more than a single extent.  Information on what the package does is emitted to the server output.
Set serveroutput on 
EXECUTE xx_onlineinsthwmreset.main;
The package can be run in test mode, in which case it lists the commands without executing them.  Thus you can see what it will do without actually doing it.
EXECUTE xx_onlineinsthwmreset.main(p_testmode=>TRUE);
The package can optionally deallocate any physical storage. Storage will be reallocated next time the table is used.
EXECUTE xx_onlineinsthwmreset.main(p_drop_storage=>TRUE, p_min_extents=>0);
The package can be run for certain tables that match a particular pattern.
EXECUTE xx_onlineinsthwmreset.main(p_recname_like=>'JP%');
I recommend that the package is run daily.  It can be run safely while users are doing on-line edit/post processing, but it would be sensible to choose a quiet time.

Fewer Platform Flags on Indexes from PeopleTools 8.55

Wed, 2018-03-07 12:28
It has always been possible in Application Designer to specify upon which database platforms each index should be built.  This is really a feature that is used by PeopleSoft development, rather than by customers, to deliver indexes that are more appropriate for a particular platform due to differences in the optimizer.
Over the years, the number of supported PeopleSoft platforms has declined.  In PeopleTools 8.45, it went down from 9 to 6 and in PeopleTools 8.55 it has gone down to just 4, but there are still 9 columns on PSINDEXDEFN that correspond to the original 9 supported platforms.
I explained in a previous blog that you can have all or none of the platform flags set to the same value, while the platform radio button on the index properties dialogue box is still set to 'some' because one or more of the platform flag columns for the unsupported platforms is set differently.  Of course, this is a purely cosmetic problem, but one that can cause confusion in Application Designer.
[Screenshots: index properties dialogue in PeopleTools 8.45 and PeopleTools 8.55]
I fix this by updating PeopleTools tables as follows. The first query reports on the indexes where the supported platform flags all have the same value and one of the unsupported platform flags is different.
column RECNAME format a15
column INDEXID format a3 heading 'Idx|ID'
column DDLCOUNT format 999 heading 'DDL|Cnt'
column CUSTKEYORDER format 9999 heading 'Cust|Key|Order'
column KEYCOUNT format 999 heading 'Key|Cnt'
column PLATFORM_SBS format 999 heading 'SBS'
column PLATFORM_DB2 format 999 heading 'DB2'
column PLATFORM_ORA format 999 heading 'ORA'
column PLATFORM_INF format 999 heading 'INF'
column PLATFORM_DBX format 999 heading 'DBx'
column PLATFORM_ALB format 999 heading 'ALB'
column PLATFORM_SYB format 999 heading 'SYB'
column PLATFORM_MSS format 999 heading 'MSS'
column PLATFORM_DB4 format 999 heading 'DB4'
column ACTIVEFLAG Format 999 heading 'Active'
column CLUSTERFLAG format 999 heading 'Clst|Flg'
column UNIQUEFLAG format 999 heading 'Uniq|Flg'
column INDEXTYPE format 999 heading 'Idx|Type'
column IDXCOMMENTS format a60
spool platformfix855
SELECT *
FROM PSINDEXDEFN
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_MSS
AND (PLATFORM_ORA!=PLATFORM_SBS
  OR PLATFORM_ORA!=PLATFORM_ALB
  OR PLATFORM_ORA!=PLATFORM_SYB
  OR PLATFORM_ORA!=PLATFORM_INF
  OR PLATFORM_ORA!=PLATFORM_DB4)
;
These are the indexes that have inconsistent platform flags.  In this case, the index on PS_PSPMTRANSHIST is to be disabled on DB2/AS400.  You can't update the flag via Application Designer, but you could set the radio button to ALL.
                Idx  Idx  Uniq  Clst         Cust Key  Key  DDL
RECNAME         ID   Type  Flg   Flg  Active     Order  Cnt  Cnt  SBS  DB2  ORA  INF  DBx  ALB  SYB  MSS  DB4
--------------- ---  ----  ----  ----  ------  --------  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
PSPMTRANSHIST   _       1     1     1       1         0    4    1    1    1    1    1    1    1    1    1    0

IDXCOMMENTS
------------------------------------------------------------
It could be a tedious process to do this for a lot of indexes, so the following SQL commands correct all indexes.  They set the platform flags for the unsupported platforms to the value of the supported platforms if those are all the same.  The version number on the record definition is updated so that Application Designer refreshes the object.
UPDATE PSVERSION
SET VERSION = VERSION + 1
WHERE OBJECTTYPENAME IN('SYS','RDM');

UPDATE PSLOCK
SET VERSION = VERSION + 1
WHERE OBJECTTYPENAME IN('SYS','RDM');

UPDATE PSRECDEFN
SET VERSION = (
SELECT VERSION
FROM PSVERSION
WHERE OBJECTTYPENAME = 'RDM')
WHERE RECNAME IN (
SELECT RECNAME
FROM PSINDEXDEFN
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND ( PLATFORM_ORA!=PLATFORM_SBS
OR PLATFORM_ORA!=PLATFORM_ALB
OR PLATFORM_ORA!=PLATFORM_DB4)
);

UPDATE psindexdefn
SET PLATFORM_DB4=PLATFORM_ORA
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND PLATFORM_SYB=PLATFORM_MSS
AND PLATFORM_ORA!=PLATFORM_DB4;

UPDATE psindexdefn
SET PLATFORM_ALB=PLATFORM_ORA
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND PLATFORM_SYB=PLATFORM_MSS
AND PLATFORM_ORA!=PLATFORM_ALB;

UPDATE psindexdefn
SET PLATFORM_SBS=PLATFORM_ORA
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND PLATFORM_SYB=PLATFORM_MSS
AND PLATFORM_ORA!=PLATFORM_SBS;
The platform flags now say 'ALL'. Not a tremendous change, but at least I can immediately see that these indexes do build on all platforms without having to open each one.

The new platformfix855.sql script is available in my collection of miscellaneous scripts on GitHub.

Setting Oracle Session Parameters for Specific Process Scheduler Processes

Fri, 2018-03-02 06:01
This note describes a mechanism for setting initialisation parameters for specific processes run on the process scheduler.  I will demonstrate it in relation to nVision, but it has general application in PeopleSoft.
A table is used to hold metadata that describes which setting is applied to which processes.  A trigger on the process scheduler request table PSPRCSRQST reads that data and alters the session settings.  This approach is easier to adjust and understand than static PL/SQL code in a trigger.
NB: The PSAESRV server process is used to run application engine programs in the process scheduler by default.  It should only be used for very short-lived programs and should usually be deconfigured. It maintains a persistent connection to the database. Therefore, session parameters set for one application engine program will carry forward into subsequent programs run on the same server process with the potential for unintended and unpredictable results. Other processes all establish their own database session that ends when the process terminates. This trigger mechanism can be still used to set parameters for some processes that run on PSAESRV, but the default parameter value should then be set for all other application engine processes. 
Metadata
The table that holds the metadata should be defined in Application Designer.
CREATE TABLE PS_PRCS_SESS_PARM
(PRCSTYPE   VARCHAR2(30)  NOT NULL
,PRCSNAME   VARCHAR2(12)  NOT NULL
,OPRID      VARCHAR2(30)  NOT NULL
,RUNCNTLID  VARCHAR2(30)  NOT NULL
,PARAM_NAME VARCHAR2(50)  NOT NULL
,PARMVALUE  VARCHAR2(128) NOT NULL)
TABLESPACE PTTBL
STORAGE (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
PCTFREE 10 PCTUSED 80
/
CREATE UNIQUE INDEX PS_PRCS_SESS_PARM
ON PS_PRCS_SESS_PARM (PRCSTYPE, PRCSNAME, OPRID, RUNCNTLID, PARAM_NAME)
TABLESPACE PSINDEX
STORAGE (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
PCTFREE 10 PARALLEL NOLOGGING
/
ALTER INDEX PS_PRCS_SESS_PARM NOPARALLEL LOGGING
/
I will demonstrate this mechanism for nVision. The metadata is simply inserted into the table by script.
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_policy','auto');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_limit','4');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_level','150');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_min_time_threshold','1');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', '_optimizer_skip_scan_enabled','FALSE');
Here we have 5 session parameters that will apply to all nVision reportbooks, but which I don't want to apply to the rest of the system.
PRCSTYPE             PRCSNAME     OPRID      RUNCNTLID       PARAM_NAME                     PARMVALUE
-------------------- ------------ ---------- --------------- ------------------------------ --------------------
nVision-ReportBook   RPTBOOK                                 parallel_degree_policy         auto
nVision-ReportBook   RPTBOOK                                 parallel_degree_limit          4
nVision-ReportBook   RPTBOOK                                 parallel_degree_level          150
nVision-ReportBook   RPTBOOK                                 parallel_min_time_threshold    1
nVision-ReportBook   RPTBOOK                                 _optimizer_skip_scan_enabled   FALSE
The objective is to use limited parallelism only in the nVision reporting, without decorating the underlying ledger tables with a degree of parallelism.
  • parallel_degree_policy=auto enables the new 12c automatic degree of parallelism calculation and statement queuing.
  • parallel_min_time_threshold is set to 1 second. The default is 10. Statements whose runtime is estimated to be greater than or equal to this value will be considered for automatic degree of parallelism. 
  • parallel_degree_limit=4 restricts the automatic degree of parallelism to 4 to prevent any one statement using excessive parallelism.
  • parallel_degree_level=150 scales up the automatic degree of parallelism calculation but within the parallel_degree_limit. See Kerry Osborne's blog 12c – parallel_degree_level (control for auto DOP)
  • _optimizer_skip_scan_enabled=FALSE disables index skip scan to promote the use of smart full scan and Bloom filtering. It is recommended for engineered systems in Advice for the PeopleSoft DBA. Skip scan can prevent the optimizer choosing a smart full scan, so it makes sense to limit the setting to just nVision.
I can also specify a parameter that will only be set when the reportbook is run by a particular operator with a particular run control.
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue) VALUES ('nVision-ReportBook','RPTBOOK','NVOPR','NVSRBK_2', 'parallel_degree_level','200');
The specific setting for one particular operator ID and run control takes precedence over the generic setting for all reportbooks. In this case, I will scale the degree of parallelism further for a particular reportbook.
PRCSTYPE             PRCSNAME     OPRID      RUNCNTLID       PARAM_NAME                     PARMVALUE
-------------------- ------------ ---------- --------------- ------------------------------ --------------------
nVision-ReportBook   RPTBOOK                                 parallel_degree_level          150
nVision-ReportBook   RPTBOOK      NVOPR      NVSRBK_2        parallel_degree_level          200
Trigger
When a process starts, the first thing it does is update its own status to 7 to indicate that it is processing. This is another example of a trigger created on that transition that injects behaviour at the start of a PeopleSoft process. This trigger reads the metadata and applies the settings with an ALTER SESSION command. The process type, name, operator and run control attributes must exactly match the process request, but a blank space is treated as a wildcard. Underscore parameters must be delimited in double quotes.
CREATE OR REPLACE TRIGGER sysadm.set_prcs_sess_parm
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = 7
  AND old.runstatus != 7
  AND new.prcstype != 'PSJob')
DECLARE
  l_cmd   VARCHAR2(100 CHAR);
  l_delim VARCHAR2(1 CHAR);
BEGIN
  FOR i IN (
    WITH x AS (
      SELECT p.*
      ,      row_number() OVER (PARTITION BY param_name
                                ORDER BY NULLIF(prcstype, ' ') NULLS LAST
                                ,        NULLIF(prcsname, ' ') NULLS LAST
                                ,        NULLIF(oprid, ' ') NULLS LAST
                                ,        NULLIF(runcntlid, ' ') NULLS LAST) priority
      FROM   PS_PRCS_SESS_PARM p
      WHERE (p.prcstype  = :new.prcstype  OR p.prcstype  = ' ')
      AND   (p.prcsname  = :new.prcsname  OR p.prcsname  = ' ')
      AND   (p.oprid     = :new.oprid     OR p.oprid     = ' ')
      AND   (p.runcntlid = :new.runcntlid OR p.runcntlid = ' ')
    )
    SELECT * FROM x WHERE priority = 1
  ) LOOP
    IF SUBSTR(i.param_name,1,1) = '_' THEN
      l_delim := '"'; -- underscore parameters must be double-quoted
    ELSE
      l_delim := '';
    END IF;

    IF NULLIF(i.parmvalue,' ') IS NOT NULL THEN
      dbms_output.put_line('Rule:'||NVL(NULLIF(i.prcstype ,' '),'*')
                             ||'.'||NVL(NULLIF(i.prcsname ,' '),'*')
                             ||':'||NVL(NULLIF(i.oprid    ,' '),'*')
                             ||'.'||NVL(NULLIF(i.runcntlid,' '),'*')
                             ||':'||i.param_name||'='||i.parmvalue);
      l_cmd := 'ALTER SESSION SET '||l_delim||i.param_name||l_delim||'='||i.parmvalue;
      dbms_output.put_line('PI='||:new.prcsinstance||':'||:new.prcstype||'.'||:new.prcsname
                             ||':'||:new.oprid||'.'||:new.runcntlid||':'||l_cmd);
      EXECUTE IMMEDIATE l_cmd;
    END IF;
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(sqlerrm);
END;
/
The trigger script set_prcs_sess_parm.sql can be download from my miscellaneous PeopleSoft scripts repository on Github.
Testing
The trigger can be tested by updating the process scheduler request table in SQL*Plus, but be careful to roll back the update afterwards rather than committing. The trigger writes debug information to the server output that can be seen in SQL*Plus, showing the rule being used and the ALTER SESSION command generated. However, this output is not captured in any PeopleSoft log when the trigger is fired by a PeopleSoft process.
set serveroutput on 
update psprcsrqst set runstatus = 7 where runstatus != 7
and prcsname = 'RPTBOOK' and runcntlid = 'NVSRBK_2' and oprid = 'NVOPR' and rownum = 1;
rollback;

Rule:nVision-ReportBook.RPTBOOK:*.*:_optimizer_skip_scan_enabled=FALSE
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET "_optimizer_skip_scan_enabled"=FALSE
Rule:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:parallel_degree_level=200
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_level=200
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_limit=4
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_limit=4
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_policy=auto
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_policy=auto
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_min_time_threshold=1
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_min_time_threshold=1
In the above example, the specific rule for NVSRBK_2 was applied setting PARALLEL_DEGREE_LEVEL to 200, whereas in the next example the generic setting of 150 is applied to NVSRBK_1.
update psprcsrqst set runstatus = 7 where runstatus != 7
and prcsname = 'RPTBOOK' and runcntlid = 'NVSRBK_1' and oprid = 'NVOPR' and rownum = 1;
rollback;

Rule:nVision-ReportBook.RPTBOOK:*.*:_optimizer_skip_scan_enabled=FALSE
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET "_optimizer_skip_scan_enabled"=FALSE
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_level=150
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_level=150
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_limit=4
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_limit=4
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_policy=auto
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_policy=auto
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_min_time_threshold=1
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_min_time_threshold=1

Invalid Views in PeopleSoft and the Oracle Database

Wed, 2018-01-31 14:33
I was listening to the section on Invalid Views in PSADMIN Podcast #117 (@19:00). Essentially, when you drop and recreate a view that is referenced by a second view, the status of the second view in the database goes invalid. This is not a huge problem because as soon as you query the second view it is compiled. However, you would like to know whether any change to a view prevents any dependent views from compiling, although you would expect to have teased these errors out before migration to production.
The PeopleSoft solution to this is to include all the dependent views in the Application Designer package. However, as pointed out in the podcast, you are now releasing code, possibly unintentionally releasing code changes, and certainly updating last change dates on record definitions, when really you just need to compile the database objects.  PeopleSoft does this because it is a platform-generic solution, but really it is using Application Designer to solve a database management issue.
A similar problem also occurs in the Oracle database with dependent PL/SQL procedures and packages, where you sometimes get referential loops. Oracle provides a procedure, DBMS_UTILITY.COMPILE_SCHEMA, that recompiles all invalid objects in a schema and reports any errors to the ALL_ERRORS view.  I think this is a much safer option.

Here is a very simple (non-PeopleSoft) example:
drop table t purge;
drop view a;
drop view b;

create table t (a number);
insert into t values (1);
create view a as select a from t;
create view b as select a from a;

column object_name format a12
select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            VALID
VIEW                    A            VALID
Dropping and recreating view A renders view B invalid.
drop view a;
create view a as select a from t;

select object_type, object_name, status
from user_objects
where object_name IN('T','A','B')
/

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            INVALID
VIEW                    A            VALID

select * from b;
select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/
Just querying B has made it valid again.
OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            VALID
VIEW                    A            VALID
Let's make B invalid again by rebuilding A, but this time I will change the name of the column in view A from A to T so that view B cannot compile without an error.
drop view a;
create view a (t) as select a from t;

select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/
EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');
select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            INVALID
VIEW                    A            VALID
And I can query the errors from user_errors.
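For example (the column list is an assumption, inferred from the output below):
select name, type, sequence, line, position, text, attribute, message_number
from   user_errors;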
NAME  TYPE          SEQUENCE  LINE  POSITION  TEXT                                ATTRIBUTE  MESSAGE_NUMBER
----  ------------  --------  ----  --------  ----------------------------------  ---------  --------------
B     VIEW                 1     0         0  ORA-00904: "A": invalid identifier  ERROR                   0
N.B.: a view rebuilt with CREATE OR REPLACE VIEW is not left invalid unless there is an error. Unfortunately, Application Designer always drops and recreates views.

nVision Performance Tuning 12: Hinting nVision with SQL Profiles

Wed, 2017-12-20 10:00
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.  It is a PeopleSoft specific version of a posting on my Oracle blog.

As I explained earlier in this series, it is not possible to add hints to nVision.  The dynamic nature of the nVision SQL means that it is not possible to use SQL Patches.  nVision SQL statements contain literal values and never use bind variables.  When dynamic selectors are used, the SELECTOR_NUM will be different for every execution. A SQL_ID found in one report will not be seen again in another report.  Even static selector numbers will change after the tree is updated or when a new tree is created.
It is possible to use SQL Profiles to introduce hints because they can optionally match the force matching signature of a SQL statement.  SQL statements that differ only in the literal values they contain will have different SQL_IDs but will have the same force matching signature.  Although you will still have a lot of force matching signatures, you should find that you have far fewer of them than SQL_IDs.  Picking out the signatures that account for the most elapsed execution time and creating profiles for them is manageable.
Note: SQL Profiles require the Tuning Pack to be licenced.
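For illustration, this is a sketch of how a force-matching profile can be created with the (undocumented but widely used) DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure; the SQL_ID, profile name and hints here are illustrative:
DECLARE
  l_sql_text CLOB;
BEGIN
  SELECT sql_fulltext INTO l_sql_text
  FROM   v$sqlarea
  WHERE  sql_id = '&sql_id'; -- one captured example of the statement
  dbms_sqltune.import_sql_profile(
    sql_text    => l_sql_text,
    profile     => sqlprof_attr(q'[OPT_PARAM('parallel_degree_policy','AUTO')]'
                               ,q'[OPT_PARAM('parallel_degree_limit',4)]'),
    name        => 'NVISION_EXAMPLE',
    force_match => TRUE); -- match statements that differ only in literal values
END;
/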
As far as is possible, good nVision performance should be achieved by setting appropriate tree performance options at tree level.  These are global settings.  You may find that a particular setting on a particular tree is not optimal for all reports.  You may then choose to override the tree-level setting in specific layouts.  You may also find that you still need hints to control execution plans.
In particular, parallel query can be an effective tactic in nVision performance tuning.  However, you should not simply put a degree of parallelism on PS_LEDGER or PS_LEDGER_BUDG because that will invoke parallelism in many other processes.  I have found that even putting a degree of parallelism on a summary ledger table can easily result in too many concurrent parallel queries.   On OLTP systems, such as PeopleSoft, I recommend that parallelism should be used sparingly and in a highly controlled and targeted fashion.
Example
Let's take the following nVision query as an example.
SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_XX_SUM_CONSOL_VW A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_USMGT'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149'
OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999'
OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999'
OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999'
OR A.DEPTID='B9150' OR A.DEPTID=' ')
AND L2.SELECTOR_NUM=10228
AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
AND L3.SELECTOR_NUM=10231
AND A.ACCOUNT=L3.RANGE_FROM_10
AND A.CHARTFIELD1='0012345'
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
/
We can tell from the equality join conditions that the two selectors still joined in the query are dynamic selectors.
A third selector, on DEPTID, has been suppressed with the 'use literal values' performance option.  The number of DEPTID predicates in the statement will depend on the tree and the node selected for the report.  Note that if these change, the statement will no longer match the same profile by force matching signature.  SQL profiles might suddenly cease to work due to a tree or selection criteria change.
This is the plan I get initially and without a profile. It doesn't perform well.
Plan hash value: 808840077
-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 10408 (100)| | | |
| 1 | HASH GROUP BY | | 517 | 50666 | 10408 (1)| 00:00:01 | | |
| 2 | HASH JOIN | | 517 | 50666 | 10407 (1)| 00:00:01 | | |
| 3 | PARTITION RANGE SINGLE | | 731 | 13158 | 3 (0)| 00:00:01 | 10228 | 10228 |
| 4 | INDEX FAST FULL SCAN | PSAPSTREESELECT05 | 731 | 13158 | 3 (0)| 00:00:01 | 10228 | 10228 |
| 5 | HASH JOIN | | 518 | 41440 | 10404 (1)| 00:00:01 | | |
| 6 | PARTITION RANGE SINGLE | | 249 | 5727 | 2 (0)| 00:00:01 | 10231 | 10231 |
| 7 | INDEX FAST FULL SCAN | PSAPSTREESELECT10 | 249 | 5727 | 2 (0)| 00:00:01 | 10231 | 10231 |
| 8 | PARTITION RANGE ITERATOR | | 7785 | 433K| 10402 (1)| 00:00:01 | 28 | 40 |
| 9 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_X_LEDGER_ACCTS | 7785 | 433K| 10402 (1)| 00:00:01 | 28 | 40 |
| 10 | SORT CLUSTER BY ROWID BATCHED | | 5373 | | 5177 (1)| 00:00:01 | | |
| 11 | INDEX SKIP SCAN | PS_X_LEDGER_ACCTS | 5373 | | 5177 (1)| 00:00:01 | 28 | 40 |
-----------------------------------------------------------------------------------------------------------------------------------
These are the hints I want to introduce (on Oracle 12c).
SELECT /*+OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_min_time_threshold',2) 
OPT_PARAM('parallel_degree_limit',4) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)*/…
  • Use automatic parallel degree, statement queuing and in-memory parallel execution.
  • Invoke parallelism if the statement is estimated to run for at least 2 seconds
  • However, I will also limit the automatic parallelism to a degree of 4
  • Force materialized view rewrite
  • Use a Bloom filter when joining to the materialized view.
I have created a data-driven framework to create the profiles, with a working storage table that holds details of each force matching signature for which I want to create a profile.
CREATE TABLE dmk_fms_profiles
(force_matching_signature NUMBER NOT NULL
,sql_id VARCHAR2(13)
,plan_hash_value NUMBER
,module VARCHAR2(64)
,action VARCHAR2(64)
,report_id VARCHAR2(32) /*Application Specific*/
,tree_list CLOB /*Application Specific*/
,sql_profile_name VARCHAR2(30)
,parallel_min_time_threshold NUMBER
,parallel_degree_limit NUMBER
,other_hints CLOB
,delete_profile VARCHAR2(1)
,sql_text CLOB
,CONSTRAINT dmk_fms_profiles_pk PRIMARY KEY (force_matching_signature)
,CONSTRAINT dmk_fms_profiles_u1 UNIQUE (sql_id)
,CONSTRAINT dmk_fms_profiles_u2 UNIQUE (sql_profile_name)
)
/
Using conditional parallelism with PARALLEL_MIN_TIME_THRESHOLD, but limited with PARALLEL_DEGREE_LIMIT, is an effective tactic with nVision, so I have specified columns in the metadata table for those hints; any other hints are injected via a string. I identified the problematic SQL by analysis with ASH, and hence I also obtained the FORCE_MATCHING_SIGNATURE. The metadata is keyed by FORCE_MATCHING_SIGNATURE. I have specified a meaningful name for each SQL profile.
INSERT INTO dmk_fms_profiles (force_matching_signature, parallel_min_time_threshold, parallel_degree_limit, other_hints, sql_profile_name) 
VALUES (16625752171077499412, 1, 4, 'REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)', 'NVS_GBGL123I_BU_CONSOL_ACCOUNT');
COMMIT;
Profiles are created using the text of a SQL rather than the SQL_ID or FORCE_MATCHING_SIGNATURE directly. Therefore the SQL_TEXT must be extracted from the AWR, so this method also requires that the SQL statement has been captured by an AWR snapshot.
UPDATE dmk_fms_profiles a
SET (module, action, sql_id, plan_hash_value, sql_text)
= (SELECT s.module, s.action, s.sql_id, s.plan_hash_value, t.sql_text
FROM dba_hist_sqlstat s
, dba_hist_sqltext t
WHERE t.dbid = s.dbid
AND t.sql_id = s.sql_id
AND s.force_matching_signature = a.force_matching_signature
AND s.snap_id = (
SELECT MAX(s1.snap_id)
FROM dba_hist_sqlstat s1
WHERE s1.force_matching_signature = a.force_matching_signature
AND s1.module = 'RPTBOOK' /*Application Specific*/
AND s1.action LIKE 'PI=%:%:%' /*Application Specific*/)
AND s.module = 'RPTBOOK' /*Application Specific*/
AND s.action LIKE 'PI=%:%:%' /*Application Specific*/
AND ROWNUM = 1)
WHERE sql_id IS NULL
/
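If there is a risk that a statement ages out of the shared pool before it is sampled by an AWR snapshot, its SQL_ID can optionally be 'coloured' so that AWR captures it at every snapshot while it remains in the cursor cache.  A quick sketch (the SQL_ID is the one from the example report below):
EXEC dbms_workload_repository.add_colored_sql(sql_id => '5pzxhha3392cs');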

MERGE INTO dmk_fms_profiles u
USING (
SELECT a.sql_id, a.force_matching_signature, p.name
FROM dmk_fms_profiles a
, dba_sql_profiles p
WHERE p.signature = a.force_matching_signature
) s
ON (s.force_matching_signature = u.force_matching_signature)
WHEN MATCHED THEN UPDATE
SET u.sql_profile_name = s.name
/
Columns REPORT_ID and TREE_LIST contain application specific information extracted from the application instrumentation and tree selector logging.
/*Application Specific - extract report ID from ACTION*/
UPDATE dmk_fms_profiles a
SET report_id = substr(regexp_substr(a.action,':([A-Za-z0-9_-])+',1,1),2)
WHERE report_id IS NULL
AND action IS NOT NULL
/
/*Application Specific - extract financial analysis tree from application logging*/
UPDATE dmk_fms_profiles a
SET tree_list =
(SELECT LISTAGG(tree_name,', ') WITHIN GROUP (ORDER BY tree_name)
FROM (select l.tree_name, MAX(l.length) length
FROM dba_hist_sql_plan p
, ps_nvs_treeslctlog l
WHERE p.plan_hash_value = a.plan_hash_value
AND p.sql_id = a.sql_id
AND p.object_name like 'PS%TREESELECT__'
AND p.partition_start = p.partition_stop
AND p.partition_start = l.selector_num
AND l.tree_name != ' '
GROUP BY l.tree_name)
)
WHERE tree_list IS NULL
/

Now I can produce a simple report of the metadata in order to see what profiles should be created.
column sql_text word_wrapped on format a110
column module format a8
column report_id heading 'nVision|Report ID'
column tree_list word_wrapped on format a20
column plan_hash_value heading 'SQL Plan|Hash Value' format 9999999999
column parallel_min_time_threshold heading 'Parallel|Min Time|Threshold' format 999
column parallel_degree_limit heading 'Parallel|Degree|Limit' format 999
set long 500
SELECT * FROM dmk_fms_profiles
/

SQL Plan
FORCE_MATCHING_SIGNATURE SQL_ID Hash Value MODULE ACTION
------------------------ ------------- ----------- -------- ----------------------------------------------------------------
Parallel Parallel
nVision Min Time Degree
Report ID TREE_LIST SQL_PROFILE_NAME Threshold Limit D
-------------------------------- -------------------- ------------------------------ --------- -------- -
OTHER_HINTS
--------------------------------------------------------------------------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------
12803175998948432502 5pzxhha3392cs 988048519 RPTBOOK PI=3186222:USGL233I:10008
USGL233I BU_GAAP_CONSOL, NVS_GBGL123I_BU_CONSOL_ACCOUNT 1 4
GAAP_ACCOUNT
REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)
SELECT L2.TREE_NODE_NUM,A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_GBMGT' AND A.FISCAL_YEAR=2017 AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 AND (A.DEPTID BETWEEN
'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149' OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID
BETWEEN 'B9165' AND 'B9999' OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999' OR
A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DE
Next, this PL/SQL block will create or recreate SQL profiles from the metadata. The various hints can be concatenated into a single string and passed as a parameter to SQLPROF_ATTR. The SQL text is passed as a parameter when the profile is created.
set serveroutput on
DECLARE
l_signature NUMBER;
h SYS.SQLPROF_ATTR;
e_no_sql_profile EXCEPTION;
PRAGMA EXCEPTION_INIT(e_no_sql_profile, -13833);
l_description CLOB;
BEGIN

FOR i IN (
SELECT f.*, s.name
FROM dmk_fms_profiles f
LEFT OUTER JOIN dba_sql_profiles s
ON f.force_matching_signature = s.signature
) LOOP

BEGIN
IF i.name IS NOT NULL AND i.delete_profile = 'Y' THEN
dbms_sqltune.drop_sql_profile(name => i.name);
END IF;
EXCEPTION WHEN e_no_sql_profile THEN NULL;
END;

IF i.delete_profile = 'Y' THEN
NULL;
ELSIF i.sql_text IS NOT NULL THEN
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_degree_policy'',''AUTO'') ' END||
CASE WHEN i.parallel_degree_limit >=0 THEN 'OPT_PARAM(''parallel_degree_limit'',' ||i.parallel_degree_limit ||') ' END||
CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_min_time_threshold'','||i.parallel_min_time_threshold||') ' END||
i.other_hints,
q'[END_OUTLINE_DATA]');

l_signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(i.sql_text);
l_description := 'coe nVision '||i.report_id||' '||i.tree_list||' '||i.force_matching_signature||'='||l_signature;
dbms_output.put_line(i.sql_profile_name||' '||l_description);

DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => i.sql_text,
profile => h,
name => i.sql_profile_name,
description => l_description,
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );

END IF;
END LOOP;
END;
/
I can verify that the profile has been created, and the hints that it contains, thus:
SELECT p.name profile_name
,      xmltype(x.comp_data) as xmlval
FROM   dba_sql_profiles p
,      dbmshsxp_sql_profile_attr x
WHERE  x.profile_name = p.name
AND    p.status = 'ENABLED'
ORDER BY 1
/

PROFILE_NAME
------------------------------
XMLVAL
------------------------------------------------------------------------------------------------
NVS_GBGL123I_BU_CONSOL_ACCOUNT
<![CDATA[BEGIN_OUTLINE_DATA]]>
<![CDATA[OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_degree_limit',4) OPT_PARAM('parallel_min_time_threshold',1) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)]]>
<![CDATA[END_OUTLINE_DATA]]>
And now when the application runs, I get the plan that I wanted.
  • The query runs in parallel.
  • The SQL is rewritten to use the materialized view.
  • There are no indexes on the materialized view, so it must full scan it.
  • It generates a Bloom filter from PSTREESELECT10 and applies it to the materialized view.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2219 (100)| | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10004 | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | PCWP | |
| 4 | PX RECEIVE | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | PCWP | |
| 5 | PX SEND HASH | :TQ10003 | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,03 | P->P | HASH |
| 6 | HASH GROUP BY | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 7 | HASH JOIN | | 536 | 47704 | 2218 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 8 | PX RECEIVE | | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 9 | PX SEND HYBRID HASH | :TQ10002 | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,02 | P->P | HYBRID HASH|
| 10 | STATISTICS COLLECTOR | | | | | | | | Q1,02 | PCWC | |
| 11 | HASH JOIN | | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,02 | PCWP | |
| 12 | BUFFER SORT | | | | | | | | Q1,02 | PCWC | |
| 13 | JOIN FILTER CREATE | :BF0000 | 236 | 3776 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 14 | PX RECEIVE | | 236 | 3776 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 15 | PX SEND BROADCAST | :TQ10000 | 236 | 3776 | 2 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 16 | PARTITION RANGE SINGLE | | 236 | 3776 | 2 (0)| 00:00:01 | 36774 | 36774 | | | |
| 17 | INDEX FAST FULL SCAN | PSAPSTREESELECT10 | 236 | 3776 | 2 (0)| 00:00:01 | 36774 | 36774 | | | |
| 18 | JOIN FILTER USE | :BF0000 | 8859 | 475K| 2213 (6)| 00:00:01 | | | Q1,02 | PCWP | |
| 19 | PX BLOCK ITERATOR | | 8859 | 475K| 2213 (6)| 00:00:01 | 29 | 41 | Q1,02 | PCWC | |
| 20 | MAT_VIEW REWRITE ACCESS STORAGE FULL| PS_XX_SUM_GCNSL_MV | 8859 | 475K| 2213 (6)| 00:00:01 | 29 | 41 | Q1,02 | PCWP | |
| 21 | BUFFER SORT | | | | | | | | Q1,03 | PCWC | |
| 22 | PX RECEIVE | | 731 | 13158 | 3 (0)| 00:00:01 | | | Q1,03 | PCWP | |
| 23 | PX SEND HYBRID HASH | :TQ10001 | 731 | 13158 | 3 (0)| 00:00:01 | | | | S->P | HYBRID HASH|
| 24 | PARTITION RANGE SINGLE | | 731 | 13158 | 3 (0)| 00:00:01 | 36773 | 36773 | | | |
| 25 | INDEX FAST FULL SCAN | PSAPSTREESELECT05 | 731 | 13158 | 3 (0)| 00:00:01 | 36773 | 36773 | | | |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Conclusion
SQL Profiles can be used in much the same way as SQL Patches to introduce hints into application SQL without changing the code, the difference being that SQL Profiles can force match SQL.  However, SQL Profiles do require the Tuning Pack to be licenced, whereas SQL Patches and Baselines do not.
Applying force matching SQL profiles to nVision is an effective, though reactive, tactic.   Tree changes can result in changes to the number of literal criteria in nVision SQL statements, which may therefore cease to match existing profiles.  nVision will always require ongoing monitoring and the introduction of new profiles.
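That monitoring can be data-driven too.  A sketch of a query that lists force matching signatures accounting for significant elapsed time but not yet covered by a profile (again against DBA_HIST_SQLSTAT, with an application-specific module filter):
SELECT s.force_matching_signature
,      SUM(s.elapsed_time_delta)/1e6 elapsed_secs
FROM   dba_hist_sqlstat s
WHERE  s.module = 'RPTBOOK' /*Application Specific*/
AND    s.force_matching_signature > 0
AND    NOT EXISTS(
        SELECT 'x'
        FROM   dba_sql_profiles p
        WHERE  p.signature = s.force_matching_signature)
GROUP BY s.force_matching_signature
ORDER BY elapsed_secs DESC
/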

nVision Performance Tuning: 11. Excel -v- OpenXML

Thu, 2017-11-30 04:47
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The general objective of the performance tuning changes described in this series of blog posts has been to improve the performance of individual nVision reports, but also to allow many reports to execute concurrently.
However, if you use Excel 2010, Excel 2013 or above, then you may notice run times are significantly longer than with Excel 2007.  Also, from PeopleTools 8.54, Excel 2007 is no longer certified.
The problem is discussed in Oracle support note E-NV: nVision Performance using Microsoft Office 2010 and above (Doc ID 1537491.1).  Essentially, Excel 2010 upwards only runs single threaded.  Only one Excel nVision process that is not waiting for a database call to return can run concurrently on any one Windows server at any one time.  If you want to be able to run 10 concurrent nVision reports you would need to run one on each of 10 process schedulers, on 10 different windows servers.
From PT8.54, OpenXML is the default and preferred engine for executing nVision reports on the process scheduler.  This uses a different PeopleSoft executable (PSNVSSRV).  It does not suffer from the single-threading problem, so multiple processes can run concurrently.  It can also be run on non-Windows environments.
However, there are some limitations with OpenXML:
  • Excel macros are ignored during report generation, although macros can be put into a layout that will execute when the generated report is subsequently opened in Excel.
  • There are problems with nPlosion.  
  • Any print area set in the layout is lost.
  • When rerunning nVision to file any pre-existing file is not overwritten.
Therefore, it may be necessary to continue to run some nVision reports on Excel.  This would require:
  • Separate process schedulers configured to run Excel rather than OpenXML on each available Windows server.  Excel is selected by setting the variable Excel Automation = 1, in the nVision section of the process scheduler configuration file (psprcs.cfg), as sketched after this list.  
  • A new Excel nVision process type should be configured to run specific layouts or reportbooks on Excel.  
  • That new process type should only run on these additional process schedulers.  It should have a maximum concurrency of 1, or at most 2, on each Process Scheduler.  These schedulers should be configured to run this new process type (and a single Application Engine so that the purge process can run).
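A minimal sketch of the relevant entry in psprcs.cfg on such a scheduler (the section and variable names are as described above; the comment is mine):
[nVision]
;Run nVision reports via Excel automation (1) rather than OpenXML (0)
Excel Automation=1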

nVision Performance Tuning: 10. Maintaining Statistics on Non-Partitioned Tree Selector Tables

Tue, 2017-11-28 10:21
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

If you do not partition your selector tables, and if you predominantly use static selectors then you can maintain the statistics on the selector table as the new static selectors are created or updated.  PeopleTools table PSTREESELCTL is used to track static selectors.  It is keyed by selector number.  A row is inserted when a new selector is created, and the version number is updated when an existing selector is re-extracted.  Therefore, a trigger on this table can be used to submit a job to refresh the statistics on the tree selector.
CREATE OR REPLACE TRIGGER sysadm.pstreeselector_stats
BEFORE INSERT OR UPDATE ON sysadm.pstreeselctl
FOR EACH ROW
DECLARE
l_jobno NUMBER;
l_cmd VARCHAR2(1000);
l_table_name VARCHAR2(18);
l_suffix VARCHAR2(2);
BEGIN
l_table_name := 'PSTREESELECT'||LTRIM(TO_CHAR(:new.length,'00'));
l_suffix := SUBSTR(l_table_name,-2);
l_cmd := 'dbms_stats.gather_table_stats(ownname=>user,tabname=>'''||l_table_name||''',force=>TRUE);'
||'dbms_stats.set_column_stats(ownname=>user,tabname=>'''||l_table_name||''',colname=>''RANGE_FROM_'||l_suffix||''',density=>1,force=>TRUE);'
||'dbms_stats.set_column_stats(ownname=>user,tabname=>'''||l_table_name||''',colname=>''RANGE_TO_'||l_suffix||''',density=>1,force=>TRUE);';
dbms_output.put_line(l_cmd);
dbms_job.submit(l_jobno,l_cmd);
EXCEPTION WHEN OTHERS THEN NULL;
END;
/
When all the selectors exist in a single segment, the optimizer will not correctly calculate the cardinality of the expressions in the nVision queries on the tree selector tables.  Therefore, I have found it necessary to manually set the density on the range columns to 1 to get effective execution plans at least some of the time.
N.B. This trigger should not be used if the selector tables are partitioned.


nVision Performance Tuning: 9. Using Compression without the Advanced Compression Licence

Fri, 2017-11-24 10:42
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Table compression can significantly decrease the size of tables and reduce the volume of I/O required to retrieve data.  Compression of the ledger, ledger budget, and summary ledger tables can significantly improve the performance of scans in nVision.
The Advanced Compression licence enables compression features to be used where segments are still updated by the application.  Without this licence, only simple table compression can be used, although Hybrid Columnar Compression (HCC) is also available on engineered systems.  Neither form of compression can be used in online transaction processing mode.  A table that is marked for compression, but that is populated in conventional path mode, will not be compressed.  Any compressed blocks will no longer be compressed after being updated.
However, in most systems, ledger rows cease to be updated within a month or two of the period closing.  The ledger tables will usually be partitioned by fiscal year and accounting period, so a month after the period has closed the corresponding partition will cease to be updated.  Therefore, it could then be compressed.
I usually compress historical partitions when I introduce partitioning.  This following example comes from an engineered system, so Hybrid Columnar Compression has been used.
  • There is one range partition each for the whole of fiscal years 2014 and 2015.  This data is historical and rarely queried, and then not by a single period.  A more aggressive compression QUERY HIGH has been used.  
  • Monthly partitioning is used for the previous fiscal year, 2016.  These partitions are compressed, but using QUERY LOW which should deliver better performance with lower CPU overhead than QUERY HIGH (although usually there isn't much in it).
  • Partition LEDGER_2017_05 is not compressed because it is current (at the time of building this script) and could still be updated by the application.
CREATE TABLE sysadm.ps_ledger
(...)
TABLESPACE GLLARGE
PCTFREE 10 PCTUSED 80
PARTITION BY RANGE(FISCAL_YEAR,ACCOUNTING_PERIOD)
SUBPARTITION BY LIST (LEDGER)
(PARTITION ledger_2014 VALUES LESS THAN (2015,0) PCTFREE 0 COMPRESS FOR QUERY HIGH
(SUBPARTITION ledger_2014_actuals VALUES ('ACTUALS')
,SUBPARTITION ledger_2014_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015 VALUES LESS THAN (2016,0) PCTFREE 0 COMPRESS FOR QUERY HIGH
(SUBPARTITION ledger_2015_actuals VALUES ('ACTUALS')
,SUBPARTITION ledger_2015_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2016_bf VALUES LESS THAN (2016,1) PCTFREE 0 COMPRESS FOR QUERY LOW
(SUBPARTITION ledger_2016_bf_actuals VALUES ('ACTUALS')
,SUBPARTITION ledger_2016_bf_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2017_05 VALUES LESS THAN (2017,6)
(SUBPARTITION ledger_2017_05_actuals VALUES ('ACTUALS')
,SUBPARTITION ledger_2017_05_z_others VALUES (DEFAULT)
)
)
ENABLE ROW MOVEMENT
PARALLEL
NOLOGGING
/
As periods close there are more partitions that can be compressed.  However, it may not be practical to take an outage to rebuild ledger partitions each month, and it certainly isn't practical to rebuild the entire ledger table every month.  Instead, from 11g, partitions can be rebuilt in place in an online operation.  The compression attribute can only be specified on the partition, and then the sub-partitions can be rebuilt.  The data in the partition can still be read while it is being moved.
ALTER TABLE ps_ledger MODIFY PARTITION ledger_2017_05 PCTFREE 0 COMPRESS FOR QUERY LOW;
ALTER TABLE ps_ledger MODIFY SUBPARTITION ledger_2017_05_actuals ONLINE PARALLEL 32;

ALTER TABLE ps_ledger MODIFY SUBPARTITION ledger_2017_05_z_others ONLINE PARALLEL 32;
NB: Moving a partition online can fail in 12.1.0.1 or later of the database due to bug 2070300.  This is documented in MOS Doc ID 2040742.1.
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00932: inconsistent datatypes: expected NUMBER got BINARY
The bug is fixed in 12.2 and can be addressed in 12.1 by patch 2070300.

nVision Performance Tuning: 8. Interval Partitioning and Statistics Maintenance of Tree Selector Tables

Mon, 2017-11-20 05:55
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The decision to use interval partitioning on the tree selector tables came from the need to have accurate statistics for the optimizer when parsing nVision queries.  It is not possible to introduce hints into nVision SQL. The dynamic nature of the code means it is not viable to consider any of the forms of database plan stability across the whole application, (although it may be possible to use SQL Profiles in limited cases). Therefore, as far as possible the optimizer has to choose the best plan on its own. Without accurate optimizer statistics, I have found that the optimizer will usually not choose to use a Bloom filter.
If the selector tables are not partitioned, then each table will usually contain rows for many different selectors. Even with perfectly up-to-date statistics, including a histogram on SELECTOR_NUM, and extended statistics on SELECTOR_NUM and RANGE_FROM_nn, I have found that Oracle miscosts the join between RANGE_FROM_nn and the corresponding attribute on the ledger table.
I propose that the tree selector tables should be interval partitioned such that each selector goes into its own partition.
CREATE TABLE PSYPSTREESELECT10 
(SELECTOR_NUM INTEGER NOT NULL,
TREE_NODE_NUM INTEGER NOT NULL,
RANGE_FROM_10 VARCHAR2(10) NOT NULL,
RANGE_TO_10 VARCHAR2(10) NOT NULL)
PARTITION BY RANGE (selector_num) INTERVAL (1)
(PARTITION VALUES LESS THAN(2))
TABLESPACE "PTWORK"
STORAGE(INITIAL 128K NEXT 128K)
/
INSERT INTO PSYPSTREESELECT10
( SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10)
SELECT SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10
FROM PSTREESELECT10
/
DROP TABLE PSTREESELECT10
/
ALTER TABLE PSYPSTREESELECT10 RENAME TO PSTREESELECT10
/
  • nVision queries will reference a single selector with a literal value, and therefore Oracle will eliminate all but that single partition at parse time and will use the statistics on that partition to determine how to join it to other tables.
  • Statistics only have to be maintained at partition level, and not at table level. 
  • Now that there is only a single selector number in any one partition, there is no need for extended statistics. 
  • The need to use dynamic selectors, in order to get equality joins between selectors and ledger tables, in order to make use of the Bloom filter, means that statistics on the selector tables would otherwise inevitably be out of date. The PL/SQL triggers and package that log the selector usage are also used to maintain statistics on the partition. 
  • Partitions do not have to be created in advance. They will be created automatically by Oracle as they are required by the application. 
Compound Triggers on Tree Selector Tables
There are a pair of compound DML triggers on each tree selector table, one for insert and one for delete.
  • The after row section captures the current selector number. The one for insert also counts the number of rows and tracks the minimum and maximum values of the RANGE_FROMnn and RANGE_TOnn columns. 
  • The after statement section updates the selector log. The insert trigger directly updates the statistics on the partition, including the minimum and maximum values of the range columns.
    • It is not possible to collect statistics in a trigger in the conventional manner because dbms_stats includes an implicit commit. If dbms_stats was called within an autonomous transaction it could not see the uncommitted insert into the tree selector that fired the trigger. Hence the trigger calls the XX_NVISION_SELECTORS package that uses dbms_stats.set_table_stats and dbms_stats.set_column_stats to set values directly. 
    • The trigger then submits a job to database job scheduler that will collect statistics on the partition in the conventional way using dbms_job. The job number is recorded on the selector log. The job will be picked up by the scheduler when the insert commits. However, there can be a short lag between scheduling the job, and it running. The first query in the nVision report can be parsed before the statistics are available. 
    • The procedure that directly sets the statistics has to make some sensible assumptions about the data. These mostly lead the optimizer to produce good execution plans. However, testing has shown that performance can be better with conventionally collected statistics. Therefore, the trigger both directly sets statistics and submits the database job to collect the statistics.
    • It is important that table level statistics are not maintained by either technique as this would lead to locking between sessions. Locking during partition statistics maintenance will not occur as no two sessions populate the same selector number, and each selector is in a different partition. A table statistics preference for granularity is set to PARTITION on each partitioned tree selector table. 
The combination of dynamic selectors, single-value joins, interval partitioning of selector tables, and logging triggers on the selector tables driving timely statistics maintenance on the partitions delivers execution plans that perform well and that make effective use of engineered system features.
However, there are two problems that then have to be worked around. 
Library Cache Contention 
Some data warehouse systems can need new partitions in tables daily or even hourly. If partitions were not created in a timely fashion, the application would either break because the partition was missing, or performance would degrade as data accumulated in a single partition. Oracle intended interval partitions to free the DBA from the need to actively manage such partitioning on a day-to-day basis by creating them automatically as the data was inserted. 
However, on a busy nVision system, this solution could create thousands of new selectors in a single day, and therefore thousands of new partitions. This is certainly not how Oracle intended interval partitioning to be used.  I freely admit that I am abusing the feature.
If you have multiple concurrent nVision reports running, using dynamic selectors, you will have multiple database sessions concurrently inserting rows into the tree selector tables each with a different selector number, and therefore each creating new partitions mostly into the same tables.
The recursive code that creates the new partitions, and maintains the data dictionary, acquires a lock on the object handle in the library cache to prevent other sessions from changing it at the same time.  As the number of concurrent nVision reports increases, you will start to see nVision sessions waiting on the library cache lock event during the insert into the tree selector table while the new partition is being created. Perversely, as the performance of the nVision queries improves (as you refine tree selector settings) you may find this contention increases. 
The workaround to this is to create multiple database schemas, each with copies of the partitioned tree selector tables (similarly interval partitioned) and the PSTREESELCTL table (to manage static selectors in those schemas). Synonyms will be required for all other tables referenced by nVision queries. 
Then a trigger on the process scheduler request table PSPRCSRQST arbitrarily sets the current schema of each nVision batch process to one of those schemas. The nVision reports still connect and run with the privileges of the Owner ID (usually SYSADM), but reference the tables in the current schema. 
I have used a hash function to distribute nVision processes between schemas. I suggest the number of schemas should be a power of 2 (ie, 2, 4, 8 etc.).
CREATE OR REPLACE TRIGGER sysadm.nvision_current_schema
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus IN('7') AND new.prcsname = 'RPTBOOK' AND new.prcstype like 'nVision%')
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET current_schema = NVEXEC'||LTRIM(TO_CHAR(dbms_utility.get_hash_value(:new.prcsinstance,1,8),'00'));
EXCEPTION WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/
Thus different nVision reports use different tree selector tables in different schemas rather than trying to create partitions in the same tree selector table, thus avoiding the library cache locking.
Limitation on the Maximum Number of Partitions
In Oracle, it is not possible to have more than 1048576 partitions in a table. That applies to all forms of partitioning.
The tree selector tables are interval partitioned on selector number with an interval of 1 starting with 1. So the highest value of SELECTOR_NUM that they can store is 1048575.
INSERT INTO pstreeselect05 VALUES(1048576,0,' ',' ')
*
ERROR at line 1:
ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions
New selector numbers are allocated sequentially from PSTREESELNUM. Left unattended, the selector numbers used by nVision will increase until they eventually hit this limit, and then nVision and ad-hoc queries that use the tree-exists operator will start to fail with this error.
Therefore, a procedure RESET_SELECTOR_NUM has been added to the PL/SQL package to reset the selector number allocation table PSTREESELNUM back to 0, delete any tree selector entries for which there is no logging entry, and then run the regular selector PURGE procedure in the same package that will drop unwanted interval partitions.

Recommendation: XX_NVISION_SELECTORS.RESET_SELECTOR_NUM should be scheduled to run sufficiently frequently to prevent the selector number reaching the maximum.  
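A minimal sketch of such a schedule using DBMS_SCHEDULER; the job name and weekly frequency are illustrative and should be chosen to suit the rate at which selector numbers are consumed:
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'RESET_NVISION_SELECTOR_NUM' /*illustrative name*/
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'BEGIN sysadm.xx_nvision_selectors.reset_selector_num; END;'
   ,start_date      => SYSTIMESTAMP
   ,repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN;BYHOUR=3'
   ,enabled         => TRUE);
END;
/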

nVision Performance Tuning: 7. Analysis of Tree Usage with the Selector Log

Mon, 2017-11-13 11:22
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Over time, the selector log will build up a picture of how each tree is used in a system. Here are two examples of how it can be used.

You may look at a piece of SQL generated by nVision; it will have a literal value for the selector number, and you may want to know about that particular selector.
REM treeanal.sql
WITH t as (
SELECT DISTINCT d.tree_name, s.dtl_fieldname, d.tree_acc_method, d.tree_acc_Selector, d.tree_acc_sel_opt
FROM pstreedefn d, pstreestrct s
WHERE d.tree_Strct_id = s.tree_strct_id
), l as (
SELECT *
FROM ps_nvs_treeslctlog l
WHERE l.selector_Num = &selector_num
)
SELECT l.*, t.dtl_fieldname, t.tree_acc_method, t.tree_acc_Selector, t.tree_acc_sel_opt
FROM t, l
WHERE t.tree_name = l.tree_name
/
Now you can see various pieces of information about the selector and the report in which it was used.
  • Process instance of the report, although it is also in the ACTION string
  • Length of the selector.  Here it was 10 characters so it was in PSTREESELECT10.
  • The number of rows inserted into the selector.  This is useful if you want to recreate the conditions at runtime by populating the selector table manually.
  • Time at which the selector was populated.
  • Session module, usually the process name.  
  • Session action string, which contains the process instance, report ID and business unit.
  • The client info string, containing the operator ID, database name, the name of the host running nVision and the name of the executable.
    • PSNVS is nVision running on Microsoft Excel
    • PSNVSSRV is nVision running in OpenXML
    • psae is a PS/Query using the IN TREE operator that has been scheduled to run on the Process Scheduler.
    • PSQED is the windows query client running in 2-tier mode and using an IN TREE operator.
    • PSSAMSRV indicates either nVision or PS/Query running either through the PIA or on the windows client in 3-tier mode.
  • Status of the selector.  X indicates that the selector has been deleted and the partition has been dropped.
  • Name of the tree.
  • Name of the schema in which the selector table is located.
  • Partition name in the tree selector table in which the data was held.  This will be blank if the partition has been dropped.
  • The current values of the three tree selector flags on the tree definition are also reported.
SELECTOR_NUM PROCESS_INSTANCE Len   NUM_ROWS TIMESTAMP                    MODULE       APPINFO_ACTION
------------ ---------------- --- ---------- ---------------------------- ------------ ----------------------------------------------------------------
CLIENT_INFO S TREE_NAME OWNERID PARTITION_NAME JOB_NO DTL_FIELDNAME T T T
---------------------------------------------------------------- - ------------------ -------- -------------------- ---------- ------------------ - - -
10233 1780069 10 362 10-NOV-17 02.40.50.755038 AM RPTBOOK PI=1780069:UKGL123I:UK001
GBNVISION,PSFINPRD,UKLONWIN001,,PSNVSSRV.EXE, X UKGL_ACCOUNT SYSADM 33052 ACCOUNT J D S

The following query aggregates log entries to report the number of times each tree was used over the last 7 days, and provides various statistics about the numbers of rows extracted from trees into the selector tables, as well as the current tree performance options.
REM tree_usage.sql
WITH t AS (
SELECT DISTINCT d.tree_name, s.dtl_fieldname, d.tree_acc_method
, d.tree_acc_Selector, d.tree_acc_sel_opt
FROM pstreedefn d, pstreestrct s
WHERE d.tree_Strct_id = s.tree_strct_id
), l AS (
SELECT tree_name, length
, COUNT(*) num_uses
, MIN(num_rows) min_rows
, AVG(num_rows) avg_rows
, MEDIAN(num_Rows) med_rows
, MAX(num_rowS) max_rows
, STDDEV(num_Rows) stddev_rows
, SUM(num_rows) sum_rows
, COUNT(distinct process_instance) processes
FROM ps_nvs_treeslctlog l
WHERE num_rows>0
AND timestamp >= sysdate-7
GROUP BY tree_name, length
)
SELECT l.*, t.dtl_fieldname, t.tree_acc_method, t.tree_acc_Selector, t.tree_acc_sel_opt
FROM t, l
WHERE t.tree_name = l.tree_name
ORDER BY sum_rows
/
The default recommendation is that all trees should use:
  • Literal values where possible when working with fewer than about 2000 rows in the selector.  Where there are more than about 2000 rows, it may be better to join the selector table due to the parse and execution overhead of each literal criterion.
  • Dynamic selectors 
  • Single Value joins 
This report can help to identify trees where extreme volumes mean that different options should be considered.
                          Num    Min Average Median    Max Std Dev       Sum   Num
TREE_NAME          Len   Uses   Rows    Rows   Rows   Rows    Rows      Rows Procs DTL_FIELDNAME      T T T
------------------ --- ------ ------ ------- ------ ------ ------- --------- ----- ------------------ - - -
CORP_ACCT           10      5   1147    2839   2616   6668    2263     14194     1 ACCOUNT            J D S
FUNCTION            10    480      9      32     35     35       8     15474    43 CHARTFIELD2        L D S
INT_SUP              6      7    225    2463   2838   2838     987     17243     1 PRODUCT            L D S
STAT_PRODUCT         6      8   2889    2889   2889   2889       0     23112     1 PRODUCT            J D S
AFFILIATE            5     43    215     576    509    938     223     24789    15 AFFILIATE          L D S
INT_GAAP_CON         5     62     82     486    522    730     225     30153    10 BUSINESS_UNIT      L D S
BU_GAAP_CON          5     96     44     619    614    731     115     59461    48 BUSINESS_UNIT      L D S
STAT_ACCOUNT        10     45     23    4204   6516   6516    2905    189182     6 ACCOUNT            J D S
INT_REP1            10    135    149    1563   1664   1664     379    211005     1 CHARTFIELD1        L D S
COMBO_CODE          10    172     17    1592   1532   2430     809    273846    18 CHARTFIELD1        L D S
UKGL_ACCOUNT        10   2586      2    1713   1147   7797    1793   4430262   110 ACCOUNT            J D S
  • The account trees have been set to join the tree to the ledger table rather than literal values because sometimes in excess of 6000 rows are extracted. A query with 6000 literal terms would be extremely large, take time to generate in nVision, and time on the database to parse and execute each criterion. 
  • STAT_PRODUCT has been set to join partly because it is large (it always extracts 2889 rows), but also because the whole tree is extracted every time, so it does not cut down the result set.

nVision Performance Tuning: 6. Logging Selector Usage

Mon, 2017-11-06 13:09
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Static selectors are tracked by entries in the PSTREESELCTL table.  It is maintained after the tree is extracted to the selector table.  The version number on PSTREESELCTL is compared with the corresponding version number on the PSTREEDEFN to determine whether the extract of the tree to the selector table is still valid, or whether it needs to be reextracted because the tree has been updated.  Selectors that do not have an entry in PSTREESELCTL are therefore dynamic.
Static selectors are left in the PSTREESELECTnn table after the report completes so that they can be reused.  However, many customers periodically create new effective dated versions of many trees, and so static selectors on old effective dated versions of the tree will accumulate as there is nothing to purge them.
Dynamic selectors are extracted every time the report runs.  They should normally be deleted by nVision before the report finishes.  However, if a report crashes, dynamic selectors can be left behind.  That creates a number of problems:
  • The size of the selector tables will tend to increase over time as old static and dynamic selectors are left behind.
  • That in turn affects the statistics on these tables.  The range of values for SELECTOR_NUM will become wider faster than the number of rows in the table grows.  The minimum value will either be the oldest static selector number, or the oldest dynamic selector left after a crash.  The maximum value will be the last selector number inserted when statistics were collected.
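A quick sketch of how stray dynamic selectors might be identified (any selector without an entry in PSTREESELCTL is dynamic, so rows left behind by crashed reports show up here; PSTREESELECT10 is just one of the family of tables):
SELECT s.selector_num, COUNT(*) num_rows
FROM   pstreeselect10 s
WHERE  NOT EXISTS(
        SELECT 'x'
        FROM   pstreeselctl c
        WHERE  c.selector_num = s.selector_num)
GROUP BY s.selector_num
ORDER BY s.selector_num
/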
Therefore, it is useful to be able to track creation and deletion of dynamic selectors by the various nVision reports and queries.  I have therefore created a logging table PS_NVS_TREESLCTLOG (see nvision_dynamic_selectors.sql), a PL/SQL package XX_NVISION_SELECTORS and compound DML triggers on every tree selector table.
The column names in the log table have been chosen for compatibility with the PeopleSoft data dictionary, so that a record can be defined in Application Designer.
Column Name       Data Type      Description
SELECTOR_NUM      NUMBER         Unique identifier for tree selector records.
PROCESS_INSTANCE  NUMBER         PeopleSoft process instance number for nVision/Query.
LENGTH            NUMBER         Length of tree selector.
NUM_ROWS          NUMBER         Number of rows inserted into tree selector.
                                 Counted by the AFTER ROW part of the triggers.
TIMESTAMP         TIMESTAMP      Time when rows inserted.
MODULE            VARCHAR2(64)   Module attribute of session inserting selector rows.
APPINFO_ACTION    VARCHAR2(64)   Action attribute of session inserting selector rows.
CLIENTINFO        VARCHAR2(64)   CLIENT_INFO attribute of session inserting selector rows.  This will include:
                                   • PeopleSoft Operator ID.
                                   • Name of the application server or process scheduler domain.
                                   • Name of the machine where the client process is executing.
                                   • Name of the client executable process.
STATUS_FLAG       VARCHAR2(1)    I=Selectors Inserted
                                 S=Static Selectors Inserted
                                 D=Selectors Deleted
                                 X=Selectors Deleted and Partition Dropped
TREE_NAME         VARCHAR2(18)   Name of the tree from which selector extracted.
                                 Obtained by querying statement from V$SQL.
OWNER_ID          VARCHAR2(8)    Schema under which nVision report run.
PARTITION_NAME    VARCHAR2(128)  Name of partition where selectors stored.
JOB_NO            NUMBER         Database job number to collect statistics on the partition.
All of the logic is kept in the PL/SQL package because it is common to the triggers on all the tree selector tables.  Insert triggers track population of selectors and delete triggers track the successful removal of dynamic selectors.  After row triggers track the selector number and count the number of rows inserted.  After statement triggers call the logging procedures.
CREATE OR REPLACE TRIGGER sysadm.pstreeselect10_insert
FOR INSERT ON sysadm.PSTREESELECT10 COMPOUND TRIGGER
  l_err_msg VARCHAR2(100 CHAR);

  AFTER EACH ROW IS
  BEGIN
    sysadm.xx_nvision_selectors.rowins(:new.selector_num,:new.range_from_10,:new.range_to_10);
  EXCEPTION WHEN OTHERS THEN NULL;
  END after each row;

  AFTER STATEMENT IS
  BEGIN
    sysadm.xx_nvision_selectors.logins(10,'SYSADM');
  EXCEPTION WHEN OTHERS THEN
    l_err_msg := SUBSTR(SQLERRM,1,100);
    dbms_output.put_line('Error:'||l_err_msg);
  END after statement;
END;
/

CREATE OR REPLACE TRIGGER sysadm.pstreeselect10_delete
FOR DELETE ON sysadm.PSTREESELECT10 COMPOUND TRIGGER
  l_err_msg VARCHAR2(100 CHAR);

  AFTER EACH ROW IS
  BEGIN
    sysadm.xx_nvision_selectors.rowdel(:old.selector_num);
  EXCEPTION WHEN OTHERS THEN NULL;
  END after each row;

  AFTER STATEMENT IS
  BEGIN
    sysadm.xx_nvision_selectors.logdel(10);
  EXCEPTION WHEN OTHERS THEN
    l_err_msg := SUBSTR(SQLERRM,1,100);
    dbms_output.put_line('Error:'||l_err_msg);
  END after statement;
END;
/
Once the decision to bear the overhead of triggers on the selector tables is made, there are then various pieces of additional information that can be captured and stored in the logging table for very little additional overhead.  It is easy to record the current session attributes such as module, action, and client_info.  The process instance number is captured on startup and can then be read with another procedure in the psftapi package.  The tree is identified by scanning V$SQL for the SQL that fired the triggers. 
It is also possible to maintain statistics on the selector tables.
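For example, a sketch of the kind of calls the package might make from the after-statement section of the insert trigger to set partition-level statistics directly (the partition name is illustrative; the row count comes from the trigger's own counter):
BEGIN
  dbms_stats.set_table_stats(ownname=>'SYSADM'
    ,tabname=>'PSTREESELECT10'
    ,partname=>'SYS_P12345' /*illustrative interval partition name*/
    ,numrows=>362, numblks=>4, force=>TRUE);
  dbms_stats.set_column_stats(ownname=>'SYSADM'
    ,tabname=>'PSTREESELECT10'
    ,colname=>'SELECTOR_NUM'
    ,partname=>'SYS_P12345'
    ,distcnt=>1, density=>1, nullcnt=>0, force=>TRUE);
END;
/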
Purging Selectors
The selector log can be used to drive purging of selectors not cleared by nVision processes that failed to complete.  The package includes a purge procedure to clear selectors that are not already marked as having been deleted: either when the logged process instance is no longer running, or, if there is no known process instance, when it is more than 2 days since the selector was inserted.  If the selector table is also interval partitioned, then the partition will be dropped.
A trigger on PSTREESELCTL (see pstreeselctl.sql) logs static selector maintenance.
The purge process is invoked by a trigger on PSPRCSRQST that fires when nVision process status is changed away from processing.  The selector purge process should also be scheduled to run daily.
The selector log itself is not purged as it contains useful information about tree usage.
Recommendations
  • If the tree selector tables are not partitioned, create a histogram on SELECTOR_NUM.  However, up to Oracle 11g, this will not be effective once the number of distinct values exceeds the maximum number of buckets, 254.  Dynamic selectors should be purged before this happens.  From Oracle 12c the maximum number of buckets is 2048, and hybrid histograms can be used.
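A sketch of how that histogram could be requested via a table statistics preference (SIZE 254 reflects the 11g bucket maximum mentioned above):
BEGIN
  dbms_stats.set_table_prefs(ownname=>'SYSADM'
    ,tabname=>'PSTREESELECT10'
    ,pname=>'METHOD_OPT'
    ,pvalue=>'FOR ALL COLUMNS SIZE 1 FOR COLUMNS SELECTOR_NUM SIZE 254');
END;
/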

nVision Performance Tuning: 5. Additional Instrumentation of nVision

Thu, 2017-11-02 07:28
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

One of the challenges of tuning and monitoring nVision is to be able to identify each report being run. Calls to Oracle instrumentation package dbms_application_info  were added to the component processor in PeopleTools 8.50, and to Application Engine in PeopleTools 8.52.  However, COBOL, nVision, and SQR were never instrumented.
Therefore, there is still a place for the psftapi package and trigger.  When a PeopleSoft batch process starts, it sets the run status on its Process Scheduler request record, PSPRCSRQST, to 7, indicating that it is processing.  A trigger on that table fires on that update and calls the psftapi package.  The package sets module and action to the process name and process instance, and also stores the process instance number in a package global variable that can be read with another procedure in the package.  Every scheduled process will have module and action set to something meaningful.  Any PeopleSoft instrumentation will simply overwrite these values.  A session's module and action are picked up by Oracle monitoring tools; in particular, they are also stored in the Active Session History (ASH).
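A sketch of that trigger and the package call; the psftapi package (psftapi.sql) stores the process instance in a package global, and the procedure names here are illustrative of its interface:
CREATE OR REPLACE TRIGGER sysadm.psftapi_store_prcsinstance
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = '7' AND old.runstatus != '7')
BEGIN
  psftapi.set_prcsinstance(p_prcsinstance=>:new.prcsinstance); /*store in package global*/
  dbms_application_info.set_module(:new.prcsname,'PI='||:new.prcsinstance);
EXCEPTION WHEN OTHERS THEN NULL; /*deliberately suppress errors so the update is never blocked*/
END;
/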
However, nVision reports always run under the same process names: NVSRUN for a single report, RPTBOOK for a report book of many reports, or DRILLDWN for an nVision drill-down query.  Knowing the process instance is useful because then we can look up the operator and run control ID.
However, we also need to know the report ID being run.  When each individual nVision report starts it queries the runtime parameters from the PS_NVS_REPORT PeopleTools table.  There is no update, so it is not possible to capture this with a DML trigger.  Instead, you can create a fine-grained audit policy on the query with a PL/SQL handler.  The handler package is then invoked by the audit policy.
BEGIN
DBMS_FGA.ADD_POLICY(
object_schema => 'SYSADM',
object_name => 'PS_NVS_REPORT',
policy_name => 'PS_NVS_REPORT_SEL',
handler_module => 'AEG_FGA_NVISION_HANDLER',
enable => TRUE,
statement_types => 'SELECT',
audit_trail => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
END;
/
The handler package runs in the session that triggered the audit.  It can access the audit record and extract the string of colon-separated bind variables thus obtaining the report ID and business unit.  It updates the session action attribute in the same way as psftapi.sql.
CREATE OR REPLACE PROCEDURE sysadm.aeg_fga_nvision_handler
(object_schema VARCHAR2
,object_name VARCHAR2
,policy_name VARCHAR2)
AS
l_sqlbind VARCHAR2(4000);
l_parm1 VARCHAR2(30);
l_parm2 VARCHAR2(30);
l_parm3 VARCHAR2(30);
l_parm4 VARCHAR2(30);
BEGIN
BEGIN
SELECT x.lsqlbind
, SUBSTR(x.lsqlbind,x.start1,LEAST(30,NVL(x.end1,x.lensqlbind+1)-x.start1)) parm1
, SUBSTR(x.lsqlbind,x.start2,LEAST(30,NVL(x.end2,x.lensqlbind+1)-x.start2)) parm2
, SUBSTR(x.lsqlbind,x.start3,LEAST(30,NVL(x.end3,x.lensqlbind+1)-x.start3)) parm3
, SUBSTR(x.lsqlbind,x.start4,LEAST(30,NVL(x.end4,x.lensqlbind+1)-x.start4)) parm4
INTO l_sqlbind, l_parm1, l_parm2, l_parm3, l_parm4
FROM (
SELECT l.*
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,1,1,'i'),0) start1
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,2,0,'i'),0) end1
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,2,1,'i'),0) start2
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,3,0,'i'),0) end2
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,3,1,'i'),0) start3
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,4,0,'i'),0) end3
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,4,1,'i'),0) start4
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,5,1,'i'),0) end4
, LENGTH(lsqlbind) lensqlbind
FROM sys.fga_log$ l
) x
WHERE x.sessionid = USERENV('SESSIONID')
AND x.entryid = USERENV('ENTRYID')
AND x.obj$name = 'PS_NVS_REPORT';
EXCEPTION
WHEN no_data_found THEN
RAISE_APPLICATION_ERROR(-20000,'AEG_FGA_NVISION_HANDLER: No Audit Row');
END;

IF l_parm4 IS NULL THEN
l_parm4 := l_parm3;
l_parm3 := l_parm2;
l_parm2 := l_parm1;
END IF;

IF l_parm4 IS NULL THEN
l_parm4 := l_parm3;
l_parm3 := l_parm2;
END IF;

IF l_parm4 IS NULL THEN
l_parm4 := l_parm3;
END IF;

dbms_output.put_line(l_sqlbind);
dbms_output.put_line(l_parm1);
dbms_output.put_line(l_parm2);
dbms_output.put_line(l_parm3);
dbms_output.put_line(l_parm4);

dbms_application_info.set_action(SUBSTR('PI='||psftapi.get_prcsinstance()||':'||l_parm4||':'||l_parm3,1,64));
--EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER=''PI='||psftapi.get_prcsinstance()||':'||l_parm4||':'||l_parm3||'''';
END;
/
The action attribute is then picked up by the ASH data.  It is easy to extract the report ID and business unit from the action string with regular expressions, as in this example query.
set lines 160 trimspool on
column module format a12
column action format a32
column client_id format a12
column prcsinstance format a9 heading 'Process|Instance'
column business_unit format a8 heading 'Business|Unit'
column report_id format a10
select DISTINCT module, action, client_id
, REGEXP_SUBSTR(h.action,'[[:digit:]]+') prcsinstance
, substr(regexp_substr(h.action,':([[:alnum:]])+',1,2),2) business_unit
, substr(regexp_substr(h.action,':([A-Za-z0-9_-])+',1,1),2) report_id
from v$active_session_History h
where program like 'PSNVS%'
/
Here you can see how process instance, report ID and business unit are held in action and how they can be extracted.  Now, it is possible to profile ASH data for nVision processes, find the long-running SQL and determine which layout it came from.
                                                           Process   Business
MODULE ACTION CLIENT_ID Instance Unit REPORT_ID
------------ -------------------------------- ------------ --------- -------- ----------
RPTBOOK PI=1780508:GBGL224S:UK001 GBNVISION 1780508 UK001 GBGL113S
RPTBOOK PI=1780509:GBGL010E:UK002 GBNVISION 1780509 UK002 GBGL010E
RPTBOOK PI=1780502:GBGL91PF:UK001 GBNVISION 1780502 UK001 GBGL91PF
RPTBOOK PI=1780502:GBGL91FR:UK001 GBNVISION 1780502 UK001 GBGL91FR
RPTBOOK PI=1780502:GBGL91GB:UK001 GBNVISION 1780502 UK001 GBGL91GB
RPTBOOK PI=1780502:GBGL91DM:UK002 GBNVISION 1780502 UK002 GBGL91DM
RPTBOOK PI=1780506:GBEXP2AM:UK001 GBNVISION 1780506 UK001 GBEXP2AM
RPTBOOK PI=1780509:Processing GBNVISION 1780509 Processing
RPTBOOK PI=1780500:GBGL113S:UK003 GBNVISION 1780500 UK003 GBGL113S
RPTBOOK PI=1780509:GBGL010E:UK000 GBNVISION 1780508 UK000 GBGL010E 
The code in this blog is available on GitHub.
Other recommendations
  • Create an index on SYS.FGA_LOG$ to support the query in the FGA handler package.
CREATE INDEX sys.fga_log$_obj$name
ON sys.fga_log$ (obj$name, sessionid, entryid)
TABLESPACE sysaux PCTFREE 1 COMPRESS 1
/
  • Put a regular purge of the FGA_LOG$ table in place, to purge rows after, say, 31 days.  Otherwise, it will grow indefinitely; one row is added for every nVision report run.
DELETE FROM fga_log$
WHERE obj$name = 'PS_NVS_REPORT'
AND ntimestamp# < SYSDATE - 31
/
  • Move SYS.AUD$ and SYS.FGA_LOG$ from the SYSTEM tablespace to another ASSM tablespace using the instructions in Oracle support note 1328239.1.
