PL/SQL Tuning


When to Tune PL/SQL Code
  • Programs that do a lot of mathematical calculations. You will want to investigate the datatypes PLS_INTEGER, BINARY_FLOAT, and BINARY_DOUBLE.
  • Functions that are called from PL/SQL queries, where the functions might be executed millions of times. You will want to look at all performance features to make the function as efficient as possible, and perhaps a function-based index to precompute the results for each row and save on query time.
  • Programs that spend a lot of time processing INSERT, UPDATE, or DELETE statements, or looping through query results. You will want to investigate the FORALL statement for issuing DML, and the BULK COLLECT INTO and RETURNING BULK COLLECT INTO clauses for queries.
  • Older code that does not take advantage of recent PL/SQL language features. (With the many performance improvements in Oracle Database 10g, any code from earlier releases is a candidate for tuning.)
  • Any program that spends a lot of time doing PL/SQL processing, as opposed to issuing DDL statements like CREATE TABLE that are just passed directly to SQL. You will want to investigate native compilation. Because many built-in database features use PL/SQL, you can apply this tuning feature to an entire database to improve performance in many areas, not just your own code.
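As a sketch of the first point, arithmetic-heavy code can declare counters and accumulators with the native datatypes; the loop below is hypothetical, but the datatypes are the ones named above:

```sql
-- PLS_INTEGER uses native machine integer arithmetic, and BINARY_DOUBLE uses
-- native IEEE floating point, so tight numeric loops avoid the slower
-- library arithmetic of NUMBER.
DECLARE
   n PLS_INTEGER   := 0;   -- native integer arithmetic
   x BINARY_DOUBLE := 0;   -- native floating-point arithmetic
BEGIN
   FOR i IN 1 .. 1000000 LOOP
      n := n + 1;
      x := x + 0.5d;       -- 'd' suffix makes the literal a BINARY_DOUBLE
   END LOOP;
END;
/
```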
Reasons for PL/SQL Performance Problems

Duplication of Built-in Functions
Built-in functions are more efficient than hand-coded equivalents. Even when a built-in function has more power than you need, use it rather than hand-coding a subset of its functionality. PL/SQL provides many highly optimized functions such as REPLACE, TRANSLATE, SUBSTR, INSTR, RPAD, and LTRIM.

Inefficient Conditional Control Statements
When evaluating a logical expression, PL/SQL uses short-circuit evaluation. That is, PL/SQL stops evaluating the expression as soon as the result can be determined.
Take advantage of this in IF ... THEN ... END IF and EXIT WHEN statements by ordering the operands of AND and OR: put the condition that is cheapest to evaluate, or most likely to determine the result, first.
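A small sketch of exploiting short-circuit evaluation; the names are hypothetical:

```sql
-- Because AND short-circuits, the cheap test guards the expensive call:
-- when qty is 0 or negative, expensive_check is never invoked.
IF qty > 0 AND expensive_check(order_id) THEN
   process_order(order_id);
END IF;
```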

Implicit Datatype Conversions
At run time, PL/SQL converts between structurally different datatypes implicitly. For instance, assigning a PLS_INTEGER variable to a NUMBER variable results in a conversion because their internal representations are different.
Avoiding implicit conversions can improve performance.
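A minimal sketch of the conversion cost, assuming nothing beyond the datatypes already discussed:

```sql
DECLARE
   n NUMBER      := 0;
   p PLS_INTEGER := 0;
BEGIN
   -- The FOR-loop counter i is a PLS_INTEGER, so accumulating into a
   -- NUMBER forces an implicit conversion on every iteration.
   FOR i IN 1 .. 100000 LOOP
      n := n + i;
   END LOOP;
   -- Matching the datatypes avoids the conversion entirely.
   FOR i IN 1 .. 100000 LOOP
      p := p + i;
   END LOOP;
END;
/
```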

Unnecessary NOT NULL Constraints
In PL/SQL, using the NOT NULL constraint on a variable incurs a performance cost, because PL/SQL must check every assignment to that variable at run time. Where possible, leave working variables unconstrained and test for NULL explicitly only where it matters.

Size Declarations for VARCHAR2 Variables
The VARCHAR2 datatype involves a trade-off between memory use and efficiency.
For a VARCHAR2(>= 2000) variable, PL/SQL dynamically allocates only enough memory to hold the actual value. However, for a VARCHAR2(< 2000) variable, PL/SQL preallocates enough memory to hold a maximum-size value. So, for example, if you assign the same 500-byte value to a VARCHAR2(2000) variable and to a VARCHAR2(1999) variable, the latter uses 1499 bytes more memory.

Misuse of Shared Memory in a PL/SQL Program
You can improve performance by sizing the shared memory pool correctly. Make sure it is large enough to hold all frequently used packages but not so large that memory is wasted.
When you call a packaged subprogram for the first time, the whole package is loaded into the shared memory pool. So, subsequent calls to related subprograms in the package require no disk I/O, and your code executes faster. However, if the package is aged out of memory, it must be reloaded if you reference it again.

Pinned Packages
Another way to improve performance is to pin frequently used packages in the shared memory pool. When a package is pinned, it is not aged out by the least recently used (LRU) algorithm that Oracle normally uses. The package remains in memory no matter how full the pool gets or how frequently you access the package.
You can pin packages with the help of the supplied package DBMS_SHARED_POOL.
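A hedged sketch of pinning with DBMS_SHARED_POOL; the schema and package names are hypothetical, and DBMS_SHARED_POOL may need to be installed separately (it is created by the dbmspool.sql script):

```sql
-- Pin a frequently used package so the LRU algorithm never ages it out:
BEGIN
   DBMS_SHARED_POOL.KEEP('APP_OWNER.ORDER_PKG');
END;
/

-- Later, allow it to be aged out again:
BEGIN
   DBMS_SHARED_POOL.UNKEEP('APP_OWNER.ORDER_PKG');
END;
/
```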

Serially Reusable Packages
To help you manage the use of memory, PL/SQL provides the pragma SERIALLY_REUSABLE, which lets you mark some packages as serially reusable. You can so mark a package if its state is needed only for the duration of one call to the server (for example, an OCI call to the server or a server-to-server RPC).
The global memory for such packages is pooled in the System Global Area (SGA), not allocated to individual users in the User Global Area (UGA). That way, the package work area can be reused. When the call to the server ends, the memory is returned to the pool. Each time the package is reused, its public variables are initialized to their default values or to NULL.
The maximum number of work areas needed for a package is the number of concurrent users of that package, which is usually much smaller than the number of logged-on users. The increased use of SGA memory is more than offset by the decreased use of UGA memory. Also, Oracle ages-out work areas not in use if it needs to reclaim SGA memory.
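A minimal sketch of marking a package serially reusable (the package and variable are hypothetical; if the package had a body, the pragma would have to appear there as well):

```sql
CREATE OR REPLACE PACKAGE work_pkg IS
   PRAGMA SERIALLY_REUSABLE;
   -- Reinitialized to its default value at the start of each server call,
   -- because the work area is returned to the SGA pool between calls.
   counter NUMBER := 0;
END work_pkg;
/
```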

Identifying PL/SQL Performance Problems
As you develop larger and larger PL/SQL applications, it becomes more difficult to isolate performance problems. To help, PL/SQL provides two tools: the Profiler API and the Trace API.

Profiler API: Package DBMS_PROFILER
The Profiler API is implemented as PL/SQL package DBMS_PROFILER, which provides services for gathering and saving run-time statistics. The information is stored in database tables, which you can query later. For example, you can learn how much time was spent executing each PL/SQL line and subprogram.
To use the Profiler, you start the profiling session, run your application long enough to get adequate code coverage, flush the collected data to the database, then stop the profiling session. In a typical session, you take the following steps:
  • Start by calling the procedure start_profiler in package DBMS_PROFILER and associating a comment with the Profiler session.
  • Run the application to be profiled.
  • Call the procedure flush_data repeatedly to save incremental data and free memory allocated for data structures.
  • Stop by calling the procedure stop_profiler.
The Profiler traces the execution of your program, computing the time spent at each line and in each subprogram. You can use the collected data to improve performance. For instance, you might focus on subprograms that run slowly.
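The steps above can be sketched as a single session; my_application is a hypothetical procedure under test, and the Profiler tables are assumed to exist already (they are typically created by running the supplied proftab.sql script):

```sql
DECLARE
   status PLS_INTEGER;
BEGIN
   -- Start profiling and label the run with a comment.
   status := DBMS_PROFILER.START_PROFILER('experiment 1');

   my_application;   -- run the code to be profiled (hypothetical)

   -- Save the collected data to the Profiler tables, then stop.
   status := DBMS_PROFILER.FLUSH_DATA;
   status := DBMS_PROFILER.STOP_PROFILER;
END;
/
```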

Analyzing the Collected Performance Data
The next step is to determine why more time was spent executing certain code segments or accessing certain data structures. Find the problem areas by querying the performance data.
Focus on the subprograms and packages that use up the most execution time, inspecting possible performance bottlenecks such as SQL statements, loops, and recursive functions.

Using Trace Data to Improve Performance
Use the results of your analysis to rework slow algorithms. For example, due to an exponential growth in data, you might need to replace a linear search with a binary search. Also, look for inefficiencies caused by inappropriate data structures, and, if necessary, replace those data structures.

Trace API: Package DBMS_TRACE
The Trace API is implemented as the PL/SQL package DBMS_TRACE, which provides services for tracing execution by subprogram or exception.
To use Trace, you start the tracing session, run your application, then stop the tracing session. As the program executes, trace data is collected and stored in database tables. In a typical session, you take the following steps:
  • Optionally, select specific subprograms for trace data collection.
  • Start by calling the procedure set_plsql_trace in package DBMS_TRACE.
  • Run the application to be traced.
  • Stop by calling the procedure clear_plsql_trace.
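A minimal tracing session following the steps above; my_application is a hypothetical procedure:

```sql
BEGIN
   -- Start tracing at a chosen level; DBMS_TRACE supplies level constants
   -- such as trace_all_calls and trace_all_exceptions.
   DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.trace_all_calls);

   my_application;   -- run the code to be traced (hypothetical)

   -- Stop tracing.
   DBMS_TRACE.CLEAR_PLSQL_TRACE;
END;
/
```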
Controlling the Trace
Tracing large applications can produce huge amounts of data that are difficult to manage. Before starting Trace, you can optionally limit the volume of data collected by selecting specific subprograms for trace data collection.
In addition, you can choose a tracing level. For example, you can choose to trace all subprograms and exceptions, or you can choose to trace selected subprograms and exceptions.

PL/SQL Features for Performance Tuning
After correcting the flaws that slow down an application, you can tune PL/SQL performance with these features:
  • Native Dynamic SQL
  • Bulk Binds
  • NOCOPY Compiler Hint
  • RETURNING Clause
  • External Routines
  • Object Types and Collections
  • Compiling PL/SQL Code for Native Execution
These easy-to-use features can speed up an application considerably.

Native Dynamic SQL
Some programs must build and process a variety of SQL statements at run time, so their full text is unknown until then. Such statements can, and probably will, change from execution to execution; for that reason, they are called dynamic SQL statements.
You can execute any kind of dynamic SQL statement using an interface called native dynamic SQL.
Native dynamic SQL is easier to use and much faster than the DBMS_SQL package.
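A minimal sketch of native dynamic SQL, assuming the emp table used in the other examples (the variable names are hypothetical):

```sql
DECLARE
   tbl VARCHAR2(30) := 'emp';   -- table name known only at run time
   cnt NUMBER;
BEGIN
   -- Dynamic query: the statement text is assembled and run at run time.
   EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || tbl INTO cnt;

   -- Dynamic DML with a bind variable supplied through the USING clause.
   EXECUTE IMMEDIATE
      'UPDATE ' || tbl || ' SET sal = sal * 1.05 WHERE deptno = :d'
      USING 20;
END;
/
```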

Bulk Binds
When SQL statements execute inside a loop using collection elements as bind variables, context switching between the PL/SQL and SQL engines can slow down execution. For example, the following UPDATE statement is sent to the SQL engine with each iteration of the FOR loop:
FOR i IN depts.FIRST..depts.LAST LOOP
   UPDATE emp SET sal = sal * 1.10 WHERE deptno = depts(i);
END LOOP;
In such cases, if the SQL statement affects four or more database rows, the use of bulk binds can improve performance considerably. For example, the following UPDATE statement is sent to the SQL engine just once, with the entire nested table:
FORALL i IN depts.FIRST..depts.LAST
   UPDATE emp SET sal = sal * 1.10 WHERE deptno = depts(i);

To maximize performance, rewrite your programs as follows:
  • If an INSERT, UPDATE, or DELETE statement executes inside a loop and references collection elements, move it into a FORALL statement.
  • If a SELECT INTO, FETCH INTO, or RETURNING INTO clause references a collection, incorporate the BULK COLLECT clause.
  • If possible, use host arrays to pass collections back and forth between your programs and the database server.
  • If the failure of a DML operation on a particular row is not a serious problem, include the keywords SAVE EXCEPTIONS in the FORALL statement and report or clean up the errors in a subsequent loop using the %BULK_EXCEPTIONS attribute.
These are not trivial tasks. They require careful analysis of program control-flows and dependencies.
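The rewrite guidelines above can be combined in one sketch, again against the sample emp table (the collection type and variable names are hypothetical):

```sql
DECLARE
   TYPE num_tab IS TABLE OF emp.empno%TYPE;
   ids num_tab;
   bulk_errors EXCEPTION;
   PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- raised by SAVE EXCEPTIONS
BEGIN
   -- BULK COLLECT fetches all matching rows in one round trip.
   SELECT empno BULK COLLECT INTO ids FROM emp WHERE deptno = 20;

   -- FORALL sends the UPDATE to the SQL engine once for the whole collection;
   -- SAVE EXCEPTIONS lets the other rows proceed if one row fails.
   FORALL i IN ids.FIRST .. ids.LAST SAVE EXCEPTIONS
      UPDATE emp SET sal = sal * 1.10 WHERE empno = ids(i);
EXCEPTION
   WHEN bulk_errors THEN
      -- Report each failed row using the %BULK_EXCEPTIONS attribute.
      FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
         DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                              ' failed with error ' ||
                              SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
      END LOOP;
END;
/
```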

NOCOPY Compiler Hint
When parameters hold large data structures such as collections, records, and instances of object types, copying them into and out of a subprogram slows down execution and uses up memory. To prevent that, you can specify the NOCOPY hint, which allows the PL/SQL compiler to pass OUT and IN OUT parameters by reference. In the following example, you ask the compiler to pass the IN OUT parameter my_unit by reference instead of by value:
PROCEDURE reorganize (my_unit IN OUT NOCOPY Platoon) IS ...
By default, OUT and IN OUT parameters are passed by value. That is, the value of an IN OUT actual parameter is copied into the corresponding formal parameter. Then, if the subprogram exits normally, the values assigned to OUT and IN OUT formal parameters are copied into the corresponding actual parameters.

RETURNING Clause
INSERT, UPDATE, and DELETE statements can include a RETURNING clause, which returns column values from the affected row into PL/SQL variables or host variables. This eliminates the need to SELECT the row after an insert or update, or before a delete. As a result, fewer network round trips, less server CPU time, fewer cursors, and less server memory are required.
BEGIN
   UPDATE emp SET sal = sal * 1.1
      WHERE empno = emp_id
      RETURNING ename, sal INTO name, new_sal;
   -- Now do computations involving name and new_sal
END;

External Routines
PL/SQL provides an interface for calling routines written in other languages. Standard libraries already written and available in other languages can be called from PL/SQL programs. This promotes reusability, efficiency, and modularity.
PL/SQL is specialized for SQL transaction processing. Some tasks are more quickly done in a lower-level language such as C, which is very efficient at machine-precision calculations.
To speed up execution, you can rewrite computation-bound programs in C. In addition, you can move such programs from client to server, where they will execute faster thanks to more computing power and less across-network communication.
For example, you can write methods for an image object type in C, store them in a dynamic link library (DLL), register the library with PL/SQL, then call it from your applications. At run time, the library loads dynamically and, for safety, runs in a separate address space (implemented as a separate process).

Object Types and Collections
Collection types and object types increase productivity by allowing for realistic data modeling. Complex real-world entities and relationships map directly into object types. And, a well-constructed object model can improve application performance by eliminating table joins, reducing round trips, and the like.
Client programs, including PL/SQL programs, can declare objects and collections, pass them as parameters, store them in the database, retrieve them, and so on. Also, by encapsulating operations with data, object types let you move data-maintenance code out of SQL scripts and PL/SQL blocks into methods.
Objects and collections are more efficient to store and retrieve because they can be manipulated as a whole. Also, object support is integrated with the database architecture, so it can take advantage of the many scalability and performance improvements built into each Oracle release.

Compiling PL/SQL Code for Native Execution
You can speed up PL/SQL procedures by compiling them into native code residing in shared libraries. The procedures are translated into C code, then compiled with your usual C compiler and linked into the Oracle process.
You can use this technique with both the supplied Oracle packages and procedures you write yourself. Procedures compiled this way work in a variety of server environments, such as the shared server configuration.
Because this technique cannot do much to speed up SQL statements called from PL/SQL, it is most effective for compute-intensive PL/SQL procedures that do not spend most of their time executing SQL.
To speed up one or more procedures using this technique:
  • Update the supplied makefile and enter the appropriate paths and other values for your system. The path of this makefile is $ORACLE_HOME/plsql/spnc_makefile.mk.
  • Use the ALTER SYSTEM or ALTER SESSION command, or update your initialization file, to set the parameter PLSQL_COMPILER_FLAGS to include the value NATIVE. The default setting includes the value INTERPRETED, and you must remove this keyword from the parameter value.
  • Compile one or more procedures, using one of these methods:
  1. Use the ALTER PROCEDURE or ALTER PACKAGE command to recompile the procedure or the entire package.
  2. Drop the procedure and create it again.
  3. Use CREATE OR REPLACE to recompile the procedure.
  4. Run one of the SQL*Plus scripts that sets up a set of Oracle-supplied packages.
  5. Create a database using a preconfigured initialization file with PLSQL_COMPILER_FLAGS=NATIVE. During database creation, the UTLIRP script is run to compile all the Oracle-supplied packages.
  • To verify that the process worked, query the data dictionary views USER_STORED_SETTINGS, DBA_STORED_SETTINGS, or ALL_STORED_SETTINGS. For example, to check the status of the procedure MY_PROC, you could enter:
SELECT param_value FROM user_stored_settings
   WHERE param_name = 'PLSQL_COMPILER_FLAGS' AND object_name = 'MY_PROC';
The PARAM_VALUE column has a value of NATIVE for procedures that are compiled for native execution, and INTERPRETED otherwise.
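Putting the steps above together, a minimal compile-for-native session might look like this; the procedure name MY_PROC is hypothetical:

```sql
-- Switch the session to native compilation, then recompile one procedure.
ALTER SESSION SET plsql_compiler_flags = 'NATIVE';
ALTER PROCEDURE my_proc COMPILE;

-- Verify the result: PARAM_VALUE should now include NATIVE.
SELECT param_value
  FROM user_stored_settings
 WHERE param_name = 'PLSQL_COMPILER_FLAGS'
   AND object_name = 'MY_PROC';
```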
After the procedures are compiled and turned into shared libraries, they are automatically linked into the Oracle process. You do not need to restart the database, or move the shared libraries to a different location. You can call back and forth between stored procedures, whether they are all compiled in the default way (interpreted), all compiled for native execution, or a mixture of both.
Because the PLSQL_COMPILER_FLAGS setting is stored inside the library unit for each procedure, procedures compiled for native execution are compiled the same way when the procedure is recompiled automatically after being invalidated, such as when a table that it depends on is recreated.
You can control the behavior of PL/SQL native compilation through the ALTER SYSTEM or ALTER SESSION commands or by setting or changing these parameters in the initialization file:
  • PLSQL_COMPILER_FLAGS
  • PLSQL_NATIVE_LIBRARY_DIR (cannot be set by ALTER SESSION for security reasons)
  • PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT
  • PLSQL_NATIVE_MAKE_UTILITY
  • PLSQL_NATIVE_MAKE_FILE_NAME
As the procedure is compiled, you see the various compilation and link commands being executed. The procedure is immediately available to call, and runs as a shared library directly within the Oracle process.
Limitations of Native Compilation
  • If a package specification is compiled for native execution, the corresponding body should be compiled using the same setting.
  • Debugging tools for PL/SQL do not handle procedures compiled for native execution.
  • When many procedures and packages (typically, over 5000) are compiled for native execution, the large number of shared objects in a single directory might affect system performance. In this case, you can set the initialization parameter PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT in the initialization file before creating the database or compiling the PL/SQL packages or procedures. Set this parameter to a value (typically around 1000). Then create subdirectories underneath the directory specified in the PLSQL_NATIVE_LIBRARY_DIR parameter. Name the subdirectories d0, d1, d2 ... d999, up to the value specified for the subdirectory count. When the procedures are compiled for native execution, the DLLs are automatically distributed among these subdirectories by the PL/SQL compiler.