Tracing session execution is one of the most powerful ways of tuning an application. Oracle's SQL tracing facility allows you to capture all of the activity from a session or group of sessions and identify tuning opportunities or bottlenecks. Naturally, we've added facilities to the TOAD DBA suite to make this even more useful.

Oracle’s tracing facility consists of two components:

  • The ability to initiate a trace in your session or in another session; we call this SQL trace.
  • A tool - tkprof - that allows the trace files generated to be formatted in a meaningful way.

The SQL tracing facility and tkprof utility are a powerful tuning combination, but they are somewhat awkward to use and the output is sometimes difficult to interpret. Consequently, these tools are not used as widely as they should be. However, the SQL tracing facility is one of the most powerful and widely available tools for tuning SQL statements.

Starting a SQL trace

There’s a variety of ways of starting SQL trace:

  • From a command within your application code.
  • By issuing a command to trace another session.
  • By invoking the DBMS_MONITOR package to specify sessions that should be traced.
  • By creating a login trigger that traces based on your individual needs.
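The login trigger approach can be sketched as follows; the user name TRACE_ME is hypothetical, and you would substitute whatever condition matches your individual needs:

```sql
-- Sketch: enable tracing automatically whenever a particular user logs on
CREATE OR REPLACE TRIGGER trace_logon_trigger
   AFTER LOGON ON DATABASE
BEGIN
   IF USER = 'TRACE_ME' THEN
      DBMS_MONITOR.session_trace_enable (waits => TRUE, binds => FALSE);
   END IF;
END;
/
```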

Tracing from within your session

The simplest way of starting a trace within your own session is to issue the following command:

    ALTER SESSION SET sql_trace = TRUE;
This creates a “basic” trace in which SQL statement execution statistics and execution plans are recorded, but not the values of bind variables, or the time spent waiting for various events.  To get a more advanced trace, we can use the DBMS_SESSION package:

DBMS_SESSION.session_trace_enable (
       waits       IN   BOOLEAN  DEFAULT TRUE,
       binds       IN   BOOLEAN  DEFAULT FALSE,
       plan_stat   IN   VARCHAR2 DEFAULT NULL   -- 11g only
);

Here are the parameters for DBMS_SESSION.SESSION_TRACE_ENABLE:

  • WAITS: TRUE if wait information is to be collected, FALSE otherwise.
  • BINDS: TRUE if bind information is to be collected, FALSE otherwise.
  • PLAN_STAT: For 11g, determines when row counts for individual plan steps are collected. Valid values are 'never', 'first_execution' or 'all_executions'.

For instance, the following command will commence tracing, will include wait information, will not include bind information and will include step row counts aggregated for all executions:

    DBMS_SESSION.session_trace_enable (waits     => TRUE,
                                       binds     => FALSE,
                                       plan_stat => 'all_executions');

Getting tracing status

In V$SESSION, the columns SQL_TRACE, SQL_TRACE_WAITS, SQL_TRACE_BINDS show the status of tracing for a session, and in V$PROCESS, the columns TRACEID and TRACEFILE show the name of the trace file and the TRACEFILE_IDENTIFIER in effect for a given server process. 
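Since the TRACEFILE_IDENTIFIER becomes part of the trace file name, setting it before you start tracing makes your trace file much easier to find; the identifier value here is just an example:

```sql
-- Embed a recognizable string in the generated trace file name
ALTER SESSION SET tracefile_identifier = 'MY_TRACE';
```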

The following query (11g only) shows the values of these columns for the current session:

SQL> SELECT s.sql_trace, s.sql_trace_waits, s.sql_trace_binds,
            p.traceid, p.tracefile
       FROM v$session s JOIN v$process p ON (p.addr = s.paddr)
      WHERE s.audsid = USERENV ('SESSIONID');

Formatting traces with tkprof

Once the trace file is found, the tkprof utility is used to render it into a usable form. The basic syntax for tkprof is:

    tkprof trace_file output_file explain=connection waits=yes|no sort=(sort options)


  • trace_file: the raw trace file generated by the SQL trace facility.
  • output_file: the file to which formatted trace information will be written.
  • explain=connection: connect to Oracle using the connection parameters given and issue an EXPLAIN PLAN for each statement. This is really only necessary if the trace file doesn't have embedded execution plans (if it was truncated, for instance).
  • waits=yes|no: provide a summary of wait information for each SQL statement.
  • sort=(sort keys): displays the SQL statements in descending order of the sort keys. The sort keys (prsela,exeela,fchela) sort the SQL statements in descending order of elapsed time and are a common choice.

A typical tkprof invocation would be:

$ tkprof mysid_ora_31429.trc tkprof_out1.prf explain=guy/guy sort='(prsela,exeela,fchela)'

which processes the raw trace file mysid_ora_31429.trc and writes the output file tkprof_out1.prf, generating execution plans using the GUY account and sorting SQL statements by elapsed time. Note that because this example was created under Linux, the parentheses were enclosed in quotes; this isn't necessary in Windows.

Tkprof output

Tkprof generates a text file containing execution statistics for each SQL statement and some summary information.  Tkprof output can be a little overwhelming at first, so let us step through an example.  Figure 1 shows some sample tkprof output. Some highlighted superscripts have been added, which are mentioned in the commentary.

Figure 1 Sample tkprof output

Let’s closely examine each item in the tkprof output.  Letters in brackets refer to superscripts in Figure 1.

i)     The SQL text is displayed (1).

ii)    Next is a table containing the execution statistics. Working across the top of the table:

  • The number of times each category of call was issued (2).
  • The CPU time required in seconds (3).
  • The elapsed time required in seconds (4).
  • Number of disk reads required (5)
  • Number of buffers read in query (consistent) (6) or current (7) mode. Blocks read in query mode are usually for consistent read queries. Blocks read in current mode are often for modifications to existing blocks. I don’t believe the distinction is particularly important when tuning SQL, so I say add them together and call them “logical reads.”
  • The number of rows processed (8).

iii)     Working down the table, we see that each measurement is broken down by the category of Oracle call. The three categories are:

  • Parse (a), in which the SQL statement is checked for syntax, valid objects and security, and in which an execution plan is determined by the optimizer.
  • Execute (b), in which an SQL statement is executed, or in the case of a query, prepared for first fetch.  Some queries, such as those which use FOR UPDATE or perform a sort will actually retrieve every row at this point.
  • Fetch (c), in which rows are returned from a query.

We can tell a great deal about the efficiency of the SQL statement by deriving some ratios from this output. Some of the important ratios are:

  • Blocks read (f g) to rows processed (h). This is a rough indication of the relative expense of the query. The more blocks that have to be accessed relative to the number of rows returned, the more “expensive” each row is. A similar ratio is blocks read (f g) to executions (e).
  • Parse count (d) over execute count (e). Ideally, the parse count should be close to one. If it is high in relation to execute count, then the statement has been needlessly re-parsed. 
  • Rows fetched (i) to fetches (j). This indicates the level to which the array fetch facility has been exercised (see here for a discussion of array processing).
  • Disk reads (k) to logical reads (f g). This is a measurement of the “miss rate” within the data buffer cache.

Tkprof execution plans

You may notice that in our example, the execution plan is displayed twice.  The first execution plan, marked in our example as (o), is based on information stored in the trace file when the statement was closed.   This execution plan is not constructed using the output of the EXPLAIN PLAN statement and represents the “real” plan that was used at execution time.  If present, this additional plan is potentially more accurate than that generated by EXPLAIN PLAN.  This is because:

  • The execution plan may have been affected by session options such as OPTIMIZER_GOAL, PARALLEL DML ENABLED or SORT_AREA_SIZE which were set in the session when the SQL was executed.  Tkprof cannot take these settings into account when executing EXPLAIN PLAN.
  • If you wait too long before executing tkprof, table statistics might change sufficiently to alter the execution plan that EXPLAIN PLAN will generate.

Occasionally this “real” execution plan will not be shown because the trace was terminated before the SQL statement was closed. Both types of execution plan will show both the step (m) and also the number of rows processed by each step (l). The row count (l) can indicate which step did the most work and hence might be most usefully tuned.

Also shown in tkprof are detailed statistics for each step, though in a rather unfriendly format.  Each step in the plan may have information similar to the following appended (some of these are truncated in our example):

(cr=49754 pr=49749 time=296110 us cost=13766 size=10458 card=747)

The abbreviations in these annotations represent the following statistics:

cr: Number of consistent (logical) data block reads processed in this step
pr:  Number of physical reads processed by the step
time: Amount of time in microseconds spent processing this step
cost:  Optimizer cost of the step
size: Number of bytes processed by the step
card: Number of rows processed by the step

In 11g, the PLAN_STAT options of DBMS_SESSION.SESSION_TRACE_ENABLE or DBMS_MONITOR.SESSION_TRACE_ENABLE will determine whether these statistics refer to the first execution, or all executions of the SQL within the trace file. Later on, you'll see that it's much easier to look at these statistics using TOAD DBA suite tools.
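For example, DBMS_MONITOR can enable tracing of another session with row counts aggregated over all executions; the SID and serial# values below are hypothetical and would come from V$SESSION:

```sql
-- Trace another session, collecting step row counts for all executions
BEGIN
   DBMS_MONITOR.session_trace_enable (
      session_id => 123,            -- hypothetical SID from V$SESSION
      serial_num => 4567,           -- hypothetical serial# from V$SESSION
      waits      => TRUE,
      binds      => FALSE,
      plan_stat  => 'all_executions');
END;
/
```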

The execution step row counts provided by SQL trace/tkprof are an invaluable aid to SQL tuning. All other things being equal, the more rows processed, the more computer resources required. Therefore, the step with the highest row count is usually the step that is most in need of optimization. Tkprof gives you exact counts on a per-step basis; you can get similar data from DBMS_XPLAN.DISPLAY_CURSOR() only if the server parameter STATISTICS_LEVEL is set to ALL.
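The DBMS_XPLAN alternative can be sketched as follows: with STATISTICS_LEVEL set to ALL, DISPLAY_CURSOR can report actual row counts alongside the optimizer's estimates for the most recently executed statement in the session:

```sql
-- Collect per-step execution statistics for subsequent statements
ALTER SESSION SET statistics_level = ALL;

-- ... run the statement to be examined, then:
SELECT *
  FROM TABLE (DBMS_XPLAN.display_cursor (NULL, NULL, 'ALLSTATS LAST'));
```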

Tkprof wait statistics

Oracle sessions often need to relinquish the CPU to wait for a resource – such as a lock – to become available or to wait for IO to complete.  The time spent waiting for these resources can be recorded in the trace file and will be shown if you choose the WAITS=YES tkprof option.

In Figure 1 this wait information is marked as (p).  Our SQL statement experienced about 5.3 seconds of network wait, and 17.6 seconds of IO wait.  Theoretically, CPU and wait time together should add up to the total elapsed time, but in practice on busy systems time spent waiting for the CPU and other unaccounted time will result in the data not “quite” adding up.

SQL Trace in TOAD DBA suite

Starting traces, identifying the correct trace file and interpreting the output requires a fair bit of experience. Luckily, the TOAD DBA suite includes the ability to initiate traces and powerful tools to help you analyse them.

For active sessions, you can start the trace from within TOAD itself.

Figure 2 Starting a trace from within TOAD

You can also start a trace from within Spotlight, as shown in Figure 3.

Figure 3 Starting a trace from Spotlight

Finding and analysing the trace file can be done from within Spotlight, by going to the Analyze Trace drilldown. Analyze Trace lists the trace files, and you can analyse them directly, or submit the analysis to be performed by a background job. Once the trace file has been analysed, it will display a time line of SQL activity as shown in Figure 4.

Figure 4 Spotlight's Analyze Trace drilldown

Drilling into the trace file shows a useful summary of overall activity.  One thing you can get from this screen - that is not shown in tkprof - is details about where the wait time is spent.  For instance, tkprof will show a summary of all wait times that looks like this:


But although we can see that there were some db file scattered reads, we can't tell which object was involved. As you can see in Figure 5, Spotlight shows the objects against which the IO occurred.

Figure 5 Trace file summary in Spotlight

We can easily drill into each SQL within the trace file; Figure 6 shows some of the details available for the SQL. We can see detailed wait timings (including objects involved in IOs and waits) as well as the SQL execution plan. This is the execution plan recorded in the trace file, which is the "real" plan and quite often more accurate than an execution plan you get from issuing EXPLAIN PLAN. Furthermore, it shows actual times spent in each step of the execution plan, often pointing you directly to the step with the greatest tuning opportunities.

Figure 6 SQL execution plan in Spotlight Analyze Trace

Another option in Spotlight that is unique is the ability to see bind variable execution statistics. If you include bind variables in your trace file – either by using the BINDS option in SESSION_TRACE_ENABLE or by specifying binds from Spotlight Top Sessions (Figure 3) – then you can see how often each bind variable was associated with the SQL and elapsed timings for each. This can be useful when you suspect that a SQL statement works well with some bind variables but not with others (perhaps indicating the need for a histogram).

Figure 7 Bind variable information in Spotlight analyze trace.


Tracing SQL statement execution is one of the most powerful techniques for optimizing application performance. The TOAD DBA suite contains powerful features that allow you to easily create SQL trace files and analyse the results.