Bug fixes for BASH (ASH for the rest of us) in version 8

Version 8 of BASH is available and fixes some bugs resulting in no data being flushed to BASH$HIST_ACTIVE_SESS_HISTORY. Again, thanks to Robert Ortel for reporting and tracking down the cause.

If you already have BASH installed, you can use the SQL*Plus update script update_v7tov8.sql.
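Assuming the update script is run the same way as the installation script, i.e. connected as SYSDBA via SQL*Plus (my assumption; the script header may say otherwise), the call would look like this, with the password and TNS alias as placeholders:

    sqlplus sys/<sys_password>@<TNS_ALIAS> as sysdba @update_v7tov8.sql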

Changes:

  • Fixed another UTC-conversion bug that resulted in no entries in BASH$HIST_ACTIVE_SESS_HISTORY
  • Added a missing trigger on BASH.BASH$SETTINGS
  • Renamed INST_ID to INSTANCE_NUMBER in the views accessed through public synonyms
  • Fixed a bug that caused no data to be flushed to BASH$HIST_ACTIVE_SESS_HISTORY

BASH version 7 fixes two bugs

Version 7 of BASH is available and fixes two bugs leading to inconsistent rows in BASH$HIST_ACTIVE_SESS_HISTORY. Thanks to Robert Ortel for reporting and tracking down the cause.

If you already have BASH installed, you can use the SQL*Plus update script update_v6tov7.sql.

Changes:

  • Fixed a UTC-conversion bug around midnight, resulting in too many entries in BASH$HIST_ACTIVE_SESS_HISTORY
  • Fixed a bug leading to duplicate rows in BASH$HIST_ACTIVE_SESS_HISTORY after 10 seconds with no active sessions sampled

BASH (ASH without Diagnostic Pack) now works on RAC

Version 6 of BASH is available. If you already have it installed, you can use the SQL*Plus update script update_v5tov6.sql.

Changes:

  • Made sure BASH works on Oracle 12c
  • BASH is now compatible with RAC on Oracle 11.1 and higher. Each instance runs its own collector through a separate scheduler job.

    Scheduler jobs are created and deleted when you start and stop the collector through EXEC BASH.BASH.RUN; and EXEC BASH.BASH.STOP;. While running, the collector detects new instances and creates collector jobs for them.

    The following public synonyms for selecting the collected data now exist (see the example commands and query after the list):

    BASH$ACTIVE_SESSION_HISTORY: ASH data from the current instance
    BASH$HIST_ACTIVE_SESS_HISTORY: Historic ASH data from the current instance
    BASHG$ACTIVE_SESSION_HISTORY: ASH data from all instances
    BASHG$HIST_ACTIVE_SESS_HISTORY: Historic ASH data from all instances
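
    The collector is still started and stopped through the BASH package; on RAC this now creates or removes one scheduler job per instance:

        EXEC BASH.BASH.RUN;     -- creates and starts the collector job(s)
        EXEC BASH.BASH.STOP;    -- stops the collector and removes the job(s)

    To get a quick load overview per instance from the cross-instance view, a query along these lines works (a sketch: INSTANCE_NUMBER is the column name used after the rename in version 8, it was INST_ID before; COUNT(*)/600 approximates average active sessions because of the 1-second samples):

        SELECT instance_number,
               ROUND(COUNT(*) / 600, 2) AS avg_active_sessions
          FROM bashg$active_session_history
         WHERE sample_time > SYSTIMESTAMP - INTERVAL '10' MINUTE
         GROUP BY instance_number
         ORDER BY instance_number;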


Update for BASH (ASH for the rest of us)

Version 5 of BASH is available. If you already have it installed, you can use the SQL*Plus update script update_v4tov5.sql.

Changes:

  • Added a nightly purge job for historic data (defaults to 93 “days to keep”). If you already created such a job yourself and use the update_v4tov5.sql update script, you will end up with two jobs; see the example after this list if you prefer to manage the purge yourself.
  • Changed the TERMINAL column from VARCHAR2(16) to VARCHAR2(32)
  • Added an SQL*Plus uninstall script
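
If you prefer to manage the purge yourself (or need to recreate the job), something along the following lines does the trick. This is only a sketch: the job name is made up, and only the BASH.BASH.PURGE procedure itself comes from the package; the nightly job created in version 5 makes this unnecessary in the normal case.

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'MY_BASH_PURGE',                   -- hypothetical job name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN BASH.BASH.PURGE(93); END;', -- keep 93 days of history
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',             -- nightly at 02:00
        enabled         => TRUE);
    END;
    /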

ASH data viewer in Mumbai version 2.2.1.824

Version 2.2.1.824 of Mumbai is available for download. Here’s the list of changes:

  • Added ASH data viewer (supports ASH and BASH data)
  • Added one-click 10046 tracing for scripts in the console window
  • Lots of small tweaks and fixes

After I released the BASH package, which makes active session history data available without a Diagnostic Pack license, an ASH data viewer in Mumbai was the logical next step.

The layout, as shown below, is similar to what you might know from the ASH data shown in Enterprise Manager/Cloud Control/DB Console or from ASH Viewer, but there are, in my opinion, a few features and options that let you dig into the ASH data a bit deeper.

[Screenshot: the ASH data viewer in Mumbai]

At the top there is the ASH graph over a time scale. You can select an interval with the mouse or use an automatic selection like “Last 5 minutes” from the selection box on the toolbar. The yellow arrows mark the beginning and end of the selected interval.

In the lower half of the window you see the details for the selected time period.

New ASH data is fetched every 15 seconds or whenever you click the update button on the toolbar.

The optional red line shows the CPU count for the database server.

Up to this point it is probably what you know from other ASH visualizations.

The ASH data is grouped by minutes or hours (whichever you selected on the toolbar). Grouping by hour produces a more regularly shaped graph when you look at long time intervals. You can also set the number of hours that you want to see without scrolling the graph (“Page size [hours]”).

When looking at longer time intervals, days are separated by grey lines:

[Screenshot: long-term view with days separated by grey lines]

The grouping and filter options are a very useful feature: you don't have to look at the data by the standard WAIT_CLASS_OR_CPU, but can group by wait event, terminal, module or whatever other column there is in V$ACTIVE_SESSION_HISTORY.

Further, there is a simple “equal” filter, in case you want to look at the data from just one session, module, user, etc., or just at one specific wait event or wait class.
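
Behind the scenes, this kind of grouping and filtering corresponds to a query roughly like the one below (a sketch against BASH$ACTIVE_SESSION_HISTORY, grouping by wait event with an “equal” filter on MODULE; the module name is a made-up example and Mumbai of course builds the real query for you):

    SELECT NVL(event, 'ON CPU') AS event_or_cpu,
           COUNT(*)             AS samples
      FROM bash$active_session_history
     WHERE module = 'NIGHTLY_BATCH'                         -- hypothetical filter value
       AND sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE
     GROUP BY NVL(event, 'ON CPU')
     ORDER BY samples DESC;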

For the detail data of the selected interval in the lower half of the window, you can group by up to three different columns.

You can right-click on the detail data grids to quickly filter the graph at the top by the contents of the selected cell. Further, for some columns (e.g. SESSION_ID, SQL_ID) additional menu actions are available:

[Screenshot: context menu on the detail data grid]

When you click the ASH icon in Mumbai you are first asked which data Mumbai should select from:

[Screenshot: dialog for selecting the data source views]

You can select either the recent or the historical ASH or BASH data by choosing the appropriate views.

Further, note that you can select data from more than one instance. This might be useful, e.g., for analyzing I/O load from multiple instances running against the same SAN. If you select the multi-instance option, the ASH data will have an additional DB_SOURCE column, so you can group and filter by the source database later.

Next you need to specify for which interval you want to fetch the ASH/BASH data:

[Screenshot: dialog for selecting the time interval to fetch]

The “Fetch new data every 15 seconds” option is only available when you select the latest “To” time (which is always rounded up to the next full hour).

I suggest you start with 2-hour intervals for recent ASH data, or up to 24-hour intervals for historic ASH data, to see whether Mumbai can handle the amount of data, and go up from there if you need to.

Some additional features:

  • You can click on a series name in the legend to mark it yellow. Click again to go back to the default color.
  • Double-click within the legend (but not on a series name) to move it to the other side of the graph in case it is in your way.
  • Note that, by default, series whose values stay below 0.1 Average Active Sessions are not shown in the graph, so that irrelevant series do not clutter it up. You might need to set this threshold to 0 sometimes to see what you are looking for.

I have been testing the ASH data viewer in Mumbai for about a month now and it has been extremely useful for me (together with BASH). I hope you like it, too! :-)


BASH – It’s ASH for the rest of us…

Note: This is a historic blog post that is no longer being updated. Please go to the BASH page for up-to-date information.

If you, like me, work with Standard Edition or Standard Edition One, or don't have a license for Oracle's Diagnostic Pack, you probably miss Oracle's ASH (Active Session History) badly. While Statspack, in my opinion, can still fill most of the gap that the unavailable AWR leaves on SE/SE1, I at least miss ASH very much.

Like lots of DBAs and consultants, I am a huge fan of Tanel Poder's snapper script (which has an ASH mode) and I use it almost on a daily basis. However, one of its main features is that nothing has to be installed, which on the other hand means that no historic information is available. You have to know when it makes sense to start snapper.

BASH, on the other hand, replicates what ASH does. It samples V$SESSION every second, makes the samples available through the view BASH$ACTIVE_SESSION_HISTORY and persists one sample every 10 seconds to BASH$HIST_ACTIVE_SESS_HISTORY, so it is meant as a drop-in replacement for ASH.
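
A quick sanity check that the collector is really sampling every second (a sketch; the count can legitimately be lower, because only active sessions produce rows):

    SELECT COUNT(DISTINCT sample_time) AS samples_last_minute
      FROM bash$active_session_history
     WHERE sample_time > SYSTIMESTAMP - INTERVAL '1' MINUTE;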

You can download the BASH installation script here: BASH page

BASH needs Oracle 10g or 11g and a positive job_queue_processes parameter, but no Diagnostic Pack license.
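
To check the job_queue_processes prerequisite, and raise it if it is 0 (the value 10 below is just an example and assumes an spfile):

    SHOW PARAMETER job_queue_processes

    ALTER SYSTEM SET job_queue_processes = 10 SCOPE=BOTH;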

EDIT: I learned that Kyle Hailey wrote S-ASH, which is similar to BASH, a few years ago already. You might want to look into it: http://dboptimizer.com/ash-masters-2/s-ash/

Here is the header from the bashcreate.sql script with further information:


------------------------------------------------------------------------------------
--
-- Name:          bashcreate.sql - BASH Installation script for Oracle 10.2 to 11.2
--
-- Purpose:       It's ASH for the rest of us (no EE or no diagnostic pack license).
--
-- Requirements:  * Oracle 10.2 to 11.2 (tested with 10.2 and 11.2) 
--                * SE1, SE or EE - Diagnostic Pack NOT needed
--                * Parameter job_queue_processes > 0 
--                   (since the bash data collector permanently runs as a scheduler 
--                    job, you might want to consider raising the job_queue_processes 
--                    parameter by one)
--
-- Installation:  1.) Create a new tablespace for the BASH schema (optional, but recommended).
--                2.) Run: sqlplus sys/<sys_password>@<TNS_ALIAS> as sysdba @bashcreate.sql
--                3.) When asked, enter the password for the BASH user to be created and the 
--                    names for the permanent and temporary tablespace for the BASH user.
--                4.) When asked, enter "N" if you don't want to start the data 
--                    collector job right away. 
--
-- Uninstall:     1.) Execute bash.bash.stop;
--                2.) drop user bash cascade;
--
--
-- Usage:         *** CONTROLLING THE DATA COLLECTION *** 
--
--                The package BASH.BASH has the following procedures that let you
--                control the data gathering:
--
--                procedure run;
--                    Creates and starts the bash data collector scheduler job.
--
--                procedure stop;
--                    Stops the bash data collector scheduler job.
--
--                procedure purge (days_to_keep NUMBER);
--                    Purges data older than days_to_keep days from BASH$HIST_ACTIVE_SESS_HISTORY.
--
--                procedure runner;  
--                    Blocking procedure that collects the bash data. Might be
--                    useful, e.g. when scheduler jobs are not available and the
--                    data collector cannot be run from a job session.
--
--
--                *** SETTINGS ***
--
--                The table BASH.BASH$SETTINGS has the following columns that let
--                you control how the BASH data is gathered:
--
--                 sample_every_n_centiseconds NUMBER (Default: 100 = 1 second)
--                     Interval in centiseconds at which V$SESSION is sampled
--
--                 max_entries_kept NUMBER (Default: 30000)
--                     How many entries are kept in BASH$ACTIVE_SESSION_HISTORY
--
--                 cleanup_every_n_samples NUMBER (Default: 100)
--                     After how many samples the data in BASH$ACTIVE_SESSION_HISTORY is purged
--
--                 persist_every_n_samples NUMBER (Default: 10)
--                     Every n-th sample is persisted to BASH$HIST_ACTIVE_SESS_HISTORY
--
--                 logging_enabled NUMBER (Default: 0)
--                     If logging to BASH$LOG is enabled 
--
--                 keep_log_entries_n_days NUMBER (Default: 1)
--                     How many days log entries in BASH$LOG are kept
--
--                 updated NUMBER 
--                     An internally used column that tracks changes in the settings table
--                     through a trigger
--
--                 version NUMBER 
--                     The version number of BASH. Might be used with future update scripts.
--                     Do not change.
--
--                If you change a setting in the BASH.BASH$SETTINGS table and commit,
--                the updated setting will be used by the data collector the next time
--                it persists data to BASH$HIST_ACTIVE_SESS_HISTORY (default: every 10 seconds).
--
--                The default values for sample_every_n_centiseconds and 
--                persist_every_n_samples replicate the ASH behaviour. 
--                
--
--                *** QUERYING THE COLLECTED BASH DATA *** 
--
--                BASH$ACTIVE_SESSION_HISTORY
--                  Replaces V$ACTIVE_SESSION_HISTORY (1-second samples)
--
--                BASH$HIST_ACTIVE_SESS_HISTORY
--                  Replaces DBA_HIST_ACTIVE_SESS_HISTORY (10-second samples)
--
--                BASH$LOG
--                  Logging table (logging is off by default)
--
--
--                If you want to use scripts or tools (e.g. "Mumbai" or "ASH Viewer") that 
--                select from V$ACTIVE_SESSION_HISTORY or DBA_HIST_ACTIVE_SESS_HISTORY,
--                you might want to replace the following public synonyms with synonyms 
--                pointing to BASH$ACTIVE_SESSION_HISTORY and 
--                BASH$HIST_ACTIVE_SESS_HISTORY:
--                
--                  CREATE OR REPLACE PUBLIC SYNONYM "V$ACTIVE_SESSION_HISTORY" 
--                      FOR BASH$ACTIVE_SESSION_HISTORY;
--
--                  CREATE OR REPLACE PUBLIC SYNONYM "DBA_HIST_ACTIVE_SESS_HISTORY" 
--                      FOR BASH$HIST_ACTIVE_SESS_HISTORY;
--                
--                Note that these synonyms will not work for user SYS, so selecting from 
--                V$ACTIVE_SESSION_HISTORY as user sys will still return the Oracle ASH data,
--                not BASH data.
--                
--                Also note that you are still not allowed to use Oracle Enterprise Manager,
--                Oracle Database Console or the Oracle supplied ASH scripts in RDBMS/ADMIN
--                against BASH data without a valid Diagnostic Pack license.
--
--
--                *** CLEANUP AND PURGING *** 
--
--                The data collected for BASH$ACTIVE_SESSION_HISTORY is automatically purged,
--                based on the max_entries_kept setting.
--
--                The data collected for BASH$HIST_ACTIVE_SESS_HISTORY is not purged 
--                automatically. You need to execute BASH.BASH.PURGE(<days_to_keep>) to purge
--                the data. You might want to create a scheduler job to do this on a
--                regular basis.
--
-- Background:    *** Performance impact of the BASH data collector ***
--
--                Oracle's own ASH uses a circular buffer in the SGA, which is something
--                a user process like the BASH data collector cannot use. After trying a few 
--                setups (global temporary tables, communication through DBMS_PIPE, etc.), I
--                decided to implement BASH as simply as possible using standard heap tables.
--                (The buffer cache is probably the closest thing to a separate memory area 
--                that can be used from a user session.)
--
--                I tested BASH on ten productive databases with quite different loads, both
--                on the OLTP and OLAP side. Since the load from BASH is not recorded by BASH 
--                (when sampling the sampler has to be ignored) I used Tanel Poder's snapper 
--                and latchprof scripts to check for load and excessive latch gets by the bash
--                data collector. The load was usually 0.01 AAS (mostly on CPU); on some 
--                databases with a large number of active sessions it sometimes was 0.02 AAS. 
--                The latchprof script showed only very low numbers of latch gets from the 
--                bash data collector.
--                
--                While the ASH setup with a circular buffer in the SGA and its latch-free 
--                access is definitely the superior architecture, I cannot see any serious
--                side-effects with the down-to-earth BASH architecture.
--                
--                If you worry about the additional 1-2% AAS load, you probably need BASH 
--                badly, to fix a few performance problems... ;-)
--                
--                
--                *** Columns in BASH$ACTIVE_SESSION_HISTORY ***
--                
--                For compatibility with 3rd-party tools that select from 
--                V$ACTIVE_SESSION_HISTORY (but actually BASH$ACTIVE_SESSION_HISTORY if
--                you decide to replace the V$ACTIVE_SESSION_HISTORY public synonym), I made
--                all columns from V$ACTIVE_SESSION_HISTORY available in 
--                BASH$ACTIVE_SESSION_HISTORY; however, some columns are not really filled
--                with data and are always NULL: qc_session_id, qc_instance_id
--                and blocking_session_serial# from the 10.2 version of 
--                V$ACTIVE_SESSION_HISTORY and a whole series of columns from the 11.2
--                version of V$ACTIVE_SESSION_HISTORY (see comments in PL/SQL code).
--                
--                On the other hand, there are three columns in BASH$ACTIVE_SESSION_HISTORY
--                originating from V$SESSION that are not available in V$ACTIVE_SESSION_HISTORY, 
--                because I think they are useful: OSUSER, TERMINAL, USERNAME
--                
--
-- Author:        Marcus Monnig
-- Copyright:     (c) 2012, 2013 Marcus Monnig - All rights reserved.
--
-- Disclaimer:    No guarantees. Use at your own risk. 
--
-- Changelog:     v1: 2012-12-28 First public release 
--
------------------------------------------------------------------------------------
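
Changing how BASH gathers data is just a matter of updating BASH.BASH$SETTINGS and committing; the collector picks up the change within its next persist cycle, as described in the header. For example, to sample only every five seconds instead of every second (a minimal sketch, run as a user with UPDATE rights on the table; the column name comes from the header above):

    UPDATE bash.bash$settings SET sample_every_n_centiseconds = 500;
    COMMIT;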


Presenting at DOAG 2012: Analyse und Visualisierung von Statspack und AWR Daten

Presenting at DOAG 2011 was a great experience, so I am very happy to present again at DOAG 2012. The title of the session is “Analysis and Visualization of Statspack and AWR Data” (the talk will be held in German, though).

Here’s the abstract:


The speaker gives an overview of Statspack and the Automatic Workload Repository (AWR), explains the concepts for analyzing the usually large amounts of existing Statspack/AWR data, and demonstrates how these concepts were implemented in "Mumbai", a freely available Windows program used, among other things, for Oracle performance analysis.
The speaker shows the problems of averaging, the limits of Statspack/AWR data and of the existing reports shipped by Oracle, and demonstrates the analysis and graphical visualization of Statspack/AWR data, which can be used, for example, to find time intervals that merit further analysis. It is also shown how temporal correlations between different statistics derived from Statspack/AWR data can be found, opening up a deeper understanding of what happened in your database in the past.

Hope to see you in Nürnberg in November!
