Configure Oracle OBIEE with Standby Database (Active Data Guard)

Because an Oracle Standby database (Active Data Guard) is essentially a read-only database, it can be used as a business intelligence query server, offloading work from the primary database and improving query performance.

How it works

You might think that because an Oracle Standby database is a read-only database and OBIEE only generates SQL queries, it should work with the default configuration. It is not that simple: OBIEE performs some write operations, and these need to be routed to the primary database. The following are examples of OBIEE write operations:

  1. Oracle BI Scheduler job and instance data
  2. Temporary tables for performance enhancements
  3. Writeback scripts for aggregate persistence
  4. Usage tracking data, if usage tracking has been enabled
  5. Event polling table data, if event polling tables are being used

Configuration Steps:

  • Create a single database object for the standby database configuration, with temporary table creation disabled.

  • Configure two connection pools for the database object:
      ◦ A read-only connection pool that points to the standby database
      ◦ A second connection pool that points to the primary database for write operations
  • Update any connection scripts that write to the database so that they explicitly specify the primary database connection pool.
  • If usage tracking has been enabled, update the usage tracking configuration to use the primary connection.
  • If event polling tables are being used, update the event polling database configuration to use the primary connection.
  • Ensure that Oracle BI Scheduler is not configured to use any standby sources.
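
As an illustration of the usage tracking step above, the usage tracking section of NQSConfig.INI names the connection pool that receives the inserts; pointing it at the primary-database pool might look like the sketch below. The database and pool names are illustrative examples from a hypothetical repository, and the exact parameter set varies between OBIEE releases:

```ini
[USAGE_TRACKING]
ENABLE = YES;
DIRECT_INSERT = YES;
# "MyDB" and "Primary Connection Pool" are example names from the RPD;
# the connection pool must point at the primary (writable) database.
PHYSICAL_TABLE_NAME = "MyDB"."Catalog"."dbo"."S_NQ_ACCT";
CONNECTION_POOL = "MyDB"."Primary Connection Pool";
```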

Gathering Optimizer Statistics on Exadata

Because the cost-based approach relies on statistics, you should generate statistics for all tables and clusters and all indexes accessed by your SQL statements before using the cost-based approach. If the size and data distribution of the tables change frequently, then regenerate these statistics regularly to ensure the statistics accurately represent the data in the tables.

Collecting optimizer statistics on Exadata is not any different from other systems. I usually recommend that my clients migrate their existing statistics-gathering methods from the old system. If you were not collecting statistics on the existing system, you should gather at least the following types of optimizer statistics:

  • Table stats
  • Index stats
  • System stats

You can gather table and index stats at the schema level using the DBMS_STATS package:
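
A minimal sketch of a schema-level gather (the schema name SALES is an example; the parameter values shown are common defaults, not a universal recommendation):

```sql
BEGIN
  dbms_stats.gather_schema_stats(
    ownname          => 'SALES',                         -- example schema
    cascade          => TRUE,                            -- gather index stats too
    estimate_percent => dbms_stats.auto_sample_size,     -- let Oracle pick the sample size
    degree           => dbms_stats.auto_degree);         -- let Oracle pick the parallel degree
END;
/
```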


Gathering Exadata-specific system statistics ensures the optimizer is aware of Exadata scan speeds. Accurately accounting for the speed of scan operations helps the optimizer choose an optimal execution plan in an Exadata environment. A lack of Exadata-specific stats can lead to less performant execution plans.
The following command gathers Exadata-specific system statistics:

exec dbms_stats.gather_system_stats('EXADATA');

Note this best practice is not a general recommendation to gather system statistics in Exadata mode for all Exadata environments. For existing customers who have acceptable performance with their current execution plans, do not gather system statistics in Exadata mode.

For existing customers whose cardinality estimates are accurate but who suffer from the optimizer overestimating the cost of a full table scan when the full scan actually performs better, gather system statistics in Exadata mode.

For new applications where the impact can be assessed from the beginning, and dealt with easily if there is a problem, gather system statistics in Exadata mode.


Using Oracle In-memory Parallel Execution with Exadata

Traditional parallel query (PQ) execution uses direct path reads, which bypass the database buffer cache and read data directly from disk. In-memory parallel execution instead takes advantage of the large aggregated database buffer cache of a RAC cluster. By having parallel execution servers access objects through the buffer cache, they can scan data at least ten times faster than they can from disk. It allows you to cache your hottest tables across the buffer caches of multiple RAC nodes.
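
In-memory parallel execution is only considered when Automatic Degree of Parallelism is enabled. A minimal sketch of turning it on (evaluate the side effects of Auto DOP before doing this on a production system):

```sql
-- Auto DOP; with this setting Oracle also decides whether to use
-- in-memory parallel execution for a given object
alter system set parallel_degree_policy = AUTO;
```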

What about Exadata? In-memory PQ is a great option only if I/O is your bottleneck. With Exadata you get terabytes of flash cache and high flash I/O throughput, so in-memory PQ is probably not a good idea.

Drop all Indexes on Exadata ?

As I have been a part of many detailed conversations with Oracle Experts regarding “Use of indexes on Exadata”, I decided to share my thoughts and experience on this topic.

I have been working with Exadata since 2011 and have been a part of many implementations and POCs. In my experience, Exadata works better without indexes, but getting rid of all indexes is not a practical approach. I have implemented / migrated different types of applications (OLTP / OLAP) to Exadata, and there were some cases where I was not allowed to make any application changes. Application changes like dropping an index or partitioning a table require testing and are not as easy as they sound. If you have worked with applications like EBS and SAP, you understand how difficult it is to make any changes to the environment.

Personally, I recommend the following balanced approach when it comes to the use of indexes on Exadata.

  • Don’t drop all the indexes
  • Keep primary key / unique indexes
  • You can drop bitmap indexes
  • Use the invisible index option when possible
  • Avoid index use with SQL hints where appropriate
  • Drop and rebuild indexes during ETL loads
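
The invisible index option is a low-risk way to test whether an index is still needed; a sketch, with an illustrative index name:

```sql
-- Hide the index from the optimizer without dropping it
alter index sales_cust_ix invisible;

-- A test session can still opt back in to compare plans
alter session set optimizer_use_invisible_indexes = true;

-- If performance holds up without it, drop it; otherwise restore it
alter index sales_cust_ix visible;
```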

Forcing Smart Scans on Exadata

Let's start by listing the Smart Scan requirements:

  1. Full table scan / index fast full scan
  2. Direct path read

As you can see, a direct path read is required for Smart Scan. Parallel queries automatically use direct path reads, while serial queries may use them. So if you don't see Smart Scan happening with a full table / index scan, it is probably a serial query that is not doing a direct path read.

Direct Path Read

For serial queries, Oracle dynamically decides whether to use direct path reads based on the following parameters:

  • _serial_direct_read = auto
  • _direct_read_decision_statistics_driven = TRUE
  • _small_table_threshold = XXX


As you can see, we can force direct path reads by setting "_serial_direct_read" to ALWAYS. It's a hidden parameter, and if you are not comfortable using it in production, I suggest making your queries run in parallel and keeping the db cache size small.
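
A sketch of forcing direct path reads at the session level and then checking whether Smart Scan offload is occurring (hidden parameters should be changed only after consulting Oracle Support):

```sql
-- Hidden parameter: forces serial queries to use direct path reads
alter session set "_serial_direct_read" = always;

-- After running the query, check the session's offload-eligible bytes;
-- a non-zero, growing value indicates Smart Scan activity
select n.name, s.value
  from v$mystat s
  join v$statname n on n.statistic# = s.statistic#
 where n.name = 'cell physical IO bytes eligible for predicate offload';
```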