Running Exachk on Exadata Machine

Exachk is designed to evaluate hardware and software configuration, MAA best practices, and critical database issues for all Oracle Engineered Systems. All checks include explanations, recommendations, and manual verification commands so that customers can self-correct any FAIL, ERROR, and WARNING conditions reported.

Step 1: Download the latest exachk version from My Oracle Support (Doc ID 1070954.1). Copy exachk.zip to /opt/oracle.SupportTools/exachk and unzip it.
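A minimal sketch of this step, assuming the zip file was downloaded to /tmp (paths and file names may differ in your environment):

$ mkdir -p /opt/oracle.SupportTools/exachk          # create the target directory if it does not exist
$ cp /tmp/exachk.zip /opt/oracle.SupportTools/exachk/
$ cd /opt/oracle.SupportTools/exachk
$ unzip exachk.zip                                   # extracts the exachk driver and supporting files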


Step 2: Check the exachk version

$ ./exachk -v


Step 3: Run the Exadata checks

$ ./exachk -a


Step 4: Select the database(s) to check against best practices


Step 5: Enter the root password when prompted


Step 6: Download the generated .zip report file and unzip it

$ ls -ltr


Step 7: Analyze the exachk_XXXXX.html report


Step 8: Check the Exadata System Health Score


Step 9: Check for FAIL items


Exadata Supports Oracle 10g with ACFS

ACFS is now supported on Exadata. However, ACFS does not support Exadata Smart Scan or offloading, which means you should not place your critical databases on ACFS. See Oracle note 1929629.1 for details.

ACFS-supported database versions:

  • Oracle Database 10g Rel. 2 (10.2.0.4 and 10.2.0.5)
  • Oracle Database 11g (11.2.0.4 and higher)
  • Oracle Database 12c (12.1.0.1 and higher)

Restrictions:

  • Oracle ACFS replication or security/encryption/audit is only supported with general purpose files.
  • Oracle ACFS does not currently support the Exadata offload features.
  • Hybrid Columnar Compression (HCC) support requires fix for bug 19136936.
  • Exadata Smart Flash Cache will cache read operations.
  • Exadata Smart Flash Logging is not supported.
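For reference, a minimal sketch of creating an ACFS file system on Exadata, assuming the DATA disk group has sufficient free space; the volume name acfsvol, the size, the device path, and the mount point /acfs are illustrative only.

As the Grid Infrastructure owner, create the ADVM volume and note the reported device path:

$ asmcmd volcreate -G DATA -s 200G acfsvol
$ asmcmd volinfo -G DATA acfsvol

Then, as root, format and mount the volume device reported by volinfo (e.g. /dev/asm/acfsvol-123):

# mkfs -t acfs /dev/asm/acfsvol-123
# mkdir -p /acfs
# mount -t acfs /dev/asm/acfsvol-123 /acfs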


Do we need to Multiplex Redo Logs with Exadata?

According to Oracle, “Oracle recommends that you multiplex your redo log files. The loss of the log file data can be catastrophic if recovery is required.”

Oracle also adds a cautionary note on performance: “When you multiplex the redo log, the database must increase the amount of I/O that it performs. Depending on your configuration, this may impact overall database performance.”

So the question is: should we multiplex redo logs on Exadata, which is already highly protected against disk failures? The answer is that it depends on your ASM disk group redundancy levels. Oracle recommends making the DATA disk group HIGH redundancy and placing all online redo logs and standby redo logs in the DATA disk group, without multiplexing them.

Use the following Exadata best-practice matrix to decide whether or not to multiplex online redo logs (a sketch of both options follows the list):

  • If a high redundancy disk group exists, place all redo logs in that high redundancy disk group.
  • If both DATA and RECO are high redundancy, place all redo logs in DATA.
  • If only normal redundancy disk groups exist, multiplex redo logs, placing them in separate disk groups.
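As an illustration of both options, a minimal sketch assuming thread 1, group number 11, and a 4 GB log size (adjust the group numbers and size to your configuration):

-- High redundancy DATA disk group available: single member per group, no multiplexing
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 ('+DATA') SIZE 4G;

-- Only NORMAL redundancy disk groups exist: multiplex members across DATA and RECO
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 ('+DATA','+RECO') SIZE 4G;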

Sharing Exadata Machine Between SAP and Non-SAP Databases

Recently I was tasked with looking into the possibility of sharing an Exadata machine between SAP and non-SAP databases. As many of you already know, SAP has its own bundle patches called SBP (SAP Bundle Patches). Most of these patches are applied to the Oracle RDBMS home, and some may also be applied to the Oracle GI home, so you are required to maintain patches for both the RDBMS and Grid homes. Sharing an RDBMS home between SAP and non-SAP databases is not supported.

Now, if you want to share an Exadata machine between SAP and non-SAP databases, you have the following options:

  1. Install two separate RDBMS homes, one for SAP databases and one for non-SAP databases. Maintain the SAP RDBMS home according to SAP-specific instructions and the non-SAP home according to Oracle-provided instructions. You also have a Grid Infrastructure home (GI home) that you need to maintain according to SAP-specific instructions.
  2. If you have more than two compute nodes (e.g. an Exadata half rack), you can install two clusters using two nodes each. Once you have installed the two clusters, you can dedicate one cluster each to SAP and non-SAP databases.

NOTE: SAP has not yet certified OVM with Exadata. Once that is done, you will be able to install and maintain two separate VM clusters using OVM, one each for SAP and non-SAP databases.


Choosing High vs Normal ASM Redundancy with Exadata

Every time I go through an Exadata deployment with a client, there is a discussion about the ASM redundancy level. As many of you already know, Exadata only supports two ASM redundancy levels (NORMAL and HIGH), and Oracle recommends using HIGH redundancy for both the DATA and RECO disk groups. Keep in mind that changing the redundancy level later will require recreating the disk groups.

A brief description of each redundancy level follows:

  • NORMAL redundancy provides protection against a single disk failure or the failure of an entire storage server.

  • HIGH redundancy provides protection against two simultaneous disk failures from two distinct storage servers, or the failure of two entire storage servers. HIGH redundancy also preserves redundancy during Exadata storage server rolling upgrades.

The redundancy level you choose for your Exadata machine will depend on your database environment, available capacity, and desired protection level. Some databases are critical and need a HIGH redundancy disk group, while most other databases can use NORMAL redundancy disk groups. Choosing NORMAL redundancy is not against the norm, but you will not be following Oracle's recommendation; I have seen clients use NORMAL redundancy more often than I would like. The following are situations where you should always use HIGH redundancy:

  • If it is a production system with no DR in place.
  • If your storage requirement is low and you are using High Performance (HP) disks.
  • If you want to perform storage server rolling upgrades.

The following are situations where you can use NORMAL redundancy:

  • If it is a Dev or UAT system.
  • If you are space constrained.
  • If you have Data Guard in place for production databases.

NOTE: A standard Exadata deployment creates three disk groups (DATA, RECO, and DBFS_DG), but you can create additional disk groups with different redundancy levels based on your requirements.
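To confirm the redundancy type and usable capacity of your existing disk groups, you can query V$ASM_DISKGROUP from the ASM (or a database) instance:

SELECT name, type, total_mb, free_mb, usable_file_mb
FROM v$asm_diskgroup;

The TYPE column reports NORMAL or HIGH, and USABLE_FILE_MB shows the space that remains usable after allowing for mirroring and disk-failure coverage.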


Configure Oracle OBIEE with Standby Database (Active Data Guard)

Because an Oracle standby database (Active Data Guard) is essentially a read-only database, it can be used as a business intelligence query server, relieving the workload of the primary database and improving query performance.

How it works

You might think that, since an Oracle standby database is a read-only database and Oracle OBIEE only generates SQL queries, it should work with the default configuration. But it is not that simple: OBIEE also generates some write operations, and those need to be routed to the primary database. The following are examples of OBIEE write operations:

  1. Oracle BI Scheduler job and instance data
  2. Temporary tables for performance enhancements
  3. Writeback scripts for aggregate persistence
  4. Usage tracking data, if usage tracking has been enabled
  5. Event polling table data, if event polling tables are being used

Configuration Steps:

  • Create a single database object for the standby database configuration, with temporary table creation disabled.


  • Configure two connection pools for the database object:

A read-only connection pool that points to the standby database


A second connection pool that points to the primary database for write operations


  • Update any connection scripts that write to the database so that they explicitly specify the primary database connection pool.
  • If usage tracking has been enabled, update the usage tracking configuration to use the primary connection.
  • If event polling tables are being used, update the event polling database configuration to use the primary connection.
  • Ensure that Oracle BI Scheduler is not configured to use any standby sources.

Gathering Optimizer Statistics on Exadata

Because the cost-based approach relies on statistics, you should generate statistics for all tables and clusters and all indexes accessed by your SQL statements before using the cost-based approach. If the size and data distribution of the tables change frequently, then regenerate these statistics regularly to ensure the statistics accurately represent the data in the tables.

Collecting optimizer statistics on Exadata is not any different from other systems. I usually recommend that my clients migrate their existing statistics-gathering methods from the old system. If you were not collecting statistics on the existing system, you should gather at least the following types of optimizer statistics:

  • Table stats
  • Index stats
  • System stats

You can gather table and index statistics at the schema level using the following procedure:

EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('HR', DBMS_STATS.AUTO_SAMPLE_SIZE);

Gathering Exadata-specific system statistics ensures the optimizer is aware of Exadata scan speeds. Accurately accounting for the speed of scan operations helps the optimizer choose an optimal execution plan in an Exadata environment; a lack of Exadata-specific statistics can lead to less performant plans.
The following command gathers Exadata-specific system statistics:

exec dbms_stats.gather_system_stats('EXADATA');
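To verify what was captured, you can query the optimizer system statistics stored in SYS.AUX_STATS$ (the values shown will be whatever was gathered on your system):

SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';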

Note that this best practice is not a general recommendation to gather system statistics in Exadata mode for all Exadata environments. For existing customers who have acceptable performance with their current execution plans, do not gather system statistics in Exadata mode.

For existing customers whose cardinality estimates are accurate but who suffer from the optimizer overestimating the cost of a full table scan in cases where the full scan actually performs better, gather system statistics in Exadata mode.

For new applications where the impact can be assessed from the beginning, and dealt with easily if there is a problem, gather system statistics in Exadata mode.


Using Oracle In-memory Parallel Execution with Exadata

With traditional parallel query (PQ) execution, Oracle uses direct path reads to load data, bypassing the database buffer cache and reading directly from disk. In-memory parallel execution instead takes advantage of the large aggregated database buffer cache of a RAC cluster: by having parallel execution servers access objects through the buffer cache, they can scan data at least ten times faster than they can on disk, and it allows you to cache your hottest tables across the buffer caches of multiple RAC nodes.

What about Exadata? In-memory PQ is a great option only if I/O is your bottleneck. With Exadata you get terabytes of flash cache and very fast flash I/O, so in-memory PQ is probably not a good idea.
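For completeness, if you do want to evaluate in-memory parallel execution, it is activated through the automatic degree of parallelism policy; a minimal sketch (test the side effects of Auto DOP and statement queuing before using this in production):

ALTER SYSTEM SET parallel_degree_policy = AUTO;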

Drop all Indexes on Exadata?

Having been part of many detailed conversations with Oracle experts regarding the use of indexes on Exadata, I decided to share my thoughts and experience on this topic.

I have been working with Exadata since 2011 and have been part of many implementations and POCs. In my experience, Exadata works better without indexes, but getting rid of all indexes is not a practical approach. I have implemented and migrated different types of applications (OLTP and OLAP) to Exadata, and in some cases I was not allowed to make any application changes. Application changes such as dropping an index or partitioning require testing and are not as easy as they sound; if you have worked with applications like EBS or SAP, you understand how difficult it is to make any changes to the environment.

Personally, I recommend the following balanced approach when it comes to the use of indexes on Exadata:

  • Don't drop all the indexes.
  • Keep primary key and unique indexes.
  • You can drop bitmap indexes.
  • Use the invisible index option where possible (see the sketch after this list).
  • Use SQL hints to steer the optimizer away from indexes where a full scan performs better.
  • Drop indexes before ETL loads and rebuild them afterwards.
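As a small illustration of the invisible index option, using a hypothetical index name SALES_IDX1: marking an index invisible keeps it maintained but hides it from the optimizer, so you can measure the effect of removing it without actually dropping it.

ALTER INDEX sales_idx1 INVISIBLE;
-- the optimizer now ignores the index; test the workload
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;   -- optionally let one session still use invisible indexes
ALTER INDEX sales_idx1 VISIBLE;                             -- revert if performance regresses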