Exadata Install Process Got Much Simpler With Elastic Configuration

Recently I performed an Exadata X-6 Bare Metal install, and I thought it would be a good idea to share my experience with all of you. An Exadata deployment is itself a mini project. You need to work with the database/network teams to run the OEDA configuration tool and generate all the configuration files. But for this blog, I will focus on the install part only. Before you proceed with the Exadata install process, you need to make sure you have run the check-IP script to verify all the DNS entries and IP addresses. It's very important that you don't start the install process without verifying the network configuration.

At this point you should have all the configuration and software files ready on a USB drive or on your laptop. Now you can use the following steps to complete the Exadata install.

Step 1: You can use Ethernet port 48 to connect to the Exadata network. Most likely, there will be a blue Ethernet cable you can use for this step. Modify your local area network settings as per the following configuration.

Capture 1

Step 2: Connect to Exadata node8 using ssh and run ibhosts. You should see all the nodes with their addresses, and the elasticNode keyword next to every node.

Capture 2

Step 3: cd into /opt/oracle.SupportTools and run reclaimdisks.sh -free -reclaim on node8 and node10.

Capture 4

Step 4: Run the reclaim disk check on both nodes (reclaimdisks.sh -check) and make sure you get the following output.

Capture 5

Note: Please make sure you run steps 3 and 4 on all compute nodes (node8, node10).

Step 5: Plug in the USB drive and mount it using the following procedure. This step is optional if you are planning to copy the software through ssh.

for x in `ls -1 /sys/block`; do udevadm info --attribute-walk \
--path=/sys/block/$x | grep \
-iq 'DRIVERS=="usb-storage"'; if [ $? -eq 0 ] ; then echo /dev/${x}1; \
fi ; done

Capture 6
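In essence, the one-liner above prints /dev/<disk>1 for every disk whose driver is usb-storage. Here is the same filter run against a canned udevadm-style sample (the sample file and disk name are made up for illustration; the real loop queries udevadm on each /sys/block entry):

```shell
# Canned attribute dump for a hypothetical disk 'sdb'; on a real node this text
# comes from: udevadm info --attribute-walk --path=/sys/block/sdb
cat <<'EOF' > /tmp/udev_sdb.txt
    DRIVERS=="usb-storage"
EOF

# Same test the one-liner performs: if the driver matches, print the partition
for x in sdb; do
  if grep -iq 'DRIVERS=="usb-storage"' "/tmp/udev_$x.txt"; then
    echo "/dev/${x}1"
  fi
done
```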

Step 6: Unzip the OEDA tool to the /opt/oracle.SupportTools/onecommand directory, then copy all the configuration files to /opt/oracle.SupportTools/onecommand/linux-x64.

Capture 11

Step 7: Copy /opt/oracle.SupportTools/onecommand/ to node10 (the 2nd dbnode, or to all compute nodes).

Capture 19

Step 8: Apply the elastic configuration from node8. This is a very important step, and you need to run it from one node only. It will use your Exadata configuration file to assign new IP addresses and reboot all Exadata nodes, including the storage nodes.

  • cd /opt/oracle.SupportTools/onecommand/linux-x64
  • ./applyElasticConfig.sh -cf customer_name-configFile.xml

Capture 12

Step 9: Connect to Exadata using the new IP addresses. It's important to understand that your machine has new IP addresses now. Change your local area network settings and assign the same IP/mask as your PDU01. You can find this information in your OEDA installation template.

Capture 13

Step 10: Connect through ssh and run ibhosts to see if the new IP addresses have been assigned. It's important that you don't see the elasticNode keyword anywhere on the screen; otherwise your elastic configuration has not completed.

Capture 14
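A quick way to make that check mechanical is to grep the ibhosts output for the placeholder keyword. The snippet below runs against a canned sample (the two lines are fabricated for illustration; on the node you would capture real ibhosts output instead):

```shell
# Save ibhosts output; the sample here is made up.
# On the node you would run: ibhosts > /tmp/ibhosts.out
cat <<'EOF' > /tmp/ibhosts.out
Ca : 0x0010e00001486fb0 ports 2 "node8 S 192.168.10.1 HCA-1"
Ca : 0x0010e00001487010 ports 2 "node10 S 192.168.10.2 HCA-1"
EOF

# Fail loudly if any placeholder elasticNode entries remain
if grep -q elasticNode /tmp/ibhosts.out; then
  echo "elastic configuration NOT complete"
else
  echo "elastic configuration complete"
fi
```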

Step 11: Move the onecommand directory under /u01 on both dbnodes.

Capture 15

Step 12: Copy all the required software and patches to /u01/onecommand/linux-x64/WorkDir. You only have to do this on node1. Again, you can find the complete list of software in your Exadata installation template.

Capture 21
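Before kicking off the install, it can save time to script a presence check for everything the template lists. A minimal sketch (the WorkDir path comes from the step above; the patch file name here is a placeholder, not a real patch number from your template):

```shell
# Directory where OneCommand expects the software (using /tmp for this sketch;
# on the node it is /u01/onecommand/linux-x64/WorkDir)
WORKDIR=/tmp/WorkDir
mkdir -p "$WORKDIR"
touch "$WORKDIR/example_patch.zip"   # stand-in for a real downloaded patch

# Check each required file from the installation template
for f in example_patch.zip; do
  if [ -f "$WORKDIR/$f" ]; then echo "OK: $f"; else echo "MISSING: $f"; fi
done
```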

Step 13: Run the check-IP script one more time to validate all the network settings.

/opt/oracle.SupportTools/onecommand/linux-x64/checkip.sh -cf customer_name-configFile.xml

Step 14: If you don't get any errors during the check-IP process, you can proceed with the Exadata install. At this point, there are 19 steps to complete the Exadata software install. You can go through them one by one or run all of them together. I strongly recommend doing them one by one; it will help you troubleshoot any issues during the install process. You can use the following command to list all the steps.

  • ./install.sh -cf customer_name-configFile.xml -l

Capture 16

Step 15: You can start going through each step one by one using the following command. I will only post screenshots for steps 1 and 19.

(Step 1 Screen Shot)

Capture 17

(Step 19 Screen Shot)

Capture 19b
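Driving the 19 steps one by one can itself be scripted. A sketch of the loop shape (install.sh is the real tool from step 14; here a stub echoes the command instead of executing it, and the exact per-step flag should be confirmed against your OEDA version's ./install.sh help output):

```shell
# Stub standing in for the real OneCommand call, which would be something like:
#   ./install.sh -cf customer_name-configFile.xml -s "$STEP"
run_step() { echo "would run: ./install.sh -cf customer_name-configFile.xml -s $1"; }

# Drive the 19 steps in order, stopping at the first failure so it can be
# investigated before moving on
for STEP in $(seq 1 19); do
  run_step "$STEP" || { echo "Step $STEP failed - troubleshoot before continuing"; break; }
done
```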

Final Step: Please log in to all nodes (compute & storage) and change the root password.

Capture 20


Installing Additional RDBMS Home on Exadata Machine

As you may already know, you can have multiple RDBMS versions/homes running on an Exadata machine. You can ask Oracle ACS or an Oracle Certified Partner to install multiple homes with your original Exadata deployment. But in case you end up needing to add an Oracle RDBMS home later, you can follow the steps below to add it successfully. In my case I already had a home, but I needed to add another.

Step 1: Create a new local storage mount for the new Oracle RDBMS home. The steps to add local storage can be a little different if your Exadata machine is virtualized; please follow the Exadata maintenance guide to create a local storage mount. You can also use an existing storage mount, but it is best practice to have a separate mount for maintenance and support reasons.

Upload 01

Step 2: Identify the RDBMS software and patches. You can use the following 3 methods to get the complete list:

  • Open SR with Oracle
  • Get the complete list from an already installed home
  • Use OEDA utility

Upload 02

Step 3: Download the software & patches to the Exadata machine. I used the latest OEDA utility to get the complete list of the following software and patches:

  • RDBMS Software
  • Latest Opatch Utility
  • Latest Exadata Bundle Patch (April 2016)
  • ORACLE DATABASE Overlay Patch for Bug#23200778
  • Oracle JavaVM Component PSU

Upload 03

Step 4: Install the Oracle software to the new database mount. I will not get into details here, since most of you are already familiar with RDBMS cluster installs.

Upload 04

Step 5: Implement the RDS protocol over the InfiniBand network

  • Set ORACLE_HOME environment variable
  • cd $ORACLE_HOME/rdbms/lib
  • make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_rds ioracle


Step 6: Install the latest OPatch utility. There are many ways to install the latest OPatch utility; I simply created a text file (dbs_group) with all the dbnode names in it and installed the OPatch utility using the dcli command. You do need to copy the zip file to all the nodes.

  • Create dbs_group
  • scp p6880880_112000_Linux-x86-64.zip node2:/u01/app/oracle/product/software/
  • dcli -l oracle -g dbs_group unzip -oq -d /u01/app/oracle/product/ /u01/app/oracle/product/software/p6880880_112000_Linux-x86-64.zip

Upload 05
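For context, the dbs_group file that dcli reads is nothing more than one hostname per line. A minimal sketch (dbnode01/dbnode02 are example names, and the file is written to /tmp here rather than the usual location in the oracle user's home directory):

```shell
# Build the dbs_group file used by dcli: one compute-node name per line
printf '%s\n' dbnode01 dbnode02 > /tmp/dbs_group
cat /tmp/dbs_group
```

dcli then runs the given command on every host listed in the file, e.g. `dcli -l oracle -g dbs_group hostname`.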

Step 7: Check the version of the newly installed OPatch utility on all nodes and verify that it was installed successfully

  • dcli -l oracle -g dbs_group /u01/app/oracle/product/ version

Upload 06

Step 8: Apply the April Bundle Patch 22899777. I am applying the April bundle patch so I can be consistent with my other database home. I already have GRID (12c) and RDBMS (12c) homes running on this machine, and I wanted to be at the same patch level. Please follow the steps below to install the patch on the database home only. You will be applying the patch locally, so you need to copy the patch to all the nodes and repeat the same process on every node in the cluster.

  • scp p22899777_112040_Linux-x86-64.zip to other nodes
  • unzip p22899777_112040_Linux-x86-64.zip
  • export ORACLE_HOME=/u01/app/oracle/product/
  • export PATH=$PATH:/u01/app/oracle/product/
  • $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/app/oracle/product/software/22899777/22738760
  • $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/app/oracle/product/software/22899777/22502549/custom/server/22502549
  • $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseDir /u01/app/oracle/product/software/22899777/22738760
  • $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseDir /u01/app/oracle/product/software/22899777/22502549/custom/server/22502549
  • Change permissions on the following files:
  • chmod 775 /u01/app/oracle/product/software/22899777/22502549/custom/scripts/*
  • /u01/app/oracle/product/software/22899777/22502549/custom/scripts/prepatch.sh -dbhome $ORACLE_HOME
  • $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/app/oracle/product/software/22899777/22738760
  • $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/app/oracle/product/software/22899777/22502549/custom/server/22502549
  • /u01/app/oracle/product/software/22899777/22502549/custom/scripts/postpatch.sh -dbhome $ORACLE_HOME
  • $ORACLE_HOME/OPatch/opatch lsinventory

upload 07

Step 9 : Install ORACLE DATABASE Overlay Patch for Bug#23200778 for Linux-x86-64 Platforms

  • unzip p23200778_11204160419ExadataDatabase_Linux-x86-64.zip
  • cd 23200778
  • opatch prereq CheckConflictAgainstOHWithDetail -ph ./
  • opatch apply
  • $ORACLE_HOME/OPatch/opatch lsinventory

upload 08

Step 10 : Install Oracle JavaVM Component Database PSU (Apr2016)

  • unzip p22674697_112040_Linux-x86-64.zip
  • cd 22674697
  • opatch prereq CheckConflictAgainstOHWithDetail -ph ./
  • opatch apply
  • $ORACLE_HOME/OPatch/opatch lsinventory

upload 09

Step 11: List the applied patches (opatch lspatches)

upload 10

Running Exachk In A Virtualized Exadata Environment

Many of you have already used exachk on your Exadata machines and are familiar with its uses. But with a virtualized Exadata machine, things are a little different: you need to run exachk from multiple locations. The number of locations depends on how you have virtualized your Exadata machine. For example, if you have 2 VM clusters within your Exadata machine, you will have to run exachk from 3 locations. It does not matter how many nodes you have in a VM cluster; you only need to run exachk in the first user domain (domU) of each cluster and in the management domain (dom0).

Capture 8

Why Management Domain?

Even though there is no RDBMS or clusterware software installed in the management domain, you still need to run exachk there to perform hardware and operating system level checks for the database nodes, storage servers, InfiniBand fabric and InfiniBand switches. You can also run exachk individually for database servers, storage servers and InfiniBand switches by specifying the command line options -clusternodes, -cells and -ibswitches.

For this blog, I am only going to focus on running exachk in dom0 (the management domain); you can check the following blog (http://blog.umairmansoob.com/running-exachk-on-exadata-machine/) for running exachk on VM clusters. You will need an exachk version with virtualization support. You will be using the same command line options; exachk automatically detects that it is running in an Exadata OVM environment, and whether it is running in a management domain or a user domain, and performs the applicable audit checks.

  1. Download the latest exachk version from Oracle Metalink (Doc ID 1070954.1). Copy exachk.zip to /opt/oracle.SupportTools/exachk and unzip it.

Capture 0

2. Check the version (./exachk -v)

Capture 00

3. Run exachk (./exachk -a)

Capture 1

Note: You will need the root password for each InfiniBand switch.

4. Collecting Database Nodes information

Capture 3

5. Collecting Storage Nodes information

Capture 4

6. You can download the zip file generated by exachk to your laptop and review the exachk_XXXX html report

Capture 6

7. Check for fail items and warnings

Capture 7

Capture 9

Capture 10


Oracle OBIEE Query Performance On Exadata

Note: If there are queries that return slowly from the underlying databases, you can capture the SQL statements for those queries in the query log and provide them to the database administrator (DBA) for analysis. Usually DBAs are able to fix performance issues. But let me summarize the methods you can use to improve query performance:

  • Table Indexes: It is very important for the underlying table or tables to have indexes. There are different types of indexes (e.g. primary key, bitmap, and composite); make sure to use them properly. Indexes can become invalid for many reasons, so make sure to check them on a regular basis. It's a big topic, and I am planning to write a blog about indexes in data warehouses in detail.
  • Table Partitions: If a table is big and contains a lot of rows, you can improve OBIEE query performance by partitioning the underlying table(s). There are many types of partitioning in Oracle, like range, hash and interval; make sure to use them properly.
  • Table Joins: There are different types of joins in Oracle (hash join, nested loop join), and I have personally seen drastic performance differences between join types.
  • Avoid Disk Sorts: SQL statement execution can create sort activity, especially if you are using Oracle aggregate functions. Check whether the query is doing disk sorts and find a way to avoid them.
  • SQL Hint(s): Sometimes a query doesn't get the best execution plan from the optimizer, and you can use SQL hints to enforce an optimal execution plan.
  • Aggregate Tables: It is extremely important to use aggregate tables to improve query performance. Aggregate tables contain precalculated summarizations of data. It is much faster to retrieve an answer from an aggregate table than to recompute the answer from thousands of rows of detail.
  • Database Cache: There are different caching techniques in Oracle, like the result cache, the database buffer cache and Exadata Smart Flash Cache. Caching can significantly improve query performance; use it properly.
  • OBIEE Caching: The Oracle BI Server can store query results for reuse by subsequent queries. Query caching can dramatically improve the apparent performance of the system for users, particularly for commonly used dashboards, but it does not improve performance for most ad-hoc analysis.

Running Exachk on Exadata Machine

Exachk is designed to evaluate HW & SW configuration, MAA best practices and database critical issues for all Oracle Engineered Systems. All checks have explanations, recommendations, and manual verification commands so that customers can self-correct all FAIL, ERROR and WARNING conditions reported.

Step 1: Download the latest exachk version from Oracle Metalink (Doc ID 1070954.1). Copy exachk.zip to /opt/oracle.SupportTools/exachk and unzip it.

Capture 0

Step 2: Check the exachk version

$ ./exachk -v

Capture 1

Step 3 : Run Exadata check

./exachk -a

Capture 2

Step 4: Select Database(s) for checking best practices  

Capture 12

Step 5 : Enter root password:

Capture 4

Step 6: Download .zip file and unzip

$ ls -ltr

Capture 13

Step 7: Analyze the exachk_XXXXX html report

Capture 14

Step 8 : Check Exadata System Health Score

Capture 7

Step 9 : Check for fail items

Capture 15




Exadata Supports Oracle 10g with ACFS

ACFS is now supported on Exadata, but ACFS does not support Exadata Smart Scan and offloading, which means you should not place your critical databases on ACFS. Please see Oracle note 1929629.1 for details.

ACFS supported database versions:

  • Oracle Database 10g Rel. 2 ( and higher)
  • Oracle Database 11g ( and higher)
  • Oracle Database 12c ( and higher)


  • Oracle ACFS replication or security/encryption/audit is only supported with general purpose files.
  • Oracle ACFS does not currently support the Exadata offload features.
  • Hybrid Columnar Compression (HCC) support requires fix for bug 19136936.
  • Exadata Smart Flash Cache will cache read operations.
  • Exadata Smart Flash Logging is not supported.



Do We Need to Multiplex Redo Logs with Exadata?

According to Oracle, “Oracle recommends that you multiplex your redo log files. The loss of the log file data can be catastrophic if recovery is required”

Oracle also has a cautionary note on performance that is “When you multiplex the redo log, the database must increase the amount of I/O that it performs. Depending on your configuration, this may impact overall database performance.”

So the question is: should we multiplex redo logs with Exadata, which is highly protected from disk failures? The answer is YES / NO; it all depends on your ASM disk group redundancy levels. Oracle recommends making the DATA disk group redundancy level HIGH, placing all the online redo logs / standby logs in the DATA disk group, and not multiplexing them.

Please use the following Exadata best practice matrix to decide whether or not to multiplex online redo logs.

  • If a high redundancy disk group exists, place all redo logs in that high redundancy disk group.
  • If both DATA and RECO are high redundancy, place all redo logs in DATA.
  • If only normal redundancy disk groups exist, multiplex redo logs, placing them in separate disk groups.
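The matrix above can be condensed into a tiny decision function; a sketch (the two inputs are the DATA and RECO redundancy levels, which you would read from v$asm_diskgroup on the real system):

```shell
# Decide redo log placement from disk group redundancy (NORMAL or HIGH)
pick_redo_placement() {
  data=$1; reco=$2
  if [ "$data" = HIGH ]; then
    echo "place all redo logs in DATA"        # covers the both-HIGH case too
  elif [ "$reco" = HIGH ]; then
    echo "place all redo logs in RECO"
  else
    echo "multiplex redo logs across DATA and RECO"
  fi
}

pick_redo_placement HIGH HIGH       # -> place all redo logs in DATA
pick_redo_placement NORMAL NORMAL   # -> multiplex redo logs across DATA and RECO
```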

Sharing Exadata Machine Between SAP and Non-SAP Databases

Recently I was tasked to look into the possibility of sharing an Exadata machine between SAP and non-SAP databases. As many of you already know, SAP has its own bundle patches called SBP (SAP Bundle Patches). Most of these patches are applied to the Oracle RDBMS home, and some may be applied to the Oracle GI home. You are required to maintain patches for both the RDBMS and GRID homes. Sharing RDBMS homes between SAP and non-SAP databases is not supported.

Now if you want to share Exadata Machine between SAP and NON-SAP databases you have the following options:

  1. Install two separate RDBMS homes, one for SAP databases and one for non-SAP databases. Maintain the SAP RDBMS home as per SAP-specific instructions and the non-SAP home as per Oracle-provided instructions. You also have a GRID home (GI home) that you need to maintain as per SAP-specific instructions.
  2. If you have more than 2 compute nodes (e.g. an Exadata half rack), you can install 2 clusters using 2 nodes for each cluster. Once you have installed the two clusters, you can dedicate one cluster each to SAP and non-SAP databases.

NOTE: SAP has not yet certified OVM with Exadata. Once that is done, you will be able to install and maintain two separate VM clusters using OVM, one each for SAP and non-SAP databases.






Choosing High vs Normal ASM Redundancy with Exadata

Every time I go through an Exadata deployment process with a client, there is a discussion about the ASM redundancy level. As many of you already know, Exadata only supports two ASM redundancy levels (NORMAL and HIGH), and Oracle recommends using HIGH redundancy for both the DATA and RECO disk groups. Keep in mind that changing the redundancy level requires recreating the disk groups.

A brief description about respective redundancy levels is as follows:

  • NORMAL redundancy provides protection against a single disk failure or an entire storage server failure.
  • HIGH redundancy provides protection against 2 simultaneous disk failures from 2 distinct storage servers, or 2 entire storage servers. HIGH redundancy also provides redundancy during Exadata storage server rolling upgrades.

Choosing the redundancy level for your Exadata machine will depend on your database environment, available capacity, and desired protection level. Some databases are critical and need a HIGH redundancy disk group, while most other databases can use NORMAL redundancy disk groups. So if you choose NORMAL redundancy, it will not be against the norm, but you will not be following Oracle's recommendations. I have seen clients using NORMAL redundancy more often than I would like. Following are some situations where you should always use the HIGH redundancy level:

  • If it is a production system with no DR in place.
  • If your storage requirement is low and you are using High Performance (HP) disks
  • If you want to perform storage server rolling upgrades.

Now following are some situations where you can use Normal redundancy:

  • If it is a Dev or UAT system.
  • If you are space constrained.
  • If you have Data Guard in place for production databases.

NOTE: Standard Exadata deployment will create 3 disk groups (DATA, RECO and DBFS_DG), but you can create additional disk groups with different redundancy levels based on your requirement.
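The capacity side of the NORMAL vs HIGH tradeoff is simple mirroring math; a sketch (300 TB raw is an example figure, and real sizing must also reserve free space so a disk group can survive a cell failure):

```shell
# Usable capacity is roughly raw capacity divided by the number of mirror
# copies: NORMAL keeps 2 copies of every extent, HIGH keeps 3
raw_tb=300
echo "NORMAL (2-way mirror): $((raw_tb / 2)) TB usable"
echo "HIGH   (3-way mirror): $((raw_tb / 3)) TB usable"
```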