Improve Temp Reads and Writes with Exadata Storage Server releases

We have all worked with large temp tablespaces in our data warehouse databases. I have personally worked with a 10 TB temp tablespace for a 50 TB data warehouse running on an Exadata machine, which was required for large table joins and aggregate operations. Temp writes and temp reads occur when large joins or aggregation operations do not fit in memory and must spill to storage. Before this Oracle Exadata Storage Server release, temp writes were not cached in flash cache: both temp writes and subsequent temp reads went to hard disk only. With this release, temp writes are sent to the flash cache so that subsequent temp reads can be served from flash cache as well. This can drastically improve performance for queries that spill into the temp area; according to Oracle, certain queries can run up to four times faster.

Additionally, imagine an application that uses a lot of temp tables; it can now run entirely from flash, which can improve its performance many fold. The feature uses a 128 KB threshold to decide whether to send a write request directly to disk or to the flash cache. Direct load writes, flashback database log writes, archived log writes, and incremental backup writes bypass the flash cache. Large writes are redirected into the flash cache only when they do not disrupt the higher-priority OLTP or scan workloads, and are later written back to the disks when the disks are less busy.


  • Write-back flash cache has to be enabled for this feature to work.
  • If you are on Oracle Database 11g release 2 (11.2) or Oracle Database 12c release 1 (12.1), you need the patches for bug 24944847.
  • This feature is supported on all Oracle Exadata hardware except V2 and X2 storage servers.
  • Flash caching of temp writes and large writes is not supported when flash compression is enabled.
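To confirm the write-back flash cache prerequisite, you can query the cell's flash cache mode with CellCLI on each storage server (a quick check, not part of the feature setup itself):

```
CellCLI> LIST CELL ATTRIBUTES flashCacheMode
```

The command should return WriteBack; if it returns WriteThrough, write-back flash cache is not enabled and temp writes will not be cached.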


Configure Exadata Storage Server Email Alerts

It is very important to configure proper monitoring and alerting for your Exadata machine to decrease the risk of a problem going undetected. Oracle's recommended best practice for monitoring an Oracle Exadata Database Machine is Oracle Enterprise Manager (OEM) together with the suite of OEM plug-ins developed for the Oracle Exadata Database Machine. Please reference My Oracle Support (MOS) Note 1110675.1 for details.

Additionally, Exadata Storage Servers can send alerts via email. Sending these messages helps ensure that a problem is detected and corrected quickly. First, use the following CellCLI command to validate the email configuration by sending a test email:

alter cell validate mail;

The output will be similar to:

Cell slcc09cel01 successfully altered

If the output is not successful, configure the storage server to send email alerts using the following CellCLI command (tailored to your environment):

ALTER CELL smtpServer='', -
smtpFromAddr='', -
smtpToAddr='', -
smtpFrom='Exadata cell', -
smtpPort='<port for mail server>', -
smtpUseSSL='TRUE', -
notificationPolicy='critical,warning,clear'





Checking Exadata Image Info

Log in to any storage cell or compute node as the root user and run the imageinfo command.

Checking the storage cell image:

login as: root
root@XX.XXX.XX.XX's password:
Last login: Mon Oct 16 17:13:57 2017 from cellnode1

[root@cellnode1 ~]# imageinfo

Kernel version: 4.1.12-61.47.1.el6uek.x86_64 #2 SMP Fri Jun 23 19:43:18 PDT 2017 x86_64
Cell version: OSS_12.
Cell rpm version: cell-

Active image version:
Active image kernel version: 4.1.12-61.47.1.el6uek
Active image activated: 2017-08-15 11:16:34 -0400
Active image status: success
Active system partition on device: /dev/md6
Active software partition on device: /dev/md8

Cell boot usb partition: /dev/sdm1
Cell boot usb version:

Inactive image version:
Inactive image activated: 2017-05-20 03:20:27 -0700
Inactive image status: success
Inactive system partition on device: /dev/md5
Inactive software partition on device: /dev/md7

Inactive marker for the rollback: /boot/I_am_hd_boot.inactive
Inactive grub config for the rollback: /boot/grub/grub.conf.inactive
Inactive kernel version for the rollback: 4.1.12-61.33.1.el6uek.x86_64
Rollback to the inactive partitions: Possible

Checking the compute node image:

[root@dbnode1 ~]# imageinfo

Kernel version: 4.1.12-61.47.1.el6uek.x86_64 #2 SMP Fri Jun 23 19:43:18 PDT 2017 x86_64
Image kernel version: 4.1.12-61.47.1.el6uek
Image version:
Image activated: 2017-08-15 15:44:12 -0400
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1
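Rather than logging in to each server one by one, you can compare image versions across all servers at once with dcli from a database node (a sketch; the dbs_group and cell_group files are assumed to list your database server and storage cell hostnames):

```
# image version on all database servers
dcli -g ~/dbs_group -l root imageinfo -ver

# image version on all storage cells
dcli -g ~/cell_group -l root imageinfo -ver
```

All servers of the same type should report the same image version; a mismatch usually indicates an incomplete patching cycle.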

Verify Exadata Machine Configuration

Hello everyone! I was recently tasked to perform an Exadata 360 health review and decided to share my experience with my readers. As part of the review, I performed a detailed check of the Exadata configuration by verifying the following 50 items. An initial Exadata deployment usually doesn't require verifying these items, but if you have an older Exadata deployment, you might want to review them at least once a year.


  1. Primary and standby databases should NOT reside on the same IB Fabric
  2. Use hostname and domain name in lower case
  3. Verify ILOM Power Up Configuration
  4. Verify Hardware and Firmware on Database and Storage Servers
  5. Verify InfiniBand Cable Connection Quality
  6. Verify Ethernet Cable Connection Quality
  7. Verify InfiniBand Fabric Topology (verify-topology)
  8. Verify InfiniBand switch software version is 1.3.3-2 or higher
  9. Verify InfiniBand subnet manager is running on an InfiniBand switch
  10. Verify celldisk configuration on flash memory devices
  11. Verify there are no griddisks configured on flash memory devices
  12. Verify griddisk count matches across all storage servers where a given prefix name exists
  13. Verify griddisk ASM status
  14. Verify InfiniBand is the Private Network for Oracle Clusterware Communication
  15. Verify Oracle RAC Databases use RDS Protocol over InfiniBand Network.
  16. Verify Database and ASM instances use same SPFILE
  17. Configure Storage Server alerts to be sent via email
  18. Configure NTP and Timezone on the InfiniBand switches
  19. Verify NUMA Configuration
  20. Verify Exadata Smart Flash Log is Created
  21. Verify Exadata Smart Flash Cache is Created
  22. Verify Exadata Smart Flash Cache status is “normal”
  23. Verify database server disk controllers use writeback cache
  24. Configure NTP and Timezone on the InfiniBand switches
  25. Verify that “Disk Cache Policy” is set to “Disabled”
  26. Verify Management Network Interface (eth0) is on a Separate Subnet
  27. Verify Platform Configuration and Initialization Parameters for Consolidation
  28. Verify all datafiles have “AUTOEXTEND” attribute “ON”
  29. Verify all “BIGFILE” tablespaces have non-default “MAXBYTES” values set
  30. Ensure Temporary Tablespace is correctly defined
  31. Enable auditd on database servers
  32. Verify AUD$ and FGA_LOG$ tables use Automatic Segment Space Management
  33. Use dbca templates provided for current best practices
  34. Gather system statistics in Exadata mode if needed
  35. Verify Hidden Database Initialization Parameter Usage
  36. Verify bundle patch version installed matches bundle patch version registered in database
  37. Verify service exachkcfg autostart status
  38. Verify database server file systems have “Maximum mount count” = “-1”
  39. Verify database server file systems have “Check interval” = “0”
  40. Set SQLNET.EXPIRE_TIME=10 in DB Home
  41. Verify /etc/oratab
  42. Verify all Database and Storage Servers are synchronized with the same NTP server
  43. Verify there are no failed diskgroup rebalance operations
  44. Verify the CRS_HOME is properly locked
  45. Verify db_unique_name is used in I/O Resource Management (IORM) interdatabase plans
  46. Verify Database Server Quorum Disks are used when beneficial
  47. Verify Oracle Clusterware files are placed appropriately
  48. Verify “_reconnect_to_cell_attempts=9” on database servers which access X6 storage servers
  49. Verify Flex ASM Cardinality is set to “ALL”
  50. Verify no Oracle Enterprise Linux 5 (el5) rpms exist on database servers running Oracle Linux (ol6)


Reference: Oracle Sun Database Machine Setup/Configuration Best Practices (Doc ID 1274318.1)


Using Oracle ZFS Storage Appliance for OLTP Workload

As mentioned earlier, the ZFS appliance is best suited to OLAP workloads, but you can use it for non-critical OLTP workloads. By nature, online transaction processing (OLTP) workloads tend to go after a small number of rows per I/O request. Imagine a busy OLTP system where thousands of random I/Os, each going after a small amount of data, require short response times to stay healthy. That means you should utilize the ZFS read and write flash SSDs to get reasonable performance for your OLTP applications. As with an OLAP database workload, it is recommended to use bigfile tablespaces and Oracle Direct NFS. But unlike an OLAP workload, it is best to use Advanced Row Compression to optimize I/O response time and memory utilization.
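To confirm that the database is actually going through Direct NFS for its ZFS shares, you can query the Direct NFS dynamic views from SQL*Plus (a quick check; the views are only populated after Direct NFS I/O has occurred):

```sql
-- NFS servers the instance has Direct NFS connections to
SELECT svrname, dirname FROM v$dnfs_servers;

-- database files currently open through Direct NFS
SELECT filename FROM v$dnfs_files;
```

If these views return no rows while the database files live on the ZFS shares, Direct NFS is not in use and the kernel NFS client is serving the I/O instead.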


Recommended OLTP Database Layout on ZFS Storage

Oracle Files    Record Size   Sync Write Bias   Read Cache                 Compression   Example Share Name   Storage Profile
Datafiles       32K           latency           all data and metadata      LZ4           data                 Mirrored (NSPF)
Temp            128K          latency           do not use cache devices   LZ4           temp                 Mirrored (NSPF)
Archive Logs    1M            throughput        do not use cache devices   LZ4           archive              Mirrored (NSPF)
RMAN Backup     1M            throughput        do not use cache devices   LZ4           backup               Mirrored (NSPF)

Source: Oracle 

Again, let's look at the rest of the recommendations, mapping the different database files to their recommended locations and record sizes. Data files, temp files, and archive logs can be placed on ZFS shares with record sizes of 32K, 128K, and 1M respectively.  Similarly, it is best practice to place online redo logs and control files in the default Fast Recovery Area (FRA) on the preconfigured Exadata ASM diskgroup.


Using Oracle ZFS Storage Appliance for OLAP Workload

OLAP database workloads are mostly SQL queries that go after large chunks of tables, or batch processes loading bulk data overnight. You might be loading terabytes of data into your data warehouse system, but those loads are not random DML statements; the critical workload is the user queries supporting DSS. This nature of DSS can make it a good fit for database compression, including HCC. When it comes to using database compression, one must understand that HCC requires some maintenance work to keep the data HCC-compressed, and it does not support OLTP compression. You should only use compression if you are cloning or duplicating a database environment that already uses database compression, so the clone is a true representation of the source environment. Additionally, you have the option of either the traditional or the bigfile tablespace format, but it is recommended to use bigfile tablespaces to achieve better performance, flexibility, and capacity. Finally, remember to use Oracle Direct NFS; it is not recommended to use the Oracle Intelligent Storage Protocol (OISP) for OLAP workloads.


Recommended OLAP Database Layout on ZFS Storage

Oracle Files    Record Size   Sync Write Bias   Read Cache                 Compression   Example Share Name
Datafiles       128K          throughput        do not use cache devices   LZ4           data
Temp            128K          latency           do not use cache devices   LZ4           temp
Archive Logs    1M            throughput        do not use cache devices   LZ4           archive
RMAN Backup     1M            throughput        do not use cache devices   LZ4           backup

Source: Oracle

Now let's look at the rest of the recommendations, mapping the different database files to their recommended locations and record sizes. Data files, temp files, and archive logs can be placed on ZFS shares with record sizes of 128K, 128K, and 1M respectively.  Even though you have the option to place redo logs and control files on a ZFS share, it is best practice to place online redo logs and control files in the default Fast Recovery Area (FRA) on the preconfigured Exadata ASM diskgroup. It is hard to justify placing these two types of sensitive database files on a ZFS share: they do not take much space, and they require very low latency.

What is so special about Oracle ZFS Storage Appliance?

  1. Oracle ZFS provides extreme network bandwidth over 10G and InfiniBand connections with built-in network redundancy.
  2. Oracle ZFS ensures data integrity using features like copy-on-write and metadata checksumming, detecting silent data corruption and correcting errors before it is too late.
  3. Oracle ZFS uses the Oracle Intelligent Storage Protocol (OISP) to uniquely identify different types of database I/O requests, helping you effectively address performance bottlenecks using the built-in Analytics.
  4. Oracle ZFS is extremely easy to manage using the native web management interface and provides integration options with Oracle Enterprise Manager.
  5. Oracle ZFS supports a full range of compression options and is tightly integrated with the Oracle database to provide Hybrid Columnar Compression (HCC), which is only available on Oracle storage.
  6. Oracle ZFS is tightly integrated with the Oracle database, which helps achieve extreme backup rates of up to 42 TB/hr and restore rates of up to 55 TB/hr.
  7. Oracle ZFS storage supports a highly redundant and scalable InfiniBand architecture which can be seamlessly integrated with an Oracle Exadata machine to provide a cost-effective storage option.
  8. The Oracle ZFS appliance is integrated with Oracle RMAN to provide up to 2000 concurrent threads evenly distributed across many channels spread across multiple controllers.
  9. Oracle ZFS uses Oracle Direct NFS to reduce CPU and memory overhead by bypassing the operating system and writing buffers directly to user space.
  10. Oracle ZFS supports a 1 MB record size to reduce the number of IOPS required from disk, preserve the I/O size from RMAN buffers to storage, and improve the performance of large block sequential operations.

Isolate your Exadata Network with InfiniBand Partitioning

An efficient system provides a balance of CPU performance, memory bandwidth, and I/O performance, and many professionals will agree that Oracle Engineered Systems are good examples of efficient systems. The InfiniBand network plays a key role in providing that balance, delivering close to 32 gigabits per second with very low latency. It provides a high-speed, high-efficiency network for Oracle's Engineered Systems, namely Exadata, Exalogic, Big Data Appliance, SuperCluster, the ZFS Storage Appliance, and so on.

If you are planning to consolidate systems on an Exadata machine, you might be required to implement network isolation across the multiple environments within the consolidated system for security or compliance reasons. This is accomplished with custom InfiniBand partitioning, using dedicated partition keys. InfiniBand partitioning provides isolation across the different RAC clusters so that the network traffic of one RAC cluster is not accessible to another. Note that you can implement similar functionality for Ethernet networks with VLAN tagging.

An InfiniBand partition is a group of InfiniBand members that are only allowed to communicate with each other. A unique partition key identifies each partition, which is managed by the master subnet manager. Members are assigned to these custom partitions and can only communicate with other members of the same partition. For example, if you implement InfiniBand partitioning with OVM Exadata clusters, each cluster is assigned one dedicated partition for Clusterware communication and one partition for communication with the storage cells. The nodes of one RAC cluster cannot talk to the nodes of another RAC cluster that belongs to a different partition, giving you network isolation within a single Exadata machine.
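On the Oracle InfiniBand switch hosting the master subnet manager, the active partition configuration can be inspected with the smpartition command (a sketch; run as root on the switch, and expect output formats to vary by switch firmware):

```
# show the currently active partition configuration,
# including partition keys and their member ports
smpartition list active
```

Each partition key (pkey) listed should map to one of your isolated clusters or its storage communication path.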

Temporary Tablespace on Exadata Machine

As we all know, Oracle generates a lot of temporary data through operations like bitmap merges, hash joins, bitmap index creation, and sorts. This data persists only for the duration of a transaction or session and does not require media or instance recovery. Since an Oracle RAC environment shares the temporary tablespace between multiple instances, high concurrency of space management operations is critical. Please use the following guidelines for creating temporary tablespaces on an Exadata machine:

  1. A BigFile Tablespace
  2. Located in DATA or RECO, whichever one is not HIGH redundancy
  3. Sized 32GB Initially
  4. Configured with AutoExtend on at 4GB
  5. Configured with a max size defined to limit out-of-control growth.
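Following these guidelines, a temporary tablespace could be created as in the sketch below; the tablespace name, the +DATA diskgroup, and the 1 TB cap are illustrative values, not Oracle-mandated ones:

```sql
-- bigfile temp tablespace: 32G initial, autoextend in 4G steps, capped at 1T
CREATE BIGFILE TEMPORARY TABLESPACE temp
  TEMPFILE '+DATA' SIZE 32G
  AUTOEXTEND ON NEXT 4G MAXSIZE 1T;
```

Place the tempfile in whichever of DATA or RECO is not HIGH redundancy, per guideline 2 above.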


Upgrade Exadata Machine Clusterware to

Hello all! It's time to upgrade the Exadata machine infrastructure and databases to Oracle 12c Release 2. There are some software and patch requirements you need to meet before you can upgrade your Exadata machine to Oracle 12c Release 2. You may be able to perform most of the upgrade in a rolling manner, but the database upgrade will require some downtime. Ideally, you would upgrade your infrastructure to 12c Release 2 and install the Oracle 12c Release 2 database software in a separate home without impacting the existing databases running on the target Exadata machine, then later upgrade your databases based on availability and downtime requirements. In any case, I strongly suggest making a foolproof recovery plan for your Exadata machine and being prepared for a complete Exadata machine recovery. If possible, start with a non-production environment, and try to leverage the DR Exadata machine for production environments. This blog provides an overview of the process of upgrading your Exadata machine to 12c Release 2.

Caution: I consider this a high-risk activity; if you don't have experience performing this kind of upgrade, hire someone who does.

Software Requirements

Your current Exadata machine configuration should meet the following requirements for the Oracle 12c Release 2 upgrade:

  • Current Oracle Database and Grid Infrastructure version must be,, or
  • Upgrades from or directly to are not supported.
  • Exadata Storage Server version will be required for full Exadata functionality, including 'Smart Scan offloaded filtering', 'storage indexes', and 'I/O Resource Management' (IORM).
  • When available: GI PSU or later (which includes the DB PSU). To be applied during the upgrade process, before running on the Grid Infrastructure home, or after installing the new Database home, before upgrading the database.
  • Fix for bug 17617807 and bug 21255373 is required to successfully upgrade to from, and The fix is already contained in and
  • Fix for bug 25556203 is required on top of the Grid Infrastructure home before running


Create temporary directory to hold all the software and patches

dcli -l oracle -g ~/dbs_group mkdir /u01/app/oracle/patchdepot

Download the following software and patches from E-Delivery:

  • Oracle Database 12c Release 2 Grid Infrastructure ( for Linux x86-64
  • Oracle Database 12c Release ( for Linux x86-64
  • Exadata Storage Server Software
  • Patch 6880880 - OPatch latest update for 11.2, 12.1 and 12.2

Create the new Grid Infrastructure (GI_HOME) directory

(root)# dcli -g ~/dbs_group -l root mkdir -p /u01/app/
 (root)# dcli -g ~/dbs_group -l root chown oracle:oinstall /u01/app/

Install the Grid software using the zip image; runInstaller is no longer supported:

(oracle)$ unzip -q /u01/app/oracle/patchdepot/ -d /u01/app/

Obtain and apply the latest OPatch to the target Grid Infrastructure home:

[oracle@exadb1 ~]$ unzip -oq -d /u01/app/ /u01/upgrade/

[oracle@exadb1 ~]$ cd /u01/app/

[oracle@exadb1 OPatch]$ ./opatch version

Validate Readiness for Oracle Clusterware upgrade using CVU

Before executing CVU as the owner of the Grid Infrastructure home, unset ORACLE_HOME, ORACLE_BASE, and ORACLE_SID.


(oracle)$ /u01/app/ stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/ -dest_crshome /u01/app/ -dest_version -fixupnoexec -verbose

Verifying User Equivalence ...PASSED

Verifying /dev/shm mounted as temporary file system ...PASSED

Verifying File system mount options for path /var ...PASSED

Verifying zeroconf check ...PASSED

Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was successful.

CVU operation performed:      stage -pre crsinst

Date:                         Apr 20, 2018 8:15:00 AM

CVU home:                     /u01/app/

Back up the /u01 filesystem

tar -cvf oracle_inventory_apr20.tar /u01/app/oraInventory

tar -cvf oracle_gi_home_apr20.tar /u01/app/

Note: this task needs to be performed on all three nodes as root.

Run Exachk report 

./exachk -a

Verify no active rebalance is running

SYS@+ASM1> select count(*) from gv$asm_operation;




 Upgrade Clusterware

Upgrade Grid Infrastructure to (Will also apply latest CPU)

(oracle)$ cd /u01/app/

(oracle)$ -applyPSU /u01/upgrade/26737266

Launching Oracle Grid Infrastructure Setup Wizard...

Upgrade wizard steps 

On the "Select Configuration Options" screen, select "Upgrade Oracle Grid Infrastructure", and then click Next.

On "Grid Infrastructure Node Selection" screen, verify all database nodes are shown and selected, and then click Next.

On the "Specify Management Options" screen, specify the Enterprise Manager details if choosing Cloud Control registration.

On the "Privileged Operating System Groups" screen, verify the group names and change them if desired, then click Next. If presented with warnings INS-41808, INS-41809, INS-41812 ("OSDBA for ASM, OSOPER for ASM, and OSASM are the same group. Are you sure you want to continue?"), click Yes.

On the "Specify Installation Location" screen, choose the "Oracle Base" and change the software location if needed. The GI_HOME directory cannot be changed; it shows the software location /u01/app/ from which you started the wizard.

If prompted "The Installer has detected that the specified Oracle Base location is not empty on this and remote servers! Are you sure you want to continue?", click Yes.

On the "Root script execution" screen, do not check the box; keep root script execution under your own control.

On "Prerequisite Checks" screen, resolve any failed checks or warnings before continuing.

Solaris only: review Document 2186095.1 for Oracle Solaris-specific guidelines on GI software installation prerequisite check failures.

On "Summary" screen, verify the plan and click 'Install' to start the installation (recommended to save a response file for the next time)

On the "Install Product" screen, monitor the install until you are requested to run

Before executing the last steps ( of the installation process, an additional step is required; execution will happen after the next steps.

When required relink oracle binary with RDS 

(oracle)$ dcli -l oracle -g ~/dbs_group /u01/app/

If the command does not return 'rds', relink as follows.

For Linux: as the owner of the Grid Infrastructure home, execute the following steps on all nodes before running

(oracle)$ dcli -l oracle -g ~/dbs_group ORACLE_HOME=/u01/app/ \ make -C /u01/app/ -f ipc_rds ioracle

Execute on each database server


Note: don't run this script on all nodes at once.

Verify cluster status

(root)# /u01/app/ check cluster -all
 CRS-4537: Cluster Ready Services is online
 CRS-4529: Cluster Synchronization Services is online
 CRS-4533: Event Manager is online
 CRS-4537: Cluster Ready Services is online
 CRS-4529: Cluster Synchronization Services is online
 CRS-4533: Event Manager is online

Afterwards, perform the post-upgrade checks, re-configure for Cloud Control if used, and clean up the old, unused home areas.


Disable Diagsnap for Exadata

NOTE: Due to unpublished bugs 24900613 25785073 and 25810099, Diagsnap should be disabled for Exadata.

(oracle)$ cd /u01/app/

(oracle)$ ./oclumon manage -disable diagsnap

Advance the COMPATIBLE.ASM diskgroup attribute
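The attribute can be advanced from SQL*Plus on an ASM instance. In the sketch below, the diskgroup name and the target version are placeholders for your environment; note that advancing compatible.asm cannot be rolled back, so only do this once you are sure you will not downgrade:

```sql
-- repeat for each diskgroup (e.g. DATA, RECO)
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '<new GI version>';
```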




De-Install GI HOME

Wait a few days before de-installing:

(oracle)$ cd $ORACLE_HOME/deinstall

(oracle)$ ./deinstall -checkonly

(oracle)$ ./deinstall

Change ORACLE_HOME to the previous grid home before de-installing it:

(root)# dcli -l root -g ~/dbs_group chown -R oracle:oinstall /u01/app/

(root)# dcli -l root -g ~/dbs_group chmod -R 755 /u01/app/

(oracle)$ ./deinstall -checkonly

(oracle)$ ./deinstall