Oracle Direct NFS (dNFS) is an NFS client that resides within the database kernel and should be enabled on Exadata for all direct database and RMAN workloads between Exadata and the Oracle ZFS Storage Appliance. With this feature enabled, you get increased bandwidth and reduced CPU overhead. No additional steps are required on the appliance side to use dNFS, although it is recommended to increase the number of NFS server threads from the default to 1000.
As per the Oracle documentation, using Oracle Direct NFS with Exadata provides the following benefits:
- Significantly reduces system CPU utilization by bypassing the operating system (OS) and caching data just once in user space with no second copy in kernel space
- Boosts parallel I/O performance by opening an individual TCP connection for each database process
- Distributes throughput across multiple network interfaces by alternating buffers to multiple IP addresses in a round-robin fashion
- Provides high availability (HA) by automatically redirecting failed I/O to an alternate address
In Oracle Database 12c, dNFS is already enabled by default. In 11g, Oracle Direct NFS may be enabled on a single database node with the following command:
$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on
Exadata dcli may be used to enable dNFS on all database nodes simultaneously:
$ dcli -l oracle -g /home/oracle/dbs_group make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on
Note: The database instance must be restarted after enabling Oracle Direct NFS.
You can confirm that dNFS is enabled by checking the database alert log for the following message:
“Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0”
You can also use the following SQL query to confirm dNFS activity:
SQL> select * from v$dnfs_servers;
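The alert-log check above is easy to script. The snippet below is a self-contained demonstration using a stand-in log file; in practice you would point the grep at your real alert log (for example under $ORACLE_BASE/diag/rdbms/<dbname>/<sid>/trace/ — the demo path is purely illustrative):

```shell
# Create a stand-in alert log containing the dNFS banner, then grep for it.
cat > /tmp/alert_demo.log <<'EOF'
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0
EOF

# Prints 1 when the Direct NFS banner is present in the log.
grep -c "Direct NFS ODM Library" /tmp/alert_demo.log
```

A count of zero against your real alert log means the instance did not start with the dNFS ODM library, so revisit the dnfs_on step and restart the instance.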
Even though Oracle offers free Exadata patching to its Exadata customers under Oracle Platinum Services, you might still end up applying patches to your Exadata machine yourself for many reasons. A compliance issue or a scheduling conflict may prevent you from using Oracle Platinum Services to patch your Exadata systems. Remember, Oracle needs a minimum of 8-12 weeks' notice ahead of the customer's desired patching window, which might not work for you. So if you are one of those lucky Exadata administrators planning to apply patches to your own Exadata systems, here are some guidelines for safely completing the patching task with minimum risk.
- You must carefully review the patch readme file and familiarize yourself with known issues and rollback options.
- Create a detailed workbook for patching the Exadata machine, including the rollback option.
- Find a test system in your organization that mimics the production system in terms of capacity and software versions.
- Run the Exachk utility before you start applying the patch to establish a baseline, and fix any major issues reported in the Exadata health check report.
- Reboot your Exadata Machine before you start applying the patch.
- Make sure you have enough free space on all the mounts affected by the patch.
- Back up everything, and I mean everything: all the databases and the storage mounts holding the software binaries.
- Apply the patch on the test system first and document each step in the workbook, so it can be used to deploy the patch to the rest of the Exadata systems.
- Run the Exachk utility after the patch is successfully applied and compare the report against the baseline Exachk report.
- Reboot the Exadata machine after deploying the patch to make sure there will be no issues with future reboots.
- Verify all the Exadata software and hardware components: InfiniBand switches, storage cells, and compute nodes.
- After a successful patching exercise on the test system, move on to applying the patch to the production systems.
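The baseline-and-compare steps above can be sketched as a small wrapper script. This is only an outline: the report file names are hypothetical placeholders (Exachk generates timestamped report names), and the -diff comparison option should be verified against the exachk/AHF version you run:

```shell
# Write an outline of the pre/post-patch health-check workflow to a script.
# Report names below are placeholders, not real generated file names.
cat > /tmp/patch_healthcheck.sh <<'EOF'
#!/bin/sh
./exachk                 # 1. baseline run before patching
# ... apply the patch per the workbook ...
./exachk                 # 2. run the same checks after patching
# 3. compare the two reports (verify the -diff option for your AHF version)
./exachk -diff exachk_baseline_report exachk_postpatch_report
EOF
chmod +x /tmp/patch_healthcheck.sh
```

Comparing the two reports side by side makes it much easier to spot checks that regressed during patching rather than eyeballing two long HTML reports.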
We have all worked with large temp tablespaces in our data warehouse databases. I personally have worked with a 10 TB temp tablespace for a 50 TB data warehouse running on an Exadata machine, which was required for large table joins and aggregation operations. Temp writes and temp reads occur when large joins or aggregation operations do not fit in memory and must spill to storage. Before Oracle Exadata Storage Server Software release 12.2.1.1.0, temp writes were not cached in flash cache: both temp writes and subsequent temp reads went to hard disk only. Starting with release 12.2.1.1.0, temp writes are sent to flash cache so that subsequent temp reads can be served from flash cache as well. This can drastically improve performance for queries that spill into the temp area; as per Oracle, certain queries can run up to four times faster.
Additionally, imagine an application that uses a lot of temp tables; such workloads can now run entirely from flash, which can improve their performance manyfold. The feature uses a threshold of 128 KB to decide whether to send a write request directly to disk or to the flash cache. Historically, large writes such as direct load writes, flashback database log writes, archived log writes, and incremental backup writes would bypass flash cache. This feature redirects those large writes into the flash cache, provided they do not disrupt the higher-priority OLTP or scan workloads. Such writes are later written back to the disks when the disks are less busy.
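To make the 128 KB threshold concrete, here is a toy shell sketch of the decision described above. This is purely illustrative: the real decision is made inside the Exadata Storage Server software, and the function name here is invented for the demo:

```shell
# Toy illustration of the 128 KB large-write threshold (not real cell logic).
classify_write() {
  threshold=$((128 * 1024))          # 131072 bytes
  if [ "$1" -gt "$threshold" ]; then
    echo "large write: flash cache candidate"
  else
    echo "small write: standard caching path"
  fi
}

classify_write 65536     # 64 KB buffer
classify_write 1048576   # 1 MB buffer
```

A 64 KB write stays on the standard path, while a 1 MB write is the kind of large write this feature can now absorb into flash when the cells are not busy with higher-priority I/O.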
- Write-back flash cache has to be enabled for this feature to work.
- If you are running Oracle Database 11g release 2 (11.2) or Oracle Database 12c release 1 (12.1), you need the patch for bug 24944847.
- This feature is supported on all Oracle Exadata hardware except for V2 and X2 storage servers.
- Flash caching of temp writes and large writes is not supported when flash compression is enabled.
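Since write-back flash cache is a hard requirement, it is worth confirming the cache mode on every cell before relying on this feature. The check below runs from a database node; the group file path /home/oracle/cell_group is an assumed convention, so substitute your own cell group file:

```shell
# Confirm every storage cell is running write-back flash cache.
# /home/oracle/cell_group is an assumed dcli group file listing the cells.
dcli -g /home/oracle/cell_group -l celladmin \
  cellcli -e "list cell attributes name, flashcachemode"
# Every cell should report WriteBack; a WriteThrough result means
# temp-write and large-write caching cannot engage on that cell.
```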
It is very important to configure proper monitoring and alerting for your Exadata machine to decrease the risk of a problem going undetected. Oracle's recommended best practice for monitoring an Oracle Exadata Database Machine is Oracle Enterprise Manager (OEM) together with the suite of OEM plug-ins developed for the Oracle Exadata Database Machine. Please reference My Oracle Support (MOS) Note 1110675.1 for details.
Additionally, Exadata Storage Servers can send alerts via email. Sending these messages helps ensure that a problem is detected and corrected promptly. First, use the following cellcli command to validate the email configuration by sending a test email:
alter cell validate mail;
The output will be similar to:
Cell slcc09cel01 successfully altered
If the output is not successful, configure a storage server to send email alerts using the following cellcli command (tailored to your environment):
ALTER CELL smtpServer='mailserver.maildomain.com', -
smtpFrom='Exadata cell', -
smtpPort='<port for mail server>', -