Acceptable Hidden Parameters on Exadata Machine

We have all seen hidden parameters used as a workaround for a specific problem; they should be removed once the system has been upgraded to a version that contains the fix for that problem. So what happens when you migrate a database to an Exadata machine, especially using physical migration methods? Most likely the parameters are not removed during the migration, even though the new version may already contain the fix. Verifying hidden database initialization parameter usage helps avoid keeping hidden parameters in place any longer than necessary. Use of hidden initialization parameters not recommended by Oracle development can introduce instability, performance problems, corruption, and crashes in your Exadata environment.

Please verify hidden initialization parameter usage in each ASM and database instance using the following SQL:

select name, value from v$parameter where substr(name,1,1)='_';

That being said, there are some acceptable hidden parameters on Exadata. Please review the list of acceptable hidden parameters based on their usage.

Generally Acceptable Hidden Parameters Table

  1. _file_size_increase_increment with a possible value of 2143289344
  2. _enable_NUMA_support, depending on database version
  3. _asm_resyncckpt with a value of 0 to turn off resync checkpointing
  4. _smm_auto_max_io_size with a value of 1024 to permit 1MB I/Os for hash joins that spill to disk
  5. _parallel_adaptive_max_users with a value of 2
  6. _assm_segment_repair_bg set to false as a workaround for bug 23734075
  7. _backup_disk_bufcnt set to 64 (only when ZFS-based backups are in use)
  8. _backup_disk_bufsz set to 1048576 (only when ZFS-based backups are in use)
  9. _backup_file_bufcnt set to 64 (only when ZFS-based backups are in use)
  10. _backup_file_bufsz set to 1048576 (only when ZFS-based backups are in use)
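To automate the check, here is a minimal sketch that filters rows from the v$parameter query against the list above. The sample rows are hypothetical; in practice you would fetch them with SQL*Plus or python-oracledb.

```python
# Acceptable hidden parameter names, taken from the list above.
ACCEPTABLE_HIDDEN = {
    "_file_size_increase_increment",
    "_enable_NUMA_support",
    "_asm_resyncckpt",
    "_smm_auto_max_io_size",
    "_parallel_adaptive_max_users",
    "_assm_segment_repair_bg",
    "_backup_disk_bufcnt",
    "_backup_disk_bufsz",
    "_backup_file_bufcnt",
    "_backup_file_bufsz",
}

def audit_hidden_params(rows):
    """Return hidden (name, value) pairs that are NOT on the acceptable list."""
    return [(name, value) for name, value in rows
            if name.startswith("_") and name not in ACCEPTABLE_HIDDEN]

# Hypothetical sample of v$parameter output from one instance.
sample = [("_smm_auto_max_io_size", "1024"), ("_some_old_workaround", "TRUE")]
print(audit_hidden_params(sample))
```

Anything the function returns should be traced back to the SR or bug it was set for and removed if the fix is already in place.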



How to Perform a Detailed Exadata Health Check

Exadata is a significant investment for any customer, and one should maximize that investment by configuring the Exadata machine per best practices and utilizing all the features of the engineered system. Oracle provides an array of tools for Exadata, but we see a gap between a standard Exadata configuration and a truly optimized Exadata machine. Exachk is a great tool for validating Exadata configuration against Oracle best practices, but it is designed as a standard tool for all Exadata machines. Exachk is not specific to a particular type of workload or application and does not investigate enhancement opportunities to achieve extreme performance from an Exadata machine.

That is why you should perform a detailed health check of your Exadata machine that goes above and beyond Exachk validation and Oracle Enterprise Manager monitoring capabilities. The goal of this health check is to maximize the Exadata investment and reduce the number of incidents that can impact the availability of critical applications. Here is the list of tasks to perform for a detailed Exadata health check:

  1. Review Exachk report to evaluate Exadata configuration, MAA Best practices, and database critical issues.
  2. Review various types of Exadata logs including Exawatcher, alert, trace, CRS, ASM, listener.
  3. Review Flash cache contents, verify smart flash log feature and check write-back cache functionality.
  4. Review Exadata feature usage like HCC compression, Smart Scan, offloading, and storage indexes.
  5. Review Maximum Availability Architecture, including backups of critical configuration files.
  6. Review and validate Oracle Enterprise Manager Configuration of Exadata plugin.
  7. Review resource utilization at storage & database level and provide recommendations.
  8. Review AWR reports for contention and slow running processes.
  9. Review database parameter settings as per Oracle best practices including hidden parameters.
  10. Review log retention policy to optimize storage utilization and maintain historical data for troubleshooting any future issues.
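As a small aid for the feature-usage review, here is a sketch of estimating smart scan offload efficiency from two cumulative v$sysstat statistics ("cell physical IO bytes eligible for predicate offload" and "cell physical IO interconnect bytes returned by smart scan"). The numbers below are hypothetical samples, not real measurements.

```python
def offload_efficiency(eligible_bytes, interconnect_bytes):
    """Fraction of smart-scan-eligible I/O that was filtered in the storage
    cells rather than shipped over the interconnect to the database nodes."""
    if eligible_bytes == 0:
        return 0.0  # nothing was eligible for offload
    return 1.0 - (interconnect_bytes / eligible_bytes)

# Hypothetical example: 10 TB eligible for predicate offload, 1 TB actually
# returned by smart scan over the interconnect.
eligible = 10 * 1024**4
returned = 1 * 1024**4
print(f"offload efficiency: {offload_efficiency(eligible, returned):.0%}")
```

A consistently low ratio for a reporting workload is a hint that queries are not offloading as expected and is worth a closer look during the health check.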


Benefits of Using Oracle Bare Metal Cloud

As many organizations look to the cloud as a way to improve agility and flexibility, as well as to try and cut down their infrastructure support and maintenance costs, they are introduced to new cloud terminology: “Bare Metal Cloud,” or “Dedicated Instances.”

Let’s start by describing Oracle Bare Metal Cloud Services: a dedicated physical server with no other tenant residing on it or running workloads on it. Bare Metal Cloud Services basically lets you lease or rent dedicated physical hardware for running isolated workloads at an optimal cost. Depending on the cloud vendor, billing can be metered (hourly) or non-metered (monthly fixed cost).

When compared to the traditional public cloud, a hypervisor-based cloud with many tenants per physical machine sharing hardware resources, Bare Metal Cloud Services offers dedicated physical hardware, providing isolation and performance comparable to on-premises hardware.


Flexibility is a key benefit of Oracle Bare Metal Cloud Services. It gives you complete control over cloud resources, so you can set them up and customize them based on your requirements. Basically, you have direct physical access to the resources, compared to typical cloud offerings where physical resources are hidden behind the hypervisor layer.

Bare Metal Cloud Services also allows a hypervisor on top of the dedicated physical resources, giving you the best of both worlds: you control the number of virtual machines and the workload on them. It is also important to understand that this flexibility comes at a price: it takes a little longer to provision bare metal resources, adding time and complexity to the provisioning process.

Given the added complexity, you might ask why you would opt for Bare Metal Cloud Services. It is the same reason customers opt for IaaS versus PaaS or SaaS cloud models: you have more control over your environment to install and configure your applications, and you start to lose that control as you climb the cloud stack from IaaS to PaaS to SaaS.

Bare Metal Cloud Services offers agility for fast provisioning and on-demand resources, as well as high flexibility to define your servers, network, storage and security based on your requirements. All this makes Bare Metal Cloud Services a great alternative to traditional cloud offerings.


Performance is a major concern for organizations when it comes to moving their workload to the public cloud. Migrating to a traditional cloud environment can be risky for some environments, because going from on-premises dedicated hardware to virtualized shared-cloud resources can introduce performance issues. Also, applications that require high memory and CPU sometimes do not fit well into the traditional cloud model. Bare Metal Cloud Services can offer memory, CPU, and storage allocations that the traditional shared-cloud service model cannot.

Though many public cloud vendors have not published concrete performance metrics, performance degradation can often occur due to the introduction of the hypervisor layer as well as the inherent performance issues of fully shared resources. Basically, the public cloud is a shared environment where multiple virtual machines fight for the same physical resources, so some performance degradation is to be expected. Therefore, if performance is key to your applications, then Bare Metal Cloud Services is probably the best option to run your application in the cloud.

Bare Metal Cloud Services lets you run your workload on dedicated physical servers without any noisy neighbors running their workloads on the same server. This also allows you to troubleshoot performance issues more easily, as you are the sole owner of the physical server and know exactly what workloads are running on it.

Security & Compliance

Like performance, security is a major concern for organizations considering a move to the public cloud. Cloud security is about the requirements and capabilities needed to provide layers of defense. This does not mean that Bare Metal Cloud Services is inherently more secure than a traditional public cloud, but since you have more control, you can install and configure additional layers of security.

Additionally, because Bare Metal Cloud Services is a single-tenant solution, it provides isolation, which can be an important compliance requirement for your organization. This makes it possible for many security-sensitive organizations to move their workloads to the public cloud while conforming to regulatory compliance requirements.

Furthermore, some software vendors do not support or accept licensing on virtualized hardware due to soft partitioning, because it is hard to determine the actual number of required software licenses for a virtualized server in the cloud. In this scenario, Bare Metal Cloud Services can be a viable public cloud option to satisfy the licensing requirements of an application or software vendor.

Exadata Patching Rolling vs Non-Rolling

Oracle Exadata patches can be applied in a rolling manner, while the databases remain online and available, or in a non-rolling manner, where the databases are shut down. The number of patches to apply is the same whether you choose rolling or non-rolling, but rolling patching usually takes longer to complete. Let me briefly describe the two patching methods:

  • Rolling
  • Non-rolling

The rolling patching method does not require application downtime. Most components can be patched while the databases are running, requiring little or no downtime compared to non-rolling patching, but the overall length of time to complete rolling patches is longer. It is best to use rolling patching if your disk groups are configured with high redundancy.

Non-rolling patching can be done in minimal time if planned properly. Non-rolling patches are applied while the database is offline and unavailable. This typically means that all applications serviced by the Exadata Database Machine are moved to a standby system or are unavailable for the duration of the update. The non-rolling method is typically faster than the rolling method in overall maintenance time because multiple systems are updated in parallel, but there may be a longer outage for the application.

If multiple components will be updated in the same maintenance window, it is possible to use a combination of rolling and non-rolling methods to achieve the desired balance of application downtime and maintenance time. One typical combination used when an application does not handle connection disruption efficiently is to apply Exadata Storage Server patches in a rolling manner, and Grid Infrastructure and Database, and Exadata Database Server patches in a non-rolling manner.

Whichever method you choose to patch your Exadata machine, make sure to follow the guidelines below to minimize your risk.

  1. Create a detailed patching plan
  2. Patch your non-production systems first
  3. Make sure to back up databases, Oracle homes, and configuration files
  4. Use the same patching method for all environments


How to Safely Reboot Exadata Machine

There are times you need to reboot your Exadata machine. Since an Exadata machine has many components, such as compute nodes, storage cells, and network devices, rebooting it is a little different from rebooting any other database host. It is important to follow the sequence presented in this blog and to have proper approvals before rebooting the Exadata machine. It is also important to understand that you will lose all of your storage indexes and flash cache contents, and it will take some time for the Exadata machine to rebuild them once it is back online. If your application is heavily dependent on storage indexes and flash cache, you might experience performance issues for the first few hours after the reboot.


  • Take a snapshot of all the services running on the Exadata machine
  • Review the /etc/oratab file and make sure all the instances are defined properly
  • Review storage cell services
  • Check ASM disk group and disk statuses
  • Make sure you have an approved change request to reboot the Exadata machine
  • Alert owners and users of the Exadata machine about the reboot beforehand
  • Blackout OEM monitoring for the target Exadata machine
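For the oratab review above, a minimal parsing sketch; the sample entries are hypothetical stand-ins for the real /etc/oratab contents.

```python
def parse_oratab(text):
    """Parse oratab lines of the form SID:ORACLE_HOME:AUTOSTART_FLAG."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        sid, home, flag = line.split(":")[:3]
        entries.append({"sid": sid, "home": home, "autostart": flag == "Y"})
    return entries

# Hypothetical sample; in practice read open("/etc/oratab").read().
sample = """\
# oratab sample
+ASM1:/u01/app/19.0.0/grid:N
PROD1:/u01/app/oracle/product/19.0.0/dbhome_1:N
"""
print(parse_oratab(sample))
```

Comparing this parsed list against the instances actually running makes it easy to spot anything missing from the file before the reboot.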

Stop all database instances

  • Stop all the databases using the srvctl command
srvctl stop database -d <XYZ>

Stop CRS service

  • Stop Oracle Clusterware (CRS) using the following command as the root user
GRID_HOME/grid/bin/crsctl stop cluster -all
  • Check that all services stopped gracefully; otherwise use the -f option to stop them forcefully
GRID_HOME/grid/bin/crsctl stop cluster -all -f

Reboot Storage Cells

  • Once CRS and the databases are down, you can reboot the storage cells using the following command as the root user from any compute node
dcli -l root -g cell_group shutdown -r -y now

Note: The above command requires root user equivalence to be set up between all nodes and the cell_group file to be created; otherwise, log in to each storage cell as the root user and execute the following command

shutdown -r -y now
  • Verify all the storage cells are back up successfully using the following commands
dcli -l root -g cell_group uptime

dcli -l root -g cell_group "su - celladmin -c \"cellcli -e list cell detail\"" | grep Status

Reboot Compute Nodes

  • Reboot the compute nodes using the following command as the root user from any compute node
dcli -l root -g dbs_group shutdown -r -y now

Note: The above command requires root user equivalence to be set up between all nodes and the dbs_group file to be created; otherwise, log in to each compute node as the root user and execute the following command

shutdown -r -y now
  • Verify all the compute nodes are back up successfully using the following command
dcli -l root -g dbs_group uptime

Verify CRS and databases

  • Verify CRS services using the following command as the root user
GRID_HOME/grid/bin/crsctl stat res -t
  • Verify all the database instances came back online
dcli -l root -g dbs_group "ps -ef | grep smon"


Oracle ZFS Storage Pool Data Profile Best Practices

Hello everyone, recently I was part of an Oracle ZFS Storage pool design discussion, mostly focused on data profile types and Oracle best practices. Oracle recommends the mirrored data profile for many ZFS Storage use cases, including traditional RMAN backups and image backups, for the best performance and availability. I strongly recommend using a mirrored pool for production systems. You can use double parity, triple parity, or wide stripes for non-production systems if performance is not a major concern. Believing a picture says a thousand words, please see the chart below representing availability, performance, and capacity details for a 70 GB storage pool. As you can see, the striped data profile provides the most capacity without providing availability, which can lead to data loss, while the mirrored data profile provides both performance and availability.

Note: The figure above is based on a 70 GB storage pool capacity.

Please see below a detailed description of all available data profiles:

Double parity: Each array stripe contains two parity disks, yielding high availability while increasing capacity over mirrored configurations. Double parity striping is recommended for workloads requiring little or no random access, such as backup/restore.

Mirrored: Duplicate copies of data yield fast and reliable storage by dividing access and redundancy evenly between two sets of disks. Mirroring is intended for workloads favoring high performance and availability over capacity, such as databases. When storage space is ample, consider triple mirroring for increased throughput and data protection at the cost of one-third total capacity.

Single parity, narrow stripes: Each narrow stripe assigns one parity disk for each set of three data disks, offering better random read performance than double parity stripes and larger capacity than mirrored configurations. Narrow stripes can be effective for configurations that are neither heavily random nor heavily sequential as it offers a compromise between the two access patterns.

Striped: Data is distributed evenly across all disks without redundancy, maximizing performance and capacity, but providing no protection from disk failure whatsoever. Striping is recommended only for workloads in which data loss is an acceptable tradeoff for marginal gains in throughput and storage space.

Triple mirrored: Three redundant copies of data yield a very fast and highly reliable storage system. Triple mirroring is recommended for workloads requiring both maximum performance and availability, such as critical databases. Compared to standard mirroring, triple mirrored storage offers increased throughput and an added level of protection against disk failure at the expense of capacity.

Triple parity, wide stripes: Each wide stripe has three disks for parity and allocates more data disks to maximize capacity. Triple parity is not generally recommended due to its limiting effect on I/O operations and low random access performance; however, these effects can be mitigated with cache.
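The capacity trade-offs above can be summarized numerically. Here is a sketch applied to the 70 GB pool from the chart, assuming typical stripe widths for the parity profiles (the actual ratios vary with disk count and layout):

```python
RAW_GB = 70  # raw pool capacity from the chart above

# Usable-capacity fraction per data profile. Parity fractions assume the
# stripe widths noted in the comments; these are illustrative assumptions.
profiles = {
    "striped":                      1.0,      # no redundancy at all
    "mirrored":                     1 / 2,    # two copies of every block
    "triple mirrored":              1 / 3,    # three copies of every block
    "single parity, narrow stripe": 3 / 4,    # 3 data disks + 1 parity disk
    "double parity":                8 / 10,   # assuming 8 data + 2 parity
    "triple parity, wide stripe":   11 / 14,  # assuming 11 data + 3 parity
}

for name, fraction in profiles.items():
    print(f"{name:30s} ~{RAW_GB * fraction:5.1f} GB usable")
```

The numbers make the trade-off concrete: striping keeps all 70 GB but tolerates zero disk failures, while mirroring halves usable capacity in exchange for performance and availability.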


EXADATA SL6 – Exadata with SPARC Processors

Oracle now offers Exadata with the SPARC M7, combining all the benefits of Exadata with ultra-fast SPARC processors running on Linux. This is good news for professionals and organizations who prefer the Linux operating system. I hope Oracle will continue to expand this offering to other hardware and engineered systems.

This new addition to the Exadata family is called SL6 (SPARC Linux). It is nearly identical to existing Exadata machines except that it uses Oracle SPARC T7-2 database servers. Even though the database servers are based on SPARC processors, Exadata SL6 runs the exact same Linux operating system as x86-based Exadata systems.

As some of you already know, SPARC is not just a processor; it is the world’s most advanced processor for running Oracle databases, and it uses a revolutionary technology called Software in Silicon. Software in Silicon enables databases to run faster with unprecedented security and reliability using three unique technologies:

  1. SQL in Silicon: incorporates 32 on-chip Data Analytics Accelerator (DAX) engines that are specifically designed to speed up analytic queries
  2. Capacity in Silicon: uses accelerators to offload in-memory query processing and perform real-time data decompression
  3. Security in Silicon: continuously performs validity checks on every memory reference made by the SPARC M7 processor without incurring performance overhead

Exadata SL6, running 32 cores at 4.1 GHz, will let you run bigger workloads on a smaller configuration, saving you money on software licensing and maintenance costs. Additionally, as per Oracle, “Exadata SL6 also delivers 20-30% more IOs than an equivalent Exadata X6-2 configuration and hence further lowers the total cost of ownership of the solution”.


Accelerate OLTP workloads on Exadata with Smart Fusion Block Transfer

If you have an OLTP application running on Exadata that frequently updates or adds rows to tables across multiple database blocks, you can take advantage of the Smart Fusion Block Transfer capability, which uniquely improves the performance of a RAC configuration by eliminating the impact of redo log write latency. In particular, DML statements running from multiple instances can lead to hot blocks being transferred between Oracle RAC nodes. This feature transfers hot blocks as soon as the I/O to the redo log is issued at the sending node, without waiting for it to complete. As per Oracle, “It has been observed that Smart Block Transfer increases throughput (about 40% higher) and decreases response times (about 33% less) for communication intensive workloads”.

Without Exadata’s Smart Fusion Block Transfer feature, a hot block can be transferred from a sending node to a receiving node only after the sending node has made the changes in its redo log buffer durable in its redo log. With Smart Fusion Block Transfer, this redo log write latency at the sending node is eliminated. So if you have an OLTP workload where hot blocks are updated frequently across multiple RAC nodes, you should look into enabling this feature. It is disabled by default.

To enable Smart Fusion Block Transfer:

  • Set the hidden static parameter “_cache_fusion_pipelined_updates” to TRUE on all Oracle RAC nodes. Because this is a static parameter, you need to restart your database for the change to take effect.
  • Set the “exafusion_enabled” parameter to 1 on all Oracle RAC instances.
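A minimal sketch of the corresponding ALTER SYSTEM statements, built as strings so the scopes are easy to double-check before running them in SQL*Plus. Writing both to the SPFILE for all instances is an assumption here; verify the scope against your version’s documentation, and restart the instances afterwards.

```python
def enable_smart_fusion_statements():
    """Return the ALTER SYSTEM statements implied by the steps above."""
    return [
        # Hidden static parameter: SPFILE scope, takes effect after restart.
        "ALTER SYSTEM SET \"_cache_fusion_pipelined_updates\" = TRUE SCOPE=SPFILE SID='*';",
        # Exafusion parameter, set the same way for all RAC instances.
        "ALTER SYSTEM SET exafusion_enabled = 1 SCOPE=SPFILE SID='*';",
    ]

for stmt in enable_smart_fusion_statements():
    print(stmt)
```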

Caution:

This feature is only available on the Linux operating system of the Exadata machine; it is not supported on SPARC or non-engineered systems. This feature also requires Oracle Database 12c Release 1 Bundle Patch 11 (BP11). Enabling this feature on unsupported versions can prevent Oracle instances from starting.