Benefits of Using Oracle Bare Metal Cloud

As many organizations look to the cloud as a way to improve agility and flexibility, as well as to cut down their infrastructure support and maintenance costs, they are introduced to new cloud terminology: “Bare Metal Cloud,” or “Dedicated Instances.”

Let’s start by describing Oracle Bare Metal Cloud Services: dedicated physical hardware, with no other tenant residing on it or running workloads on it. Bare Metal Cloud Services essentially lets you lease or rent dedicated physical hardware for running isolated workloads at an optimal cost. Depending on the cloud vendor, billing can be metered (hourly) or non-metered (monthly fixed cost).

Compared to the traditional public cloud, a hypervisor-based cloud with many tenants per physical machine sharing hardware resources, Bare Metal Cloud Services provides dedicated physical hardware, with isolation and performance comparable to on-premises hardware.


Flexibility

Flexibility is a key benefit of Oracle Bare Metal Cloud Services. It gives you complete control over cloud resources, so you can set up and customize them based on your requirements. In effect, you have direct access to the physical resources, unlike typical cloud offerings where the physical resources are hidden behind the hypervisor layer.

Bare Metal Cloud Services also allows you to run a hypervisor on top of the dedicated physical resources, giving you the best of both worlds: you control the number of virtual machines and the workload on them. It is also important to understand that this flexibility comes at a price: provisioning cloud resources takes a little longer, adding time and complexity to the provisioning process.

Given the added complexity, you might ask why you would opt for Bare Metal Cloud Services. It’s the same reason customers opt for IaaS over PaaS or SaaS cloud models: you have more control over your environment to install and configure your applications, and you progressively lose that control as you climb the cloud stack from IaaS to PaaS to SaaS.

Bare Metal Cloud Services offers agility for fast provisioning and on-demand resources, as well as high flexibility to define your servers, network, storage and security based on your requirements. All this makes Bare Metal Cloud Services a great alternative to traditional cloud offerings.


Performance

Performance is a major concern for organizations when it comes to moving their workloads to the public cloud. Migrating to a traditional cloud environment can be risky for some environments, because going from on-premises dedicated hardware to virtualized, shared cloud resources can introduce performance issues. Also, applications that require a lot of memory and CPU sometimes do not fit well into the traditional cloud model. Bare Metal Cloud Services can offer memory, CPU, and storage allocations that the traditional shared-cloud service model cannot.

Though many public cloud vendors have not published concrete performance metrics, performance degradation can often occur due to the introduction of the hypervisor layer, as well as the inherent contention of fully shared resources. Basically, the public cloud is a shared environment where multiple virtual machines fight for the same physical resources, so some performance degradation is to be expected. Therefore, if performance is key for your applications, Bare Metal Cloud Services is probably the best option for running them in the cloud.

Bare Metal Cloud Services lets you run your workload on dedicated physical servers without any noisy neighbors running their workloads on the same server. This also allows you to troubleshoot performance issues more easily: as the sole owner of the physical server, you know exactly what other workloads are running on it.

Security & Compliance

Like performance, security is a major concern for organizations considering a move to the public cloud. Cloud security is about meeting your security requirements through layered controls. This does not mean that Bare Metal Cloud Services is inherently more secure than a traditional public cloud, but since you have more control, you can install and configure additional layers of security.

Additionally, because Bare Metal Cloud Services is a single-tenant solution, it provides isolation, which can be an important compliance requirement for your organization. This makes it possible for many security-sensitive organizations to move their workloads to the public cloud while still conforming to regulatory compliance requirements.

Furthermore, some software vendors do not support or accept licensing on virtualized hardware, because soft partitioning makes it hard to determine the actual number of software licenses required for any given virtualized server in the cloud. In this scenario, Bare Metal Cloud Services can be a viable public cloud option for satisfying the licensing requirements of such applications or software vendors.

Oracle ZFS Storage Pool Data Profile Best Practices

Hello everyone, recently I was part of an Oracle ZFS Storage pool design discussion, mostly focused on data profile types and Oracle best practices. Oracle recommends the mirrored data profile for many ZFS Storage use cases, including RMAN traditional backups and image backups, for the best performance and availability. I strongly recommend using mirrored pools for production systems. Additionally, you can use double parity, triple parity, or wide stripes for non-production systems if performance is not a major concern. Believing a picture says a thousand words, please see the chart below, representing the availability, performance, and capacity details of a 70 GB storage pool. As you can see from the chart, the striped data profile provides the most capacity without providing availability, which can lead to data loss. Additionally, you can see that the mirrored data profile provides both performance and availability.

Note: The above figure is based on a 70 GB storage pool capacity.

Below is a detailed description of all the available data profiles:

Double parity: Each array stripe contains two parity disks, yielding high availability while increasing capacity over mirrored configurations. Double parity striping is recommended for workloads requiring little or no random access, such as backup/restore.

Mirrored: Duplicate copies of data yield fast and reliable storage by dividing access and redundancy evenly between two sets of disks. Mirroring is intended for workloads favoring high performance and availability over capacity, such as databases. When storage space is ample, consider triple mirroring for increased throughput and data protection at the cost of one-third total capacity.

Single parity, narrow stripes: Each narrow stripe assigns one parity disk for each set of three data disks, offering better random read performance than double parity stripes and larger capacity than mirrored configurations. Narrow stripes can be effective for configurations that are neither heavily random nor heavily sequential, as they offer a compromise between the two access patterns.

Striped: Data is distributed evenly across all disks without redundancy, maximizing performance and capacity, but providing no protection from disk failure whatsoever. Striping is recommended only for workloads in which data loss is an acceptable tradeoff for marginal gains in throughput and storage space.

Triple mirrored: Three redundant copies of data yield a very fast and highly reliable storage system. Triple mirroring is recommended for workloads requiring both maximum performance and availability, such as critical databases. Compared to standard mirroring, triple mirrored storage offers increased throughput and an added level of protection against disk failure at the expense of capacity.

Triple parity, wide stripes: Each wide stripe has three disks for parity and allocates more data disks to maximize capacity. Triple parity is not generally recommended due to its limiting factor on I/O operations and low random access performance; however, these effects can be mitigated with cache.
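The capacity tradeoffs in the chart can be sanity-checked with rough arithmetic. The sketch below assumes a hypothetical 70 GB pool built from fourteen 5 GB disks laid out as a single stripe per profile; real appliance layouts reserve additional space, so treat the numbers as illustrative only.

```python
# Rough usable-capacity math for a hypothetical 70 GB raw pool made of
# fourteen 5 GB disks. Disk count/size and single-stripe layouts are
# illustrative assumptions, not the appliance's actual allocation logic.
DISKS, DISK_GB = 14, 5
RAW = DISKS * DISK_GB  # 70 GB raw

profiles = {
    "striped": RAW,                           # no redundancy: all capacity usable
    "mirrored": RAW // 2,                     # two copies: half usable
    "triple mirrored": RAW // 3,              # three copies: one third usable
    "double parity": (DISKS - 2) * DISK_GB,   # two parity disks per stripe
    "triple parity": (DISKS - 3) * DISK_GB,   # three parity disks per stripe
}

for name, gb in profiles.items():
    print(f"{name:16s} ~{gb} GB usable")
```

Striping wins on raw capacity, mirroring trades half of it for redundancy and speed, and the parity profiles sit in between.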


Using Oracle Direct NFS with Exadata Machine

Oracle Direct NFS (dNFS) is an NFS client that resides within the database kernel and should be enabled on Exadata for all direct database and RMAN workloads between Exadata and the Oracle ZFS Storage Appliance. With this feature enabled, you get increased bandwidth and reduced CPU overhead. No additional steps are required to enable dNFS, although it is recommended to increase the number of NFS server threads from the default to 1000.

As per the Oracle documentation, using Oracle Direct NFS with Exadata can provide the following benefits:

  • Significantly reduces system CPU utilization by bypassing the operating system (OS) and caching data just once in user space with no second copy in kernel space
  • Boosts parallel I/O performance by opening an individual TCP connection for each database process
  • Distributes throughput across multiple network interfaces by alternating buffers to multiple IP addresses in a round-robin fashion
  • Provides high availability (HA) by automatically redirecting failed I/O to an alternate address
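The round-robin distribution in the third bullet can be illustrated with a toy dispatcher (the IP addresses and buffer names below are made up; this is not Oracle's implementation):

```python
from itertools import cycle

# Toy illustration of spreading I/O buffers across multiple NFS server
# IP addresses in round-robin fashion, as dNFS does across interfaces.
paths = ["192.168.10.1", "192.168.10.2", "192.168.10.3"]
next_path = cycle(paths)

# Dispatch six buffers; each successive buffer goes to the next address,
# wrapping back to the first after the list is exhausted.
dispatch = [(f"buf{i}", next(next_path)) for i in range(6)]
for buf, ip in dispatch:
    print(buf, "->", ip)
```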

In Oracle Database 12c, dNFS is already enabled by default. In 11g, Oracle Direct NFS may be enabled on a single database node with the following command:

$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on

Exadata dcli may be used to enable dNFS on all of the database nodes simultaneously:

$ dcli -l oracle -g /home/oracle/dbs_group make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on

Note: The database instance must be restarted after enabling Oracle Direct NFS.

You can confirm that dNFS is enabled by checking the database alert log:

“Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0”

You can also use the following SQL query to confirm dNFS activity:

SQL> select * from v$dnfs_servers;


Patching Guidelines for Exadata Machine

Even though Oracle offers free Exadata patching to its Exadata customers under Oracle Platinum Services, you might still end up applying patches to your Exadata machine yourself for many reasons. A compliance issue or a scheduling problem may prevent you from using Oracle Platinum Services to patch your Exadata systems. Remember, Oracle needs a minimum of 8-12 weeks' prior notice before the date the customer wants to be patched, which might not work for you. So if you are one of those lucky Exadata machine admins planning to apply patches to your Exadata systems, here are some guidelines for safely completing the patching task with minimum risk.


  1. Carefully review the patch readme file and familiarize yourself with known issues and rollback options.
  2. Create a detailed workbook for patching the Exadata machine, including rollback options.
  3. Find a test system in your organization that mimics the production system in terms of capacity and software versions.
  4. Run the Exachk utility before you start applying the patch to establish a baseline. Additionally, fix any major issues you see in the Exadata Health Check report.
  5. Reboot your Exadata machine before you start applying the patch.
  6. Make sure you have enough free space on all the mounts affected by the patch.
  7. Back up everything, and I mean everything: all the databases and the storage mounts holding the software binaries.
  8. Apply the patch on the test system and document each step in the workbook you will use to deploy the patch to the rest of your Exadata systems.
  9. Run the Exachk utility after the successful patch application and compare the report against the baseline Exachk report.
  10. Reboot the Exadata machine after deploying the patch to make sure there will be no issues with future reboots.
  11. Verify all the Exadata software and hardware components: InfiniBand switches, storage cells, and compute nodes.
  12. After a successful patching exercise, move on to applying the patch to production systems.

Isolate your Exadata Network with InfiniBand Partitioning

An efficient system is one that provides a balance of CPU performance, memory bandwidth, and I/O performance. Many professionals will agree that Oracle Engineered Systems are good examples of efficient systems. The InfiniBand network plays a key role in providing that balance, delivering close to 32 gigabits per second with very low latency. It provides a high-speed, high-efficiency network for Oracle’s Engineered Systems, namely Exadata, Exalogic, Big Data Appliance, SuperCluster, and the ZFS Storage Appliance.

If you are planning to consolidate systems on an Exadata machine, you may be required to implement network isolation across the multiple environments within the consolidated system for security or compliance reasons. This is accomplished by using custom InfiniBand partitioning with dedicated partition keys. InfiniBand partitioning provides isolation across the different RAC clusters so that the network traffic of one RAC cluster is not accessible to another. Note that you can implement similar functionality for Ethernet networks with VLAN tagging.

An InfiniBand partition defines a group of InfiniBand members that are only allowed to communicate with each other. A unique partition key identifies and maintains each partition, managed by the master subnet manager. Members are assigned to these custom partitions and can only communicate with other members of the same partition. For example, if you implement InfiniBand partitioning with OVM Exadata clusters, each cluster is assigned one dedicated partition for Clusterware communication and one partition for communication with the storage cells. One RAC cluster cannot talk to the nodes of another RAC cluster, which belongs to a different partition, thus providing network isolation within a single Exadata machine.
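A toy model of that membership rule, with made-up partition key (pkey) values: InfiniBand also distinguishes full from limited partition members (storage cells are typically full members, database nodes limited members), which is how two clusters can share the storage partition without seeing each other's traffic.

```python
# Sketch of InfiniBand partition membership; pkey values and node names
# are hypothetical. Limited members of a partition can talk to full
# members but not to each other.
FULL, LIMITED = "full", "limited"

memberships = {
    "nodeA1": {0x8001: FULL, 0xA000: LIMITED},  # cluster A pkey + storage pkey
    "nodeA2": {0x8001: FULL, 0xA000: LIMITED},
    "nodeB1": {0x8002: FULL, 0xA000: LIMITED},  # cluster B pkey + storage pkey
    "cell01": {0xA000: FULL},                   # storage cell: full member
}

def can_communicate(a, b):
    """Two ports may talk only via a shared partition where at least one
    side is a full member."""
    shared = memberships[a].keys() & memberships[b].keys()
    return any(not (memberships[a][p] == LIMITED and memberships[b][p] == LIMITED)
               for p in shared)

print(can_communicate("nodeA1", "nodeA2"))  # True: same cluster partition
print(can_communicate("nodeA1", "nodeB1"))  # False: different clusters
```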

VLAN Tagging with Oracle Exadata Database Machine

A VLAN is an independent logical network created within a physical network; when it spans multiple switches, VLAN tagging is required. Basically, a VLAN ID is inserted into every network packet header so that switches can identify the proper VLAN and route each packet to the correct network interface and port. The network administrator can configure VLANs at the switch level, including specifying which ports belong to which VLAN.
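The tag itself is small: 802.1Q inserts a 4-byte field into the Ethernet header, carrying the 0x8100 tag protocol identifier plus three priority bits, a drop-eligible bit, and a 12-bit VLAN ID. A minimal sketch of constructing that field:

```python
import struct

# Build the 4-byte 802.1Q tag inserted into an Ethernet frame header:
# TPID (0x8100) followed by the tag control information (PCP/DEI/VID).
def vlan_tag(vid, pcp=0, dei=0):
    assert 0 <= vid < 4096  # VLAN ID is a 12-bit field
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)

tag = vlan_tag(100)
print(tag.hex())  # '81000064' -> TPID 0x8100, VLAN ID 100
```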

If there is a need for Exadata to access additional VLANs on the public network, such as for enabling network isolation, then 802.1Q-based VLAN tagging is the solution. By default, the Exadata switch is minimally configured, without VLAN tagging. If you want to use VLAN tagging, you need to plan for it and enable it after the initial deployment. This applies to both physical and Oracle VM deployments. Both compute nodes and storage nodes can be configured to use VLANs for the management network, ILOM, the client network, and the backup access network.

The overall configuration process can be divided into two phases. In the first phase, VLAN-tagged interfaces are created at the Linux operating system level with persistent configurations. Their respective default gateway IP addresses are also configured via iproute2 using unique tables. In the second phase, the database listeners and Clusterware are provisioned.

Other considerations

  • VLANs do not exist in InfiniBand. For equivalent functionality, see InfiniBand partitioning.
  • Network VLAN tagging is supported for Oracle Real Application Clusters on the public network.
  • If the backup network is on a tagged VLAN network, the client network must also be on a separate tagged VLAN network.
  • The backup and client networks can share the same network cables.
  • OEDA supports VLAN tagging for both physical and virtual deployments.
  • Client and backup VLAN networks must be bonded.
  • The admin network is never bonded.


Improve Temp Reads and Writes with Exadata Storage Server releases

We have all worked with large temp tablespaces in our data warehouse databases. I personally have worked with a 10 TB temp tablespace for a 50 TB data warehouse running on an Exadata machine, which was required for large table joins and aggregate operations. Temp writes and temp reads are used when large joins or aggregation operations don't fit in memory and must be spilled to storage. Before this Oracle Exadata Storage Server release, temp writes were not cached in flash cache: both temp writes and subsequent temp reads went to hard disk only. With this release, temp writes are sent to flash cache so that subsequent temp reads can be served from flash cache as well. This can drastically improve performance for queries that spill into the temp area. As per Oracle, certain queries can run up to four times faster.

Additionally, imagine an application using a lot of temp tables: they can now run entirely from flash. This feature can enhance performance for such applications manyfold. The feature uses a threshold of 128 KB to decide whether to send a request directly to disk or write it to flash cache. Therefore, direct load writes, flashback database log writes, archived log writes, and incremental backup writes bypass flash cache. Large writes are redirected into the flash cache, provided they do not disrupt the higher-priority OLTP or scan workloads. Such writes are later written back to the disks when the disks are less busy.
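Under one plausible reading of the rules above (the function and category names are illustrative, not Oracle internals), the routing decision looks roughly like this:

```python
# Simplified routing sketch: the listed bypass categories go straight to
# disk; large writes are flash candidates only while flash is not busy
# with higher-priority work. This is an interpretation, not Oracle's code.
THRESHOLD = 128 * 1024  # the 128 KB decision threshold described above

BYPASS_TYPES = {"direct load", "flashback log", "archived log",
                "incremental backup"}

def route_write(write_type, size_bytes, flash_busy=False):
    """Return where a write lands under this reading of the rules."""
    if write_type in BYPASS_TYPES:
        return "disk"            # these categories always skip flash cache
    if size_bytes >= THRESHOLD and flash_busy:
        return "disk"            # don't disrupt higher-priority flash work
    return "flash cache"         # temp spills and other cached writes

print(route_write("temp spill", 1024 * 1024))          # flash cache
print(route_write("incremental backup", 1024 * 1024))  # disk
```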


  • Write-back flash cache has to be enabled for this feature to work.
  • If you are on Oracle Database 11g release 2 (11.2) or Oracle Database 12c release 1 (12.1), you need the patches for bug 24944847.
  • This feature is supported on all Oracle Exadata hardware except the V2 and X2 storage servers.
  • Flash caching of temp writes and large writes is not supported when flash compression is enabled.


Exadata Storage Indexes Can Store up to 24 Columns

As we all know, Exadata storage indexes used to hold information for up to eight columns; now, with the latest Oracle Exadata Storage Server Software release, storage indexes have been enhanced to store column information for up to 24 columns.

It is important to understand that only the space to store column information for eight columns is guaranteed. For more than eight columns, space is shared between the column set membership summary and the column minimum/maximum summary. The type of workload determines whether the set membership summary gets stored in the storage index. I am hoping to test out this new feature shortly and will post the results for my readers.
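To see why the minimum/maximum summary matters, here is a toy model of how a storage index prunes I/O: each storage region records the min/max values of a column, and a region is skipped whenever a predicate's value cannot fall inside that range (the region names and ranges below are made up):

```python
# Toy model of the per-region min/max summary a storage index keeps.
# A region can be skipped entirely when the predicate value falls
# outside its recorded [min, max] range.
regions = [
    {"name": "region1", "min": 10, "max": 50},
    {"name": "region2", "min": 60, "max": 90},
    {"name": "region3", "min": 20, "max": 70},
]

def regions_to_scan(value):
    """Regions whose range could contain `value` for a `col = value` filter."""
    return [r["name"] for r in regions if r["min"] <= value <= r["max"]]

print(regions_to_scan(40))  # ['region1', 'region3'] -- region2 is pruned
```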