Running Oracle Real Application Clusters (RAC) in Public Clouds

Good news! Oracle now supports Oracle RAC on third-party clouds. One of the first vendors to take advantage of this policy was Amazon with Amazon Web Services (AWS). Deployments made under this policy allow vendors to run the Oracle Database as part of their Infrastructure-as-a-Service (IaaS) offerings, as long as the infrastructure meets the installation prerequisites for the Oracle product being offered. A vendor can choose to provide additional deployment support for the Oracle Database or any other Oracle product in those environments, which in general are neither tested nor certified by Oracle. Most third-party cloud vendors therefore choose to collaborate with Oracle on such support in order to improve the quality of the service offered.

 

As per Oracle's guidelines:

“Oracle RAC is supported on all cloud environments supported by the Oracle Database, as long as the environment is able to provide the hardware, storage, and networking requirements as specified in the Oracle RAC and Grid Infrastructure documentation. With the exception of the Oracle Cloud, Oracle has not tested nor certified Oracle RAC in these environments.”

 

Oracle RAC is supported under the following assumptions:

  1. Hardware, storage, and networking requirements as specified in the Oracle RAC and Grid Infrastructure documentation are met
  2. The cloud infrastructure provides shared storage
  3. The cloud infrastructure provides multiple networks and the ability to create a private, dedicated network

 

Caution: It is possible to use local or server-based storage and make it appear as shared storage to the Oracle Database, or to create multiple virtual networks while de facto only one physical network is provisioned. Such technologies are generally discouraged because they can have an adverse effect on performance and availability, although they might be supported under Oracle's policy.

 

Migrate Databases to Exadata using RMAN Duplicate

I am sure many of you have already migrated databases between different systems and know that migrating a database to Exadata is not any different. There are many ways to migrate a database to Exadata, but for this blog I would like to use the RMAN duplicate method to migrate a single-instance database running on Linux to a two-node Exadata rack. I am planning to use RMAN duplicate from the active database, but if your database is too large and you have access to backups, you can use an existing RMAN backup to avoid putting strain on the source system and network resources.

Steps to migrate a database to an Exadata machine:

  1. Create a static listener on the source
  2. Copy the password file to the target system (Exadata)
  3. Add TNS names entries on both systems (source & target)
  4. Test connections from the source & target systems
  5. Create a pfile & make required changes
  6. Create required ASM / local directories
  7. Start up the instance in nomount mode
  8. Connect to the target & auxiliary databases using RMAN
  9. Run RMAN duplicate from the active database
  10. Move the spfile to an ASM disk group
  11. Add redo logs as needed
  12. Convert the single-instance database to a cluster database
  13. Register the database with CRS
  14. Database changes and enhancements
  15. Run an Exachk report

 

  1. Log in to Exadata node 1 only, configure a static listener, and reload it.

 

LISTENER_duplica =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = EXADATA-HOST)(PORT = 1599))
    )
  )

SID_LIST_LISTENER_duplica =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DB_NAME)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.4/dbhome_1)
      (GLOBAL_DBNAME = duplica_DGMGRL)
    )
  )

lsnrctl reload  LISTENER_duplica

lsnrctl status  LISTENER_duplica


 

  2. Copy the password file to the Exadata machine
scp orapwXXXX* oracle@exadatanode1:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs


  3. Create the following TNS names entries on the source and target systems

 

dbname_source =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = SOURCE-HOST)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = source_db_service)
)
)


dbname_dup_target =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = EXADATA-HOST)(PORT = 1599))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = duplica_DGMGRL)(UR=A)
)
)



  4. Test connections from both the source and target systems
sqlplus sys/XXXX@dbname_source as sysdba

sqlplus sys/XXXX@dbname_dup_target as sysdba


 

  5. Create a pfile from the source database and make the following parameter changes according to your target Exadata environment.
*.control_files='+DATA/TARGET_DB/CONTROLFILE/current.397.920902581'
*.db_create_file_dest='+DATA/'
*.db_create_online_log_dest_1='+DATA/'
*.db_file_name_convert = '+DATA/DATAFILE/SOURCE_DB/','+DATA/DATAFILE/TARGET_DB/'
*.log_file_name_convert = '+DATA/ONLINELOG/SOURCE_DB/','+DATA/ONLINELOG/TARGET_DB/'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=1932735283200
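If you prefer to script this step, the path rewrites can be sketched with sed. This is a rough illustration only; the file names and paths below are made-up examples, not values from a real system:

```shell
# Hypothetical sketch: clone the source pfile and rewrite file paths
# for the target ASM layout (paths are placeholders).
cat > /tmp/initsource.ora <<'EOF'
*.control_files='/u02/oradata/SOURCE_DB/control01.ctl'
*.db_name='dbname'
EOF

# Rewrite the source path prefix to the target disk group location.
sed "s|/u02/oradata/SOURCE_DB|+DATA/TARGET_DB|g" /tmp/initsource.ora > /tmp/initdbname.ora

grep control_files /tmp/initdbname.ora
```

Always eyeball the resulting pfile before using it; a blanket substitution can miss parameters that embed hostnames or instance-specific paths.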


  6. Create the required directories (local & ASM disk groups)
  • AUDIT & TRACE FILES
  • +DATA/DBNAME/DATAFILE
  • +DATA/DBNAME/ONLINELOG
  • +DATA/DBNAME/CONTROLFILE


 

  7. Start up the instance in nomount mode on the target system (Exadata)
startup nomount
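Before issuing startup nomount, make sure the shell session on the Exadata node points at the target Oracle home. The home path is the one used throughout this post; the SID below is an example and should match the SID_NAME you used in the static listener entry:

```shell
# Point the session at the target home and instance (SID is an example).
export ORACLE_SID=dbname
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
```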


  8. Connect to the target and auxiliary instances
rman target sys/XXX@dbname_source AUXILIARY sys/XXX@dbname_dup_target


 

  9. Duplicate the database from the active database
DUPLICATE TARGET DATABASE FROM ACTIVE DATABASE NOFILENAMECHECK;
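If you go the backup-based route mentioned earlier (to spare the source system and network), the RMAN command changes only slightly. This is a sketch; the backup location and database name are placeholders, and the auxiliary instance must be able to read the backup pieces at that path:

```
DUPLICATE DATABASE TO dbname
  BACKUP LOCATION '/backup/dbname'
  NOFILENAMECHECK;
```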


  10. Move the spfile to an ASM disk group: it is best practice to keep the spfile in ASM. Maintaining local spfiles for more than one instance can cause inconsistent configuration between nodes.
create spfile='+DATA' from pfile='/tmp/initdbname.ora';
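After creating the spfile in ASM, repoint each node's local init file at it so all instances share one configuration. The ASM path below is only an example of the kind of name the command above generates; use the actual path reported in your disk group:

```
# $ORACLE_HOME/dbs/initdbname1.ora on node 1 (initdbname2.ora on node 2)
SPFILE='+DATA/dbname/spfiledbname.ora'
```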


  11. Add more redo log groups as needed. Per Exadata best practices, if you have an ASM disk group with a high redundancy level, place all your redo logs in that group.
alter database add logfile thread 2 group 5 '+DATA' size 4294967296;

alter database add logfile thread 2 group 6 '+DATA' size 4294967296;

alter database add logfile thread 2 group 7 '+DATA' size 4294967296;

alter database add logfile thread 2 group 8 '+DATA' size 4294967296;
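The size used above is simply 4 GB written out in bytes; a quick shell check confirms the arithmetic:

```shell
# 4 GB = 4 * 1024 * 1024 * 1024 bytes
echo $((4 * 1024 * 1024 * 1024))   # prints 4294967296
```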


 

  12. Convert the single-instance database into a cluster database: most likely your database will have more than one instance on the Exadata machine. In my case I only have a two-node Exadata machine, but if you have a half or full Exadata rack you will need to run some additional statements like the ones below; the concept is the same.
alter system set instance_name='dbname1' scope=spfile sid='dbname1';
alter system set instance_name='dbname2' scope=spfile sid='dbname2';
alter database enable public thread 2;
alter system set cluster_database_instances=2 scope=spfile sid='*';
alter system set cluster_database=true scope=spfile sid='*';
alter system set remote_listener='EXA-SCAN:1521' scope=spfile sid='*';
alter system set instance_number=1 scope=spfile sid='dbname1';
alter system set instance_number=2 scope=spfile sid='dbname2';
alter system set thread=1 scope=spfile sid='dbname1';
alter system set thread=2 scope=spfile sid='dbname2';
alter system set undo_tablespace='UNDOTBS1' scope=spfile sid='dbname1';
alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='dbname2';
alter system set cluster_interconnects='X.X.X.X:X.X.X.X' scope=spfile sid='dbname1';
alter system set cluster_interconnects='X.X.X.X:X.X.X.X' scope=spfile sid='dbname2';

 

  13. Register the database with CRS: for CRS to restart the database automatically, you need to register the database with CRS.
srvctl add database -d dbname -o '/u01/app/oracle/product/11.2.0.4/dbhome_1' -p '+DATA/DBNAME/PARAMETERFILE/spfile.256.924518361'

srvctl add instance -d dbname -i dbname1 -n EXANODE1

srvctl add instance -d dbname -i dbname2 -n EXANODE2

 

  14. Database changes and enhancements (optional): if you really want to take full advantage of the Exadata machine's capacity and achieve extreme performance, you should look into implementing the following database/Exadata features. I won't go into details here, but these features will require some testing.
  • Index / Storage Indexes
  • Partitioning
  • Compression
  • Parallelism
  • Resource Management
  15. Run an Exachk report and apply the recommended changes as needed. Make sure you get a score of at least 90 in your Exachk report. You can ignore the following recommendations if they go against your organization's standards.
  • Primary database is NOT protected with Data Guard
  • USE_LARGE_PAGES is NOT set to recommended value
  • GLOBAL_NAMES is NOT set to recommended value
  • Flashback on PRIMARY is not configured
  • DB_UNIQUE_NAME on primary has not been modified


Installing Latest OPatch Utility on EXADATA using dcli

Anyone working with Exadata has probably already used dcli (Distributed Command Line Utility) for day-to-day administrative tasks. The dcli utility lets you execute administrative commands on multiple Exadata nodes (both compute and storage) simultaneously. You can use dcli for various administrative and monitoring tasks, from changing passwords to querying storage cells. The dcli utility requires user equivalence to be set up between all the target nodes, plus a group file (a text file containing the list of target compute and storage nodes to which commands are sent). For this blog, I am going to use dcli to install the latest OPatch utility on my two-node Exadata machine.

  1. Check user equivalence between all the target nodes; in my case I only have two compute nodes.
dcli -g dbs_group -l oracle 'hostname -i'


  2. If you don't have a group file containing all the database/compute nodes, you can create one using the vi text editor.
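The group file is just a plain text file with one hostname per line; the node names below are examples:

```shell
# Create a dbs_group file listing the compute nodes (example hostnames).
cat > dbs_group <<'EOF'
exanode1
exanode2
EOF

# Sanity check: one non-empty line per node.
grep -cve '^[[:space:]]*$' dbs_group   # prints 2
```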


  3. Download the latest OPatch utility from My Oracle Support; you will need an Oracle support ID for this download.


 

  4. Copy the zip file to all the compute nodes; in my case there are only two nodes.
scp p6880880_112000_Linux-x86-64.zip oracle@NODE2:/u01/app/oracle/product/software/


  5. You can also use dcli to check the existing OPatch version on all target nodes.
dcli -l oracle -g dbs_group /u01/app/oracle/product/11.2.0.4/dbhome_1/OPatch/opatch version


  6. Unzip the latest OPatch utility on all compute nodes using dcli.
dcli -l oracle -g dbs_group unzip -oq -d /u01/app/oracle/product/11.2.0.4/dbhome_1 /u01/app/oracle/product/software/p6880880_112000_Linux-x86-64.zip


  7. Check the OPatch version again to verify that the latest OPatch utility has been installed on all compute nodes.
dcli -l oracle -g dbs_group /u01/app/oracle/product/11.2.0.4/dbhome_1/OPatch/opatch version


Oracle Now Offers a Bare Metal Cloud Service for Your Most Critical Workloads!

With the announcement of the Bare Metal Cloud Service, Oracle takes a significant step toward providing a complete cloud solution to its customers. With the Bare Metal Cloud Service, customers can set up whatever operating system they want on top of the hardware. Oracle Bare Metal Cloud Services offer many solutions, but the guiding principle is that the servers and resources are bare metal. Oracle handles all of the network virtualization work and provides tenants physical isolation of their workloads from other cloud tenants and from the provider itself.

As of now, Oracle Bare Metal Cloud offers the following services:

Compute Service: Provides two compute offerings for the flexibility to run your most demanding workloads: Bare Metal instances (fully dedicated bare metal servers on a software-defined network) and Virtual Machine instances (managed virtual machine (VM) instances for workloads not requiring dedicated physical servers).

Block Volume Service: Offers a persistent, IO-intensive block storage option. The Block Volume Service provides high-speed storage capacity with seamless data protection and recovery.

Object Storage Service: The Oracle Bare Metal Cloud Object Storage Service is an internet-scale storage platform that offers reliable and cost-efficient data durability.

Networking Service: With this offering, you can extend your network from on-premises to the cloud with a Virtual Cloud Network.

Identity and Access Management Service: The IAM Service helps you set up administrators, users, and groups and specify their permissions.

Database Service: Offers dedicated hardware for your Oracle databases in the cloud.

Intelligent Data Mapping through Oracle Integration Cloud Service

Have you ever wondered how your on-premises applications will interact or integrate with your cloud applications? If so, you should look into the new Oracle Integration Cloud Service. I have seen many customers hesitant to move some of their applications to the public cloud because they are tightly integrated with their other applications. With Oracle Integration Cloud Service you can develop integrations between your applications in the cloud, and between applications in the cloud and on premises.

Integration typically requires you to map data between different applications. For example, a Gender Code or Country Code field can exist in different applications; even though they represent the same data, the values can be presented differently. Gender can be represented as M/F or Male/Female, and a country code can be US or USA. To map these codes, you create cross-reference tables called Lookups that define and store mappings for this type of data for a set of applications. You can then look up the data within the tables in your data mappings. Data mapping is a complex task that requires in-depth application and data-architecture knowledge, but with the Oracle Integration Cloud Service Data Mapper you can create those mappings without writing any code, from simple to complex transformations.
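To make the Lookup idea concrete, here is a minimal sketch (in shell, not in ICS itself) of the kind of cross-reference such a table encodes; the codes and values are the examples from above:

```shell
# Hypothetical cross-reference: translate one application's gender code
# into the representation another application expects.
map_gender() {
  case "$1" in
    M|Male)   echo "Male" ;;
    F|Female) echo "Female" ;;
    *)        echo "Unknown" ;;
  esac
}

map_gender M    # prints Male
map_gender F    # prints Female
```

In ICS the same table is defined visually in the Data Mapper, and the lookup is invoked from the mapping rather than from code.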

 

With Oracle Integration Cloud Service, you can:

  • Connect securely to applications and services in the cloud and on premises
  • Point and click to create integrations between your applications with a powerful browser-based visual designer that even runs on your favorite tablet
  • Select from a growing list of integrations built by Oracle and Oracle partners
  • Monitor and manage your integrations
  • Manage errors

Is there a compelling reason to virtualize Exadata Machine?

Introduction

Virtualizing the Exadata machine has become an important deployment decision for many Exadata customers, and most of them like to explore or at least discuss virtualization to see if there is any benefit for them. I believe you should have a good use case before virtualizing an Exadata machine; it should not be your standard install. With that in mind, here are the use cases where it makes sense to virtualize an Exadata machine.

Cost saving: With the introduction of elastic configurations and Capacity on Demand (COD), you can already save a significant amount of money on licensing and the initial investment. With the Exadata elastic configuration option, you can build an Exadata with almost any combination of compute and storage servers, and the Capacity on Demand (COD) option allows you to buy Oracle licenses in increments: with a minimum of 40% of cores licensed, you can buy a 1/8th rack while licensing only 8 cores per server. So how will OVM save money on licensing? Through additional cost-option licensing. Virtual machines on Exadata are considered Trusted Partitions, and therefore software can be licensed at the virtual machine level instead of the physical processor level. Without Trusted Partitions, database options and other Oracle software must be licensed at the server or cluster level, even though not all databases running on that server or cluster may require a particular option. Even with an Unlimited License Agreement (ULA), organizations don't have unlimited licensing for everything (GoldenGate, Advanced Security, Advanced Compression, etc.). Some licensing options are very expensive and can end up playing a key role in your decision to buy an Exadata machine.

Compliance: Secondly, I see compliance as another reason to virtualize an Exadata machine. There are different types of compliance requirements: HIPAA, PCI DSS, and certifications. We already have clear definitions of the HIPAA and PCI DSS compliance requirements, and none of them require you to virtualize an Exadata machine. Certification is different: software and hardware vendors each have a set of software and hardware requirements to certify their applications. You might be required to isolate your workload at the database level, cluster level, or operating system level. For example, if your databases contain sensitive client data from different business partners, you might be required to isolate data at the operating system level or even the physical level. You can achieve different levels of isolation with an Exadata machine without using OVM: you can have additional Oracle RDBMS homes to isolate the Oracle binaries, different disk groups to provide storage isolation, and even a separate physical cluster if you have a half or full Exadata rack. But you won't be able to have two separate physical Oracle clusters on a quarter or eighth rack. Using VMs, you can install two or more VM Oracle clusters and achieve operating-system-level isolation.

Consolidation: Exadata is optimized for both OLAP and OLTP database workloads. Its balanced database server and storage grid infrastructure also makes it an ideal platform for database consolidation. Consolidated environments running on Exadata can use Oracle Virtual Machine (OVM) to deliver a high degree of isolation between workloads. This is a very desirable feature for hosted, shared, service-provider, and test/dev environments. Using OVM, multiple software clusters can be deployed on the same Exadata Database Machine, which enables consolidation of applications that have specific clusterware/RDBMS/maintenance needs. Not every organization has a separate Exadata machine for development and performance testing. Ideally you should have development and test environments on an Exadata machine so you can take full advantage of Exadata features like Smart Scan and offloading. You would also want to separate prod, pre-prod, and test environments to define separate maintenance windows. For example, if mission-critical applications share the same Exadata machine with development or test systems, the frequent changes made in development and test will impact the availability and stability of the mission-critical applications.

Conclusion

Don't do it unless you have a good use case for it.

Virtualized Exadata Machine (Isolation vs Efficiency)

Virtualizing the Exadata machine has become an important deployment decision for many Exadata customers, and most of them like to explore or at least discuss virtualization to see if there is any benefit for them. Since I have been part of those conversations, I decided to share my thoughts on this topic to help my readers.

Oracle started supporting Exadata virtualization a while ago, and it's free. You might want to virtualize your Exadata machine for many reasons (consolidation, security, compliance), and the end result is to achieve some level of isolation; isolation is probably the main reason to virtualize an Exadata machine. If you are planning to virtualize your Exadata, keep in mind that everything (CPU, memory, disk) will be hard partitioned. Even though you can over-provision CPUs, Oracle strongly recommends against over-provisioning any resources. With dedicated CPUs, memory, and disks you will be able to achieve great isolation, but it will not be an efficient use of the Exadata machine's resources. For instance, virtualization gives you the opportunity to have different patching cycles for each Exadata VM cluster, but not without maintenance overhead. I have worked with an Exadata rack with up to three VMs and it was not fun patching them; imagine if you have multiple virtualized Exadata machines. Remember that Oracle releases around four bundle patches a year, and you need to apply at least two of them to stay in compliance for Oracle Platinum Services. Additionally, since everything is hard partitioned in a virtualized Exadata machine, you will not be able to use idle hardware resources from other VMs. Hence you are wasting very expensive hardware and software resources.

It's also important to understand that there are many levels of isolation (physical level, OS level, storage level, cluster level, RDBMS level), and you can still achieve some level of isolation without virtualizing the Exadata machine. For example, you can have multiple RDBMS homes, different ASM disk groups, and isolated networks using VLANs. I am not against virtualizing the Exadata machine, but you should have a very good use case for it. I would suggest combining the isolation strategies above with the 12c multitenant option to achieve excellent efficiency. But if you are required to isolate everything at the OS level, virtualizing the Exadata machine using OVM is your only option. Even though Exadata VMs are also great for consolidation, the best strategy is to combine VMs with database-native consolidation options like multitenant. Exadata VMs provide good isolation but poor efficiency and higher maintenance. Virtualizing the Exadata machine should not be your standard build; you should always consider a bare metal install over a virtualized Exadata install.

Managing Virtualized Exadata Machine

The first thing you should know about managing Exadata VMs is that you can migrate a bare metal Oracle cluster to an OVM cluster. Conversion from bare metal to OVM can be achieved with zero or minimal downtime, depending on the migration method.

Memory: You can decrease or increase the amount of memory allocated to a user domain with proper planning. For example, if you want to decrease the memory allocated to a user domain, you should consider the instance memory parameters and make sure you still have enough memory left in the user domain to support the SGA/PGA of all the running databases. Memory changes to a user domain are not dynamic and require a restart of the user domain.

CPU: Similarly, you can increase and decrease the number of vCPUs assigned to a user domain, and you can even do so dynamically, as long as you do not exceed the maximum number of vCPUs assigned to that domain. Overprovisioning is possible but not recommended; it requires a full understanding of the workload on all the user domains.

Storage: As with CPU and memory, you can also increase the size of the underlying storage for any user domain. You can add a new logical volume, increase the size of the root file system, and increase the size of the Oracle Grid or RDBMS file systems. You can even add a new mount if you would like to add another Oracle RDBMS home.

Backup: In addition to all other backups, you need to back up both the management and user domains in a virtualized Exadata environment. As a best practice, the backup destination should reside outside of the local machine.