Migrate Databases to Exadata using RMAN Duplicate

I am sure many of you have already migrated databases between different systems, and migrating a database to Exadata is not any different. There are many ways to migrate a database to Exadata, but for this blog I would like to use the RMAN duplicate method to migrate a single-instance database running on Linux to a two-node Exadata rack. I am planning to use RMAN duplicate from the active database, but if your database is very large and you have access to existing backups, you can use an existing RMAN backup to avoid putting strain on the source system and network resources.

Steps to migrate a database to an Exadata machine:

  1. Create Static Listener on Source
  2. Copy password file to Target System (Exadata)
  3. Add TNS Names entries on both Systems (Source & Target)
  4. Test Connections from Source & Target System
  5. Create pfile & make required changes
  6. Create required ASM / Local directories
  7. Startup Instance in nomount mode
  8. Connect to Target & AUX databases using RMAN
  9. Run RMAN Duplicate from Active Database
  10. Move spfile to ASM diskgroup
  11. Add Redo logs as needed
  12. Convert Single instance database to Cluster Database
  13. Register Database to CRS
  14. Database changes and enhancements
  15. Run Exachk report


  1. Log in to Exadata node 1 only, configure a static listener, and start (or reload) it. In the sketch below the host name, port, and SID are placeholders; pick a free port, since the grid listener normally uses 1521, and point ORACLE_HOME at your database home.


LISTENER_duplica =
  (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = exadatanode1)(PORT = 1522)))

SID_LIST_LISTENER_duplica =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/)
      (SID_NAME = dbname1)))

lsnrctl start LISTENER_duplica     # use "lsnrctl reload LISTENER_duplica" if it is already running

lsnrctl status LISTENER_duplica



  2. Copy the password file to the Exadata machine
scp orapwXXXX* oracle@exadatanode1:/u01/app/oracle/product/
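On the target, the copied file typically needs to be renamed to match the auxiliary instance SID, assuming it was copied into the database home's dbs directory; the file names below are placeholders.

mv $ORACLE_HOME/dbs/orapwXXXX $ORACLE_HOME/dbs/orapwdbname1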


  3. Create the following TNS entries on both the source and target systems. The host names and ports below are placeholders; the target entry points at the static listener port configured in step 1.


dbname_source =
  (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = sourcehost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = source_db_service)))

dbname_dup_target =
  (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = exadatanode1)(PORT = 1522))
    (CONNECT_DATA = (SID = dbname1)))

  4. Test connections from both source and target systems
sqlplus sys/XXXX@dbname_source as sysdba

sqlplus sys/XXXX@dbname_dup_target as sysdba



  5. Create a pfile from the source database and make the following parameter changes according to your target Exadata environment.
*.db_file_name_convert = '+DATA/DATAFILE/SOURCE_DB/','+DATA/DATAFILE/TARGET_DB/'
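Beyond db_file_name_convert, a few other parameters usually need attention as well. The entries below are a sketch with placeholder values and disk group names, not an exhaustive list:

*.log_file_name_convert      = '+DATA/ONLINELOG/SOURCE_DB/','+DATA/ONLINELOG/TARGET_DB/'
*.control_files              = '+DATA'
*.db_recovery_file_dest      = '+RECO'
*.db_recovery_file_dest_size = 1T
*.audit_file_dest            = '/u01/app/oracle/admin/dbname/adump'
*.cluster_database           = FALSE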


  6. Create the required directories (local and ASM disk groups); see the sketch below.
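A minimal sketch; the local path and ASM directory names are placeholders and should match the parameters above. Run the asmcmd commands as the grid infrastructure owner.

mkdir -p /u01/app/oracle/admin/dbname/adump     # on both compute nodes
asmcmd mkdir +DATA/DATAFILE
asmcmd mkdir +DATA/DATAFILE/TARGET_DB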



  7. Start the instance in nomount mode on the target system (Exadata)
startup nomount
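Spelled out, assuming the edited pfile from step 5 was saved as /tmp/initdbname.ora (the same path used when creating the spfile in step 10) and the auxiliary instance SID is dbname1:

export ORACLE_SID=dbname1
sqlplus / as sysdba
SQL> startup nomount pfile='/tmp/initdbname.ora';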


  8. Connect to target and auxiliary instances
rman target sys/XXX@dbname_source AUXILIARY sys/XXX@dbname_dup_target



  9. Duplicate the database from the active database; a minimal sketch of the command follows.
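A minimal sketch, assuming the target database name is dbname and file name conversion is handled by the convert parameters set in the pfile:

RMAN> duplicate target database to dbname
        from active database
        nofilenamecheck;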


  10. Move the spfile to an ASM disk group: it is a best practice to store the spfile in ASM. Maintaining local spfiles for more than one instance can lead to inconsistent configuration between nodes.
create spfile='+DATA' from pfile='/tmp/initdbname.ora';
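Each node's local init<SID>.ora then simply points at the spfile in ASM. The spfile path below is a placeholder; use the actual file name, for example as reported by asmcmd ls.

echo "SPFILE='+DATA/DBNAME/PARAMETERFILE/spfile.256.924518361'" > $ORACLE_HOME/dbs/initdbname1.ora   # node 1
echo "SPFILE='+DATA/DBNAME/PARAMETERFILE/spfile.256.924518361'" > $ORACLE_HOME/dbs/initdbname2.ora   # node 2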


  11. Add more redo log groups as needed. Per Exadata best practices, if you have an ASM disk group with high redundancy, place all of your redo logs in that disk group.
alter database add logfile thread 2 group 5 '+DATA' size 4294967296;

alter database add logfile thread 2 group 6 '+DATA' size 4294967296;

alter database add logfile thread 2 group 7 '+DATA' size 4294967296;

alter database add logfile thread 2 group 8 '+DATA' size 4294967296;
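A quick way to verify the resulting redo layout is a simple query against v$log:

select thread#, group#, bytes/1024/1024 as mb, status from v$log order by thread#, group#;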



  12. Convert the single-instance database into a cluster database: most likely your database will have more than one instance on the Exadata machine. In my case I only have a two-node Exadata machine; if you have a half rack or full rack you will need to run additional statements along the same lines, but the concept is the same.
alter system set instance_name='dbname1' scope=spfile sid='dbname1';
alter system set instance_name='dbname2' scope=spfile sid='dbname2';
alter database enable public thread 2;
alter system set cluster_database_instances=2 scope=spfile sid='*';
alter system set cluster_database=true scope=spfile sid='*';
alter system set remote_listener='EXA-SCAN:1521' scope=spfile sid='*';
alter system set instance_number=1 scope=spfile sid='dbname1';
alter system set instance_number=2 scope=spfile sid='dbname2';
alter system set thread=1 scope=spfile sid='dbname1';
alter system set thread=2 scope=spfile sid='dbname2';
alter system set undo_tablespace='UNDOTBS1' scope=spfile sid='dbname1';
alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='dbname2';
alter system set cluster_interconnects = 'X.X.X.X:X.X.X.X' scope=spfile sid='dbname1';
alter system set cluster_interconnects = 'X.X.X.X:X.X.X.X' scope=spfile sid='dbname2';
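Two follow-ups before moving on: instance 2 needs its own undo tablespace if it does not already exist (the size below is a placeholder), and the changes above are spfile-only, so shut down the manually started instance before starting the database through Grid Infrastructure in the next step.

create undo tablespace UNDOTBS2 datafile '+DATA' size 4g;
shutdown immediate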


  13. Register the database with CRS: for CRS to restart the database automatically, you need to register the database and its instances with CRS, then start it through srvctl as shown after these commands.
srvctl add database -d dbname -o '/u01/app/oracle/product/' -p '+DATA/DBNAME/PARAMETERFILE/spfile.256.924518361'

srvctl add instance -d dbname -i dbname1 -n EXANODE1

srvctl add instance -d dbname -i dbname2 -n EXANODE2
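Once registered, the database can be started and verified through CRS:

srvctl start database -d dbname
srvctl status database -d dbname
srvctl config database -d dbname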


  14. Database changes and enhancements (optional): if you would like to take full advantage of the Exadata machine's capabilities and achieve extreme performance, you should look into implementing the following database/Exadata features. I won't go into details here, but these features will require some testing.
  • Index / Storage Indexes
  • Partitioning
  • Compression
  • Parallelism
  • Resource Management
  15. Run the Exachk report and apply recommended changes as needed; a sample invocation follows this list. Make sure you score at least 90 in your Exachk report. You can ignore the following recommendations if they go against your organization's standards.
  • Primary database is NOT protected with Data Guard
  • USE_LARGE_PAGES is NOT set to recommended value
  • GLOBAL_NAMES is NOT set to recommended value
  • Flashback on PRIMARY is not configured
  • DB_UNIQUE_NAME on primary has not been modified
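A typical invocation as the database software owner; the install path below is a common location on Exadata compute nodes but may differ in your environment:

cd /opt/oracle.SupportTools/exachk
./exachk            # see ./exachk -h for available options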







Installing Latest OPatch Utility on Exadata Using dcli

Anyone working with Exadata has probably already used dcli (Distributed Command Line Utility) for day-to-day administrative tasks. The dcli utility lets you execute administrative commands on multiple Exadata nodes (both compute and storage) simultaneously. You can use dcli for various administrative and monitoring tasks, from changing passwords to querying storage cells. The dcli utility requires user equivalence to be set up between all the target nodes, along with a group file (a text file containing the list of target compute and storage nodes to which commands are sent). For this blog, I am going to use the dcli utility to install the latest OPatch utility on my two-node Exadata machine.

  1. Check user equivalence between all the target nodes; in my case I only have two compute nodes.
dcli -g dbs_group -l oracle 'hostname -i'


  2. If you don't have a group file containing all the database/compute nodes, you can create one with a text editor, for example as shown below.
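A minimal dbs_group file simply lists one compute node host name per line; the host names below are placeholders:

cat > dbs_group <<EOF
exanode1
exanode2
EOF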


  3. Download the latest OPatch utility (patch 6880880) from My Oracle Support; you will need an Oracle Support ID for this download.



  4. Copy the zip file to all the compute nodes; in my case there are only two nodes.
scp p6880880_112000_Linux-x86-64.zip oracle@NODE2:/u01/app/oracle/product/software/


  5. You can also use the dcli utility to check the existing OPatch version on all target nodes.
dcli -l oracle -g dbs_group /u01/app/oracle/product/ version


  6. Unzip the latest OPatch utility on all compute nodes using dcli.
dcli -l oracle -g dbs_group unzip -oq -d /u01/app/oracle/product/ /u01/app/oracle/product/software/p6880880_112000_Linux-x86-64.zip


  7. Check the OPatch version again to verify that the latest OPatch utility has been installed on all compute nodes.
dcli -l oracle -g dbs_group /u01/app/oracle/product/ version


Oracle Now Offers Bare Metal Cloud Service for Your Most Critical Workloads!

With the announcement of Bare Metal Cloud Service, Oracle takes a significant step toward providing a complete cloud solution to its customers. With Bare Metal Cloud Service, customers are able to set up whatever operating system they want on top of the hardware. Oracle Bare Metal Cloud Services offer many solutions, but the guiding principle is that the servers and resources are bare metal. Oracle handles all of the network virtualization work and provides tenants physical isolation of their workloads from other cloud tenants and from the provider itself.

As of now, Oracle Bare Metal Cloud offers the following services:

Compute Service: Provides two compute offerings for the flexibility to run your most demanding workloads: Bare Metal Instances (fully dedicated bare metal servers on a software-defined network) and Virtual Machine Instances (managed virtual machine (VM) instances for workloads that do not require dedicated physical servers).

Block Volume Service: Offers a persistent, IO-intensive block storage option. The Block Volume Service provides high-speed storage capacity with seamless data protection and recovery.

Object Storage Service: The Oracle Bare Metal Cloud Object Storage Service is an internet-scale storage platform that offers reliable and cost-efficient data durability.

Networking Service: With this offering, you can extend your network from on-premises to the cloud with a Virtual Cloud Network.

Identity and Access Management Service: The IAM Service helps you set up administrators, users, and groups and specify their permissions.

Database Service: Offers dedicated hardware for your Oracle databases in the cloud.




Intelligent Data Mapping through Oracle Integration Cloud Service

Have you ever wondered how your on-premises applications will interact or integrate with your cloud applications? If so, you should look into the new Oracle Integration Cloud Service. I have seen many customers hesitant to move some of their applications to the public cloud because they are tightly integrated with their other applications. With Oracle Integration Cloud Service you can develop integrations between your applications in the cloud, and between applications in the cloud and on premises.

Integration typically requires you to map data between different applications. For example, a gender code field and a country code field may exist in different applications; even though they represent the same data, the values can be represented differently, e.g. gender as M/F or Male/Female, and country code as US or USA. To map these codes, you create cross-reference tables called Lookups that define and store mappings for this type of data for a set of applications. You can then look up the data within the tables in your data mappings. Data mapping is a complex task and requires in-depth application and data architecture knowledge, but with the Oracle Integration Cloud Service Data Mapper you can create those mappings without writing any code. You can easily define data mappings from simple to complex transformations.
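For example, a Lookup for the gender and country codes mentioned above might map values between two hypothetical applications (App A and App B) like this:

Lookup: GENDER_CODE          Lookup: COUNTRY_CODE
App A      App B             App A      App B
M          Male              US         USA
F          Female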




With Oracle Integration Cloud Service, you can:

  • Connect securely to applications and services in the cloud and on premises
  • Point and click to create integrations between your applications with a powerful browser-based visual designer—it even runs on your favorite tablet
  • Select from a growing list of integrations built by Oracle and Oracle partners
  • Monitor and manage your integrations
  • Manage errors