I am sure many of you have already migrated databases between different systems, and migrating a database to Exadata is not any different. There are many ways to migrate a database to Exadata, but for this blog I will use the RMAN duplicate method to migrate a single-instance database running on Linux to a two-node Exadata rack. I plan to use RMAN duplicate from the active database, but if your database is very large and you have access to existing RMAN backups, you can duplicate from those backups instead to avoid putting strain on the source system and network.
Steps to migrate a database to an Exadata machine:
- Create Static Listener on Source
- Copy password file to Target System (Exadata)
- Add TNS Names entries on both Systems (Source & Target)
- Test Connections from Source & Target System
- Create pfile & make required changes
- Create required ASM / Local directories
- Startup Instance in nomount mode
- Connect to Target & AUX databases using RMAN
- Run RMAN Duplicate from Active Database
- Move spfile to ASM diskgroup
- Add Redo logs as needed
- Convert Single instance database to Cluster Database
- Register Database to CRS
- Database changes and enhancements
- Run Exachk report
- Log in to Exadata machine node 1 only, configure the static listener, and reload it.
LISTENER_duplica =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = EXADATA-HOST)(PORT = 1599))
    )
  )

SID_LIST_LISTENER_duplica =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DB_NAME)
      (ORACLE_HOME = /u01/app/oracle/product/DB_VERSION/dbhome_1)
      (GLOBAL_DBNAME = duplica_DGMGRL)
    )
  )
lsnrctl reload LISTENER_duplica
lsnrctl status LISTENER_duplica
- Copy password file to Exadata machine
scp orapwXXXX* oracle@exadatanode1:/u01/app/oracle/product/DB_VERSION/dbhome_1/dbs
- Create the following TNS name entries on the source and target systems
dbname_source =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = SOURCE-HOST)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = source_db_service)
    )
  )

dbname_dup_target =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = EXADATA-HOST)(PORT = 1599))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = duplica_DGMGRL)
      (UR = A)
    )
  )
- Test connections from both source and target systems
sqlplus sys/XXXX@dbname_source as sysdba
sqlplus sys/XXXX@dbname_dup_target as sysdba
- Create a pfile from the source database and make the following parameter changes according to your target Exadata environment.
*.control_files='+DATA/TARGET_DB/CONTROLFILE/current.397.920902581'
*.db_create_file_dest='+DATA/'
*.db_create_online_log_dest_1='+DATA/'
*.db_file_name_convert='+DATA/DATAFILE/SOURCE_DB/','+DATA/DATAFILE/TARGET_DB/'
*.log_file_name_convert='+DATA/ONLINELOG/SOURCE_DB/','+DATA/ONLINELOG/TARGET_DB/'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=1932735283200
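These edits can also be scripted. Below is a rough sketch (file names, the stand-in source pfile, and the parameter list are illustrative, and the appended values are abridged; adjust everything to your environment): it strips the source-specific settings from a copied pfile and appends the Exadata values.

```shell
SRC_PFILE=/tmp/initdbname_source.ora   # pfile created from the source database
NEW_PFILE=/tmp/initdbname.ora          # pfile we will use for startup nomount

# Stand-in source pfile so the sketch can be dry-run; on a real migration
# this file comes from "create pfile from spfile" on the source system.
cat > "$SRC_PFILE" <<'EOF'
*.db_name='dbname'
*.control_files='/u02/oradata/dbname/control01.ctl'
*.db_recovery_file_dest='/u02/fra'
EOF

# Drop the parameters that must change for Exadata, keep everything else.
grep -Ev '^\*\.(control_files|db_recovery_file_dest)=' "$SRC_PFILE" > "$NEW_PFILE"

# Append the Exadata-specific values (abridged; see the full list above).
cat >> "$NEW_PFILE" <<'EOF'
*.control_files='+DATA/TARGET_DB/CONTROLFILE/current.397.920902581'
*.db_create_file_dest='+DATA/'
*.db_file_name_convert='+DATA/DATAFILE/SOURCE_DB/','+DATA/DATAFILE/TARGET_DB/'
*.log_file_name_convert='+DATA/ONLINELOG/SOURCE_DB/','+DATA/ONLINELOG/TARGET_DB/'
EOF
```

This keeps unrelated parameters (db_name, memory settings, and so on) intact while guaranteeing the file-location parameters point at ASM.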
- Create required directories (Local & ASM Diskgroups)
- AUDIT & TRACE FILES
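As a sketch of this step (BASE stands in for ORACLE_BASE, which is typically /u01/app/oracle on Exadata; a /tmp path is used here so the commands can be dry-run anywhere), the local audit and trace destinations are plain directories, while the ASM-side directories are created from the grid environment with asmcmd:

```shell
# BASE stands in for ORACLE_BASE; on Exadata this is usually /u01/app/oracle.
# A /tmp path is used here so the sketch can be dry-run on any machine.
BASE=/tmp/oracle-demo-base

# Audit file destination (audit_file_dest) -- must exist before startup nomount.
mkdir -p "$BASE/admin/dbname/adump"

# Diagnostic destination for the alert log and trace files (diagnostic_dest).
mkdir -p "$BASE/diag"

# The ASM-side directories are created as the grid user with asmcmd, e.g.:
#   asmcmd mkdir +DATA/TARGET_DB
#   asmcmd mkdir +DATA/TARGET_DB/CONTROLFILE
```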
- Start up the instance in nomount mode on the Target System (Exadata)
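For example (assuming the pfile prepared earlier is at /tmp/initdbname.ora):

```
export ORACLE_SID=dbname
sqlplus / as sysdba
SQL> startup nomount pfile='/tmp/initdbname.ora';
```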
- Connect to target and auxiliary instances
rman target sys/XXX@dbname_source AUXILIARY sys/XXX@dbname_dup_target
- Duplicate database from active database
DUPLICATE TARGET DATABASE FROM ACTIVE DATABASE NOFILENAMECHECK;
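If you are duplicating from existing backups instead, as mentioned at the start, the RMAN command differs only slightly. A rough sketch, assuming the backup pieces have been staged at /backup/dbname on the Exadata node (verify the exact BACKUP LOCATION syntax against your RMAN version):

```
DUPLICATE DATABASE TO dbname BACKUP LOCATION '/backup/dbname' NOFILENAMECHECK;
```

With BACKUP LOCATION, RMAN restores from the staged pieces rather than reading the active datafiles over the network, which is what spares the source system.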
- Move spfile to ASM disk group: It is a best practice to keep the spfile in ASM. Maintaining local spfiles for more than one instance can cause inconsistent configuration between nodes.
create spfile='+DATA' from pfile='/tmp/initdbname.ora';
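After the spfile is in ASM, each node's local $ORACLE_HOME/dbs/init&lt;SID&gt;.ora should be reduced to a one-line pointer at it (the spfile path below is the one from my environment; yours will differ):

```
SPFILE='+DATA/DBNAME/PARAMETERFILE/spfile.256.924518361'
```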
- Add more redo log groups as needed. Per Exadata best practices, if you have an ASM disk group with high redundancy, place all your redo logs in that group.
alter database add logfile thread 2 group 5 '+DATA' size 4294967296;
alter database add logfile thread 2 group 6 '+DATA' size 4294967296;
alter database add logfile thread 2 group 7 '+DATA' size 4294967296;
alter database add logfile thread 2 group 8 '+DATA' size 4294967296;
- Convert single-instance database into a cluster database: Most likely your database will have more than one instance on the Exadata machine. In my case I only have a two-node Exadata machine, but if you have a half or full Exadata rack, you will need to run some additional statements like the ones below; the concept is the same.
alter system set instance_name='dbname1' scope=spfile sid='dbname1';
alter system set instance_name='dbname2' scope=spfile sid='dbname2';
alter database enable public thread 2;
alter system set cluster_database_instances=2 scope=spfile sid='*';
alter system set cluster_database=true scope=spfile sid='*';
alter system set remote_listener='EXA-SCAN:1521' scope=spfile sid='*';
alter system set instance_number=1 scope=spfile sid='dbname1';
alter system set instance_number=2 scope=spfile sid='dbname2';
alter system set thread=1 scope=spfile sid='dbname1';
alter system set thread=2 scope=spfile sid='dbname2';
alter system set undo_tablespace='UNDOTBS1' scope=spfile sid='dbname1';
alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='dbname2';
alter system set cluster_interconnects='X.X.X.X:X.X.X.X' scope=spfile sid='dbname1';
alter system set cluster_interconnects='X.X.X.X:X.X.X.X' scope=spfile sid='dbname2';
- Register Database with CRS: For CRS to restart the database automatically, you need to register the database with CRS.
srvctl add database -d dbname -o '/u01/app/oracle/product/DB_VERSION/dbhome_1' -p '+DATA/DBNAME/PARAMETERFILE/spfile.256.924518361'
srvctl add instance -d dbname -i dbname1 -n EXANODE1
srvctl add instance -d dbname -i dbname2 -n EXANODE2
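Once registered, it is worth confirming that CRS sees the configuration before relying on it for restarts:

```
srvctl config database -d dbname
srvctl status database -d dbname
```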
- Database Changes and Enhancements (Optional): If you really want to take full advantage of the Exadata machine's capacity and achieve extreme performance, you should look into implementing the following database/Exadata features. I won't go into details here, but these features will require some testing.
- Index / Storage Indexes
- Resource Management
- Run Exachk report and apply recommended changes as needed. Make sure you score at least 90 in your Exachk report. You can ignore the following recommendations if they go against your organization's standards.
- Primary database is NOT protected with Data Guard
- USE_LARGE_PAGES is NOT set to recommended value
- GLOBAL_NAMES is NOT set to recommended value
- Flashback on PRIMARY is not configured
- DB_UNIQUE_NAME on primary has not been modified