Migrating a Database from AIX to Exadata using Data Pump

There are many methods to migrate Oracle databases, but this blog focuses on migrating a large database from AIX to an Exadata machine. This method can also be used to migrate and upgrade an Oracle database from Linux to Exadata if needed.

There is another popular method, “transportable database”, that can be used to migrate databases off the AIX operating system. But if for some reason you cannot use the transportable method, Data Pump is your only option. With Data Pump you can upgrade, migrate, and optimize the database in one shot.

The following steps are focused on migrating large databases using Data Pump. Ideally, you want to divide the migration into four phases: 1) metadata only, 2) data only, 3) indexes and constraints, 4) validation. Creating indexes and enabling constraints during a multi-terabyte data import can take days, which is why it is very important to separate them from the actual data import. You should use the network impdp feature only if you have a 10G network between the source and target systems.

Step 1 : Create shell Database on target Exadata Machine using DBCA

You can also use the DBCA silent option to create the database, as shown below, but I strongly recommend using the DBCA GUI to create databases on an Exadata machine.

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname mart -sid mart -responseFile NO_VALUE -characterSet AL32UTF8 -sysPassword OraPasswd1 -systemPassword OraPasswd1 -createAsContainerDatabase false -pdbAdminPassword OraPasswd1 -databaseType MULTIPURPOSE -automaticMemoryManagement false -totalMemory 5000 -storageType ASM -diskGroupName DATAC1 -redoLogFileSize 50 -emConfiguration NONE -nodeinfo ex01dbadm01,ex01dbadm02,ex01dbadm03 -ignorePreReqs

Step 2 : Create tablespaces based on source database

I had to enable both encryption and compression for my migration, but at a minimum I recommend enabling OLTP compression on an Exadata machine. Run the following query on the source database to generate the CREATE TABLESPACE statements, then execute the generated DDL on the target:

select 'create bigfile tablespace '||tablespace_name||' DATAFILE ''+DATAC1'' SIZE '||sum(bytes)/1024/1024||'M ENCRYPTION USING ''AES256'' DEFAULT COMPRESS FOR OLTP STORAGE(ENCRYPT);'
  from dba_data_files
 group by tablespace_name;
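For illustration, each generated statement will look roughly like the one below (the tablespace name and size here are made-up placeholders; yours come from the source database):

create bigfile tablespace FIN_MART_DATA DATAFILE '+DATAC1' SIZE 512000M ENCRYPTION USING 'AES256' DEFAULT COMPRESS FOR OLTP STORAGE(ENCRYPT);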

Step 3 : Create public database link to source database  

Note : the database link will be used both for the network Data Pump import and to compare objects between the source and target databases.

CREATE PUBLIC DATABASE LINK src_db CONNECT TO umair IDENTIFIED BY "password!" USING 'MART';
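Before moving on, a quick sanity check from the target confirms that the link resolves and the credentials work:

select sysdate from dual@src_db;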

Step 4 : Create migration directory for logs and dump files

Create a directory object for Data Pump use on both the source and the target:

create or replace directory migration as '/etc/migration/mart';
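The OS path must exist and be writable by the oracle user on both hosts before Data Pump can use it, and a non-privileged schema needs access to the directory object (the grant to umair below is an assumption based on the account used later; skip it if you run everything as a privileged user):

On each host, as the oracle OS user: mkdir -p /etc/migration/mart

GRANT READ, WRITE ON DIRECTORY migration TO umair;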

Step 5 : Import Full database META DATA only using DB Link

nohup impdp / full=y content=metadata_only network_link=src_db directory=migration PARALLEL=16 logfile=imp_full_mart.log &
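The import runs in the background, so it helps to watch its progress from another session; the standard DBA_DATAPUMP_JOBS view is enough for a quick check:

select owner_name, job_name, state, degree from dba_datapump_jobs;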

Step 6 : Import Data schema by schema (FIN_MART)

Note : I am using the network option for my data migration, but if you don't have the network bandwidth or have a lot of unsupported objects, you should first export and then import the data using dump files (a sketch of that approach follows the command below).

nohup impdp / schemas=FIN_MART network_link=src_db exclude=index,constraint TABLE_EXISTS_ACTION=REPLACE directory=migration parallel=16 logfile=impdpFIN_MART.log &
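If you have to take the dump-file route mentioned in the note above, a minimal sketch looks like this (the dump file names are illustrative; move the files from source to target with scp or a shared NFS mount in between):

On the source (AIX) host:

expdp / schemas=FIN_MART directory=migration dumpfile=FIN_MART_%U.dmp parallel=16 logfile=expdpFIN_MART_DATA.log

Then on the target Exadata machine:

nohup impdp / schemas=FIN_MART exclude=index,constraint TABLE_EXISTS_ACTION=REPLACE directory=migration dumpfile=FIN_MART_%U.dmp parallel=16 logfile=impdpFIN_MART_DATA.log &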

 Step 7 : Generate Indexes & Constraints scripts

First, create an export dump file (metadata only):

expdp / schemas=FIN_MART content=metadata_only directory=migration dumpfile=FIN_MART.dmp logfile=expdpFIN_MART.log

Then generate separate SQL scripts for the indexes and constraints:

impdp / schemas=FIN_MART include=index sqlfile=FIN_MART_IDX.sql dumpfile=FIN_MART.dmp directory=migration logfile=imp_FIN_MART_IDX.log
impdp / schemas=FIN_MART include=constraint sqlfile=FIN_MART_CONS.sql dumpfile=FIN_MART.dmp directory=migration logfile=imp_FIN_MART_CONS.log

Step 8 : Create Indexes using SQL script

Edit the FIN_MART_IDX.sql script, replace PARALLEL 1 with PARALLEL 16, and then execute it.
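One way to make that change in bulk, as a sketch (the exact text in your generated script may differ slightly, so check the file before running a blanket replace):

sed -i 's/PARALLEL 1 /PARALLEL 16 /g' FIN_MART_IDX.sql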

nohup sqlplus umair/password! @FIN_MART_IDX.sql &

Step 9 : Enable Constraints using SQL script

Edit the FIN_MART_CONS.sql script, replace ENABLE with ENABLE NOVALIDATE, and then execute it.
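A bulk edit works here too, assuming the generated statements end with a plain ENABLE clause (verify against your file first):

sed -i 's/ ENABLE;/ ENABLE NOVALIDATE;/g' FIN_MART_CONS.sql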

nohup sqlplus umair/password! @FIN_MART_CONS.sql &

 Step 10 : Validate objects

Lastly, validate the objects using the following SQL and bring over any objects that are missing on the target.

select owner, object_type, count(*) missing_objects
  from (select owner, object_type, object_name
          from dba_objects@src_db
         where owner = 'FIN_MART'
        minus
        select owner, object_type, object_name
          from dba_objects
         where owner = 'FIN_MART')
 group by owner, object_type
 order by owner, object_type;

 

Repeat steps 6 through 10 for each schema in your database.