Migrating a Database from AIX to Exadata using Data Pump

There are many methods to migrate Oracle databases, but this blog focuses on migrating large databases from AIX to an Exadata machine. The same method can also be used to migrate and upgrade Oracle databases from Linux to Exadata if needed.

There is another popular method called “transportable databases” that can be used to migrate databases off the AIX operating system. But if for some reason you cannot use the transportable method, Data Pump is your only option. Using Data Pump, you can upgrade, migrate, and optimize the database in one shot.

The following steps are focused on migrating large databases using Data Pump. Ideally, you want to divide the migration into four phases: 1) metadata only, 2) data only, 3) indexes and constraints, 4) validation. Creating indexes and enabling constraints during a multi-terabyte data import can take days, which is why it is very important to separate them from the actual data import. You should only use the network impdp feature if you have a 10G network between the source and target systems.

Step 1 : Create a shell database on the target Exadata Machine using DBCA

You can use the DBCA silent option to create the database too, as shown below, but I strongly recommend using the DBCA GUI to create databases on an Exadata Machine.

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname mart -sid mart -responseFile NO_VALUE -characterSet AL32UTF8 -sysPassword OraPasswd1 -systemPassword OraPasswd1 -createAsContainerDatabase false -pdbAdminPassword OraPasswd1 -databaseType MULTIPURPOSE -automaticMemoryManagement false -totalMemory 5000 -storageType ASM -diskGroupName DATAC1 -redoLogFileSize 50 -emConfiguration NONE -nodeinfo ex01dbadm01,ex01dbadm02,ex01dbadm03 -ignorePreReqs
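Once DBCA completes, a quick sanity check (a minimal sketch, assuming the database name mart from the command above and the standard Grid Infrastructure srvctl utility) is to confirm that an instance is running on each of the three nodes:

srvctl status database -d mart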

Step 2 : Create tablespaces based on the source database

I had to enable encryption and compression for my migration, but I recommend that you at least enable OLTP compression on the Exadata Machine.

select 'create bigfile tablespace '||tablespace_name||' DATAFILE ''+DATAC1'' SIZE '||sum(bytes)/1024/1024||'M ENCRYPTION USING ''AES256'' DEFAULT COMPRESS FOR OLTP STORAGE(ENCRYPT);'
  from dba_data_files
 group by tablespace_name;
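The query produces one DDL statement per tablespace, along these lines (the tablespace name and size below are made up for illustration):

create bigfile tablespace FIN_DATA DATAFILE '+DATAC1' SIZE 512000M ENCRYPTION USING 'AES256' DEFAULT COMPRESS FOR OLTP STORAGE(ENCRYPT);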

Step 3 : Create a public database link to the source database

Note : The DB link will be used for the network Data Pump import and to compare objects between the source and target databases.

CREATE PUBLIC DATABASE LINK src_db CONNECT TO umair IDENTIFIED BY "password!" USING 'MART';
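Before starting any Data Pump jobs, it is worth a quick check that the link actually resolves and connects:

select sysdate from dual@src_db;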

Step 4 : Create migration directory for logs and dump files

Create a directory object for Data Pump use on both the source and the target.

create or replace directory migration as '/etc/migration/mart';
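If the import will run as a regular schema owner rather than a DBA account, grant that user access to the directory object as well (umair is just the example account used elsewhere in this post):

GRANT READ, WRITE ON DIRECTORY migration TO umair;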

Step 5 : Import full database metadata only using the DB link

nohup impdp / full=y content=metadata_only network_link=src_db directory=migration PARALLEL=16 logfile=imp_full_mart.log &
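While the import runs, you can monitor it from another session using the standard dba_datapump_jobs dictionary view:

select owner_name, job_name, operation, job_mode, state, degree from dba_datapump_jobs;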

Step 6 : Import Data schema by schema (FIN_MART)

Note : I am using the network option for my data migration, but if you don't have the network bandwidth, or have a lot of unsupported objects, you should first export and then import the data using dump files.

nohup impdp / schemas=FIN_MART network_link=src_db exclude=index,constraint TABLE_EXISTS_ACTION=REPLACE directory=migration parallel=16 logfile=impdpFIN_MART.log &

 Step 7 : Generate Indexes & Constraints scripts

First create an export dump file (metadata only).

expdp / schemas=FIN_MART content=metadata_only directory=migration dumpfile=FIN_MART.dmp logfile=expdpFIN_MART.log

Then generate separate SQL scripts for indexes and constraints.

impdp / schemas=FIN_MART include=index sqlfile=FIN_MART_IDX.sql dumpfile=FIN_MART.dmp  directory=migration logfile=imp_FIN_MART_IDX.log
impdp / schemas=FIN_MART include=constraint sqlfile=FIN_MART_CONS.sql dumpfile=FIN_MART.dmp  directory=migration logfile=imp_FIN_MART_CONS.log

Step 8 : Create indexes using the SQL script

Update the FIN_MART_IDX.sql script and replace PARALLEL 1 with PARALLEL 16, then execute it.
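A one-line sed can make that substitution (a rough sketch; it assumes the generated DDL spells the clause exactly as PARALLEL 1, so review the script before running it):

sed -i 's/PARALLEL 1\b/PARALLEL 16/g' FIN_MART_IDX.sql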

nohup sqlplus umair/password! @FIN_MART_IDX.sql &

Step 9 : Enable constraints using the SQL script

Update the FIN_MART_CONS.sql script and replace ENABLE with ENABLE NOVALIDATE, then execute it.
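The same sed trick works here (a rough sketch; it assumes each constraint statement in the generated script ends with ENABLE;):

sed -i 's/ENABLE;/ENABLE NOVALIDATE;/g' FIN_MART_CONS.sql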

nohup sqlplus umair/password! @FIN_MART_CONS.sql &

 Step 10 : Validate objects

Lastly, validate the objects using the following SQL and bring over any missing objects.

select owner, object_type, count(*) missing_objects
  from (select owner, object_type, object_name
          from dba_objects@src_db
         where owner = 'FIN_MART'
        minus
        select owner, object_type, object_name
          from dba_objects
         where owner = 'FIN_MART')
 group by owner, object_type
 order by owner, object_type;
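If the query reports missing objects of a particular type, one option is to re-run impdp over the database link with an INCLUDE filter for just that type, for example (VIEW here is only an illustration):

nohup impdp / schemas=FIN_MART network_link=src_db include=VIEW directory=migration logfile=imp_FIN_MART_views.log &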


Repeat steps 6 through 10 for each schema in your database.

Migrate Databases to Oracle Cloud using Data Pump

Migrating Oracle databases to the cloud is not any different from migrating databases to any other host. An Oracle database cloud instance is like any other host running in your data center, with a few exceptions, such as being elastic and scalable in nature. Still, I thought it would be a good idea to write a few blogs about different database migration methods and provide some clarity around this subject.

This blog focuses on the Oracle Data Pump utility. This method is useful when your data set is small (maybe a few GB) and there are no dependencies between schemas. I would say this method is good for schema-level migrations and pretty much lets you migrate any database version to the latest Oracle database version. Please follow the steps below to migrate databases to Oracle Cloud using the Data Pump utility.

  1. Check the data size and dependencies on the source database using the following query, and make note of the tablespace names and their sizes.
select substr(owner,1,10), substr(tablespace_name,1,10), sum(bytes)/1024/1024/1000 "SIZE_IN_GB"
  from dba_segments
 where owner not in ('SYS','SYSTEM','APEX_04020','WMSYS','XDB','MDSYS','AUDSYS','DBSNMP','SCOTT')
 group by owner, tablespace_name
 order by 3 desc, 1;



SUBSTR(OWN SUBSTR(TAB SIZE_IN_GB
---------- ---------- ----------
SALES      USERS        .6370625
META7      USERS         .611875
APEX_04020 SYSAUX        .193875
ORDDATA    SYSAUX       .0160625
DVSYS      SYSAUX       .0044375
CTXSYS     SYSAUX       .0038125
GSMADMIN_I SYSAUX        .001375
OUTLN      SYSTEM       .0005625
ORDSYS     SYSAUX       .0004375
OJVMSYS    USERS         .000375
LBACSYS    SYSTEM       .0003125
  2. Export all the schemas using the Oracle expdp utility. Please note that you also need a database directory; you can use an existing directory or create a new one like I did. For this blog I will only be migrating one schema (SALES). It is best to use the compression option, since you will be uploading this dump file to Oracle Cloud.
CREATE OR REPLACE DIRECTORY migration AS '/oradata';
GRANT READ, WRITE ON DIRECTORY migration TO sales;

expdp sales/sales schemas=sales directory=migration dumpfile=sales.dmp logfile=sales_expdp.log compression=all
  3. Now on the cloud side, create the target database, size it based on your requirements, and make a note of the data file location using show parameter db_create_file_dest.
[oracle@META7DB META7DB]$ df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/xvdb3                       25G   14G   11G  58% /
tmpfs                           7.3G     0  7.3G   0% /dev/shm
/dev/xvdb1                      477M  148M  300M  34% /boot
/dev/xvde1                       59G   11G   46G  19% /u01
/dev/mapper/dataVolGroup-lvol0   25G  4.1G   20G  18% /u02
/dev/mapper/fraVolGroup-lvol0   6.8G  3.1G  3.4G  48% /u03
/dev/mapper/redoVolGroup-lvol0   26G  3.4G   21G  14% /u04

[oracle@META7DB META7DB]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Tue Feb 7 16:37:32 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c EE High Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show parameter db_create_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------

db_create_file_dest                  string      /u02/app/oracle/oradata
  4. Create the required user, tablespaces, and database directory based on your requirements. Please note that if you are using a container database, set your container before creating the user or tablespace.
ALTER SESSION SET CONTAINER = PDB1;

create tablespace sales datafile size 1G;
create user sales identified by sales default tablespace sales temporary tablespace temp;
grant create session to sales;
ALTER USER sales QUOTA UNLIMITED ON sales;

CREATE OR REPLACE DIRECTORY migration AS '/u04/app/oracle/migration';
GRANT READ, WRITE ON DIRECTORY migration TO sales;
  5. Copy the dump file to the Oracle Cloud instance. This can be achieved many different ways; you can use WinSCP or scp to copy the export dump file to the Oracle Cloud instance.
[oracle@oraclenode1 dmp]$ scp -i /backup/lunix_rsa_cloudkey  sales.dmp  oracle@129.144.9.60:/u04/app/oracle/migration
sales.dmp     100%  358MB   1.5MB/s   04:06
  6. Now run the Oracle import utility to load the data into your Oracle Cloud instance.
[oracle@testdb1 migration]$ impdp sales/sales@pdb1 directory=migration dumpfile=sales.dmp logfile=sales_expdp.log

Import: Release 12.1.0.2.0 - Production on Sat Feb 18 20:51:29 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c EE High Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SALES"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
WARNING: possible data loss in character set conversions
Starting "SALES"."SYS_IMPORT_FULL_01": sales/********@pdb1 directory=migration dumpfile=sales.dmp logfile=sales_expdp.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SALES" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SALES"."UTILIZATION" 12.19 MB 7679886 rows
. . imported "SALES"."UTILIZATION_NOPART" 9.578 MB 6401656 rows
. . imported "SALES"."UTILIZATION_FACT_STAR" 3.925 MB 2559962 rows
. . imported "SALES"."DOBJECTS" 1.718 MB 124714 rows
. . imported "SALES"."DDBA_TAB_PRIVS" 519.8 KB 51649 rows
. . imported "SALES"."DINDEX" 286.1 KB 9560 rows
. . imported "SALES"."REV_MARGIN_F" 132.8 KB 3766 rows
. . imported "SALES"."CALENDER_DIM" 29.25 KB 731 rows
. . imported "SALES"."CALENDER_DIM_BK" 39.48 KB 1096 rows
. . imported "SALES"."CALENDER" 30.87 KB 731 rows
. . imported "SALES"."HW_SALE_F" 38.63 KB 890 rows
. . imported "SALES"."TIME_CALENDAR_DIM" 13.58 KB 365 rows
. . imported "SALES"."CLIENT_DIM" 5.062 KB 10 rows
. . imported "SALES"."CUSTOMER" 5.015 KB 2 rows
. . imported "SALES"."DUSERS" 7.640 KB 50 rows
. . imported "SALES"."EMPLOYEES" 6.421 KB 21 rows
. . imported "SALES"."ORACLE_USERS" 7.625 KB 50 rows
. . imported "SALES"."PRODUCT_DIM" 5.156 KB 6 rows
. . imported "SALES"."PROJECT_DIM" 5.242 KB 12 rows
. . imported "SALES"."REV_MAR_SUM_F" 5.390 KB 2 rows
. . imported "SALES"."SKILLS" 5.312 KB 20 rows
. . imported "SALES"."UTILIZATION_FACT" 5.609 KB 63 rows
. . imported "SALES"."ZIP_DESC" 4.796 KB 2 rows
. . imported "SALES"."SALES_FACT" 0 KB 0 rows
. . imported "SALES"."SKILLS_F" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/BITMAP_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SALES"."SYS_IMPORT_FULL_01" completed with  error(s) at Sat Feb 18 20:52:24 2017 elapsed 0 00:00:48

  7. Count all the database objects and compile any invalid objects in your cloud database to complete the migration.
SQL> SELECT owner,
       object_type,
       object_name,
       status
  FROM dba_objects
 WHERE status = 'INVALID'
 ORDER BY owner, object_type, object_name;

no rows selected
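If the query does return invalid objects, Oracle's standard recompilation script (utlrp.sql, shipped under $ORACLE_HOME/rdbms/admin) can recompile most of them in one pass; individual objects can also be compiled by hand (some_view below is just a placeholder):

SQL> @?/rdbms/admin/utlrp.sql
SQL> ALTER VIEW sales.some_view COMPILE;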