Do you need detailed application discovery for Oracle cloud migration?

Many of you must be wondering why we need a detailed discovery process for Oracle cloud migration. Why not just do a Proof of Concept (POC) for the target application to assess the feasibility of Oracle cloud migration and iron out migration issues along the way? Based on my experience, a POC for Oracle cloud migration only works for one or two applications; this approach will not work if you are looking to migrate a group of applications. Additionally, a POC requires more upfront cost and resources than an application discovery process. Finally, it takes longer to find out whether the target application is a good fit for Oracle public cloud through a POC than through application discovery.

I cannot overstate the importance of performing detailed application discovery for Oracle cloud migration. Application discovery is crucial for migrating all of the applications, although the level of detail can vary based on the criticality of each application. Many critical phases of an Oracle cloud migration, such as provisioning, testing, and the migration itself, depend on a detailed application discovery process. For example, you will not be able to provision the Oracle cloud environment if you have not collected hardware and network information for the target application. You will also not be able to identify roadblocks to the migration beforehand, which can lead to wasted capital or even a failed migration project.

Introduction to Oracle Cloud Analytics

As you know, Oracle analytics is getting immense attention as a market leader in delivering analytics value in the cloud marketplace. This blog will provide you with a thorough, high-level review of the vision and strategy behind Oracle analytics, with a focus on autonomous analytics.

As in every other aspect of life, analytics is very important for making informed decisions. With the arrival of autonomous analytics, including machine learning and artificial intelligence, decision making and predictive ability have been greatly enriched. Autonomous analytics works by revealing hidden patterns and making actionable insights more accessible, empowering everyone to use data to drive every process, direct every interaction, and inform every decision so that the intended outcome is achieved.

In order to drive broad consumption, Oracle makes it easy for everyone to interact with information so that you can engage, analyze, and act in a natural way: asking questions in plain language using either text or voice, searching for answers, and receiving insights as narration. You can cut through information overload with relevant, personalized insights delivered proactively in the context that makes the most sense. Context is the key here, as it provides access to the information you need. Speed is another important factor: Oracle Analytics Cloud can autonomously query the data and produce appropriate visualizations. Generated text summarizes the salient points so that you can focus on what the data really means and filter out the dissonance that comes from different interpretations of a visualization. Natural language generation makes the analytic experience much more conversational.

To deliver deeper, more meaningful insights, autonomous analytics can recommend what data to analyze, how it could be augmented, and what it really means. It can suggest key drivers and insights to guide analysis, and it can recommend the best way to interpret data and what actions to take as a result.

Overview of Oracle MySQL Cloud Service

Oracle is continuously striving to offer best-in-class cloud solutions to its customers. Oracle MySQL Cloud Service is another example of how Oracle stands out as the only enterprise cloud offering all three levels (SaaS, PaaS, and IaaS) of solutions in the public cloud. Here are some important takeaways about Oracle MySQL Cloud Service.

Time to Market:

Time to market is one of the key features of any cloud service: you can get going with Oracle MySQL Cloud Service in just a few clicks and manage your database deployments easily using superior tools. Oracle MySQL Cloud Service offers management tools to manage your database in the cloud. For example, MySQL Enterprise Backup enables online hot backups of your databases, including support for full, incremental, and point-in-time recovery backups. Oracle MySQL Cloud Service also offers robust monitoring tools like MySQL Enterprise Monitor and MySQL Query Analyzer to continuously monitor your databases and alert you to potential problems. MySQL Workbench provides data modeling, SQL development, and comprehensive visual tools for database architects, developers, and DBAs. Oracle Enterprise Manager monitors MySQL and your entire Oracle environment from a single management console.

Scalability:

Scalability has always been a concern for open source databases in comparison to enterprise databases like Oracle. Customers have long been looking for an open source database offering that can provide the availability and scalability of an enterprise database like Oracle at the cost of an open source solution. With Oracle MySQL Cloud Service, you can scale your compute resources as needed, up or down, with elastic compute. You can also increase the block storage for your MySQL instance as needed with elastic storage. You can sustain performance and scalability in spite of ever-increasing user, query, and data loads with MySQL Thread Pool, and improve your application uptime with MySQL Replication and MySQL Replication Monitoring.

Support:

Based on my experience, support for an open source database is the number one concern for customers. Keeping that in mind, Oracle made support a key component of Oracle MySQL Cloud Service. With Oracle MySQL Cloud Service, you can get MySQL technical support directly from the experts and engineers developing the MySQL products. They will help you solve your most complex issues 24 hours a day and make the most of your MySQL deployments. You get a single point of contact for both cloud infrastructure and MySQL issues, so you don't have to waste valuable time wondering if you'll find the right person to give you the answers you need.

Security:

Oracle public cloud offers strong security for all of its services, and Oracle MySQL Cloud Service is no exception. Integrated with Oracle public cloud, Oracle MySQL Cloud Service includes MySQL Enterprise Edition and provides multilayered security for the data stored in MySQL when it is accessed from Oracle Public Cloud. MySQL Enterprise Audit lets you quickly and seamlessly add policy-based auditing compliance to existing applications. Additionally, MySQL Enterprise Encryption provides industry-standard functionality to protect your sensitive data throughout its life cycle. Finally, MySQL Enterprise Transparent Data Encryption provides data-at-rest encryption and secure key management.

Data Pump Import Missing NOT NULL Constraints

Hi All, I would like to share my experience with a recent database migration to an Exadata Machine using the Oracle Data Pump Export/Import utility. Many of us have used Export/Import for database upgrades and migrations. If you migrate data using Data Pump with all the constraints and indexes, you will not face this issue; it only shows up with the Data Pump EXCLUDE/INCLUDE=CONSTRAINT option.

Let me start with the reason why we would need to copy data using Data Pump without constraints or indexes. When you are migrating a few GB of data, it does not matter that indexes are created and built during the data import. But if you are planning to migrate a 10+ TB database using Data Pump, you want to separate the data copy from the indexes and constraints. Creating indexes and constraints on terabyte tables can take days, since they will run without any parallelism. Hence many DBAs and migration experts use the Data Pump SQLFILE option for index and constraint creation. This way we are able to create indexes in parallel and create constraints without validation.

Now with 12c, Oracle has changed the EXCLUDE option, and it will not import NOT NULL constraints if you exclude constraints during the data import. Please review the following Oracle Support note and make sure to bring the NOT NULL constraints back manually after the data import. Fortunately, a data import with the exclude constraints option still brings over CHECK constraints for now.

Data Pump Import With EXCLUDE=CONSTRAINT Or INCLUDE=CONSTRAINT Is Excluding And Respectively Not Including NOT NULL Constraints (Doc ID 1930631.1)

After doing a Data Pump import (impdp) in 12.1.0.2 database release with parameter EXCLUDE=CONSTRAINT the NOT NULL constraints from source tables are not in target tables. Sequence of steps leading to the problem:
– Data Pump export of tables with NOT NULL and referential integrity constraints
– Data Pump import with EXCLUDE=CONSTRAINT

For example:

==> In source database a table has the following structure:

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER(38)
 NAME                                      NOT NULL VARCHAR2(20)
 AGE                                       NOT NULL NUMBER(38)
 ADDRESS                                            CHAR(25)
 SALARY                                             NUMBER(18,2)

=> After import, the table structure:

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER(38)    ---> NOT NULL constraint ignored
 NAME                                               VARCHAR2(20)  ---> NOT NULL constraint ignored
 AGE                                                NUMBER(38)    ---> NOT NULL constraint ignored
 ADDRESS                                            CHAR(25)
 SALARY                                             NUMBER(18,2)

Also, if using Data Pump import (impdp) with INCLUDE=CONSTRAINT parameter, NOT NULL constraints are not imported.
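
One way to bring the NOT NULL constraints back is to generate ALTER TABLE statements from the source data dictionary and run the output on the target after the import. This is only a sketch: YOUR_SCHEMA is a placeholder, and the generated statements should be reviewed before you run them.

select 'ALTER TABLE '||owner||'.'||table_name||' MODIFY ('||column_name||' NOT NULL);'
  from dba_tab_columns
 where owner = 'YOUR_SCHEMA'
   and nullable = 'N'
 order by table_name, column_name;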

Perform Table Recovery With Oracle 12c Using RMAN

Important Considerations:

  • An RMAN backup containing the missing table must exist; you cannot rely on a backup taken before the table was created combined with the application of archived redo.
  • The table cannot belong to SYS or SYSTEM and cannot reside in the SYSTEM or SYSAUX tablespaces.

Create Test Tables

SQL> create table test1 as select * from dba_tables;

Table created.

SQL> select count(*) from test1;

  COUNT(*)
----------
      6352

Make sure to perform an incremental or full backup
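
For example, a level 0 incremental backup of the database and archived logs could be taken like this (just a sketch; use whatever backup strategy your environment already follows):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;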

Drop test Tables

Connected to:

Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production

SQL> drop table umair.test1;

Table dropped.

Create Auxiliary Destination Area

> mkdir -p /zfssa/dbm01/backup1/fra

> cd /zfssa/dbm01/backup1/fra

oracle@exadbadm01.gain.tcprod.local:dbm01:/zfssa/dbm01/backup1/fra

Recover table using time or SCN

RMAN> recover table umair.test1 until time "to_date('09/16/2017 21:01:15','mm/dd/yyyy hh24:mi:ss')" auxiliary destination '/zfssa/dbm01/backup1/fra';

Starting recover at 17-SEP-17
using target database control file instead of recovery catalog
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=509 instance=dbm01 device type=DISK
RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Tablespace UNDOTBS2
Tablespace UNDOTBS3
Tablespace UNDOTBS4
Tablespace UNDOTBS5
Tablespace UNDOTBS6

auxiliary instance file tspitr_plti_80445.dmp deleted
Finished recover at 18-SEP-17

Validate Table Recovery

SQL> select count(*) from test1;

  COUNT(*)

———-

      6352

Shutting down Exadata Storage cell for maintenance

There are times when you have to shut down just an Exadata storage cell for maintenance reasons, such as disk or memory replacement. You can use the following steps to safely shut down and start up the storage cell.

Log in to the storage cell as the root user

[root@ex01celadm09 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Mon Sep 17 09:28:58 CDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

Make sure all GRIDDISKs can be deactivated safely (this command should return no rows)

CellCLI> LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'

Deactivate all GRIDDISKs

CellCLI> ALTER GRIDDISK ALL INACTIVE
GridDisk DATAC1_CD_00_ex01celadm09 successfully altered
GridDisk DATAC1_CD_01_ex01celadm09 successfully altered
GridDisk DATAC1_CD_02_ex01celadm09 successfully altered
GridDisk DATAC1_CD_03_ex01celadm09 successfully altered
.
.
.
GridDisk RECOC1_CD_08_ex01celadm09 successfully altered
GridDisk RECOC1_CD_09_ex01celadm09 successfully altered
GridDisk RECOC1_CD_10_ex01celadm09 successfully altered
GridDisk RECOC1_CD_11_ex01celadm09 successfully altered

Make sure all GRIDDISKs are now inactive (this command should return no rows)

CellCLI> LIST GRIDDISK WHERE STATUS != 'inactive'

Shut down the storage cell. You can also use the ILOM interface to power off the cell node

[root@ex01celadm09 ~]# shutdown now

Broadcast message from root@ex01celadm09.corp.medtronic.com
(/dev/pts/0) at 9:35 ...

The system is going down for maintenance NOW!

Log in to the storage cell once it is back online

[root@ex01celadm09 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Mon Sep 17 10:50:25 CDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

Check GRIDDISKs Status

CellCLI> LIST GRIDDISK WHERE STATUS != 'inactive'

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
DATAC1_CD_00_ex01celadm09 OFFLINE
DATAC1_CD_01_ex01celadm09 OFFLINE
DATAC1_CD_02_ex01celadm09 OFFLINE
.
.
.
RECOC1_CD_08_ex01celadm09 OFFLINE
RECOC1_CD_09_ex01celadm09 OFFLINE
RECOC1_CD_10_ex01celadm09 OFFLINE
RECOC1_CD_11_ex01celadm09 OFFLINE

Activate all GRIDDISKs

CellCLI> ALTER GRIDDISK ALL ACTIVE
GridDisk DATAC1_CD_00_ex01celadm09 successfully altered
GridDisk DATAC1_CD_01_ex01celadm09 successfully altered
GridDisk DATAC1_CD_02_ex01celadm09 successfully altered
GridDisk DATAC1_CD_03_ex01celadm09 successfully altered
.
.
.
GridDisk RECOC1_CD_08_ex01celadm09 successfully altered
GridDisk RECOC1_CD_09_ex01celadm09 successfully altered
GridDisk RECOC1_CD_10_ex01celadm09 successfully altered
GridDisk RECOC1_CD_11_ex01celadm09 successfully altered

Continue to check GRIDDISK status while the disks resync

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
DATAC1_CD_00_ex01celadm09 SYNCING
DATAC1_CD_01_ex01celadm09 SYNCING
DATAC1_CD_02_ex01celadm09 SYNCING
.
.
.
RECOC1_CD_09_ex01celadm09 OFFLINE
RECOC1_CD_10_ex01celadm09 OFFLINE
RECOC1_CD_11_ex01celadm09 OFFLINE

Make sure all GRIDDISKs are online

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
DATAC1_CD_00_ex01celadm09 ONLINE
DATAC1_CD_01_ex01celadm09 ONLINE
DATAC1_CD_02_ex01celadm09 ONLINE
DATAC1_CD_03_ex01celadm09 ONLINE
.
.
.
RECOC1_CD_08_ex01celadm09 ONLINE
RECOC1_CD_09_ex01celadm09 ONLINE
RECOC1_CD_10_ex01celadm09 ONLINE
RECOC1_CD_11_ex01celadm09 ONLINE

 

Enable TDE for 12.2 Databases on Exadata Machine

I have seen a lot of customers run into a "Data At Rest Encryption" dilemma when they look into migrating databases to an Exadata Machine from traditional storage like EMC. Arrays like EMC's provide encryption at the storage level, and in most cases that satisfies the compliance requirement for many customers. Exadata storage disks, however, are not encrypted by default, so if your databases need to comply with a "Data At Rest Encryption" requirement, you need to enable the Oracle TDE feature. It's important to understand that this is a licensed feature, so make sure you are covered in terms of licensing. Here are the steps you can use to enable encryption for 12.2 databases on an Exadata Machine.

Step 1 : Location for TDE wallet ( All Nodes )

This is very important: you will probably have multiple Exadata nodes with multiple databases running on them. In order to have multiple wallets, you need to choose the wallet location based on either $ORACLE_SID or $DB_UNIQUE_NAME. I will be using ORACLE_SID for this blog, since it is set in most environments. Once you have identified the wallet location, add the following entry to the SQLNET.ora file.

ENCRYPTION_WALLET_LOCATION =
(SOURCE =(METHOD = FILE)(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)))

Step 2 : Create KEYSTORE (Node 1 Only)

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/MART1/encryption_keystore/' IDENTIFIED BY Password!;

Step 3 : Open KEYSTORE (Node 1 Only)

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Password!;

Step 4 : Set KEYSTORE Encryption Key (Node 1 Only)

ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY Password! WITH BACKUP;

Step 5 : Copy wallet to other nodes

Make sure the directories are created on all Exadata compute nodes, then copy the wallet files over from node 1.

mkdir -p /u01/app/oracle/admin/MART2/encryption_keystore/
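
A simple way to copy the wallet is scp. This is only a sketch: ex01dbadm02 is an example node name, and ewallet.p12 is the default password-protected wallet file created in step 2.

scp /u01/app/oracle/admin/MART1/encryption_keystore/ewallet.p12 ex01dbadm02:/u01/app/oracle/admin/MART2/encryption_keystore/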

Step 6 : Close & Open Wallet from Node 1 Only

-- Close Wallet
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY Password!;

-- Open Wallet
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Password!;

Step 7 : Check Wallet Status for all nodes from Node 1 Only

SELECT status FROM Gv$encryption_wallet;
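
Since GV$ENCRYPTION_WALLET returns one row per instance, you can include INST_ID to see which node each row belongs to, for example:

SELECT inst_id, wrl_parameter, status FROM gv$encryption_wallet;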

Step 8 : Create AUTO LOGIN for Wallet (Node 1 Only)

Optionally, you can also create an auto-login keystore for your wallet so you don't have to open the wallet every time the database is restarted.

ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/oracle/admin/MART1/encryption_keystore/' IDENTIFIED BY Password!;

Step 9 : Copy AUTO LOGIN files to other nodes

Since you just created the auto-login file on node 1 only, you need to copy it to the rest of the Exadata nodes.
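
Again, scp works fine for this; a sketch with ex01dbadm02 as an example node name (cwallet.sso is the auto-login file created in step 8):

scp /u01/app/oracle/admin/MART1/encryption_keystore/cwallet.sso ex01dbadm02:/u01/app/oracle/admin/MART2/encryption_keystore/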

Step 10 : Shutdown and Start Database using SRVCTL 

srvctl stop database -d MART

srvctl start database -d MART

Step 11 : Check Wallet Status 

Once the database is back online, the encryption wallet should be open on all nodes.

 SELECT status FROM Gv$encryption_wallet;

 

Create Databases on Exadata Machine using DBCA silent mode

I strongly recommend creating databases on an Exadata Machine using the DBCA GUI, but you can use the following DBCA command to create databases if for some reason the GUI is not working.

To Create a Database

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname mart -sid mart -responseFile NO_VALUE -characterSet AL32UTF8 -sysPassword OraPasswd1 -systemPassword OraPasswd1 -createAsContainerDatabase false -pdbAdminPassword OraPasswd1 -databaseType MULTIPURPOSE -automaticMemoryManagement false -totalMemory 5000 -storageType ASM -diskGroupName DATAC1 -redoLogFileSize 50 -emConfiguration NONE -nodeinfo msplex01dbadm01,msplex01dbadm02,msplex01dbadm03 -ignorePreReqs

To Delete a Database

dbca -silent -deleteDatabase -sourceDB mart -sysDBAUserName sys -sysDBAPassword OraPasswd1

Migrating Database from AIX to Exadata using Data Pump

There are many methods to migrate Oracle databases, but this blog focuses on migrating a large database from AIX to an Exadata Machine. This method can also be used to migrate and upgrade an Oracle database from Linux to an Exadata Machine if needed.

There is another popular method called "transportable databases" that can be used to migrate databases from the AIX operating system. But if for some reason you cannot use the transportable method, Data Pump is your only option. Using Data Pump, you can upgrade, migrate, and optimize the database in one shot.

The following steps are focused on migrating large databases using Data Pump. Ideally, you want to divide the migration into four steps: 1) metadata only, 2) data only, 3) indexes and constraints, 4) validation. Creating indexes and enabling constraints during a terabyte-scale data import can take days, which is why it's very important to separate them from the actual data import. You can use the network impdp feature only if you have a 10G network between the source and target systems.

Step 1 : Create shell Database on target Exadata Machine using DBCA

You can use the DBCA silent option to create the database too, although I strongly recommend using the DBCA GUI to create databases on an Exadata Machine.

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname mart -sid mart -responseFile NO_VALUE -characterSet AL32UTF8 -sysPassword OraPasswd1 -systemPassword OraPasswd1 -createAsContainerDatabase false -pdbAdminPassword OraPasswd1 -databaseType MULTIPURPOSE -automaticMemoryManagement false -totalMemory 5000 -storageType ASM -diskGroupName DATAC1 -redoLogFileSize 50 -emConfiguration NONE -nodeinfo ex01dbadm01,ex01dbadm02,ex01dbadm03 –ignorePreReqs

Step 2 : Create tablespaces based on source database

I have to enable encryption and compression for my migration, but I recommend that you at least enable OLTP compression for an Exadata Machine.

select 'create bigfile tablespace '||tablespace_name||
       ' DATAFILE ''+DATAC1'' SIZE '||sum(bytes)/1024/1024||
       'M ENCRYPTION USING ''AES256'' DEFAULT COMPRESS FOR OLTP STORAGE(ENCRYPT);'
  from dba_data_files
 group by tablespace_name;

Step 3 : Create public database link to source database  

Note: the DB link will be used for the network Data Pump import and to compare objects between the source and target databases.

CREATE PUBLIC DATABASE LINK src_db CONNECT TO umair IDENTIFIED BY password! USING 'MART';
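
A quick sanity check that the link works (this assumes the MART TNS alias resolves from the Exadata nodes):

select sysdate from dual@src_db;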

Step 4 : Create migration directory for logs and dump files

Create the directory for Data Pump usage on both the source and the target

create or replace directory migration as '/etc/migration/mart';

Step 5 : Import Full database META DATA only using DB Link

nohup impdp / full=y content=metadata_only network_link=src_db directory=migration PARALLEL=16 logfile=imp_full_mart.log &

Step 6 : Import Data schema by schema (FIN_MART)

Note: I am using the network option for my data migration, but if you don't have the network bandwidth or you have a lot of unsupported objects, you should first export and then import the data using dump files.

nohup impdp / schemas=FIN_MART network_link=src_db exclude=index,constraint TABLE_EXISTS_ACTION=REPLACE directory=migration parallel=16 logfile=impdpFIN_MART.log &

 Step 7 : Generate Indexes & Constraints scripts

First create export dump file (META DATA Only)

expdp / schemas=FIN_MART content=metadata_only directory=migration dumpfile=FIN_MART.dmp logfile=expdpFIN_MART.log

Then generate separate SQL scripts for the indexes and constraints.

impdp / schemas=FIN_MART include=index sqlfile=FIN_MART_IDX.sql dumpfile=FIN_MART.dmp  directory=migration logfile=imp_FIN_MART_IDX.log
impdp / schemas=FIN_MART include=constraint sqlfile=FIN_MART_CONS.sql dumpfile=FIN_MART.dmp directory=migration logfile=imp_FIN_MART_CONS.log

Step 8 : Create Indexes using SQLscript

Update the FIN_MART_IDX.sql script, replace PARALLEL 1 with PARALLEL 16, and then execute it.
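
For example, assuming the generated DDL spells the clause as PARALLEL 1, a quick in-place edit could look like this (review the script afterwards):

sed -i 's/PARALLEL 1/PARALLEL 16/g' FIN_MART_IDX.sql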

nohup sqlplus umair/password! @FIN_MART_IDX.sql &

Step 9 : Enable Contraints using SQL script

Update the FIN_MART_CONS.sql script, replace ENABLE with ENABLE NOVALIDATE, and then execute it.
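
Again a sketch, assuming the generated ALTER TABLE statements end with ENABLE; (review the script afterwards):

sed -i 's/ENABLE;/ENABLE NOVALIDATE;/g' FIN_MART_CONS.sql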

nohup sqlplus umair/password! @FIN_MART_CONS.sql &

 Step 10 : Validate objects

Lastly, validate the objects using the following SQL and bring over any missing objects.

select owner, object_type, count(*) MISSING_OBJECTS
  from (
        select owner, object_type, object_name
          from dba_objects@src_db
         where owner = 'FIN_MART'
        minus
        select owner, object_type, object_name
          from dba_objects
         where owner = 'FIN_MART'
       )
 group by owner, object_type
 order by owner, object_type;

 

Repeat steps 6 through 10 for each schema in your database.