Migrate Databases to Exadata using RMAN Duplicate

I am sure many of you have already migrated databases between different systems and know that migrating a database to Exadata is not any different. There are many ways to migrate a database to Exadata, but for this blog I would like to use the RMAN duplicate method to migrate a single-instance database running on Linux to a two-node Exadata rack. I am planning to use RMAN duplicate from an active database, but if your database is too large and you have access to existing backups, you can use an existing RMAN backup to avoid putting strain on the source system and network resources.

Steps to migrate database to Exadata Machine:

  1. Create Static Listener on Source
  2. Copy password file to Target System (Exadata)
  3. Add TNS Names entries on both Systems (Source & Target)
  4. Test Connections from Source & Target System
  5. Create pfile & make required changes
  6. Create required ASM / Local directories
  7. Startup Instance in nomount mode
  8. Connect to Target & AUX databases using RMAN
  9. Run RMAN Duplicate from Active Database
  10. Move spfile to ASM diskgroup
  11. Add Redo logs as needed
  12. Convert Single instance database to Cluster Database
  13. Register Database to CRS
  14. Database changes and enhancements
  15. Run Exachk report

 

  1. Log in to Exadata machine node 1 only, configure the static listener, and reload it.

 

LISTENER_duplica =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = EXADATA-HOST)(PORT = 1599))
)
)

SID_LIST_LISTENER_duplica=
(SID_LIST =
(SID_DESC =
(SID_NAME = DB_NAME)
(ORACLE_HOME =/u01/app/oracle/product/11.2.0.4/dbhome_1)
(GLOBAL_DBNAME = duplica_DGMGRL)
)
)

lsnrctl reload  LISTENER_duplica

lsnrctl status  LISTENER_duplica


 

  2. Copy the password file to the Exadata machine
scp orapwXXXX* oracle@exadatanode1:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs


  3. Create the following TNS entries on the source and target systems

 

dbname_source =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = SOURCE-HOST)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = source_db_service)
)
)


dbname_dup_target =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = EXADATA-HOST)(PORT = 1599))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = duplica_DGMGRL)(UR=A)
)
)



  4. Test connections from both the source and target systems
sqlplus sys/XXXX@dbname_source as sysdba

sqlplus sys/XXXX@dbname_dup_target as sysdba


 

  5. Create a pfile from the source database and make the following parameter changes according to your target Exadata environment.
*.control_files='+DATA/TARGET_DB/CONTROLFILE/current.397.920902581'
*.db_create_file_dest='+DATA/'
*.db_create_online_log_dest_1='+DATA/'
*.db_file_name_convert = '+DATA/DATAFILE/SOURCE_DB/','+DATA/DATAFILE/TARGET_DB/'
*.log_file_name_convert = '+DATA/ONLINELOG/SOURCE_DB/','+DATA/ONLINELOG/TARGET_DB/'
*.db_recovery_file_dest='USE_DB_RECOVERY_FILE_DEST'
*.db_recovery_file_dest_size=1932735283200
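The path edits above can be scripted instead of done by hand. A minimal sketch, assuming the pfile was copied to /tmp and using illustrative paths only (adjust the disk group and database names for your environment):

```shell
# Throwaway sample standing in for the pfile copied from the source system
cat > /tmp/initdbname.ora <<'EOF'
*.control_files='/u02/oradata/control01.ctl'
*.db_create_file_dest='/u02/oradata'
EOF

# Point the file-creation destination at the +DATA disk group on Exadata
sed -i \
  -e "s|^\*\.db_create_file_dest=.*|*.db_create_file_dest='+DATA/'|" \
  /tmp/initdbname.ora

grep db_create_file_dest /tmp/initdbname.ora
```

The same sed pattern extends to the convert parameters; review the result line by line before starting the instance with it.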


  6. Create required directories (local and ASM disk groups)
  • AUDIT & TRACE FILES
  • +DATA/DBNAME/DATAFILE
  • +DATA/DBNAME/ONLINELOG
  • +DATA/DBNAME/CONTROLFILE
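The ASM directories can be created with asmcmd on the target. A hedged sketch that only prints the mkdir commands (DBNAME is a placeholder); feed the generated file to asmcmd on the Exadata node as the grid user to actually create them:

```shell
# Emit one asmcmd mkdir command per required ASM directory
for dir in DATAFILE ONLINELOG CONTROLFILE; do
  echo "mkdir +DATA/DBNAME/$dir"
done > /tmp/asm_dirs.cmd

cat /tmp/asm_dirs.cmd
# run them with: asmcmd < /tmp/asm_dirs.cmd
```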


 

  7. Start the instance in nomount mode on the target system (Exadata)
startup nomount


  8. Connect to the target and auxiliary instances
rman target sys/XXX@dbname_source AUXILIARY sys/XXX@dbname_dup_target


 

  9. Duplicate the database from the active database
DUPLICATE TARGET DATABASE FROM ACTIVE DATABASE NOFILENAMECHECK;


  10. Move the spfile to an ASM disk group: it's best practice to keep the spfile in ASM. Maintaining spfiles locally for more than one instance can cause inconsistent configuration between nodes.
create spfile='+DATA' from pfile='/tmp/initdbname.ora';
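Once the spfile lives in ASM, a common follow-up (sketched here with hypothetical names; your actual ASM path comes from the create spfile output) is to replace the local init.ora on each node with a one-line pointer to that spfile:

```shell
# Write a pointer pfile; in practice this goes to $ORACLE_HOME/dbs/init<SID>.ora
# The ASM path below is an example only -- use the path reported by your environment.
cat > /tmp/initdbname1.ora <<'EOF'
SPFILE='+DATA/DBNAME/spfiledbname.ora'
EOF

cat /tmp/initdbname1.ora
```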


  11. Add more redo log groups as needed. Per Exadata best practices, if you have an ASM disk group with high redundancy, place all your redo logs in that group.
alter database add logfile thread 2 group 5 '+DATA' size 4294967296;

alter database add logfile thread 2 group 6 '+DATA' size 4294967296;

alter database add logfile thread 2 group 7 '+DATA' size 4294967296;

alter database add logfile thread 2 group 8 '+DATA' size 4294967296;
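On larger racks the per-group statements get repetitive. A small sketch that just prints one statement per group (the thread, group numbers, and 4 GB size mirror the examples above); paste the output into SQL*Plus:

```shell
# Generate "add logfile" statements for thread 2, groups 5-8, 4 GB each
for grp in 5 6 7 8; do
  echo "alter database add logfile thread 2 group $grp '+DATA' size 4g;"
done
```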


 

  12. Convert the single-instance database into a cluster database: most likely your database will have more than one instance on the Exadata machine. In my case I only have a two-node Exadata machine, but if you have a half or full Exadata rack you will need to run some additional statements like the ones below; the concept is the same.
alter system set instance_name='dbname1' scope=spfile sid='dbname1';
alter system set instance_name='dbname2' scope=spfile sid='dbname2';
alter database enable public thread 2;
alter system set cluster_database_instances=2 scope=spfile sid='*';
alter system set cluster_database=true scope=spfile sid='*';
alter system set remote_listener='EXA-SCAN:1521' scope=spfile sid='*';
alter system set instance_number=1 scope=spfile sid='dbname1';
alter system set instance_number=2 scope=spfile sid='dbname2';
alter system set thread=1 scope=spfile sid='dbname1';
alter system set thread=2 scope=spfile sid='dbname2';
alter system set undo_tablespace='UNDOTBS1' scope=spfile sid='dbname1';
alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='dbname2';
alter system set cluster_interconnects='X.X.X.X:X.X.X.X' scope=spfile sid='dbname1';
alter system set cluster_interconnects='X.X.X.X:X.X.X.X' scope=spfile sid='dbname2';
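For a half or full rack with more instances, the per-instance statements follow an obvious pattern. A hedged sketch that prints them for N instances (the DB name and node count are placeholders; run the generated file through SQL*Plus after reviewing it):

```shell
DB=dbname
NODES=2   # set to 4 or 8 for a half or full rack

{
  for i in $(seq 1 "$NODES"); do
    echo "alter system set instance_number=$i scope=spfile sid='${DB}${i}';"
    echo "alter system set thread=$i scope=spfile sid='${DB}${i}';"
    echo "alter system set undo_tablespace='UNDOTBS$i' scope=spfile sid='${DB}${i}';"
  done
  echo "alter system set cluster_database_instances=$NODES scope=spfile sid='*';"
  echo "alter system set cluster_database=true scope=spfile sid='*';"
} | tee /tmp/rac_convert.sql
```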

 

  13. Register the database with CRS: for CRS to restart the database automatically, you need to register the database with CRS.
srvctl add database -d dbname -o '/u01/app/oracle/product/11.2.0.4/dbhome_1' -p '+DATA/DBNAME/PARAMETERFILE/spfile.256.924518361'

srvctl add instance -d dbname -i dbname1 -n EXANODE1

srvctl add instance -d dbname -i dbname2 -n EXANODE2
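On bigger racks the srvctl add instance calls can be generated the same way. A sketch with hypothetical node names; extend the list for your rack size and review the output before running it:

```shell
DB=dbname
i=1
for node in EXANODE1 EXANODE2; do
  echo "srvctl add instance -d $DB -i ${DB}${i} -n $node"
  i=$((i+1))
done | tee /tmp/srvctl_add.sh
```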

 

  14. Database changes and enhancements (optional): if you really want to take full advantage of the Exadata machine's capacity and achieve extreme performance, you should look into implementing the following database/Exadata features. I won't go into details here, but these features will require some testing.
  • Index / Storage Indexes
  • Partitioning
  • Compression
  • Parallelism
  • Resource Management
  15. Run the Exachk report and apply the recommended changes as needed. Make sure you score at least 90 in your Exachk report. You can ignore the following recommendations if they go against your organization's standards.
  • Primary database is NOT protected with Data Guard
  • USE_LARGE_PAGES is NOT set to recommended value
  • GLOBAL_NAMES is NOT set to recommended value
  • Flashback on PRIMARY is not configured
  • DB_UNIQUE_NAME on primary has not been modified


Installing Latest OPatch Utility on EXADATA using dcli

Anyone working with Exadata has probably already used dcli (Distributed Command Line Utility) for day-to-day administrative tasks. The dcli utility lets you execute administrative commands on multiple Exadata nodes (both compute and storage) simultaneously. You can use dcli for various administrative and monitoring tasks, from changing passwords to querying storage cells. The dcli utility requires user equivalency to be set up between all the target nodes, plus a group file (a text file containing the list of target compute and storage nodes to which commands are sent). For this blog, I am going to use the dcli utility to install the latest OPatch utility on my two-node Exadata machine.

  1. Check user equivalence between all the target nodes; in my case I only have two compute nodes.
dcli -g dbs_group -l oracle 'hostname -i'


  2. If you don't have a group file containing all the database/compute nodes, you can create one using the vi text editor.
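The group file is just a plain text list of host names, one per line. A sketch, using hypothetical node names and writing to /tmp rather than your home directory:

```shell
# Create a dcli group file listing the compute nodes (names are examples)
cat > /tmp/dbs_group <<'EOF'
exanode1
exanode2
EOF

# Verify user equivalence with: dcli -g /tmp/dbs_group -l oracle hostname
cat /tmp/dbs_group
```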


  3. You can download the latest OPatch utility from Oracle Metalink (My Oracle Support); you will need an Oracle support ID for this download.


 

  4. Copy the zip file to all the compute nodes; in my case there are only two nodes.
scp p6880880_112000_Linux-x86-64.zip oracle@NODE2:/u01/app/oracle/product/software/


  5. You can also use the dcli utility to check the existing OPatch version on all target nodes.
dcli -l oracle -g dbs_group /u01/app/oracle/product/11.2.0.4/dbhome_1/OPatch/opatch version


  6. Unzip the latest OPatch utility on all compute nodes using dcli
dcli -l oracle -g dbs_group unzip -oq -d /u01/app/oracle/product/11.2.0.4/dbhome_1 /u01/app/oracle/product/software/p6880880_112000_Linux-x86-64.zip


  7. Check the OPatch version again to verify that the latest OPatch utility has been installed on all compute nodes.
dcli -l oracle -g dbs_group /u01/app/oracle/product/11.2.0.4/dbhome_1/OPatch/opatch version


Oracle Now Offers Bare Metal Cloud Service for Your Most Critical Workloads!

With the announcement of Bare Metal Cloud Service, Oracle takes a significant step toward providing a complete cloud solution to its customers. With Bare Metal Cloud Service, customers can set up whatever operating system they want on top of the hardware. Oracle Bare Metal Cloud services offer many solutions, but the guiding principle is that the servers and resources are bare metal. Oracle handles all of the network virtualization work and provides tenants physical isolation of workloads from other cloud tenants and from the provider itself.

As of now, Oracle Bare Metal Cloud offers the following services:

Compute Service: Provides two compute offerings for the flexibility to run your most demanding workloads: Bare Metal Instance (fully dedicated bare metal servers on a software-defined network) and Virtual Machine Instance (managed virtual machine (VM) instances for workloads not requiring dedicated physical servers).

Block Volume Service: Offers a persistent, IO-intensive block storage option. The Block Volume Service provides high-speed storage capacity with seamless data protection and recovery.

Object Storage Service: The Oracle Bare Metal Cloud Object Storage Service is an internet-scale storage platform that offers reliable and cost-efficient data durability.

Networking Service: With this offering, you can extend your network from on-premises to the cloud with a Virtual Cloud Network.

Identity and Access Management Service: The IAM Service helps you set up administrators, users, and groups and specify their permissions.

Database Service: Offers dedicated hardware for your Oracle databases in the cloud environment.

Intelligent Data Mapping through Oracle Integration Cloud Service

Have you ever wondered how your on-premises applications will interact or integrate with your cloud applications? If so, you should look into the new Oracle Integration Cloud Service. I have seen many customers hesitant to move some of their applications to the public cloud because they are tightly integrated with their other applications. With Oracle Integration Cloud Service you can develop integrations between your applications in the cloud, and between applications in the cloud and on premises.

Integration partly requires you to map data between different applications. For example, a Gender Code field and a Country Code field may exist in different applications. Even though they represent the same data, they can be represented differently: gender can appear as M/F or Male/Female, and a country code can be US or USA. To map these codes, you create cross-reference tables called Lookups that define and store mappings for this type of data for a set of applications. You can then look up the data within these tables in your data mappings. Data mapping is a complex task and requires in-depth application and data architecture knowledge, but with the Oracle Integration Cloud Service Data Mapper you can create those mappings without writing any code. You can easily create and define data mappings from simple to complex transformations.
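As a toy illustration of what such a Lookup does (ICS itself does this declaratively, with no code), here is the idea sketched in shell: a tiny cross-reference that translates one application's gender code into another application's representation:

```shell
# Toy cross-reference "table": app A uses M/F, app B uses Male/Female
map_gender() {
  case "$1" in
    M) echo "Male" ;;
    F) echo "Female" ;;
    *) echo "Unknown" ;;
  esac
}

map_gender M   # prints "Male"
```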

 


 

With Oracle Integration Cloud Service, you can:

  • Connect securely to applications and services in the cloud and on premises
  • Point and click to create integrations between your applications with a powerful browser-based visual designer—it even runs on your favorite tablet
  • Select from a growing list of integrations built by Oracle and Oracle partners
  • Monitor and manage your integrations
  • Manage errors

Is there a compelling reason to virtualize Exadata Machine?

Introduction

Virtualizing the Exadata machine has become an important deployment decision for many Exadata customers, and most of them like to explore or at least discuss virtualization to see if there is any benefit for them. I believe you should have a good use case to virtualize an Exadata machine; it should not be your standard install. With that in mind, I would like to list the following use cases where it makes sense to virtualize the Exadata machine.

Cost Saving: With the introduction of Elastic Configuration and Capacity on Demand (COD), you can already save a significant amount of money on licensing and initial investment. With the Exadata Elastic Configuration option, you can build an Exadata with almost any combination of compute and storage servers. The Capacity on Demand (COD) option allows you to buy Oracle licenses in increments: with a minimum of 40% of cores licensed, you can buy a 1/8th rack and license only 8 cores per server. So how will OVM save money on licensing? Through additional cost-option licensing. Virtual machines on Exadata are considered Trusted Partitions, and therefore software can be licensed at the virtual machine level instead of the physical processor level. Without Trusted Partitions, database options and other Oracle software must be licensed at the server or cluster level, even though not all databases running on that server or cluster may require a particular option. Even with an Unlimited License Agreement (ULA), organizations don't have unlimited licensing for everything (GoldenGate, Advanced Security, Advanced Compression, etc.). Some of these licensing options are very expensive and can end up playing a key role in your decision to buy an Exadata machine.

Compliance: Secondly, I see compliance as another reason to virtualize the Exadata machine. There are different types of compliance requirements: HIPAA, PCI DSS, and certifications. We already have clear definitions of the HIPAA and PCI DSS compliance requirements, and none of them require you to virtualize the Exadata machine. But certification is different: software and hardware vendors each have their own set of software and hardware requirements to certify their applications. You might be required to isolate your workload at the database level, cluster level, or operating system level. For example, if your databases contain sensitive client data from different business partners, you might be required to isolate data at the operating system level or even the physical level. You can achieve different levels of isolation with the Exadata machine without using OVM: you can have additional Oracle RDBMS homes to provide Oracle binary isolation, you can have different disk groups to provide storage isolation, and it is also possible to have a separate physical cluster if you have a half or full Exadata rack. But you won't be able to have two separate physical Oracle clusters on a quarter or eighth rack. Using VMs, you can install two or more VM Oracle clusters and achieve operating-system-level isolation.

Consolidation: Exadata is optimized for both OLAP and OLTP database workloads. Its balanced database server and storage grid infrastructure also makes it an ideal platform for database consolidation. Consolidated environments running on Exadata can use Oracle Virtual Machine (OVM) to deliver a high degree of isolation between workloads. This is a very desirable feature for hosted, shared, service-provider, and test/dev environments. Using OVM, multiple software clusters can be deployed on the same Exadata Database Machine, which enables consolidation of applications that have specific clusterware/RDBMS/maintenance needs. Not every organization has a separate Exadata machine for development and performance testing. Ideally, you should have development and test environments on an Exadata machine so you can take full advantage of Exadata features like Smart Scan and offloading. You would also like to separate prod, pre-prod, and test environments to define separate maintenance windows. For example, if mission-critical applications share the same Exadata machine with development or test systems, the frequent changes made in development and test systems will impact the availability and stability of the mission-critical applications.

Conclusion

Don't do it unless you have a good use case for it.

Virtualized Exadata Machine (Isolation vs Efficiency)

Virtualizing the Exadata machine has become an important deployment decision for many Exadata customers, and most of them like to explore or at least discuss virtualization to see if there is any benefit for them. Since I have already been part of those conversations, I decided to share my thoughts on this topic to help my readers.

Oracle started supporting Exadata virtualization a while ago, and it's free. You might want to virtualize your Exadata machine for many reasons (consolidation, security, compliance), and the end result is to achieve some level of isolation. Isolation is probably the main reason to virtualize an Exadata machine. If you are planning to virtualize your Exadata, keep in mind that everything (CPU, memory, disk) will be hard partitioned. Even though you can over-provision CPUs, Oracle strongly recommends against over-provisioning any resources. With dedicated CPUs, memory, and disks you can achieve great isolation, but it will not be an efficient use of Exadata resources. For instance, virtualization gives you the opportunity to have different patching cycles for each Exadata VM cluster, but not without maintenance overhead. I have worked with an Exadata rack with up to 3 VMs, and it was not fun patching them; imagine if you have multiple virtualized Exadata machines. Remember, Oracle releases around 4 bundle patches a year, and you need to apply at least 2 of them to be in compliance for Oracle Platinum Services. Additionally, since everything is hard partitioned in a virtualized Exadata machine, you will not be able to use idle hardware resources from other VMs; hence you are wasting very expensive hardware and software resources.

It's also important to understand that there are many levels of isolation (physical, OS, storage, cluster, RDBMS), and you can still achieve some level of isolation without virtualizing the Exadata machine. For example, you can have multiple RDBMS homes, different ASM disk groups, and isolated networks using VLANs. I am not against virtualizing the Exadata machine, but you should have a very good use case for it. I would suggest combining the above-mentioned isolation strategies with the 12c multitenant option to achieve excellent efficiency. But again, if you are required to isolate everything at the OS level, virtualizing the Exadata machine using OVM is your only option. Even though Exadata VMs are also great for consolidation, the best strategy is to combine VMs with database-native consolidation options like multitenant. Exadata VMs provide good isolation but poor efficiency and higher maintenance. Virtualizing the Exadata machine should not be your standard build; you should always consider a bare metal install over a virtualized Exadata install.

Managing Virtualized Exadata Machine

The first thing you should know about managing Exadata VMs is that you can migrate a bare metal Oracle cluster to an OVM cluster. Conversion from bare metal to OVM can be achieved with zero or minimal downtime, depending on the migration method.

Memory: You can decrease or increase the amount of memory allocated to a user domain with proper planning. For example, if you want to decrease the memory allocated to a user domain, you should consider the instance memory parameters and make sure you still have enough memory left in the user domain to support the SGA/PGA of all the running databases. Memory changes to a user domain are not dynamic and require a restart of the user domain.

CPU: Similarly, you can increase and decrease the number of vCPUs assigned to a user domain. You can even do this dynamically, as long as you do not exceed the maximum number of vCPUs assigned to that domain. Over-provisioning is possible but not recommended; it requires a full understanding of the workload on all the user domains.

Storage: Like CPU and memory, you can also increase the size of the underlying storage for any user domain. You can add a new logical volume, increase the size of the root file system, and increase the size of the Oracle Grid or RDBMS file systems. You can even add a new mount if you would like to add another Oracle RDBMS home.

Backup: In addition to all other backups, you need to back up both the management and user domains in a virtualized Exadata environment. As a best practice, the backup destination should reside outside of the local machine.

Virtualizing Exadata Machine Using OVM

It's been a while since Oracle started supporting OVM on the Exadata machine. This means you can virtualize your Exadata machine using Oracle VM technology at no additional cost. Ideally, OVM on an Exadata machine should be used for database consolidation or isolation only, but you are not restricted: you can use Exadata VMs to install other products like management or ETL tools, though it's not recommended to run major applications like SAP or EBS. Currently Exadata VMs are only certified for Oracle Enterprise Linux; maybe in the future Oracle will start supporting other operating systems. Please see the diagram below to compare a quarter-rack virtualized Exadata machine with bare metal.

(diagram: quarter-rack virtualized Exadata machine vs. bare metal)

A virtualized Exadata machine's architecture can be significantly different from the bare metal one. With a bare metal install, you have one Oracle cluster for the whole Exadata machine, unless you physically partition it. With a virtualized Exadata machine you have one management domain (dom0) and at least one user domain (domU) on each node, depending on the number of VM clusters being deployed. The management domain (dom0) is automatically created with 4 vCPUs and 8 GB RAM, with no Oracle Grid or RDBMS software installed in it. Each user domain (domU) should be sized carefully based on the databases you plan to host in the future. With a virtualized Exadata install, you only virtualize the compute nodes and share the storage between them. This also means that all the VM clusters have their own dedicated ASM disk groups and physical disks.

 

Exadata Install Process Got Much Simpler With Elastic Configuration

Recently I performed an Exadata X6 bare metal install, and I thought it would be a good idea to share my experience with all of you. An Exadata deployment is itself a mini project: you need to work with the database and network teams to run the OEDA configuration tool and generate all the configuration files. For this blog, I will focus on the install part only. Before you can proceed with the Exadata install process, you need to make sure you have run the check-IP script to verify all the DNS entries and IP addresses. It's very important that you don't start the install process without verifying the network configuration.

At this point you should have all the configuration and software files ready on a USB drive or on your laptop. Now you can use the following steps to complete the Exadata install.

Step 1: Use Ethernet port 48 to connect to the Exadata network. Most likely, there is going to be a blue Ethernet cable you can use for this step. Modify your local area network settings as per the following configuration.


Step 2: Connect to Exadata node 8 (172.16.2.44) using ssh and run ibhosts. You should see all the nodes with their addresses, and the elasticNode keyword next to every node.


Step 3: cd into /opt/oracle.SupportTools and run reclaimdisks.sh -free -reclaim on node 8 (172.16.2.44) and node 10 (172.16.2.46).


Step 4: Run the reclaim disk check on both nodes (reclaimdisks.sh -check) and make sure you get the following output.


Note: Please make sure you run steps 3 and 4 on all compute nodes (node8, node10).

Step 5: Plug in the USB drive and mount it using the following procedure. This step is optional if you are planning to copy the software through ssh.

for x in `ls -1 /sys/block`; do
  udevadm info --attribute-walk --path=/sys/block/$x | \
    grep -iq 'DRIVERS=="usb-storage"'
  if [ $? -eq 0 ]; then echo /dev/${x}1; fi
done


Step 6: Unzip the OEDA tool to the /opt/oracle.SupportTools/onecommand directory, then copy all the configuration files to /opt/oracle.SupportTools/onecommand/linux-x64.


Step 7: Copy /opt/oracle.SupportTools/onecommand/ to node10 (the 2nd db node, or all compute nodes).


Step 8: Apply the elastic configuration from node8. This is a very important step, and you only need to run it from one node. It will use your Exadata configuration file to assign new IP addresses and reboot all Exadata nodes, including the storage nodes.

  • cd /opt/oracle.SupportTools/onecommand/linux-x64
  • ./applyElasticConfig.sh -cf customer_name-configFile.xml


Step 9: Connect to Exadata using the new IP address. It's important to understand that your machine has new IP addresses now. Change your local area settings and assign the same IP/mask as your PDU01. You can find this information in your OEDA installation template.


Step 10: Connect through ssh and run ibhosts to see if the new IP addresses have been assigned. It's important that you don't see any elasticNode keyword on the screen; otherwise your elastic configuration is not complete.


Step 11: Move the onecommand directory under /u01 on both db nodes.


Step 12: Copy all the required software and patches to /u01/onecommand/linux-x64/WorkDir. You only have to do this on node 1. Again, you can find the complete list of software in your Exadata installation template.


Step 13: Run check-IP script one more time to validate all the network settings.

/opt/oracle.SupportTools/onecommand/linux-x64/checkip.sh -cf customer_name-configFile.xml

Step 14: If you don't get any errors during the check-IP process, you can proceed with the Exadata install. At this point, there are 19 steps to complete the Exadata software install. You can go through them one by one, or you can run all of them together. I strongly recommend doing them one by one; it will help you troubleshoot any issues during the install process. You can use the following command to list all the steps.

  • ./install.sh -cf customer_name-configFile.xml -l
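If you do run the 19 steps one by one, the invocations differ only in the step number (install.sh takes -s <n> to run a single step). A sketch that prints each command rather than running it; the config file name is your own:

```shell
# Print the 19 single-step install commands for review before running them
CF=customer_name-configFile.xml
for s in $(seq 1 19); do
  echo "./install.sh -cf $CF -s $s"
done
```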


Step 15: You can start going through each step one by one using the command above with the appropriate step number. I will only post screenshots for steps 1 and 19.

(Step 1 Screen Shot)


(Step 19 Screen Shot)


Final Step: Please log in to all nodes (compute and storage) and change the root password.
