EXADATA SL6 – Exadata with SPARC Processors

Oracle now offers Exadata with SPARC M7, combining all the benefits of Exadata with ultra-fast SPARC processors running on Linux. This is good news for professionals and organizations who prefer the Linux operating system. I hope Oracle will continue to expand this offering and extend it to other hardware and engineered systems.

This new addition to the Exadata family is called SL6 (SPARC Linux). It is nearly identical to existing Exadata machines, except that it uses Oracle SPARC T7-2 database servers. Even though the database servers are based on SPARC processors, Exadata SL6 runs the exact same Linux operating system as x86-based Exadata systems.

As some of you already know, SPARC is not just another processor; it is the world’s most advanced processor for running Oracle databases, and it uses a revolutionary technology called Software in Silicon. Software in Silicon enables databases to run faster with unprecedented security and reliability using three unique technologies.

  1. SQL in Silicon : incorporates 32 on-chip Data Analytics Accelerator (DAX) engines that are specifically designed to speed up analytic queries
  2. Capacity in Silicon : uses accelerators to offload in-memory query processing and perform real-time data decompression
  3. Security in Silicon : the SPARC M7 continuously performs validity checks on every memory reference made by the processor without incurring performance overhead

Exadata SL6, running 32 cores at 4.1 GHz, lets you run bigger workloads on a smaller configuration, saving you money on software licensing and maintenance costs. Additionally, as per Oracle, “Exadata SL6 also delivers 20-30% more IOs than an equivalent Exadata X6-2 configuration and hence further lowers the total cost of ownership of the solution”.

 

Accelerate OLTP workloads on Exadata with Smart Fusion Block Transfer

If you have an OLTP application running on Exadata that frequently updates or adds rows across many database blocks, you can take advantage of the Smart Fusion Block Transfer capability, which uniquely improves the performance of a RAC configuration by eliminating the impact of redo log write latency. DML statements running from multiple instances, in particular, can lead to hot blocks being transferred between Oracle RAC nodes. This feature transfers hot blocks as soon as the I/O to the redo log is issued at the sending node, without waiting for it to complete. As per Oracle, “It has been observed that Smart Block Transfer increases throughput (about 40% higher) and decreases response times (about 33% less) for communication intensive workloads”.

Without Exadata’s Smart Fusion Block Transfer feature, a hot block can be transferred from a sending node to a receiving node only after the sending node has made the changes in its redo log buffer durable in its redo log. With Smart Fusion Block Transfer, this redo log write latency at the sending node is eliminated. So if you have an OLTP workload where hot blocks need to be updated frequently across multiple RAC nodes, you should look into enabling this feature. It is disabled by default.

To enable Smart Fusion Block Transfer:

  • Set the hidden static parameter “_cache_fusion_pipelined_updates” to TRUE on all Oracle RAC nodes. Because this is a static parameter, you need to restart your database for the change to take effect.
  • Set the “exafusion_enabled” parameter to 1 on all Oracle RAC instances (see the sketch after this list).
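
Below is a minimal sketch of what this looks like in SQL*Plus, assuming spfile-managed RAC instances; the SCOPE and SID clauses are my assumptions, so verify with Oracle Support before changing hidden parameters in production.

SQL> ALTER SYSTEM SET "_cache_fusion_pipelined_updates" = TRUE SCOPE = SPFILE SID = '*';
SQL> ALTER SYSTEM SET exafusion_enabled = 1 SCOPE = SPFILE SID = '*';

Then restart the database so the static parameters take effect, for example with srvctl stop database -d mydb followed by srvctl start database -d mydb (mydb is a placeholder for your database name).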

Caution :

This feature is available only on the Linux operating system of the Exadata machine; it is not supported on SPARC or on non-engineered systems. It also requires Oracle Database 12c Release 1 (12.1.0.2) Bundle Patch 11 (BP11). Enabling this feature on unsupported versions can prevent Oracle instances from starting.

 

Overview of Exadata Reverse Offloading

Reverse offloading is one of the least-known features of the Exadata machine, and once I describe it, you will understand why. Remember that one of the best features of the Exadata machine is offloading, where Exadata offloads work to the storage nodes, and we have all worked very hard to make sure offloading is enabled and happening on our Exadata machines. But things are now different in terms of CPU cores: Exadata database servers come with far more of them. Each Exadata storage server has two 10-core x86 processors that are used to offload database processing, while an X6-2 database server has a substantial amount of compute capacity with two 22-core x86 processors (44 cores in total).

What if you offload too much work to the storage servers and the storage cell CPUs become saturated? Now you have database server CPUs sitting idle, waiting for the storage nodes to complete their work. To avoid this situation Oracle introduced the reverse offloading feature, which enables a storage cell to push some offloaded work back to the database node when the storage cell’s CPU is saturated.

Reverse offloading to database servers provides uniform performance and makes sure processes are not waiting on storage node CPUs while database nodes sit idle. Different queries running at the same time need different rates of reverse offload to perform optimally with regard to elapsed time; even the same query running in different instances may need different rates of reverse offload. With the latest release, Oracle has made a number of heuristic improvements, yielding elapsed-time improvements of up to 15% for multiple database instances and different queries running in parallel. Minimum software required: 12.1.0.2 BP11.
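
Reverse offload itself needs no configuration, but you can watch for the condition that triggers it. As a quick sketch (CL_CPUT is the cell CPU utilization metric; verify the metric name against your cell software version), check storage cell CPU utilization from CellCLI:

cellcli -e "list metriccurrent CL_CPUT"

If the cells run hot while database node CPUs sit idle, reverse offload is the mechanism that rebalances the work.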

Introduction of Quorum Disks in Exadata

Quorum disks enable users to leverage disks on database servers to achieve the highest redundancy in quarter- or eighth-rack configurations. Since the quorum disk is a new concept related to Oracle RAC voting disks, I would like to begin with a brief description of voting disks. An Oracle Real Application Clusters (RAC) cluster consists of one or more nodes, and it is imperative for the cluster to know its members at any point in time. That is where voting disks come in: they keep track of the active members of a cluster so it can perform tasks like load balancing and resource failover. A voting disk contains static and dynamic data; the dynamic data logs the heartbeat activity of the cluster. Loss of a voting disk can cause RAC to evict nodes, since they can no longer communicate with or see the other nodes in the cluster.
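
To see where your voting disks currently live, you can query CSS from any cluster node; this is standard Grid Infrastructure tooling rather than anything Exadata-specific:

$ crsctl query css votedisk

The output lists each voting disk along with its disk group, so you can confirm whether the voting disks sit in a NORMAL or HIGH redundancy disk group.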

Now for Exadata:

In earlier releases, when HIGH redundancy was deployed, the voting disk for the cluster was created on a disk group with NORMAL redundancy. Starting with Exadata software release 12.1.2.3.0, the voting disks can reside in a high redundancy disk group even with fewer than five storage cells, using quorum disks from the database servers. This can be achieved through the OEDA utility when deploying Exadata, or added later using the Quorum Disk Manager utility (a quick inspection sketch follows the requirements below), as long as you meet the following software and hardware requirements.

Minimum software requirements

  • Oracle Exadata Storage Server Software release 12.1.2.3.0
  • Grid Infrastructure release 12.1.0.2.160119 with these patches: 22722476 and 22682752; or Grid Infrastructure release 12.1.0.2.160419 or later
  • Patch 23200778 for all Database homes

Minimum Hardware Requirements

  • The Exadata rack has fewer than five storage servers.
  • The Exadata rack has at least two database nodes.
  • The Exadata rack has at least one high redundancy disk group.
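
Once deployed, you can inspect the quorum disk configuration with the Quorum Disk Manager utility mentioned above. A sketch, assuming the documented quorumdiskmgr CLI shipped under /opt/oracle.SupportTools; verify the exact flags against your Exadata software version:

/opt/oracle.SupportTools/quorumdiskmgr --list --config
/opt/oracle.SupportTools/quorumdiskmgr --list --target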

Migrating Databases to Oracle Cloud Using RMAN Backup

Migrating Oracle databases to the cloud is not really different from migrating databases to any other host. An Oracle Database Cloud instance is like any other host running in your data center, with a few exceptions, such as being elastic and scalable in nature. Still, I thought it would be a good idea to write a few blogs about different database migration methods and provide some clarity around this subject.

This blog focuses on the RMAN backup and restore method for migrating a 12c Release 1 database; the same method can also be used to migrate 11g databases. It is useful when you have a reasonable migration window and are migrating between the same database version and the same operating system. Please use the following steps to migrate an on-premise database to Oracle Cloud.

Backup source Database

You should start by analyzing the database size and arranging local storage to hold a full database backup. I strongly recommend using compressed backups: they not only reduce the storage requirement on the local and cloud machines, they also reduce the time needed to copy the backups to the Oracle Cloud instance.

RMAN> run
{
allocate channel c1 type DISK FORMAT '/backup/rman/testdb1/%d_FULL_C_DISK_%T_%s_%p';
allocate channel c2 type DISK FORMAT '/backup/rman/testdb1/%d_FULL_C_DISK_%T_%s_%p';
allocate channel c3 type DISK FORMAT '/backup/rman/testdb1/%d_FULL_C_DISK_%T_%s_%p';
allocate channel c4 type DISK FORMAT '/backup/rman/testdb1/%d_FULL_C_DISK_%T_%s_%p';
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
backup spfile format '/backup/rman/testdb1/spfile_%d_%s_%T_dbid%I.rman';
backup current controlfile format '/backup/rman/testdb1/Controlfile_%d_%T_dbid%I_s%s_p%p';
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}

Finished backup at 09-FEB-17

Create cloud instance with same dbname

Next, create an Oracle Cloud instance using the same database name; this creates the directory structure and parameter files you need to restore your database. If you create a 12c instance, it creates a container database by default, so make sure you are either migrating a container database or setting the enable_pluggable_database parameter to FALSE in your parameter file.
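
For a non-CDB restore, the relevant parameter file line would look like the sketch below; the parameter name comes from above, and remember the setting must match how the source database was created:

enable_pluggable_database=false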

[oracle@testdb1 ~]$ df -k

Filesystem           1K-blocks     Used Available Use% Mounted on
/dev/xvdb3            26198448 14273624  10571012  58% /
tmpfs                  3698528      388   3698140   1% /dev/shm
/dev/xvdb1              487652   151198    306758  34% /boot
/dev/xvde1            61795324 10430960  48202304  18% /u01
/dev/mapper/dataVolGroup-lvol0
25667900  2968240  21372764  13% /u02
/dev/mapper/fraVolGroup-lvol0
7089656   214848   6491628   4% /u03
/dev/mapper/redoVolGroup-lvol0
26700028   446384  24874316   2% /u04
[oracle@testdb1 ~]$ ps -ef | grep smon
oracle    6296     1  0 16:50 ?        00:00:00 ora_smon_testdb1
oracle   16095 16046  0 19:00 pts/1    00:00:00 grep smon
[oracle@testdb1 ~]$

Copy backup to Oracle Cloud Instance

You can copy the full RMAN backup using the scp or WinSCP utility. Make sure to use a private key file if your SSH is not open to the public.

[oracle@oraclenode1]$ scp -i /backup/lunix_rsa_cloudkey /backup/rman/testdb1/* oracle@129.144.9.60:/u04/rman/testdb1/

Enter passphrase for key '/backup/lunix_rsa_cloudkey':

[oracle@testdb1 testdb1]$ pwd
/u04/rman/testdb1

[oracle@testdb1 testdb1]$ ls -ltr
total 371880
-rw-r--r-- 1 oracle oinstall   1278464 Feb 20 15:54 TESTDB1_FULL_C_DISK_20170220_1_1
-rw-r--r-- 1 oracle oinstall   1097728 Feb 20 15:54 TESTDB1_FULL_C_DISK_20170220_5_1
-rw-r--r-- 1 oracle oinstall     98304 Feb 20 15:54 TESTDB1_FULL_C_DISK_20170220_6_1
-rw-r--r-- 1 oracle oinstall  50708480 Feb 20 15:55 TESTDB1_FULL_C_DISK_20170220_2_1
-rw-r--r-- 1 oracle oinstall 106348544 Feb 20 15:56 TESTDB1_FULL_C_DISK_20170220_3_1
-rw-r--r-- 1 oracle oinstall 211050496 Feb 20 15:56 TESTDB1_FULL_C_DISK_20170220_4_1
-rw-r--r-- 1 oracle oinstall     26112 Feb 20 15:56 TESTDB1_FULL_C_DISK_20170220_7_1
-rw-r--r-- 1 oracle oinstall     98304 Feb 20 15:56 spfile_TESTDB1_8_20170220_dbid688186828.rman
-rw-r--r-- 1 oracle oinstall  10092544 Feb 20 15:56 Controlfile_TESTDB1_20170220_dbid688186828_s9_p1

Backup pfile for recovery

It is important to back up the existing parameter file to a pfile; you will need this file for restore purposes. You could use the parameter file from the source database instead, but then you would need to adjust it for the new cloud environment.

SQL> create pfile='/tmp/inittestdb1.ora' from spfile;

File created.

SQL> exit

Delete testdb1 from cloud Database instance

You can delete the database manually or use DBCA to delete it. Please note that this might drop the audit file directory and the spfile too.
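
If you prefer DBCA, a silent-mode deletion sketch looks like this; the database name and credentials are placeholders for your environment:

dbca -silent -deleteDatabase -sourceDB testdb1 -sysDBAUserName sys -sysDBAPassword MyPassw0rd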

[oracle@testdb1 testdb1]$ ps -ef | grep smon

oracle    6296     1  0 16:50 ?        00:00:00 ora_smon_testdb1
oracle   17876 16046  0 19:05 pts/1    00:00:00 grep smon

[oracle@testdb1 testdb1]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 20 19:05:59 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c EE High Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown immediate;

Database closed.
Database dismounted.
ORACLE instance shut down.

SQL>  startup mount exclusive restrict

ORACLE instance started.
Total System Global Area 2667577344 bytes
Fixed Size                  2927960 bytes
Variable Size             687866536 bytes
Database Buffers         1962934272 bytes
Redo Buffers               13848576 bytes
Database mounted.

SQL> drop database;

Database dropped.

Disconnected from Oracle Database 12c EE High Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> exit

[oracle@testdb1 testdb1]$ ps -ef | grep smon
oracle   19166 16046  0 19:09 pts/1    00:00:00 grep smon

Startup database in nomount mode

As I mentioned earlier, dropping the database deletes the audit file directory too. You need to recreate the audit file directory and restore the init.ora file before you can start up your instance for recovery purposes.

[oracle@testdb1 dbs]$ mkdir -p /u01/app/oracle/admin/testdb1/adump
[oracle@testdb1 dbs]$ clear
[oracle@testdb1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 20 16:33:57 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup nomount

ORACLE instance started.
Total System Global Area 2667577344 bytes
Fixed Size                  2927960 bytes
Variable Size             687866536 bytes
Database Buffers         1962934272 bytes
Redo Buffers               13848576 bytes

SQL>

Restore control file and mount database

Restore control file using RMAN backup and mount database.

[oracle@testdb1 testdb1]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Mon Feb 20 16:50:26 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TESTDB1 (not mounted)

RMAN> RESTORE CONTROLFILE from '/u04/rman/testdb1/Controlfile_TESTDB1_20170220_dbid688186828_s9_p1';


Starting restore at 20-FEB-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=12 device type=DISK
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u02/app/oracle/oradata/testdb1/control01.ctl
output file name=/u03/app/oracle/fast_recovery_area/testdb1/control02.ctl
Finished restore at 20-FEB-17

RMAN> alter database mount;
Statement processed
released channel: ORA_DISK_1

RMAN>

Catalog backup if location is different

If you have copied the backup to a different storage structure than on the source system, you need to either catalog the RMAN backup or create soft links, so the controlfile can locate the backup pieces.

RMAN> CATALOG START WITH '/u04/rman/testdb1/';

Starting implicit crosscheck backup at 20-FEB-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=12 device type=DISK
Crosschecked 8 objects
Finished implicit crosscheck backup at 20-FEB-17
Starting implicit crosscheck copy at 20-FEB-17
using channel ORA_DISK_1
Finished implicit crosscheck copy at 20-FEB-17
searching for all files in the recovery area
cataloging files...
no files cataloged
searching for all files that match the pattern /u04/rman/testdb1/
List of Files Unknown to the Database
=====================================
File Name: /u04/rman/testdb1/spfile_TESTDB1_8_20170220_dbid688186828.rman

File Name: /u04/rman/testdb1/TESTDB1_FULL_C_DISK_20170220_7_1
File Name: /u04/rman/testdb1/TESTDB1_FULL_C_DISK_20170220_6_1
File Name: /u04/rman/testdb1/TESTDB1_FULL_C_DISK_20170220_4_1
File Name: /u04/rman/testdb1/TESTDB1_FULL_C_DISK_20170220_5_1
File Name: /u04/rman/testdb1/Controlfile_TESTDB1_20170220_dbid688186828_s9_p1
File Name: /u04/rman/testdb1/TESTDB1_FULL_C_DISK_20170220_3_1
File Name: /u04/rman/testdb1/TESTDB1_FULL_C_DISK_20170220_1_1
File Name: /u04/rman/testdb1/TESTDB1_FULL_C_DISK_20170220_2_1
Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

Restore database

Once you have cataloged the backup, you can go ahead and restore the database files. You will need to use SET NEWNAME to direct RMAN to restore the datafiles to their new location.

RMAN> RUN {
SET NEWNAME FOR DATABASE TO '/u02/app/oracle/oradata/testdb1/%U';
RESTORE DATABASE;
SWITCH DATAFILE ALL;
}


executing command: SET NEWNAME
Starting restore at 20-FEB-17
using channel ORA_DISK_1
.
.
.
channel ORA_DISK_1: restore complete, elapsed time: 00:01:05
Finished restore at 20-FEB-17

datafile 1 switched to datafile copy
input datafile copy RECID=6 STAMP=936464700 file name=/u02/app/oracle/oradata/testdb1/data_D-TESTDB1_TS-SYSTEM_FNO-1
datafile 3 switched to datafile copy

input datafile copy RECID=7 STAMP=936464700 file name=/u02/app/oracle/oradata/testdb1/data_D-TESTDB1_TS-SYSAUX_FNO-3
datafile 4 switched to datafile copy

input datafile copy RECID=8 STAMP=936464700 file name=/u02/app/oracle/oradata/testdb1/data_D-TESTDB1_TS-UNDOTBS1_FNO-4
datafile 5 switched to datafile copy

input datafile copy RECID=9 STAMP=936464700 file name=/u02/app/oracle/oradata/testdb1/data_D-TESTDB1_TS-SALES_FNO-5
datafile 6 switched to datafile copy

input datafile copy RECID=10 STAMP=936464701 file name=/u02/app/oracle/oradata/testdb1/data_D-TESTDB1_TS-USERS_FNO-6

RMAN>

Rename redo file location

Now go ahead and rename the redo log files to the new storage location so you can recover your database.

SQL> select member from v$logfile;

MEMBER
-----------------------------------------------------------------------
/oradata/TESTDB1/onlinelog/o1_mf_3_dbb7nobl_.log
/oradata/fast_recovery_area/TESTDB1/onlinelog/o1_mf_3_dbb7nofb_.log
/oradata/TESTDB1/onlinelog/o1_mf_2_dbb7njhw_.log
/oradata/fast_recovery_area/TESTDB1/onlinelog/o1_mf_2_dbb7njnc_.log
/oradata/TESTDB1/onlinelog/o1_mf_1_dbb7ndky_.log
/oradata/fast_recovery_area/TESTDB1/onlinelog/o1_mf_1_dbb7ndpo_.log

SQL> ALTER DATABASE RENAME FILE '/oradata/TESTDB1/onlinelog/o1_mf_3_dbb7nobl_.log' to '/u04/app/oracle/redo/o1_mf_3_dbb7nobl_.log';

ALTER DATABASE RENAME FILE '/oradata/fast_recovery_area/TESTDB1/onlinelog/o1_mf_3_dbb7nofb_.log' to '/u04/app/oracle/redo/o1_mf_3_dbb7nofb_.log';

ALTER DATABASE RENAME FILE '/oradata/TESTDB1/onlinelog/o1_mf_2_dbb7njhw_.log' to '/u04/app/oracle/redo/o1_mf_2_dbb7njhw_.log';

ALTER DATABASE RENAME FILE '/oradata/fast_recovery_area/TESTDB1/onlinelog/o1_mf_2_dbb7njnc_.log' to '/u04/app/oracle/redo/o1_mf_2_dbb7njnc_.log';

ALTER DATABASE RENAME FILE '/oradata/TESTDB1/onlinelog/o1_mf_1_dbb7ndky_.log' to '/u04/app/oracle/redo/o1_mf_1_dbb7ndky_.log';

ALTER DATABASE RENAME FILE '/oradata/fast_recovery_area/TESTDB1/onlinelog/o1_mf_1_dbb7ndpo_.log' to '/u04/app/oracle/redo/o1_mf_1_dbb7ndpo_.log';


Database altered.
SQL>
Database altered.
SQL>
Database altered.
SQL>
Database altered.
SQL>
Database altered.
SQL>
Database altered.

SQL> select member from v$logfile;

MEMBER
-----------------------------------------------------------------------
/u04/app/oracle/redo/o1_mf_3_dbb7nobl_.log
/u04/app/oracle/redo/o1_mf_3_dbb7nofb_.log
/u04/app/oracle/redo/o1_mf_2_dbb7njhw_.log
/u04/app/oracle/redo/o1_mf_2_dbb7njnc_.log
/u04/app/oracle/redo/o1_mf_1_dbb7ndky_.log
/u04/app/oracle/redo/o1_mf_1_dbb7ndpo_.log

6 rows selected.

Recover database and open it with resetlogs

Recover the database and, once that completes, open it with RESETLOGS.

RMAN> recover database;

Starting recover at 20-FEB-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=12 device type=DISK

starting media recovery
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=40
channel ORA_DISK_1: reading from backup piece /u04/rman/testdb1/TESTDB1_FULL_C_DISK_20170220_7_1

channel ORA_DISK_1: piece handle=/u04/rman/testdb1/TESTDB1_FULL_C_DISK_20170220_7_1 tag=TAG20170220T155636

channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u03/app/oracle/fast_recovery_area/TESTDB1/archivelog/2017_02_20/o1_mf_1_40_dbp8z3ot_.arc thread=1 sequence=40

channel default: deleting archived log(s)

archived log file name=/u03/app/oracle/fast_recovery_area/TESTDB1/archivelog/2017_02_20/o1_mf_1_40_dbp8z3ot_.arc RECID=3 STAMP=936465251
unable to find archived log

archived log thread=1 sequence=41
RMAN-03002: failure of recover command at 02/20/2017 17:14:13
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 41 and starting SCN of 2795088

The RMAN-06054 error at the end is expected here: recovery has applied all available archived logs and is requesting a sequence that was never generated, so the database can now be opened with RESETLOGS.

RMAN> alter database open resetlogs;

Statement processed

Check database instance

Verify your database instance is up and running.

[oracle@testdb1 testdb1]$ ps -ef | grep smon

oracle    6296     1  0 16:50 ?        00:00:00 ora_smon_testdb1
oracle   17876 16046  0 19:05 pts/1    00:00:00 grep smon

[oracle@testdb1 testdb1]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 20 19:05:59 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c EE High Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

DBCA Template for Exadata Machine

It is a best practice to use a DBCA template for creating databases on Exadata. When Exadata is deployed using the onecommand utility, onecommand itself uses an Exadata DBCA template to create a sample database during deployment. So let me start with a brief explanation of DBCA templates, and then I will jump to Exadata DBCA templates.

DBCA templates are XML files that contain the information required to create a database. Templates are used in DBCA to create new databases and to clone existing databases. The information in templates includes database options, initialization parameters, and storage attributes (for datafiles, tablespaces, control files, and online redo logs). DBCA templates are usually stored in the ORACLE_HOME/assistants/dbca/templates directory. Even though you can create an Oracle database manually, templates offer some advantages, including time savings, easy duplication, easy editing, and easy sharing. There are two types of DBCA templates.

 

Seed templates: This type of template contains both the structure and the physical datafiles of an existing (seed) database. Your database starts as a copy of the seed database, and requires only the following changes:

 

  • Name of the database
  • Destination of the datafiles
  • Number of control files
  • Number of redo log groups
  • Initialization parameters

Non-seed templates: This type of template is used to create a new database from scratch. It contains the characteristics of the database to be created. Non-seed templates are more flexible than their seed counterparts because all datafiles and online redo logs are created to your specification, and names, sizes, and other attributes can be changed as required.

Now for Exadata: as I mentioned earlier, it is a best practice to use a DBCA template to create databases on an Exadata machine. You will find at least one Exadata-specific DBCA template on your machine, created during the Exadata installation process, and you should use that template to create all future databases. If you don’t use the Exadata DBCA template, you will miss out on many Exadata best practices and will score lower on your Exachk reports. Running Exachk can help you implement Exadata best practices afterwards, but to save time I strongly recommend using the Exadata DBCA template to create databases on an Exadata machine.
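
For reference, here is a minimal sketch of creating a database from a template with DBCA in silent mode. The template name dbm.dbc, database name, and passwords are placeholders; substitute the Exadata template your deployment actually left in ORACLE_HOME/assistants/dbca/templates:

dbca -silent -createDatabase -templateName dbm.dbc -gdbName testdb -sid testdb -sysPassword MyPassw0rd -systemPassword MyPassw0rd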

Recommended Redo Log File Size for Exadata Machine

Let me start by stating that Oracle recommends a redo log size on Exadata of at least 4 GB and less than 32 GB; you should not have redo logs larger than 32 GB unless instructed by Oracle Support. Oracle recommends a minimum 4 GB redo log size because the Exadata machine can support extreme I/O rates: with the latest Exadata X6 you can perform up to 5.6M 8K database read I/O operations, or 5.2M 8K flash write I/O operations, per second per full rack.

If your application generates a lot of redo, 4 GB might not be enough, and you should consider increasing it. Undersized redo logs can lead to unnecessary log switches and checkpoint activity, causing performance bottlenecks. You can start with the Oracle-recommended 4 GB size and observe log switch activity before you decide to increase or decrease the redo log size.

Ideally online redo log files should be the same size and configured to switch approximately once an hour during normal activity. They should switch no more frequently than every 20 minutes during peak activity. There should be a minimum of four online log groups to prevent LGWR from waiting for a group to be available following a log switch. A group may be unavailable because a checkpoint has not yet completed or the group has not yet been archived.
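
A quick way to observe switch frequency is to count log switches per hour from v$log_history; this is a standard query, not Exadata-specific:

select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) switches
from v$log_history
where first_time > sysdate - 7
group by to_char(first_time,'YYYY-MM-DD HH24')
order by 1;

If you regularly see more than a couple of switches per hour during normal activity, your redo logs are likely undersized.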


Migrate Databases to Oracle Cloud using Data Pump

Migrating Oracle databases to the cloud is not really different from migrating databases to any other host. An Oracle Database Cloud instance is like any other host running in your data center, with a few exceptions, such as being elastic and scalable in nature. Still, I thought it would be a good idea to write a few blogs about different database migration methods and provide some clarity around this subject.

This blog focuses on the Oracle Data Pump utility. This method is useful when your data set is small (maybe a few GB) and there are no dependencies between schemas. I would say this method is good for schema-level migrations, and it pretty much lets you migrate any database version to the latest Oracle Database version. Please follow the steps below to migrate databases to Oracle Cloud using the Data Pump utility.

  1. Check the data size and dependencies on the source database using the following query, and make note of the tablespace names and their sizes.

select substr(OWNER,1,10), substr(TABLESPACE_NAME,1,10), sum(bytes)/1024/1024/1000 "SIZE_IN_GB"
from dba_segments
where owner not in ('SYS','SYSTEM','APEX_04020','WMSYS','XDB','MDSYS','AUDSYS','DBSNMP','SCOTT')
group by owner, TABLESPACE_NAME
order by 3 desc, 1;



SUBSTR(OWN SUBSTR(TAB SIZE_IN_GB
---------- ---------- ----------
SALES      USERS      .6370625
META7      USERS      .611875
APEX_04020 SYSAUX     .193875
ORDDATA    SYSAUX     .0160625
DVSYS      SYSAUX     .0044375
CTXSYS     SYSAUX     .0038125
GSMADMIN_I SYSAUX     .001375
OUTLN      SYSTEM     .0005625
ORDSYS     SYSAUX     .0004375
OJVMSYS    USERS      .000375
LBACSYS    SYSTEM     .0003125
  2. Export the schemas using the Oracle expdp utility. Please note that you also need a database directory; you can use an existing directory or create a new one as I did. For this blog I will only be migrating one schema (SALES). It is best to use the compression option, since you will be uploading this dump file to Oracle Cloud.
CREATE OR REPLACE DIRECTORY migration AS '/oradata';
GRANT READ, WRITE ON DIRECTORY migration TO sales;

expdp sales/sales schemas=sales directory=migration dumpfile=sales.dmp logfile=sales_expdp.log compression=all
  3. Now, on the cloud side, create the target database, size it based on your requirements, and make note of the datafile location using show parameter db_create_file_dest.
[oracle@META7DB META7DB]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvdb3             25G   14G   11G  58% /
tmpfs                 7.3G     0  7.3G   0% /dev/shm
/dev/xvdb1            477M  148M  300M  34% /boot
/dev/xvde1             59G   11G   46G  19% /u01
/dev/mapper/dataVolGroup-lvol0
25G  4.1G   20G  18% /u02
/dev/mapper/fraVolGroup-lvol0
6.8G  3.1G  3.4G  48% /u03
/dev/mapper/redoVolGroup-lvol0
26G  3.4G   21G  14% /u04

[oracle@META7DB META7DB]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Tue Feb 7 16:37:32 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c EE High Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show parameter db_create_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------

db_create_file_dest                  string      /u02/app/oracle/oradata
  4. Create the required user, tablespaces, and database directory based on your requirements. Please note that if you are using a container database, you must set your container before creating the user or tablespace.
ALTER SESSION SET CONTAINER = PDB1;

create tablespace sales datafile size 1G;
create user sales identified by sales default tablespace sales temporary tablespace temp;
grant create session to sales;
ALTER USER sales QUOTA UNLIMITED ON sales;

CREATE OR REPLACE DIRECTORY migration AS '/u04/app/oracle/migration';
GRANT READ, WRITE ON DIRECTORY migration TO sales;
  5. Copy the dump file to the Oracle Cloud instance. This can be achieved many different ways; you can use WinSCP or scp to copy the export dump file to the Oracle Cloud instance.
[oracle@oraclenode1 dmp]$ scp -i /backup/lunix_rsa_cloudkey  sales.dmp  oracle@129.144.9.60:/u04/app/oracle/migration
sales.dmp     100%  358MB   1.5MB/s   04:06
  6. Now run the Oracle import utility to load the data into your Oracle Cloud instance.
[oracle@testdb1 migration]$ impdp sales/sales@pdb1 directory=migration dumpfile=sales.dmp logfile=sales_expdp.log

Import: Release 12.1.0.2.0 - Production on Sat Feb 18 20:51:29 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c EE High Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SALES"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
WARNING: possible data loss in character set conversions
Starting "SALES"."SYS_IMPORT_FULL_01": sales/********@pdb1 directory=migration dumpfile=sales.dmp logfile=sales_expdp.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SALES" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SALES"."UTILIZATION" 12.19 MB 7679886 rows
. . imported "SALES"."UTILIZATION_NOPART" 9.578 MB 6401656 rows
. . imported "SALES"."UTILIZATION_FACT_STAR" 3.925 MB 2559962 rows
. . imported "SALES"."DOBJECTS" 1.718 MB 124714 rows
. . imported "SALES"."DDBA_TAB_PRIVS" 519.8 KB 51649 rows
. . imported "SALES"."DINDEX" 286.1 KB 9560 rows
. . imported "SALES"."REV_MARGIN_F" 132.8 KB 3766 rows
. . imported "SALES"."CALENDER_DIM" 29.25 KB 731 rows
. . imported "SALES"."CALENDER_DIM_BK" 39.48 KB 1096 rows
. . imported "SALES"."CALENDER" 30.87 KB 731 rows
. . imported "SALES"."HW_SALE_F" 38.63 KB 890 rows
. . imported "SALES"."TIME_CALENDAR_DIM" 13.58 KB 365 rows
. . imported "SALES"."CLIENT_DIM" 5.062 KB 10 rows
. . imported "SALES"."CUSTOMER" 5.015 KB 2 rows
. . imported "SALES"."DUSERS" 7.640 KB 50 rows
. . imported "SALES"."EMPLOYEES" 6.421 KB 21 rows
. . imported "SALES"."ORACLE_USERS" 7.625 KB 50 rows
. . imported "SALES"."PRODUCT_DIM" 5.156 KB 6 rows
. . imported "SALES"."PROJECT_DIM" 5.242 KB 12 rows
. . imported "SALES"."REV_MAR_SUM_F" 5.390 KB 2 rows
. . imported "SALES"."SKILLS" 5.312 KB 20 rows
. . imported "SALES"."UTILIZATION_FACT" 5.609 KB 63 rows
. . imported "SALES"."ZIP_DESC" 4.796 KB 2 rows
. . imported "SALES"."SALES_FACT" 0 KB 0 rows
. . imported "SALES"."SKILLS_F" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/BITMAP_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SALES"."SYS_IMPORT_FULL_01" completed with  error(s) at Sat Feb 18 20:52:24 2017 elapsed 0 00:00:48

  7. Count all the database objects and compile any invalid objects in your cloud database to complete your migration.
SQL> SELECT owner,
 object_type,
 object_name,
 status
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner, object_type, object_name;

no rows selected
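
If this query does return invalid objects, a common approach is to recompile them with the utlrp.sql script shipped in the Oracle home (run as SYSDBA):

SQL> @?/rdbms/admin/utlrp.sql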

 

Top 10 New Features of Oracle 12c Release 2

Hello everyone! As I was going through the Oracle 12c Release 2 documentation, I came across the following important new features. This is not the complete list; many more new features came out with Oracle 12c Release 2.

  1. Multi-Instance Redo Apply : With Oracle 12c Release 2, you can apply redo logs on multiple instances at the same time to improve your recovery time objective.
  2. Oracle Database Sharding : With Oracle 12c Release 2, you can horizontally partition data across many databases and present them as a single logical database to the application.
  3. Creating Data Guard Standbys With Database Configuration Assistant : You can use DBCA to create an Oracle Data Guard standby database from an existing primary database.
  4. Online Conversion of a Nonpartitioned Table to a Partitioned Table : You can now partition large non-partitioned tables online, maintaining indexes during the operation with no impact on DML operations.
  5. Automatically Synchronize Password Files in Oracle Data Guard Configurations : You no longer need to update password files on the standby database when you change passwords on the primary database.
  6. I/O Rate Limits for PDBs : You can limit the rate of physical I/O issued by a pluggable database (PDB) by specifying I/O requests per second or megabytes of I/O per second (see the sketch after this list).
  7. PDB Character Set : Oracle has now removed a barrier to consolidating databases into PDBs by allowing a different character set for each pluggable database (PDB) in the same multitenant container database (CDB).
  8. Memory Resource Management : You can now prevent one PDB from using too much buffer cache, library cache, or program global area (PGA) by managing memory usage between PDBs within a container.
  9. Partitioning: Multi-Column List Partitioning : List partitioning functionality is expanded to enable multiple partition key columns.
  10. TDE Tablespace Live Conversion : You can now encrypt, decrypt, and rekey existing tablespaces with Transparent Data Encryption (TDE) tablespace live conversion.
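
To make feature 6 concrete, here is a minimal sketch of setting PDB I/O limits with the 12.2 MAX_IOPS and MAX_MBPS parameters; the container name and values are arbitrary examples:

ALTER SESSION SET CONTAINER = PDB1;
ALTER SYSTEM SET MAX_IOPS = 1000 SCOPE = BOTH;
ALTER SYSTEM SET MAX_MBPS = 200 SCOPE = BOTH;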

Backup and Restore Oracle Databases using Oracle Database Backup Cloud Service

Oracle Database Backup Cloud Service has been around for some time now, and it is shaping up to be a true enterprise cloud backup service for Oracle databases. It is a secure, scalable, on-demand storage solution for backing up Oracle databases to Oracle Cloud. The purpose of this blog is to show you how to use Oracle Database Backup Cloud Service to back up and restore your Oracle databases. It only covers the backup and restore process; I am assuming that you have already installed the Oracle Cloud Backup Module for the Database backup service. If not, please use the following blog to configure it (http://blog.umairmansoob.com/configure-oracle-database-backup-cloud-service/).

Note : Make sure controlfile autobackup is enabled and save the DBID; you will need both to restore the database. You will also need to provide a password to encrypt the backups, and you will need the same password to restore the database.
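
You can capture the DBID up front with a simple query against a standard view:

SQL> select dbid, name from v$database;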

Let’s start by doing a full backup of a test database; then we can drop the target database completely to test our restore process.

Perform FULL Database Backup with AUTO Control file backup enabled

SET ENCRYPTION ON IDENTIFIED BY passw0rd ONLY;
configure compression algorithm 'MEDIUM';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
run
{
allocate channel ch1 device type sbt parms='SBT_LIBRARY=/u01/app/oracle/OPC/lib/libopc.so,ENV=(OPC_PFILE=/u01/app/oracle/product/12.1.0/db_1/dbs/opctestdb1.ora)';
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
release channel ch1;
}

Let’s delete the target database using DBCA, so we can test our restore process.

Now we can start the database restore process. As I mentioned earlier, you will need the password and DBID to restore the database from the Database Backup service.

Startup database instance in nomount mode 

RMAN> startup nomount force
startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0/db_1/dbs/inittestdb1.ora'
starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area 1073741824 bytes
Fixed Size 2932632 bytes
Variable Size 281018472 bytes
Database Buffers 784334848 bytes
Redo Buffers 5455872 bytes

Set the DBID and the decryption password to start the restore process.

RMAN> set dbid 687787463;
executing command: SET DBID
RMAN> set decryption identified by passw0rd;
executing command: SET decryption

Restore spfile from backup 

RUN {
allocate channel ch1 device type sbt parms='SBT_LIBRARY=/u01/app/oracle/product/12.1.0/db_1/opc/lib/libopc.so,ENV=(OPC_PFILE=/u01/app/oracle/product/12.1.0/db_1/dbs/opctestdb1.ora)';
RESTORE SPFILE FROM AUTOBACKUP; 
}

Create audit dump directory and startup instance in nomount mode

mkdir -p /u01/app/oracle/admin/testdb1/adump
RMAN> shutdown 
RMAN> startup nomount

Restore Control file from AUTOBACKUP and mount database 

 RUN {
allocate channel ch1 device type sbt parms='SBT_LIBRARY=/u01/app/oracle/OPC/lib/libopc.so,ENV=(OPC_PFILE=/u01/app/oracle/product/12.1.0/db_1/dbs/opctestdb1.ora)';
RESTORE CONTROLFILE FROM AUTOBACKUP; 
Alter database mount;
}

Restore and Recover Database 

RUN {
allocate channel ch1 device type sbt parms='SBT_LIBRARY=/u01/app/oracle/OPC/lib/libopc.so,ENV=(OPC_PFILE=/u01/app/oracle/product/12.1.0/db_1/dbs/opctestdb1.ora)';
restore database;
recover database;
}

Open Database using resetlogs 

RMAN> alter database open resetlogs;
Statement processed