Delete Backups from Oracle Database Backup Cloud Service

There will be times when you want to delete old backups from Oracle Database Backup Cloud Service. This is a very simple task, but since we don't have much visibility inside the backup storage, we need to use the Oracle RMAN utility to purge backups. There is an option in the cloud management console to delete files in the Oracle Cloud Storage container holding the backups, but as of now you cannot map those files to RMAN backup pieces. Maybe that is something Oracle will change by providing more visibility into Oracle Database Backup storage through the cloud management console. For now, you can use the following steps to delete backups from Oracle Database Backup Cloud Service.
Step 1: List backups and find the backup pieces you want to delete

RMAN> list backup;

List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
1 6.00M SBT_TAPE 00:00:11 11-FEB-17
 BP Key: 1 Status: AVAILABLE Compressed: YES Tag: TAG20170211T125618
 Handle: 01rsaebi_1_1 Media: a430291.storage.oraclecloud.com/v1/Storage-a430291/ORABACKUP

 List of Archived Logs in backup set 1
 Thrd Seq Low SCN Low Time Next SCN Next Time
 ---- ------- ---------- --------- ---------- ---------
 1 6 1600408 11-FEB-17 1605173 11-FEB-17

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2 Full 328.00M SBT_TAPE 00:01:32 11-FEB-17
 BP Key: 2 Status: AVAILABLE Compressed: YES Tag: TAG20170211T125643
 Handle: 02rsaecb_1_1 Media: a430291.storage.oraclecloud.com/v1/Storage-a430291/ORABACKUP

...
RMAN>

Step 2: Configure the backup channel to Oracle Database Backup Service

RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/u01/app/oracle/OPC/lib/libopc.so,ENV=(OPC_PFILE=/u01/app/oracle/product/12.1.0/db_1/dbs/opctestdb1.ora)';

new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/u01/app/oracle/OPC/lib/libopc.so,ENV=(OPC_PFILE=/u01/app/oracle/product/12.1.0/db_1/dbs/opctestdb1.ora)';
new RMAN configuration parameters are successfully stored
released channel: ORA_DISK_1

Step 3: Delete selected backup pieces or all backups

RMAN> delete backup;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=57 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=36 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=3.16.9.21

List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
1 1 1 1 AVAILABLE SBT_TAPE 01rsaebi_1_1
2 2 1 1 AVAILABLE SBT_TAPE 02rsaecb_1_1
3 3 1 1 AVAILABLE SBT_TAPE 03rsaegs_1_1
4 4 1 1 AVAILABLE SBT_TAPE 04rsaej7_1_1
5 5 1 1 AVAILABLE SBT_TAPE 05rsael9_1_1

Do you really want to delete the above objects (enter YES or NO)? yes
deleted backup piece
backup piece handle=01rsaebi_1_1 RECID=1 STAMP=935672178
deleted backup piece
backup piece handle=02rsaecb_1_1 RECID=2 STAMP=935672204
deleted backup piece
backup piece handle=03rsaegs_1_1 RECID=3 STAMP=935672348
deleted backup piece
backup piece handle=04rsaej7_1_1 RECID=4 STAMP=935672423
deleted backup piece
backup piece handle=05rsael9_1_1 RECID=5 STAMP=935672489
Deleted 5 objects
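Deleting everything with `delete backup` is rarely what you want in practice. As a hedged sketch (the backup set keys below are illustrative, taken from the listing in Step 1), RMAN can also delete individual backup sets, or purge only backups that have fallen outside your retention policy:

```
RMAN> delete backupset 1, 2;                      # delete only the listed BS keys
RMAN> delete noprompt obsolete device type sbt;   # purge backups outside the retention policy
```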

 

Changing an SSH Public Key for Oracle Database Cloud Service Instance

As many of you already know, to access Oracle Cloud instances securely, you need to generate at least one SSH key pair and upload the SSH public key that should be associated with the instance to Oracle Compute Cloud Service. There might be times when you need to add or change an SSH key pair; you can follow the steps below to accomplish this task.

  1. Generate a public and private key pair using PuTTYgen

  2. Go to the Oracle Database Cloud Service console, then the SSH Access tab

  3. Click Add Key for the target database service name

  4. It is important to note that the process of changing or adding SSH keys will require restarting your VM

  5. Now go to the PuTTY Auth tab and select the new private key file

  6. You should now see that you are logged in using the new RSA key (rsa-key-20170210)

  7. If you no longer need the old SSH key, you can remove it from /home/oracle/.ssh or /home/opc/.ssh
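The steps above use PuTTYgen on Windows. On Linux or macOS, a minimal sketch of generating an equivalent key pair with OpenSSH looks like the following (the file name is just an example):

```shell
# remove any stale example files, then generate a 2048-bit RSA key pair
# with no passphrase (the file name oracle_cloud_key is an example)
rm -f ./oracle_cloud_key ./oracle_cloud_key.pub
ssh-keygen -t rsa -b 2048 -N "" -f ./oracle_cloud_key
# the .pub file is what you upload through the SSH Access tab
cat ./oracle_cloud_key.pub
```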

 

Migrate Data between Oracle Cloud Instances Using Oracle Storage Cloud Service

While working on cloud migrations, I tried the following Oracle Storage Cloud Service feature that I thought I should share with my readers. The Oracle Storage Cloud Service lets you dynamically provision and manage storage volumes. You can create, attach, connect, and move volumes as needed to meet your storage and application requirements. If you are working on any type of cloud project that involves migrating large data sets between Oracle Cloud instances (Compute or Database), this particular storage volume feature can be very useful to you. As per my testing, data will remain intact until you reformat or delete the volume. Here are the steps you can follow to migrate data using Oracle Cloud storage volumes.

If you need steps to create storage volumes for Oracle Cloud instances, you can review my earlier blog post (http://blog.umairmansoob.com/adding-storage-volume-to-oracle-cloud-instance/).

Step 1: Copy or create files on the storage volume. As you can see, I have mounted two additional volumes (/clouddb1nfs, /clouddb2nfs) for my migration project.

[opc@cloudb1 testdb12c1]$ df -k

Filesystem           1K-blocks     Used Available Use% Mounted on
/dev/xvdb3            26198448 14289412  10555224  58% /
tmpfs                  3698528      388   3698140   1% /dev/shm
/dev/xvdb1              487652   151198    306758  34% /boot
/dev/xvde1            61795324 10542648  48090616  18% /u01
/dev/mapper/dataVolGroup-lvol0
                      51470972  6070756  42762600  13% /u02
/dev/mapper/fraVolGroup-lvol0
                       7089656  2184668   4521808  33% /u03
/dev/mapper/redoVolGroup-lvol0
                      25667900  3496256  20844748  15% /u04
/dev/xvdg             25803068   542936  23949412   3% /clouddb1nfs
/dev/xvdh             25803068   643672  23848676   3% /clouddb2nfs

Step 2: Select and verify the storage volume you want to move to the new cloud instance

As you can see, I have Oracle RMAN backups sitting on an Oracle Cloud storage volume (/clouddb2nfs/backup/rman/testdb12c1), and I am planning to move this volume to the cloudb2 cloud instance.

[opc@cloudb1 testdb12c1]$ pwd

/clouddb2nfs/backup/rman/testdb12c1

[opc@cloudb1 testdb12c1]$ ls -ltr
total 467464
-rw-r--r-- 1 oracle oinstall  24238592 Feb  9 17:44 TESTDB12_FULL_C_DISK_20170209_8_1
-rw-r--r-- 1 oracle oinstall  21511168 Feb  9 17:44 TESTDB12_FULL_C_DISK_20170209_7_1
-rw-r--r-- 1 oracle oinstall  23604736 Feb  9 17:44 TESTDB12_FULL_C_DISK_20170209_6_1
-rw-r--r-- 1 oracle oinstall  21646848 Feb  9 17:44 TESTDB12_FULL_C_DISK_20170209_5_1
-rw-r--r-- 1 oracle oinstall   1097728 Feb  9 17:45 TESTDB12_FULL_C_DISK_20170209_12_1
-rw-r--r-- 1 oracle oinstall     98304 Feb  9 17:45 TESTDB12_FULL_C_DISK_20170209_13_1
-rw-r--r-- 1 oracle oinstall  50995200 Feb  9 17:45 TESTDB12_FULL_C_DISK_20170209_9_1
-rw-r--r-- 1 oracle oinstall 110346240 Feb  9 17:46 TESTDB12_FULL_C_DISK_20170209_11_1
-rw-r--r-- 1 oracle oinstall 214048768 Feb  9 17:46 TESTDB12_FULL_C_DISK_20170209_10_1
-rw-r--r-- 1 oracle oinstall    364032 Feb  9 17:46 TESTDB12_FULL_C_DISK_20170209_14_1
-rw-r--r-- 1 oracle oinstall     98304 Feb  9 17:46 spfile_TESTDB12_15_20170209_dbid2927258729.rman
-rw-r--r-- 1 oracle oinstall  10092544 Feb  9 17:46 Controlfile_TESTDB12_20170209_dbid2927258729_s16_p1

Step 3: It is very important to unmount the target storage volume (/clouddb2nfs) from the source instance

[opc@cloudb1 home]$ sudo umount /clouddb2nfs
[opc@cloudb1 home]$ df -k
Filesystem           1K-blocks     Used Available Use% Mounted on
/dev/xvdb3            26198448 14289620  10555016  58% /
tmpfs                  3698528      388   3698140   1% /dev/shm
/dev/xvdb1              487652   151198    306758  34% /boot
/dev/xvde1            61795324 10542648  48090616  18% /u01
/dev/mapper/dataVolGroup-lvol0
                      51470972  6070756  42762600  13% /u02
/dev/mapper/fraVolGroup-lvol0
                       7089656  2184668   4521808  33% /u03
/dev/mapper/redoVolGroup-lvol0
                      25667900  3496256  20844748  15% /u04
/dev/xvdg             25803068   542936  23949412   3% /clouddb1nfs

Step 4: Detach the volume and attach it to the target Oracle Cloud instance

First, detach the volume from the source instance.

Then attach the storage volume to the target instance.

Verify that the storage volume has been attached to the target instance.

Step 5: Verify that the storage volume has been presented to the target instance

[opc@cloudb2 ~]$ ls -ltr /dev/xvd*
brw-rw---- 1 root disk 202,  16 Feb  7 22:19 /dev/xvdb
brw-rw---- 1 root disk 202,  18 Feb  7 22:19 /dev/xvdb2
brw-rw---- 1 root disk 202,  19 Feb  7 22:19 /dev/xvdb3
brw-rw---- 1 root disk 202,  17 Feb  7 22:19 /dev/xvdb1
brw-rw---- 1 root disk 202,  64 Feb  7 22:21 /dev/xvde
brw-rw---- 1 root disk 202,  65 Feb  7 22:24 /dev/xvde1
brw-rw---- 1 root disk 202,  80 Feb  7 22:24 /dev/xvdf
brw-rw---- 1 root disk 202,  81 Feb  7 22:24 /dev/xvdf1
brw-rw---- 1 root disk 202,  48 Feb  7 22:24 /dev/xvdd
brw-rw---- 1 root disk 202,  49 Feb  7 22:24 /dev/xvdd1
brw-rw---- 1 root disk 202,  32 Feb  7 22:24 /dev/xvdc
brw-rw---- 1 root disk 202,  33 Feb  7 22:24 /dev/xvdc1
brw-rw---- 1 root disk 202,  96 Feb  8 00:35 /dev/xvdg

Step 6: Mount the storage volume on the target instance

[opc@cloudb1 ~]$ sudo mkdir /clouddb2nfs
[opc@cloudb1 ~]$ sudo mount /dev/xvdh /clouddb2nfs
[opc@cloudb1 ~]$ df -k
Filesystem           1K-blocks     Used Available Use% Mounted on
/dev/xvdb3            26198448 14288592  10556044  58% /
tmpfs                  3698528      388   3698140   1% /dev/shm
/dev/xvdb1              487652   151198    306758  34% /boot
/dev/xvde1            61795324 10542640  48090624  18% /u01
/dev/mapper/dataVolGroup-lvol0
                      51470972  6070756  42762600  13% /u02
/dev/mapper/fraVolGroup-lvol0
                       7089656  2184668   4521808  33% /u03
/dev/mapper/redoVolGroup-lvol0
                      25667900  3496256  20844748  15% /u04
/dev/xvdg             25803068   542936  23949412   3% /clouddb1nfs
/dev/xvdh             25803068   176208  24316140   1% /clouddb2nfs

Step 7: Now verify that you can see your data on the target instance.

[opc@cloudb1 testdb12c1]$ pwd
/clouddb2nfs/backup/rman/testdb12c1
[opc@cloudb1 testdb12c1]$ ls -ltr
total 467464
-rw-r--r-- 1 oracle oinstall  24238592 Feb  9 17:44 TESTDB12_FULL_C_DISK_20170209_8_1
-rw-r--r-- 1 oracle oinstall  21511168 Feb  9 17:44 TESTDB12_FULL_C_DISK_20170209_7_1
-rw-r--r-- 1 oracle oinstall  23604736 Feb  9 17:44 TESTDB12_FULL_C_DISK_20170209_6_1
-rw-r--r-- 1 oracle oinstall  21646848 Feb  9 17:44 TESTDB12_FULL_C_DISK_20170209_5_1
-rw-r--r-- 1 oracle oinstall   1097728 Feb  9 17:45 TESTDB12_FULL_C_DISK_20170209_12_1
-rw-r--r-- 1 oracle oinstall     98304 Feb  9 17:45 TESTDB12_FULL_C_DISK_20170209_13_1
-rw-r--r-- 1 oracle oinstall  50995200 Feb  9 17:45 TESTDB12_FULL_C_DISK_20170209_9_1
-rw-r--r-- 1 oracle oinstall 110346240 Feb  9 17:46 TESTDB12_FULL_C_DISK_20170209_11_1
-rw-r--r-- 1 oracle oinstall 214048768 Feb  9 17:46 TESTDB12_FULL_C_DISK_20170209_10_1
-rw-r--r-- 1 oracle oinstall    364032 Feb  9 17:46 TESTDB12_FULL_C_DISK_20170209_14_1
-rw-r--r-- 1 oracle oinstall     98304 Feb  9 17:46 spfile_TESTDB12_15_20170209_dbid2927258729.rman
-rw-r--r-- 1 oracle oinstall  10092544 Feb  9 17:46 Controlfile_TESTDB12_20170209_dbid2927258729_s16_p1

Step 8: If you want this mount to be persistent across instance restarts, edit the /etc/fstab file and add the mount as an entry in that file.
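As a sketch, the new entry might look like the following (the device name /dev/xvdh and the ext3 filesystem type are assumptions based on this example; adjust both for your volume):

```
# /etc/fstab — example entry for the migrated volume
/dev/xvdh    /clouddb2nfs    ext3    defaults,nodev    0 0
```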

Configure Oracle Database Backup Cloud Service

Oracle has been offering Oracle Database Backup Cloud Service for some time now, and I believe it's time for cloud customers to realize that Oracle will be a leading cloud provider in the coming months. Oracle Database Backup Cloud Service not only provides a secure, scalable, on-demand storage solution for backing up Oracle databases to the Oracle Cloud, but also complements your existing backup strategy by providing an off-site storage location in the public cloud. It is a highly available backup service that also supports RMAN backup encryption and RMAN backup compression. Here are the steps to configure and back up your database to Oracle Database Backup Cloud Service.

First you need to install the Oracle Database Cloud Backup Module. You can download the backup module using the following link
(http://www.oracle.com/technetwork/database/availability/oracle-cloud-backup-2162729.html). Once downloaded, extract the module to any directory. I will be using the $ORACLE_HOME/opc directory to install the Oracle backup module. Before you can install the module, you should gather the following information.

  1. identityDomain
  2. Cloud User Name
  3. Cloud User Password
  4. Cloud Storage Rest Point

Replace the items in angle brackets based on your environment

$ java -jar opc_install.jar -serviceName storagesvc -identityDomain <identitydomain> -opcId <USERNAME> -opcPass <password> -walletDir /u01/app/oracle/OPC/wallet -libDir /u01/app/oracle/OPC/lib -host <cloud Storage Rest Point>

Oracle Database Cloud Backup Module Install Tool, build 2016-10-07
Oracle Database Cloud Backup Module credentials are valid.
Oracle Database Cloud Backup Module wallet created in directory /u01/app/oracle/product/12.1.0/db_1/opc/wallet.
Oracle Database Cloud Backup Module initialization file /u01/app/oracle/product/12.1.0/db_1/dbs/opctestdb12c1.ora created.
Downloading Oracle Database Cloud Backup Module Software Library from file opc_linux64.zip.
Downloaded 26528348 bytes in 10 seconds. Transfer rate was 2652834 bytes/second.
Download complete.

Verify the install by checking the $ORACLE_HOME/dbs and /u01/app/oracle/OPC/wallet directories to see if opcSID.ora and cwallet.sso have been created, respectively. If you find the files mentioned above, you can start running backups to Oracle Database Backup Cloud Service. Here is a simple script to test the newly configured backup service. Please note that Oracle Database Backup Cloud Service only supports encrypted backups for security reasons, and you have the option to use TDE or RMAN encryption. I will be using the RMAN encryption option for this blog, which also doesn't require any licensing.
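As a small, hedged helper (the function name and arguments are my own, not part of the backup module), you could script that verification like this:

```shell
# check that the backup module install produced its two key files:
# the opcSID.ora init file under $ORACLE_HOME/dbs and cwallet.sso in the wallet dir
check_opc_install() {
  dbs_dir=$1; wallet_dir=$2; sid=$3
  [ -f "$dbs_dir/opc$sid.ora" ] || { echo "missing opc$sid.ora"; return 1; }
  [ -f "$wallet_dir/cwallet.sso" ] || { echo "missing cwallet.sso"; return 1; }
  echo "backup module install looks complete"
}

# example: check_opc_install "$ORACLE_HOME/dbs" /u01/app/oracle/OPC/wallet testdb12c1
```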

Let's set encryption on before we run any backups

[oracle@oraclenode1 ~]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sat Feb 11 01:35:36 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: TESTDB12 (DBID=2927258729)

RMAN> SET ENCRYPTION ON IDENTIFIED BY passw0rd ONLY;

executing command: SET encryption
using target database control file instead of recovery catalog

Now let's try a datafile backup to Oracle Database Backup Cloud Service

RMAN> run
{
allocate channel ch1 device type sbt parms='SBT_LIBRARY=/u01/app/oracle/OPC/lib/libopc.so,ENV=(OPC_PFILE=/u01/app/oracle/product/12.1.0/db_1/dbs/opctestdb12c1.ora)';
BACKUP datafile 1;
release channel ch1;
}

allocated channel: ch1
channel ch1: SID=71 device type=SBT_TAPE
channel ch1: Oracle Database Backup Service Library VER=3.16.9.21

Starting backup at 11-FEB-17
channel ch1: starting full datafile backup set
channel ch1: specifying datafile(s) in backup set
input datafile file number=00001 name=/oradata/TESTDB12C1/datafile/o1_mf_system_d9s6oxr3_.dbf
channel ch1: starting piece 1 at 11-FEB-17
Finished backup at 11-FEB-17
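The same channel allocation extends naturally to a full database backup. As a sketch (the tag name is illustrative; the library path and OPC_PFILE are the ones configured above):

```
RMAN> run
{
allocate channel ch1 device type sbt parms='SBT_LIBRARY=/u01/app/oracle/OPC/lib/libopc.so,ENV=(OPC_PFILE=/u01/app/oracle/product/12.1.0/db_1/dbs/opctestdb12c1.ora)';
backup database plus archivelog tag 'CLOUD_FULL';
release channel ch1;
}
```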

 

Using Oracle Bare Metal Cloud Object Storage Service for Big Data Workloads

The Oracle Bare Metal Cloud Object Storage Service is a true cloud storage platform that offers reliable and cost-efficient data durability. With the Object Storage Service, you can safely and securely store or retrieve data directly from the internet or from within the cloud platform. You can simply use the REST-based storage management interface to easily manage storage at scale. The Object Storage Service supports a wide variety of data content and use cases, like storing Oracle database backups and archive data off-site, or storing data for big data analytics.

As I just mentioned, you can use the Object Storage Service as the primary data repository for big data. This means you can run big data workloads on Oracle Bare Metal Cloud. The Object Storage HDFS connector provides the necessary connectivity to various big data analytic engines. This HDFS connectivity enables the analytics engines to work directly with data stored in the Object Storage. Using the HDFS connector, you can run Hadoop or Spark jobs against data in the Oracle Bare Metal Cloud Object Storage Service.

The connector has the following features:

  • Supports reading and writing data stored in the Object Storage Service
  • It is compatible with existing buckets of data
  • It is compatible with Hadoop 2.7.2
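As a rough sketch of what that looks like in practice (the URI scheme, bucket, and namespace below are placeholders and assumptions; check the connector documentation for your release), once the connector is on the Hadoop classpath you address Object Storage data with a connector-specific filesystem URI:

```
# illustrative only: scheme, bucket, and namespace are placeholders
hadoop fs -ls oci://my-bucket@my-namespace/data/
hadoop jar wordcount.jar WordCount oci://my-bucket@my-namespace/data/in oci://my-bucket@my-namespace/data/out
```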

Migrate Data between cloud instances using Oracle Bare Metal Cloud Block Volume Service

The Oracle Bare Metal Cloud Block Volume Service lets you dynamically provision and manage block storage volumes. You can create, attach, connect, and move volumes as needed to meet your storage and application requirements. Even though the common usage of the Block Volume Service is adding storage capacity to an Oracle Bare Metal Cloud Services instance, a Block Volume Service volume can also be detached from an instance and moved to a different instance without loss of data. This data persistence allows you to easily migrate data between instances and ensures that your data is safely stored, even when it is not connected to an instance. Any data will remain intact until you reformat or delete the volume.

You can use the following simple steps to move data between instances:

  • Unmount the drive from the initial instance
  • Terminate the iSCSI connection
  • Attach the volume to the second instance
  • Mount the drive from that instance
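On the command line, the first two of those steps might look like the following (the mount point and iSCSI target details are placeholders; the console shows the exact iscsiadm commands for your volume):

```
sudo umount /mnt/datavol                                       # 1. unmount the drive
sudo iscsiadm -m node -T <target-IQN> -p <target-IP>:3260 -u   # 2. log out of the iSCSI session
```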

If you need detailed steps on how to move volumes between Oracle Cloud instances, you can review my blog (http://blog.umairmansoob.com/migrate-data-between-oracle-cloud-instances-using-oracle-storage-cloud-serivce/)

Note :- Block Volume Service volumes can be either 256 GB or 2 TB, with 2 TB volumes offering better overall performance. By default, Block Volume Service volumes are 256 GB. For 2 TB volumes, you must create a request using My Oracle Support.

Scale Your Cloud Application using Oracle Cloud Load Balancing Service

The Oracle Bare Metal Cloud Load Balancing Service provides automated traffic distribution from one entry point to multiple application servers while increasing the capacity (concurrent users) and reliability of applications. Here is a brief description of the benefits, limitations, and load balancing policies of the Oracle Cloud Load Balancing Service.

Benefits  

  • A load balancer improves resource utilization, facilitates scaling, and helps ensure high availability.
  • You can configure multiple load balancing policies
  • You can perform application-specific health checks to ensure that the load balancer directs traffic only to healthy instances.
  • The load balancer can reduce your maintenance window by draining traffic from an unhealthy application server before you take it offline for maintenance. 

Limitations

  • You cannot dynamically change the load balancer shape to handle additional incoming traffic.
  • The Load Balancing Service does not support IPv6 addresses.
  • The load balancer is limited to 100,000 concurrent connections.
  • Outbound traffic to the internet is limited to 2 Gbps.
  • Each load balancer has the following configuration limits:
    • One public IP address
    • 16 backend sets
    • 512 backend servers per backend set
    • 1024 backend servers total
    • 16 listeners

Load Balancing Policies

You can apply the following policies to control traffic distribution to your backend application servers.

  • Least Connections: The Least Connections policy routes incoming non-sticky request traffic to the backend server with the fewest active connections.
  • Round Robin: Round Robin is the default load balancer policy. This policy distributes incoming traffic sequentially to each server in a backend set list.
  • IP Hash: The IP Hash policy uses an incoming request’s source IP address as a hashing key to route non-sticky traffic to the same backend server.
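To make the default policy concrete, here is a tiny, self-contained sketch of round-robin selection over three hypothetical backends (app1 through app3 are made-up names):

```shell
# cycle through three backends in order, wrapping back to the first
i=0
next_backend() {
  case $((i % 3)) in
    0) echo app1 ;;
    1) echo app2 ;;
    2) echo app3 ;;
  esac
  i=$((i + 1))
}
next_backend   # app1
next_backend   # app2
next_backend   # app3
next_backend   # app1 again: traffic wraps around
```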

Accessing Your Oracle Database Cloud Service Instance

As you may already know, Oracle Cloud uses a public and private key pair to access Oracle Cloud instances. If you want to access an Oracle Cloud database, you have the option to open ports such as 1521 and 5500 to the public (not recommended), or to open an SSH tunnel using the private and public key pair. Most people who work with databases would like to be able to use the following methods to access their databases.

  1. Login to Oracle Cloud Instance as Oracle user
  2. SQL Developer
  3. Oracle EM Express
  4. Dbaas_monitor

You can use the following steps to access your database using the methods listed above. You will need the public IP address of the target instance and the private key file to follow these methods.

Login to Oracle Cloud Instance as Oracle user

I will be using PuTTY for this blog, though I am sure you can use other SSH clients. You will use the public IP address of your Oracle Cloud database instance to open an SSH tunnel using the private key. Please note that you will follow the same steps to open tunnels for the other three methods, just with a different port number.

Step 1: Open PuTTY and provide the public IP address

Step 2: Go to the Data section and provide the user name (oracle or opc)

Step 3: Go to the Auth section and select the private key file

Step 4: Go to the Tunnels section and add ports (22, 1521, 5500, 443) using the public IP address.

Step 5: Save the PuTTY session and click Open.

Note: Leave the PuTTY session open to access the database using SQL Developer, DBaaS Monitor, and Oracle EM Express

SQL Developer Access: Use the following information to access the database using SQL Developer

DBaaS Monitor: Use the following information to access the database using DBaaS Monitor

Oracle EM Express: Use the following information to access the database using EM Express

 

Adding Storage Volume to Oracle Cloud Instance

Oracle Cloud instances come with a specific list of mounts. There might be a situation where you only need more storage, not CPU or memory. In that case, you can use the following steps to add storage volumes to an Oracle Cloud instance. This procedure will work for both compute and database cloud instances.

  1. Dashboard -> Oracle Compute Cloud Service

  2. Click Create Storage Volumes

  3. Click Attach Storage Volume to Instance

It is important to note the disk # in this step; you will need it later to locate your volume on the cloud instance.

  4. Login to the Oracle Cloud instance as opc

You can use PuTTY with the public and private keys to connect to an Oracle Cloud compute or database instance.

  5. Find your volume using ls /dev/xvd* -ltr; this is where your disk number will be very important. Please use the following table to map your disk; for this example our disk number was 6.
Disk #  Volume
1       /dev/xvdb*
2       /dev/xvdc*
3       /dev/xvdd*
4       /dev/xvde*
5       /dev/xvdf*
6       /dev/xvdg*
7       /dev/xvdh*
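The pattern in the table is simple: disk N maps to the (N+1)-th letter of the alphabet. A small hedged helper (my own, for illustration only) that computes the device name from a disk number:

```shell
# map a console disk number to its device: 1 -> /dev/xvdb, 2 -> /dev/xvdc, ...
disk_to_device() {
  # 97 is ASCII 'a'; disk 1 lands on 'b'
  letter=$(printf "\\$(printf '%03o' $((97 + $1)))")
  echo "/dev/xvd$letter"
}

disk_to_device 6   # /dev/xvdg
```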

 

 [opc@cloudb1 ~]$ ls /dev/xvd*  -ltr
brw-rw---- 1 root disk 202, 16 Feb  7 22:19 /dev/xvdb
brw-rw---- 1 root disk 202, 18 Feb  7 22:19 /dev/xvdb2
brw-rw---- 1 root disk 202, 19 Feb  7 22:19 /dev/xvdb3
brw-rw---- 1 root disk 202, 17 Feb  7 22:19 /dev/xvdb1
brw-rw---- 1 root disk 202, 64 Feb  7 22:21 /dev/xvde
brw-rw---- 1 root disk 202, 65 Feb  7 22:24 /dev/xvde1
brw-rw---- 1 root disk 202, 80 Feb  7 22:24 /dev/xvdf
brw-rw---- 1 root disk 202, 81 Feb  7 22:24 /dev/xvdf1
brw-rw---- 1 root disk 202, 48 Feb  7 22:24 /dev/xvdd
brw-rw---- 1 root disk 202, 49 Feb  7 22:24 /dev/xvdd1
brw-rw---- 1 root disk 202, 32 Feb  7 22:24 /dev/xvdc
brw-rw---- 1 root disk 202, 33 Feb  7 22:24 /dev/xvdc1
brw-rw---- 1 root disk 202, 96 Feb  8 00:21 /dev/xvdg
  6. Create a file system on the storage volume using sudo mkfs -t ext3 /dev/xvdg

[opc@cloudb1 ~]$ sudo mkfs -t ext3 /dev/xvdg
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1638400 inodes, 6553600 blocks
327680 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
200 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

  7. Create a directory and mount the new volume

[opc@cloudb1 ~]$ sudo mkdir /clouddb1nfs
[opc@cloudb1 ~]$ sudo mount /dev/xvdg /clouddb1nfs
[opc@cloudb1 ~]$ df -k
Filesystem           1K-blocks     Used Available Use% Mounted on
/dev/xvdb3            26198448 14051756  10792880  57% /
tmpfs                  3698528      388   3698140   1% /dev/shm
/dev/xvdb1              487652   151198    306758  34% /boot
/dev/xvde1            61795324 10453192  48180072  18% /u01
/dev/mapper/dataVolGroup-lvol0
                      51470972  3840160  44993196   8% /u02
/dev/mapper/fraVolGroup-lvol0
                       7089656  2181104   4525372  33% /u03
/dev/mapper/redoVolGroup-lvol0
                      25667900  3495940  20845064  15% /u04
/dev/xvdg             25803068   176196  24316152   1% /clouddb1nfs

  8. Grant oracle or any other application user ownership of the new storage volume

[opc@cloudb1 ~]$ sudo chown oracle:oinstall /clouddb1nfs
[opc@cloudb1 ~]$ sudo chmod 755 /clouddb1nfs

  9. Add a mount point entry in /etc/fstab so the storage volume is automatically mounted when the cloud instance restarts.

[opc@cloudb1 ~]$ sudo vi /etc/fstab

LABEL=DB_BITS           /u01                    ext4    defaults,nodev        0 0
LABEL=DB_DATA           /u02                    ext4    defaults,nodev        0 0
LABEL=DB_FRA            /u03                    ext4    defaults,nodev        0 0
LABEL=DB_REDO           /u04                    ext4    defaults,nodev        0 0
/dev/xvdg               /clouddb1nfs            ext3    defaults,nodev        0 0

  10. Login as oracle and test whether you can create files or directories
[oracle@cloudb1 ~]$ cd /clouddb1nfs
[oracle@cloudb1 clouddb1nfs]$ ls
lost+found
[oracle@cloudb1 clouddb1nfs]$ ls -ltr
total 16
drwx------ 2 root root 16384 Feb  8 00:35 lost+found
[oracle@cloudb1 clouddb1nfs]$ mkdir dump
[oracle@cloudb1 clouddb1nfs]$ mkdir rman
[oracle@cloudb1 clouddb1nfs]$ ls -ltr
total 24
drwx------ 2 root   root     16384 Feb  8 00:35 lost+found
drwxr-xr-x 2 oracle oinstall  4096 Feb  8 00:43 dump
drwxr-xr-x 2 oracle oinstall  4096 Feb  8 00:43 rman
[oracle@cloudb1 clouddb1nfs]$

 

Oracle New Bare Metal Cloud Database Service

As many of you already know, Oracle has been working on developing a modern cloud. Oracle's modern cloud, let's call it Oracle Cloud 2.0, is designed to provide extreme performance for your critical applications, which you might not be able to migrate to shared virtualized hardware. Oracle's new bare metal cloud database service lets you quickly create an Oracle Database System with one or more databases on it. A DB System is a dedicated bare metal instance running Oracle Linux 6.8.

Supported Editions

Similar to Oracle Cloud 1.0, the new bare metal cloud database service supports the following editions. It is also important to note that when you launch a DB System, you select a single Oracle Database edition that applies to all the databases on that DB System. The selected edition cannot be changed later.

  • Standard Edition
  • Enterprise Edition
  • Enterprise Edition – High Performance
  • Enterprise Edition – Extreme Performance

Supported Versions

As of now, the following database versions are supported with all the above editions, but I am sure 12.2.0.2 will be available soon.

  • 11.2.0.4
  • 12.1.0.2

Available shapes:  

As of now, this service is only available in two shapes and only uses locally attached NVMe storage. This is good news for customers who are looking for a cloud vendor that can provide extreme performance for their applications. The available shapes are:

  • HighIO1.36: Adds 36 CPU cores, 512 GB memory, and four 3.2 TB locally attached NVMe drives (12.8 TB total) to the DB System. The Database Service stripes the NVMe drives with 3-way mirroring for redundancy, so the available storage is 4.2 TB.
  • DenseIO1.36: Adds 36 CPU cores, 512 GB memory, and nine 3.2 TB locally attached NVMe drives (28.8 TB total) to the DB System. The Database Service stripes the NVMe drives with 3-way mirroring for redundancy, so the available storage is 9.6 TB.
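The usable-capacity figures follow directly from dividing raw capacity by three for the 3-way mirroring. A quick check (the quoted 4.2 TB for HighIO1.36 appears to round down from the exact result):

```shell
# usable TB = drives * drive_size_tb / 3 (3-way mirroring)
usable_tb() {
  awk -v n="$1" -v s="$2" 'BEGIN { printf "%.1f\n", n * s / 3 }'
}
usable_tb 4 3.2   # HighIO1.36: 12.8 TB raw -> ~4.3 (quoted as 4.2 TB)
usable_tb 9 3.2   # DenseIO1.36: 28.8 TB raw -> 9.6
```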