General guidelines for using ZFS storage appliance for Exadata Backups

  • It is recommended to use ZFS storage compression (LZ4) to reduce the amount of space required to store Oracle database backups, and to disable RMAN-level compression to increase backup throughput and reduce CPU overhead on the Exadata machine.
  • It is recommended to utilize all Exadata nodes to maximize backup throughput for both traditional and image copy backups, allocating RMAN channels across the nodes as shown in the sample scripts below.

  • It is recommended to set the following ZFS project/share attributes to achieve optimal performance for both traditional and image copy backups (a sample CLI sketch follows the tables below).

Best Practices for Traditional RMAN Backup Strategy

Record Size: 1M
Synchronous Write Bias: Throughput
Read Cache: Do not use cache devices
Compression: LZ4

Best Practices for Incrementally Updated Backup Strategy

Record Size: 32K
Synchronous Write Bias: Latency
Read Cache: Default
Compression: LZ4
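The share attributes above can be set from the appliance CLI as well as from the BUI. Below is a minimal CLI sketch for the traditional backup profile, assuming an example project named proddb_backup in pool1 on zfscontroller-1; the property names recordsize, logbias, secondarycache, and compression are the standard ZFS share properties, and secondarycache=none is assumed to correspond to the BUI setting "Do not use cache devices", so verify them against your appliance firmware.

# Sketch only: apply the traditional-backup share settings to an example project
cat <<eof | ssh -T -i ~/.ssh/id_rsa root@zfscontroller-1
shares
set pool=pool1
select proddb_backup
set recordsize=1M
set logbias=throughput
set secondarycache=none
set compression=lz4
commit
eof

For the incrementally updated backup profile, the same pattern applies with recordsize=32K, logbias=latency, and the default read-cache setting.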

 

  • It is recommended to create dedicated database services for backups across all Exadata nodes to achieve optimal performance by parallelizing the workload across all nodes, and to maintain availability in case of an instance or node failure.

 Sample Script for Creating Services

srvctl add service -d proddb -r proddb1 -a proddb2 -s proddb_bkup1
srvctl start service -d proddb -s proddb_bkup1

srvctl add service -d proddb -r proddb2 -a proddb3 -s proddb_bkup2
srvctl start service -d proddb -s proddb_bkup2

srvctl add service -d proddb -r proddb3 -a proddb4 -s proddb_bkup3
srvctl start service -d proddb -s proddb_bkup3

srvctl add service -d proddb -r proddb4 -a proddb5 -s proddb_bkup4
srvctl start service -d proddb -s proddb_bkup4

srvctl add service -d proddb -r proddb5 -a proddb6 -s proddb_bkup5
srvctl start service -d proddb -s proddb_bkup5

srvctl add service -d proddb -r proddb6 -a proddb7 -s proddb_bkup6
srvctl start service -d proddb -s proddb_bkup6

srvctl add service -d proddb -r proddb7 -a proddb8 -s proddb_bkup7
srvctl start service -d proddb -s proddb_bkup7

srvctl add service -d proddb -r proddb8 -a proddb1 -s proddb_bkup8
srvctl start service -d proddb -s proddb_bkup8
  • It is recommended to use Oracle Direct NFS (dNFS) for all database and RMAN workloads when using an Oracle ZFS Storage Appliance with Exadata machines. dNFS reduces CPU utilization by bypassing the operating system NFS client and boosts parallel I/O throughput by opening an individual connection for each database process (see the sketch after this list).
  • It is recommended to set the RMAN parameters SECTION SIZE to 100G and FILESPERSET to 1 to achieve optimal performance and throughput.
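The following is a minimal sketch of the dNFS setup referenced above: the client is enabled by relinking the database home, and the ZFS shares are described in oranfstab. The host name, IP addresses, and export paths are examples only and must be adapted to your environment.

# Enable the Direct NFS client in each database home (relink once per home)
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# Example entry in $ORACLE_HOME/dbs/oranfstab (or /etc/oranfstab) for one backup share
server: zfssa-head1
path:   192.168.20.10
path:   192.168.20.11
local:  192.168.20.1
export: /export/proddb/backup1   mount: /zfssa/proddb/backup1

After the instances are restarted, dNFS usage can be confirmed by querying v$dnfs_servers and v$dnfs_channels.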

Exadata Traditional RMAN backup with ZFS Storage Appliance

RMAN backup sets are logical entities created by an RMAN backup, and they can be both encrypted and compressed at the same time. A traditional RMAN backup strategy involves performing full backups, or any combination of level 0, level 1 cumulative, and level 1 differential incremental backups, to restore and recover the database in the event of a physical or logical failure. In practice, a traditional Exadata backup strategy consists of full online database backups performed weekly or daily, with at least one copy of the database's archived redo logs stored on the Oracle ZFS Storage Appliance. The full backup and archive logs can be used to recover the database up to the point of failure. Additionally, if you have sized your redo logs properly for a maximum of three log switches per hour, your RPO should never be more than 20 minutes. The recommended version retention objective (VRO) is to keep at least two full backups on the ZFS appliance at all times, with older backups scheduled to be deleted automatically. For small databases it is also a good idea to perform full database backups to achieve a better RTO. As per Oracle MAA best practices, a common implementation is a tiered approach that combines incremental level 0 and level 1 backups: level 0 incremental backups are often taken on a weekly basis, with level 1 differential or cumulative incremental backups performed daily. It is also important to enable RMAN block change tracking, as it can drastically improve the performance of incremental backups.
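Block change tracking can be enabled and verified with the following SQL; the '+DATAC1' disk group name is only an example and should point to shared storage on a RAC database.

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DATAC1';
SQL> SELECT status, filename FROM v$block_change_tracking;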


Sample Traditional RMAN backup script:
run

{

sql 'alter system set "_backup_disk_bufcnt"=64 scope=memory';

sql 'alter system set "_backup_disk_bufsz"=1048576 scope=memory';

allocate channel ch01 device type disk connect 'sys/********@proddb_bkup1' FORMAT '/zfssa/proddb/backup1/%U';

allocate channel ch02 device type disk connect 'sys/********@proddb_bkup2' FORMAT '/zfssa/proddb/backup2/%U';

allocate channel ch03 device type disk connect 'sys/********@proddb_bkup3' FORMAT '/zfssa/proddb/backup3/%U';

allocate channel ch04 device type disk connect 'sys/********@proddb_bkup4' FORMAT '/zfssa/proddb/backup4/%U';

allocate channel ch05 device type disk connect 'sys/********@proddb_bkup5' FORMAT '/zfssa/proddb/backup5/%U';

allocate channel ch06 device type disk connect 'sys/********@proddb_bkup6' FORMAT '/zfssa/proddb/backup6/%U';

allocate channel ch07 device type disk connect 'sys/********@proddb_bkup7' FORMAT '/zfssa/proddb/backup7/%U';

allocate channel ch08 device type disk connect 'sys/********@proddb_bkup8' FORMAT '/zfssa/proddb/backup8/%U';

allocate channel ch09 device type disk connect 'sys/********@proddb_bkup1' FORMAT '/zfssa/proddb/backup1/%U';

allocate channel ch10 device type disk connect 'sys/********@proddb_bkup2' FORMAT '/zfssa/proddb/backup2/%U';

allocate channel ch11 device type disk connect 'sys/********@proddb_bkup3' FORMAT '/zfssa/proddb/backup3/%U';

allocate channel ch12 device type disk connect 'sys/********@proddb_bkup4' FORMAT '/zfssa/proddb/backup4/%U';

allocate channel ch13 device type disk connect 'sys/********@proddb_bkup5' FORMAT '/zfssa/proddb/backup5/%U';

allocate channel ch14 device type disk connect 'sys/********@proddb_bkup6' FORMAT '/zfssa/proddb/backup6/%U';

allocate channel ch15 device type disk connect 'sys/********@proddb_bkup7' FORMAT '/zfssa/proddb/backup7/%U';

allocate channel ch16 device type disk connect 'sys/********@proddb_bkup8' FORMAT '/zfssa/proddb/backup8/%U';

BACKUP AS BACKUPSET SECTION SIZE 100G INCREMENTAL LEVEL 0 DATABASE FILESPERSET 1 TAG 'bkup_weekly_L0' plus ARCHIVELOG;

backup spfile format '/zfssa/proddb/backup1/spfile_%d_%s_%T_dbid%I.rman';

backup current controlfile format '/zfssa/proddb/backup1/Controlfile_%d_%T_dbid%I_s%s_p%p';

release channel ch01;

release channel ch02;

release channel ch03;

release channel ch04;

release channel ch05;

release channel ch06;

release channel ch07;

release channel ch08;

release channel ch09;

release channel ch10;

release channel ch11;

release channel ch12;

release channel ch13;

release channel ch14;

release channel ch15;

release channel ch16;

}
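The script above performs the weekly level 0 backup. For the daily level 1 run described earlier, the same channel allocation can be reused and only the BACKUP command changes; a sketch of the modified line (the tag name is an example):

BACKUP AS BACKUPSET SECTION SIZE 100G INCREMENTAL LEVEL 1 DATABASE FILESPERSET 1 TAG 'bkup_daily_L1' PLUS ARCHIVELOG;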

 

Exadata RMAN Image Copy backup with ZFS Storage Appliance

An RMAN image copy backup is a block-for-block copy of the target database, consisting of data files, archive logs, and control files. A block-for-block copy comes with an obvious drawback: it cannot be compressed by RMAN, so storage requirements should be taken into consideration before opting for RMAN image copy backups. If your target database is terabytes in size, the image copy backup takes up a correspondingly large amount of storage space. Fortunately, if you are using a ZFS appliance to store image copy backups, you can use the Oracle ZFS Storage Appliance's native compression to save storage. The Oracle ZFS Storage Appliance supports several compression algorithms for different types of workloads, but it is recommended to use LZ4 for image copy backups.
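One benefit of keeping image copies on a dNFS-mounted ZFS share is that, in a recovery scenario, the database can be switched to the copies and recovered in place instead of waiting for a lengthy restore. A minimal RMAN sketch, assuming the database is mounted and the image copies below are the most recent ones:

RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;

Keep in mind that after the switch the database runs from the ZFS share until the files are moved back to Exadata storage.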


Sample RMAN Image Copy Backup script:

run

{

sql 'alter system set "_backup_file_bufcnt"=64 scope=memory';

sql 'alter system set "_backup_file_bufsz"=1048576 scope=memory';

sql 'ALTER SYSTEM SWITCH ALL LOGFILE';

allocate channel ch01 device type disk connect 'sys/********@proddb_bkup1' FORMAT '/zfssa/proddb/imgbackup1/%U';

allocate channel ch02 device type disk connect 'sys/********@proddb_bkup2' FORMAT '/zfssa/proddb/imgbackup2/%U';

allocate channel ch03 device type disk connect 'sys/********@proddb_bkup3' FORMAT '/zfssa/proddb/imgbackup3/%U';

allocate channel ch04 device type disk connect 'sys/********@proddb_bkup4' FORMAT '/zfssa/proddb/imgbackup4/%U';

allocate channel ch05 device type disk connect 'sys/********@proddb_bkup5' FORMAT '/zfssa/proddb/imgbackup5/%U';

allocate channel ch06 device type disk connect 'sys/********@proddb_bkup6' FORMAT '/zfssa/proddb/imgbackup6/%U';

allocate channel ch07 device type disk connect 'sys/********@proddb_bkup7' FORMAT '/zfssa/proddb/imgbackup7/%U';

allocate channel ch08 device type disk connect 'sys/********@proddb_bkup8' FORMAT '/zfssa/proddb/imgbackup8/%U';

allocate channel ch09 device type disk connect 'sys/********@proddb_bkup1' FORMAT '/zfssa/proddb/imgbackup1/%U';

allocate channel ch10 device type disk connect 'sys/********@proddb_bkup2' FORMAT '/zfssa/proddb/imgbackup2/%U';

allocate channel ch11 device type disk connect 'sys/********@proddb_bkup3' FORMAT '/zfssa/proddb/imgbackup3/%U';

allocate channel ch12 device type disk connect 'sys/********@proddb_bkup4' FORMAT '/zfssa/proddb/imgbackup4/%U';

allocate channel ch13 device type disk connect 'sys/********@proddb_bkup5' FORMAT '/zfssa/proddb/imgbackup5/%U';

allocate channel ch14 device type disk connect 'sys/********@proddb_bkup6' FORMAT '/zfssa/proddb/imgbackup6/%U';

allocate channel ch15 device type disk connect 'sys/********@proddb_bkup7' FORMAT '/zfssa/proddb/imgbackup7/%U';

allocate channel ch16 device type disk connect 'sys/********@proddb_bkup8' FORMAT '/zfssa/proddb/imgbackup8/%U';

backup incremental level 1 for recover of copy with tag 'IMAGECOPY' database;

recover copy of database with tag 'IMAGECOPY';

sql "ALTER DATABASE BACKUP CONTROLFILE TO ''/zfssa/proddb/imgbackup1/proddb/control.bkp''";

release channel ch01;

release channel ch02;

release channel ch03;

release channel ch04;

release channel ch05;

release channel ch06;

release channel ch07;

release channel ch08;

release channel ch09;

release channel ch10;

release channel ch11;

release channel ch12;

release channel ch13;

release channel ch14;

release channel ch15;

release channel ch16;

}
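The script above backs up and rolls forward the data file copies only; archived logs still need to be backed up so that the copies can be recovered to a chosen point in time. The following is a hedged example of a statement that could be added to the run block before the channels are released (the format path is an example):

backup archivelog all not backed up 1 times format '/zfssa/proddb/imgbackup1/arch_%U';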



Deleting Oracle ZFS Appliance Snapshots

The Oracle ZFS Storage Appliance features a snapshot data service. Snapshots are read-only copies of a filesystem at a given point in time. Conceptually, you can think of ZFS snapshots as restore points for a project or share's data set, which can be used to roll the data set back to a point in time, much like Oracle database restore points. ZFS snapshots are logical entities: they consume no space when created, so you can create a virtually unlimited number of them, and space is consumed only as the underlying data changes. Snapshots can be scheduled or taken manually, depending on usage and policies. You can manage snapshots through the Oracle ZFS Storage Appliance browser interface (BUI) or through scripts; scripted management is especially useful when you want to integrate snapshots with Oracle backups. SSH user equivalence is required if you want to execute the following scripts without providing the root password (a minimal key-setup sketch follows this paragraph). The following example shows how to delete project snapshots using a shell script on both ZFS controllers (in case you are using an active/active ZFS cluster).
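A minimal sketch of setting up the key-based SSH access that these scripts rely on; the key path matches the scripts below, and registering the public key for the appliance's root user is typically done in the BUI (for example under Configuration > Preferences > SSH Public Keys), so treat that navigation as an assumption for your firmware release.

# On the Exadata compute node that will run the snapshot scripts
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
cat ~/.ssh/id_rsa.pub
# Register the public key on the appliance for user root, then verify that
# the CLI login no longer prompts for a password:
ssh -T -i ~/.ssh/id_rsa root@zfscontroller-1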

Delete Project Snapshots

 

> cat delete_snap_project.sh

echo "Head 1"

cat <<eof |ssh -T -i ~/.ssh/id_rsa root@zfscontroller-1

script

{

 run('cd /');

 run('shares');

 run('set pool=pool1');

 run('select H1-dbname');

 run('snapshots select snap_20170924_1938');

 run('confirm destroy');

 printf("snapshot of the project has been delete..\n");

}

eof

echo "Head 2"

cat <<eof |ssh -T -i ~/.ssh/id_rsa root@zfscontroller-2

script

{

 run('cd /');

 run('shares');

 run('set pool=pool2');

 run('select H2-dbname');

 run('snapshots select snap_20170924_1938');

 run('confirm destroy');

 printf("snapshot of the project has been delete..\n");

}

eof

Script Output:

> ./delete_snap_project.sh

Head 1

snapshot of the project has been deleted..

Head 2

snapshot of the project has been deleted..

oracle@exa2node:/home/oracle

Listing Oracle ZFS Appliance Snapshots

As described in the previous section, ZFS snapshots are read-only, point-in-time copies of a project or share that can be managed through the BUI or through scripts, and SSH user equivalence is required to run the script below without providing the root password. The following example shows how to list project snapshots, together with the space they reference, using a shell script for both ZFS controllers (in case you are using an active/active ZFS cluster).

List Project SnapShots

> cat list_snapshots.sh

echo "Head 1"

cat <<eof |ssh -T -i ~/.ssh/id_rsa root@zfscontroller-1

script

run('shares');

run ('set pool=pool1');

run ('select H1-dbname');

run ('snapshots');

snapshots = list();

for (i = 0; i < snapshots.length; i++) {

  printf("%20s:", snapshots[i]);

  run ('select ' + snapshots[i]);

  printf("%-10s\n", run('get space_data').split(/\s+/)[3]);

  run('cd ..');

}

eof

echo "Head 2"

cat <<eof |ssh -T -i ~/.ssh/id_rsa root@zfscontroller -2

script

run('shares');

run ('set pool=pool2');

run ('select H2-dbname');

run ('snapshots');

snapshots = list();

for (i = 0; i < snapshots.length; i++) {

  printf("%20s:", snapshots[i]);

  run ('select ' + snapshots[i]);

  printf("%-10s\n", run('get space_data').split(/\s+/)[3]);

  run('cd ..');

}

eof

Script Output:

> ./list_snapshots.sh

Head 1

  snap_20170921_1720:9.17T

  snap_20170924_1938:18.5T

Head 2

  snap_20170921_1720:8.09T

  snap_20170924_1938:16.2T

oracle@exa2node:/home/oracle

Creating Oracle ZFS Snapshots using Shell Script

As described in the earlier sections, ZFS snapshots can be managed through the BUI or through scripts, and SSH user equivalence is required to run the script below without providing the root password. The following example shows how to create project snapshots using a shell script for both ZFS controllers (in case you are using an active/active ZFS cluster).

Creating Project Snapshots

 

> cat snapshotdt_project.sh

#ssh-agent bash

#ssh-add ~/.ssh/id_dsa

{

echo script

echo "{"

echo " run('cd /');"

echo " run('shares');"

echo " run('set pool=pool1');"

echo " run('select H1-dbname');"

dt=`date "+%Y%m%d_%H%M"`;

echo " run('snapshots snapshot snap_$dt');"

echo " printf('snapshot of the project H1-dbname completed..\n');"

echo "}"

echo "exit"

} | ssh -T -i ~/.ssh/id_rsa root@zfscontroller-1

{

echo script

echo "{"

echo " run('cd /');"

echo " run('shares');"

echo " run('set pool=pool2');"

echo " run('select H2-dbname');"

dt=`date "+%Y%m%d_%H%M"`;

echo " run('snapshots snapshot snap_$dt');"

echo " printf('snapshot of the project H2-dbname completed..\n');"

echo "}"

echo "exit"

} | ssh -T -i ~/.ssh/id_rsa root@zfscontroller-2

Script Output:

> ./snapshotdt_project.sh

snapshot of the project H1-dbname completed..

snapshot of the project H2-dbname completed..