Extending u01 FileSystem on Exadata Machine

This post demonstrates how to extend the /u01 volume on an Exadata database server. The same procedure can be applied to the root (/) volume as long as free space is available in the volume group. Extending /u01 does not require any downtime. I strongly recommend extending /u01 to 500GB right after deployment to avoid storage issues during patching or other maintenance activities.

Step 1 : df -h /u01  (Check Existing Mount)

[root@exa2 ~]# df -h /u01

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VGExaDb-LVDbOra1

                       99G   19G   75G  21% /u01

Step 2 : vgdisplay VGExaDb -s  (Check Available Storage)

[root@exa2 ~]# vgdisplay VGExaDb -s

  "VGExaDb" 1.63 TiB  [185.00 GiB used / 1.45 TiB free]

Step 3 : lvextend -L +200G /dev/VGExaDb/LVDbOra1  (Extend Volume)

[root@exa2 ~]# lvextend -L +200G /dev/VGExaDb/LVDbOra1

  Size of logical volume VGExaDb/LVDbOra1 changed from 100.00 GiB (25600 extents) to 300.00 GiB (76800 extents).

  Logical volume LVDbOra1 successfully resized.

Step 4 : resize2fs /dev/VGExaDb/LVDbOra1  (Resize Filesystem)

[root@exa2 ~]# resize2fs /dev/VGExaDb/LVDbOra1

resize2fs 1.43-WIP (20-Jun-2013)

Filesystem at /dev/VGExaDb/LVDbOra1 is mounted on /u01; on-line resizing required

old_desc_blocks = 7, new_desc_blocks = 19

The filesystem on /dev/VGExaDb/LVDbOra1 is now 78643200 blocks long.

Step 5 : Validate

[root@exa2 ~]# df -h /u01

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VGExaDb-LVDbOra1

                      296G   20G  264G   7% /u01
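
The five steps above can be wrapped in a small guard so the lvextend is only attempted when the volume group actually has enough free space. This is a minimal sketch: the `fits` helper and the `vgs` parsing are my own additions (not part of the original procedure), and the volume names match the example above.

```shell
# Guard helper: succeed only when the requested extension (whole GiB)
# fits within the volume group's reported free space (whole GiB).
fits() {
  [ "$1" -ge "$2" ]
}

# Example usage (run as root on the database node; mirrors Steps 2-4):
#   FREE=$(vgs --noheadings --units g -o vg_free VGExaDb | tr -dc '0-9.' | cut -d. -f1)
#   if fits "$FREE" 200; then
#     lvextend -L +200G /dev/VGExaDb/LVDbOra1
#     resize2fs /dev/VGExaDb/LVDbOra1   # ext4 supports online grow; no unmount needed
#   fi
```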

Upgrading Oracle ZFS Storage Appliance with Latest System Updates

A system update for Oracle ZFS Storage Appliance is a binary file that contains new management software as well as new firmware for your storage controllers and disk shelves. Its purpose is to provide additional features, bug fixes, and security updates, allowing your storage environment to run at peak efficiency. Like Exadata, the ZFS Storage Appliance has quarterly updates, and it is recommended to apply system updates twice a year. Updating the ZFS Storage Appliance can be divided into the following three major steps.

Step 1 : Pre-Upgrade

1.1 Upload Latest System Update Next to Software Updates, you can click “Check now,” or you can schedule checks by selecting the checkbox and an interval of daily, weekly, or monthly. When a new update is found, “Update available for download” is displayed under STATUS, which is also a direct download link to My Oracle Support.

 

1.2 Remove Older System Updates To avoid using too much space on the system disks, maintain no more than three updates at any given time.

 

1.3 Download Backup Configuration In the event of an unforeseen failure, it may be necessary to factory-reset a storage controller. To minimize the downtime, it is recommended to maintain an up-to-date backup copy of the management configuration.
1.4 Check Network Interfaces It is recommended that all data interfaces for clustered controllers be open, or unlocked, prior to upgrading. This ensures these interfaces migrate to the peer controller during a takeover or reboot. Failure to do so will result in downtime.

 

1.5 Verify No Disk Events To avoid unnecessary delays with the upgrade process, do not update your system whenever there are active disk resilvering events or scrub activities. Check if these activities are occurring, and allow them to complete if they are in progress.
1.6 Run Health Check Oracle ZFS Storage Appliance has a health check feature that examines the state of your storage controller and disk shelves prior to upgrading. It is automatically run as part of the upgrade process, but should also be run independently to check storage health prior to entering a maintenance window.
1.7 Prepare Environment It is recommended to schedule a maintenance window for the upgrading of your storage controllers. You should inform your users that storage will be either offline or functioning in a limited capacity for the duration of the upgrade. The minimum length of time should be set at one hour. This does not mean your storage will be offline for the entire hour.

 

Step 2: Upgrade

2.1 Upgrade Controller 1 A clustered Oracle ZFS Storage Appliance has two storage controllers that ensure high availability during the upgrade process. Do not use the following procedures if you have a standalone controller.
2.2 Run Health Check on Controller 1 Run Health Check on the first controller.
2.3 Monitor Firmware Updates on Controller 1 Each update event will be held in either a Pending, In Progress, or Failed state. Contact Oracle Support if a Failed state is reported. These firmware updates can be monitored using the browser user interface or the command-line interface.
2.4 Issue Failback on Controller 2 If the controllers were in an Active / Active configuration before updating, perform a failback operation to return them to that state. This is not necessary if you want an Active / Passive configuration.
2.5 Upgrade Controller 2 Once the first controller has been upgraded and its firmware updates have completed, repeat the upgrade procedure on the second controller.
2.6 Run Health Check on Controller 2 Run Health Check on the second controller.
2.7 Monitor Firmware Updates on Controller 2 Each update event will be held in either a Pending, In Progress, or Failed state. Contact Oracle Support if a Failed state is reported. These firmware updates can be monitored using the browser user interface or the command-line interface.
2.8 Issue Failback on Controller 1 If the controllers were in an Active / Active configuration before updating, perform a failback operation to return them to that state. This is not necessary if you want an Active / Passive configuration.

 

Step 3 : Post-Upgrade

 

3.1 Final Health Check (both controllers) Run the health check once more on both controllers to confirm that the storage controllers and disk shelves are in a healthy state after the upgrade.

 

3.2 Apply Deferred Updates (optional) If “Upon request” was chosen during the initial system update sequence, deferred updates can be applied after the upgrade.
3.3 Restart Environment Data Services Regardless of whether you have exclusively disruptive or non-disruptive protocols in your environment, you should check each attached device for storage connectivity at the conclusion of an upgrade. It may be necessary to remount network shares and restart data services on these hosts.

 

 

Latest Exadata releases and updates

Last Update Date: 03/24/18

Hello All,

I thought it would be a good idea to create a dynamic post to keep everyone updated on Oracle Exadata releases, patches, and news. I will try my best to keep the following table current.

 

Product: Version / Comments
Exadata Machine: X7
Latest Bundle Patch: Jan 2018, 12.2.0.1.0 (Patch 27011122)
Latest OEDA Utility: v180216 (Patch 27465661)
Database server bare metal: 18.1.4.0.0.180125.3 (Patch 27391002)
Database server dom0 ULN: 18.1.4.0.0.180125.3 (Patch 27391003)
Storage server software: 18.1.4.0.0.180125.3 (Patch 27347059)
InfiniBand switch software: 2.2.7-1 (Patch 27347059)
Latest Grid Infrastructure: Rel 18.0.0.0.0, Ver 18.1.0.0.0
Latest Database: Rel 18.0.0.0.0, Ver 18.1.0.0.0
Latest Disk drives: 1.2TB HP, 4TB HC
Latest OPatch Utility: 12.2.0.1.12 (Patch 6880880)
Latest Exachk Version: 12.2.0.1.4_20171212
DB Server patch Utility: 5.180120

Important Characteristics of Oracle Autonomous Data Warehouse Cloud

Oracle Autonomous Data Warehouse Cloud Service applies machine learning to automatically tune and optimize performance. It is built on next-generation Oracle Autonomous Database technology, using artificial intelligence to deliver unprecedented reliability, performance, and highly elastic data management, enabling data warehouse deployment in seconds. Here are some important characteristics of Oracle Autonomous Data Warehouse Cloud.

init.ora parameters

Autonomous Data Warehouse Cloud automatically configures the database initialization parameters based on the compute and storage capacity you provision. You do not need to set any initialization parameters to start using your service. However, you can modify some parameters if needed.

  • Parameters optimized for DW workloads
  • Memory, parallelism, sessions configured based on number of CPUs
  • Users can modify a limited set of parameters, e.g. NLS settings
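
For example, NLS settings are among the parameters users may change. The snippet below simply prints a session-level NLS change as SQL text to paste into your client; the date format value is just an illustration, not an ADWC default.

```shell
# Emit an example of the limited session-level tuning ADWC allows
# (an NLS setting); paste the output into SQL*Plus/SQLcl.
nls_sql() {
  cat <<'SQL'
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';
SQL
}
nls_sql
```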

Tablespace management

The default data and temporary tablespaces for the database are configured automatically. Adding, removing, or modifying tablespaces is not allowed.

  • Pre-defined data and temporary tablespaces
  • Users cannot create/modify tablespaces

Compression

Compression is enabled by default. Autonomous Data Warehouse Cloud uses Hybrid Columnar Compression for all tables by default; changing the compression method is not allowed.

  • All tables compressed using Hybrid Columnar Compression
  • Users cannot change compression method or disable compression

Optimizer stats gathering

Autonomous Data Warehouse Cloud gathers optimizer statistics automatically for tables loaded with direct-path load operations. For example, for loads using the DBMS_CLOUD package the database gathers optimizer statistics automatically.

  • Stats gathered automatically during direct load operations
  • Users can gather stats manually if they want

Optimizer hints

Autonomous Data Warehouse Cloud ignores optimizer hints and PARALLEL hints in SQL statements by default. If your application relies on hints you can enable optimizer hints by setting the parameter OPTIMIZER_IGNORE_HINTS to FALSE at the session or system level using ALTER SESSION or ALTER SYSTEM. You can also enable PARALLEL hints in your SQL statements by setting OPTIMIZER_IGNORE_PARALLEL_HINTS to FALSE at the session or system level using ALTER SESSION or ALTER SYSTEM.

  • Hints ignored by default
  • Users can enable hints explicitly
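
Concretely, re-enabling both kinds of hints for the current session uses the two parameters named above. The snippet prints the statements as SQL text to paste into your client:

```shell
# Print the session-level statements that re-enable optimizer and
# PARALLEL hints in ADWC, using the parameters described above.
enable_hints_sql() {
  cat <<'SQL'
ALTER SESSION SET OPTIMIZER_IGNORE_HINTS = FALSE;
ALTER SESSION SET OPTIMIZER_IGNORE_PARALLEL_HINTS = FALSE;
SQL
}
enable_hints_sql
```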

Result cache configuration

Oracle Database Result Cache is enabled by default for all SQL statements. Changing the result cache mode is not allowed.  

  • Result Cache is enabled by default
  • Changing the result cache mode is not allowed.

Parallelism enabled by default

Parallelism is enabled by default. Degree of parallelism for SQL statements is set based on the number of OCPUs in the system and the database service the user is connecting to.

  • Degree of parallelism for SQL statements = OCPU
  • Parallel DML is enabled by default
  • Users can disable parallel DML in their session

Monitoring

The Overview and Activity tabs in the Service Console provide information about the performance of the service. The Activity tab also shows past and current monitored SQL statements and detailed information about each statement.

  • Simplified monitoring using the web-based service console
  • Historical and real-time performance charts
  • Real-Time SQL Monitoring to monitor running and past SQL statements
  • Historical data load monitoring

Data Loading

To migrate existing Oracle Database schemas to Autonomous Data Warehouse Cloud, export them with Oracle Data Pump Export and import them using Oracle Data Pump Import.

  • Partitioned tables are converted into non-partitioned tables.
  • Storage attributes for tables are ignored.
  • Index-organized tables are converted into regular tables.
  • Constraints are converted into rely disable novalidate constraints.
  • Indexes, clusters, indextypes, materialized views, materialized view logs, and zone maps are excluded during Data Pump Import.
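
A hedged sketch of a corresponding Data Pump import invocation follows. The connection string, directory, and dump file names are placeholders, and the exact exclude object-path names can vary by version, so check the ADWC documentation before running anything like this.

```shell
# Print an illustrative impdp command reflecting the conversions above:
# indexes, clusters, indextypes, materialized views/logs, and zone maps
# are excluded, and storage attributes are dropped via the transform.
# All names here are placeholders.
impdp_cmd() {
  cat <<'EOF'
impdp admin@adwc_high directory=data_pump_dir dumpfile=export%u.dmp parallel=16 \
  transform=segment_attributes:n \
  exclude=index,cluster,indextype,materialized_view,materialized_view_log,zonemap
EOF
}
impdp_cmd
```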

Scaling Resources

You can scale your Autonomous Data Warehouse Cloud on demand by adding CPU cores or storage capacity (TB). From My Services in the cloud console, access the Autonomous Data Warehouse Cloud instance you want to scale.

  • For the type of change, select whether to scale up (increase) or scale down (decrease)
  • Enter a value for CPU Core Count Change; the default is 0, for no change
  • Enter a value for Storage Capacity (TB) Change; the default is 0, for no change

Backing Up and Restoring

Autonomous Data Warehouse Cloud automatically backs up your database for you. The retention period for backups is 60 days. You can restore and recover your database to any point-in-time in this retention period.

  • Backups consist of weekly full backups and daily incremental backups.
  • Autonomous Data Warehouse Cloud backs up your database automatically.
  • You can take manual backups using the cloud console.
  • You can initiate recovery for your ADWC.
  • ADWC automatically restores and recovers your database to the specified point-in-time.

 

Oracle Exadata OEM Plug-in 13.2.0.1.0 support Patch Automation

The Oracle Exadata plug-in provides a consolidated view of the Exadata Database Machine within Oracle Enterprise Manager, including a consolidated view of all the hardware components and their physical location with indications of status. Oracle recently released version 13.2.0.1.0 of the Exadata plug-in, which includes a variety of new features and bug fixes. The feature that caught my attention is support for additional patching capabilities across the entire Exadata stack. Exadata plug-in 13.2.0.1.0 supports the following additional patching features:

– A comprehensive overview of the maintenance status and needs.

– Proactive patch recommendations for the quarterly full stack patches.

– Supports auto patch download, ability to patch either in rolling or non-rolling modes.

– Ability to schedule runs.

– Proactive notification of the status updates.

– Granular step-level status tracking with real-time updates.

– Log monitoring and aggregation, supporting the quick filing of support issues with pre-packaged log dumps.

General guidelines for using ZFS storage appliance for Exadata Backups

  • It is recommended to utilize ZFS storage compression (LZ4) to reduce the amount of space required to store Oracle database backups, and to disable RMAN-level compression to increase backup throughput and reduce CPU overhead on the Exadata machine.
  • It is recommended to utilize all Exadata nodes to maximize backup throughput for both traditional and image copy backups, using the set of recommended channels shown below.

  • It is recommended to set the following ZFS project/share attributes to achieve optimal performance for both traditional and image copy backups:

Best Practices for Traditional RMAN Backup Strategy

Record Size: 1M
Sync Write: Throughput
Read Cache: Do not use cache devices
Compression: LZ4

Best Practices for Incrementally Updated Backup Strategy

Record Size: 32K
Sync Write: Latency
Read Cache: Default
Compression: LZ4

 

  • It is recommended to create dedicated database services for backups across all Exadata nodes to achieve optimal performance by parallelizing the workload across all nodes, and to maintain availability in case of instance or node failure.

 Sample Script for Creating Services

srvctl add service -d proddb -r proddb1 -a proddb2 -s proddb_bkup1
srvctl start service -d proddb -s proddb_bkup1

srvctl add service -d proddb -r proddb2 -a proddb3 -s proddb_bkup2
srvctl start service -d proddb -s proddb_bkup2

srvctl add service -d proddb -r proddb3 -a proddb4 -s proddb_bkup3
srvctl start service -d proddb -s proddb_bkup3

srvctl add service -d proddb -r proddb4 -a proddb5 -s proddb_bkup4
srvctl start service -d proddb -s proddb_bkup4

srvctl add service -d proddb -r proddb5 -a proddb6 -s proddb_bkup5
srvctl start service -d proddb -s proddb_bkup5

srvctl add service -d proddb -r proddb6 -a proddb7 -s proddb_bkup6
srvctl start service -d proddb -s proddb_bkup6

srvctl add service -d proddb -r proddb7 -a proddb8 -s proddb_bkup7
srvctl start service -d proddb -s proddb_bkup7

srvctl add service -d proddb -r proddb8 -a proddb1 -s proddb_bkup8
srvctl start service -d proddb -s proddb_bkup8
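
Since the eight add/start pairs above follow one pattern (service N has instance N as preferred and instance N+1, wrapping around, as available), a short loop can generate them. The node count and naming below are taken from the sample; adjust for your cluster.

```shell
# Generate srvctl commands for N backup services: one preferred
# instance each, with the next instance (wrapping around) as available.
gen_backup_services() {
  nodes=$1
  i=1
  while [ "$i" -le "$nodes" ]; do
    next=$(( i % nodes + 1 ))   # wrap: last service falls back to instance 1
    echo "srvctl add service -d proddb -r proddb${i} -a proddb${next} -s proddb_bkup${i}"
    echo "srvctl start service -d proddb -s proddb_bkup${i}"
    i=$(( i + 1 ))
  done
}

gen_backup_services 8
```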
  • It is recommended to use Oracle Direct NFS (dNFS) for all database and RMAN workloads when using the Oracle ZFS Storage Appliance with Exadata machines. dNFS reduces CPU utilization by bypassing the operating system NFS client and boosts parallel I/O throughput by opening an individual connection for each database process.
  • It is recommended to set the RMAN SECTION SIZE parameter to 100G and FILESPERSET to 1 to achieve optimal performance and throughput.

Exadata Traditional RMAN backup with ZFS Storage Appliance

RMAN backup sets are logical entities created by an RMAN backup, and they can be both encrypted and compressed at the same time. A traditional RMAN backup strategy involves performing full backups, or any combination of level 0, cumulative level 1, and differential incremental backups, to restore and recover the database in the event of a physical or logical failure. Basically, a traditional Exadata backup strategy is a full online backup of the database, performed weekly or daily, with at least one copy of the database's transactional archive logs stored on the Oracle ZFS Appliance. The full backup plus archive logs can be used to recover the database up to the point of failure.

Additionally, if you have sized your redo logs for a maximum of 3 archive switches per hour, your RPO should never be more than 20 minutes (60 minutes / 3 switches). The recommended version retention objective (VRO) is to keep at least 2 full backups on the ZFS Appliance at all times, with older backups scheduled for automatic deletion. It is also a good idea to perform full database backups for small databases to achieve a better RTO.

As per Oracle MAA best practices, a common implementation is a tiered approach that combines incremental level 0 and level 1 backups: level 0 backups are often taken weekly, with level 1 differential or cumulative incremental backups performed daily. It is also important to enable RMAN block change tracking, which can drastically improve the performance of incremental backups.

Related Blog:

General guidelines for using ZFS storage appliance for Exadata Backups

Sample Traditional RMAN backup script:
run

{

sql 'alter system set "_backup_disk_bufcnt"=64 scope=memory';

sql 'alter system set "_backup_disk_bufsz"=1048576 scope=memory';

allocate channel ch01 device type disk connect 'sys/********@proddb_bkup1' FORMAT '/zfssa/proddb/backup1/%U';

allocate channel ch02 device type disk connect 'sys/********@proddb_bkup2' FORMAT '/zfssa/proddb/backup2/%U';

allocate channel ch03 device type disk connect 'sys/********@proddb_bkup3' FORMAT '/zfssa/proddb/backup3/%U';

allocate channel ch04 device type disk connect 'sys/********@proddb_bkup4' FORMAT '/zfssa/proddb/backup4/%U';

allocate channel ch05 device type disk connect 'sys/********@proddb_bkup5' FORMAT '/zfssa/proddb/backup5/%U';

allocate channel ch06 device type disk connect 'sys/********@proddb_bkup6' FORMAT '/zfssa/proddb/backup6/%U';

allocate channel ch07 device type disk connect 'sys/********@proddb_bkup7' FORMAT '/zfssa/proddb/backup7/%U';

allocate channel ch08 device type disk connect 'sys/********@proddb_bkup8' FORMAT '/zfssa/proddb/backup8/%U';

allocate channel ch09 device type disk connect 'sys/********@proddb_bkup1' FORMAT '/zfssa/proddb/backup1/%U';

allocate channel ch10 device type disk connect 'sys/********@proddb_bkup2' FORMAT '/zfssa/proddb/backup2/%U';

allocate channel ch11 device type disk connect 'sys/********@proddb_bkup3' FORMAT '/zfssa/proddb/backup3/%U';

allocate channel ch12 device type disk connect 'sys/********@proddb_bkup4' FORMAT '/zfssa/proddb/backup4/%U';

allocate channel ch13 device type disk connect 'sys/********@proddb_bkup5' FORMAT '/zfssa/proddb/backup5/%U';

allocate channel ch14 device type disk connect 'sys/********@proddb_bkup6' FORMAT '/zfssa/proddb/backup6/%U';

allocate channel ch15 device type disk connect 'sys/********@proddb_bkup7' FORMAT '/zfssa/proddb/backup7/%U';

allocate channel ch16 device type disk connect 'sys/********@proddb_bkup8' FORMAT '/zfssa/proddb/backup8/%U';

BACKUP AS BACKUPSET SECTION SIZE 100G INCREMENTAL LEVEL 0 DATABASE FILESPERSET 1 TAG 'bkup_weekly_L0' plus ARCHIVELOG;

backup spfile format '/zfssa/proddb/backup1/spfile_%d_%s_%T_dbid%I.rman';

backup current controlfile format '/zfssa/proddb/backup1/Controlfile_%d_%T_dbid%I_s%s_p%p';

release channel ch01;

release channel ch02;

release channel ch03;

release channel ch04;

release channel ch05;

release channel ch06;

release channel ch07;

release channel ch08;

release channel ch09;

release channel ch10;

release channel ch11;

release channel ch12;

release channel ch13;

release channel ch14;

release channel ch15;

release channel ch16;

}
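
The sixteen allocate/release pairs in the script above follow one pattern: channels cycle through the eight backup services and their share paths. A helper like this can emit the allocate lines (service names and paths are taken from the sample script):

```shell
# Emit RMAN "allocate channel" lines: CHANNELS channels cycling over
# SERVICES backup services and share paths, as in the sample script.
gen_rman_channels() {
  channels=$1; services=$2
  i=1
  while [ "$i" -le "$channels" ]; do
    svc=$(( (i - 1) % services + 1 ))
    printf "allocate channel ch%02d device type disk connect 'sys/********@proddb_bkup%d' FORMAT '/zfssa/proddb/backup%d/%%U';\n" "$i" "$svc" "$svc"
    i=$(( i + 1 ))
  done
}

gen_rman_channels 16 8
```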

 

Exadata RMAN Image Copy backup with ZFS Storage Appliance

An RMAN image copy backup is a block-by-block copy of the target database, consisting of data files, archive logs, and control files. A block-by-block copy comes with an obvious drawback: it cannot be compressed by RMAN, so storage requirements should be taken into consideration before opting for image copy backups. If your target database is terabytes in size, an image copy backup takes up significant storage space. Fortunately, if you are using a ZFS appliance to store image copy backups, you can use the appliance's native compression to save storage. The Oracle ZFS Storage Appliance supports many different types of compression for different workloads, but LZ4 is recommended for image copy backups.

Related Blog:

General guidelines for using ZFS storage appliance for Exadata Backups

Sample RMAN Image Copy Backup script:

run

{

sql 'alter system set "_backup_file_bufcnt"=64 scope=memory';

sql 'alter system set "_backup_file_bufsz"=1048576 scope=memory';

sql 'ALTER SYSTEM SWITCH ALL LOGFILE';

allocate channel ch01 device type disk connect 'sys/********@proddb_bkup1' FORMAT '/zfssa/proddb/imgbackup1/%U';

allocate channel ch02 device type disk connect 'sys/********@proddb_bkup2' FORMAT '/zfssa/proddb/imgbackup2/%U';

allocate channel ch03 device type disk connect 'sys/********@proddb_bkup3' FORMAT '/zfssa/proddb/imgbackup3/%U';

allocate channel ch04 device type disk connect 'sys/********@proddb_bkup4' FORMAT '/zfssa/proddb/imgbackup4/%U';

allocate channel ch05 device type disk connect 'sys/********@proddb_bkup5' FORMAT '/zfssa/proddb/imgbackup5/%U';

allocate channel ch06 device type disk connect 'sys/********@proddb_bkup6' FORMAT '/zfssa/proddb/imgbackup6/%U';

allocate channel ch07 device type disk connect 'sys/********@proddb_bkup7' FORMAT '/zfssa/proddb/imgbackup7/%U';

allocate channel ch08 device type disk connect 'sys/********@proddb_bkup8' FORMAT '/zfssa/proddb/imgbackup8/%U';

allocate channel ch09 device type disk connect 'sys/********@proddb_bkup1' FORMAT '/zfssa/proddb/imgbackup1/%U';

allocate channel ch10 device type disk connect 'sys/********@proddb_bkup2' FORMAT '/zfssa/proddb/imgbackup2/%U';

allocate channel ch11 device type disk connect 'sys/********@proddb_bkup3' FORMAT '/zfssa/proddb/imgbackup3/%U';

allocate channel ch12 device type disk connect 'sys/********@proddb_bkup4' FORMAT '/zfssa/proddb/imgbackup4/%U';

allocate channel ch13 device type disk connect 'sys/********@proddb_bkup5' FORMAT '/zfssa/proddb/imgbackup5/%U';

allocate channel ch14 device type disk connect 'sys/********@proddb_bkup6' FORMAT '/zfssa/proddb/imgbackup6/%U';

allocate channel ch15 device type disk connect 'sys/********@proddb_bkup7' FORMAT '/zfssa/proddb/imgbackup7/%U';

allocate channel ch16 device type disk connect 'sys/********@proddb_bkup8' FORMAT '/zfssa/proddb/imgbackup8/%U';

backup incremental level 1 for recover of copy with tag 'IMAGECOPY' database;

recover copy of database with tag 'IMAGECOPY';

sql "ALTER DATABASE BACKUP CONTROLFILE TO ''/zfssa/proddb/imgbackup1/proddb/control.bkp''";

release channel ch01;

release channel ch02;

release channel ch03;

release channel ch04;

release channel ch05;

release channel ch06;

release channel ch07;

release channel ch08;

release channel ch09;

release channel ch10;

release channel ch11;

release channel ch12;

release channel ch13;

release channel ch14;

release channel ch15;

release channel ch16;

}



Deleting Oracle ZFS Appliance Snapshots

The Oracle ZFS Storage Appliance features a snapshot data service. Snapshots are read-only copies of a filesystem at a given point in time. You can think of ZFS snapshots as restore points for a project or share's data set, which can be used to roll the data set back to a point in time, conceptually just like Oracle database restore points. ZFS snapshots are logical entities only, so you can create a virtually unlimited number of snapshots without initially taking up any space. Snapshots can be scheduled or taken manually, depending on usage and policies. You can manage snapshots using the Oracle ZFS Appliance browser interface (BUI) or through scripts; scripting is especially useful when you want to integrate snapshots with Oracle backups. SSH user equivalence is required if you want to execute the following script without providing the root passwords. The following example shows how to delete project snapshots using a shell script on both ZFS controllers (in case you are using an active/active ZFS cluster).

Delete Project Snapshots

 

> cat delete_snap_project.sh

echo "Head 1"

cat <<eof |ssh -T -i ~/.ssh/id_rsa root@zfscontroller-1

script

{

 run('cd /');

 run('shares');

 run('set pool=pool1');

 run('select H1-dbname');

 run('snapshots select snap_20170924_1938');

 run('confirm destroy');

 printf("snapshot of the project has been deleted.\n");

}

eof

echo "Head 2"

cat <<eof |ssh -T -i ~/.ssh/id_rsa root@zfscontroller-2

script

{

 run('cd /');

 run('shares');

 run('set pool=pool2');

 run('select H2-dbname');

 run('snapshots select snap_20170924_1938');

 run('confirm destroy');

 printf("snapshot of the project has been deleted.\n");

}

eof

Script Output :

> ./delete_snap_project.sh

Head 1

snapshot of the project has been deleted.

Head 2

snapshot of the project has been deleted.


Listing Oracle ZFS Appliance Snapshots

The Oracle ZFS Storage Appliance features a snapshot data service. Snapshots are read-only copies of a filesystem at a given point in time. You can think of ZFS snapshots as restore points for a project or share's data set, which can be used to roll the data set back to a point in time, conceptually just like Oracle database restore points. ZFS snapshots are logical entities only, so you can create a virtually unlimited number of snapshots without initially taking up any space. Snapshots can be scheduled or taken manually, depending on usage and policies. You can manage snapshots using the Oracle ZFS Appliance browser interface (BUI) or through scripts; scripting is especially useful when you want to integrate snapshots with Oracle backups. SSH user equivalence is required if you want to execute the following script without providing the root passwords. The following example shows how to list project snapshots using a shell script on both ZFS controllers (in case you are using an active/active ZFS cluster).

List Project Snapshots

> cat list_snapshots.sh

echo "Head 1"

cat <<eof |ssh -T -i ~/.ssh/id_rsa root@zfscontroller-1

script

run('shares');

run ('set pool=pool1');

run ('select H1-dbname');

run ('snapshots');

snapshots = list();

for (i = 0; i < snapshots.length; i++) {

  printf("%20s:", snapshots[i]);

  run ('select ' + snapshots[i]);

  printf("%-10s\n", run('get space_data').split(/\s+/)[3]);

  run('cd ..');

}

eof

echo "Head 2"

cat <<eof |ssh -T -i ~/.ssh/id_rsa root@zfscontroller-2

script

run('shares');

run ('set pool=pool2');

run ('select H2-dbhome');

run ('snapshots');

snapshots = list();

for (i = 0; i < snapshots.length; i++) {

  printf("%20s:", snapshots[i]);

  run ('select ' + snapshots[i]);

  printf("%-10s\n", run('get space_data').split(/\s+/)[3]);

  run('cd ..');

}

eof

Script Output:

> ./list_snapshots.sh

Head 1

  snap_20170921_1720:9.17T

  snap_20170924_1938:18.5T

Head 2

  snap_20170921_1720:8.09T

  snap_20170924_1938:16.2T
