Upgrading Oracle ZFS Storage Appliance with Latest System Updates

A system update for Oracle ZFS Storage Appliance is a binary file that contains new management software as well as new hardware firmware for your storage controllers and disk shelves. Its purpose is to provide additional features, bug fixes, and security updates, allowing your storage environment to run at peak efficiency. Like Exadata, the ZFS Storage Appliance receives quarterly updates, and it is recommended to apply system updates at least twice a year. Updating a ZFS Storage Appliance can be divided into the following three major steps.

Step 1: Pre-Upgrade

1.1 Upload Latest System Update Next to Software Updates, you can click “Check now,” or you can schedule the checks by selecting the checkbox and choosing an interval of daily, weekly, or monthly. When a new update is found, “Update available for download” is displayed under STATUS; this is also a direct download link to My Oracle Support.
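The update check can also be performed from the command-line interface; a minimal sketch, assuming the hypothetical hostname `zfssa` (context and command names can vary slightly between software releases):

```
zfssa:> maintenance system updates
zfssa:maintenance system updates> show
```

Any update available for download is then listed alongside the updates already on the system.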


1.2 Remove Older System Updates To avoid using too much space on the system disks, maintain no more than three updates at any given time.


1.3 Download Backup Configuration In the event of an unforeseen failure, it may be necessary to factory-reset a storage controller. To minimize the downtime, it is recommended to maintain an up-to-date backup copy of the management configuration.
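From the CLI, a configuration backup can be taken immediately before the upgrade; a sketch assuming the hypothetical hostname `zfssa`, with the comment string chosen freely:

```
zfssa:> maintenance system configs backup "Pre-upgrade backup"
zfssa:> maintenance system configs show
```

The resulting backup should then be downloaded and stored off the appliance, so that it survives a factory reset.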
1.4 Check Network Interfaces It is recommended that all data interfaces for clustered controllers be open, or unlocked, prior to upgrading. This ensures these interfaces migrate to the peer controller during a takeover or reboot. Failure to do so will result in downtime.


1.5 Verify No Disk Events To avoid unnecessary delays with the upgrade process, do not update your system whenever there are active disk resilvering events or scrub activities. Check if these activities are occurring, and allow them to complete if they are in progress.
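One way to check for these activities is from the CLI storage context; a minimal sketch, assuming the hypothetical hostname `zfssa` (any scrub or resilver in progress is reported in the pool status output, which is omitted here, and the exact property names vary by release):

```
zfssa:> configuration storage show
```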
1.6 Run Health Check Oracle ZFS Storage Appliance has a health check feature that examines the state of your storage controllers and disk shelves prior to upgrading. It is run automatically as part of the upgrade process, but it should also be run independently to check storage health before entering a maintenance window.
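On recent software releases the health check can also be started from the CLI by selecting the downloaded update and running the check against it; `<update-name>` below is a placeholder, and the exact command verb may differ between releases:

```
zfssa:> maintenance system updates
zfssa:maintenance system updates> select <update-name>
zfssa:maintenance system updates> check
```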
1.7 Prepare Environment It is recommended to schedule a maintenance window for the upgrading of your storage controllers. You should inform your users that storage will be either offline or functioning in a limited capacity for the duration of the upgrade. The minimum length of time should be set at one hour. This does not mean your storage will be offline for the entire hour.


Step 2: Upgrade

2.1 Upgrade Controller 1 A clustered Oracle ZFS Storage Appliance has two storage controllers, which ensures high availability during the upgrade: while the first controller is being upgraded and rebooted, its peer continues to serve data. Begin the upgrade on the first controller. Do not use the following procedures if you have a standalone controller.
2.2 Run Health Check on Controller 1 Run Health Check on the first controller.
2.3 Monitor Firmware Updates on Controller 1 Each update event will be held in either a Pending, In Progress, or Failed state. Contact Oracle Support if a Failed state is reported. These firmware updates can be monitored using the browser user interface or the command-line interface.
2.4 Issue Failback on Controller 2 If the controllers were in an Active / Active configuration before updating, perform a failback operation to return them to that state. This is not necessary if you want an Active / Passive configuration.
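The cluster state can be inspected and the failback issued from the CLI cluster context; a sketch assuming the hypothetical hostname `zfssa`:

```
zfssa:> configuration cluster show
zfssa:> configuration cluster failback
```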
2.5 Upgrade Controller 2 Once the first controller is healthy and its firmware updates are complete, repeat the upgrade procedure on the second controller. During its reboot, the already-upgraded first controller takes over data services.
2.6 Run Health Check on Controller 2 Run Health Check on the second controller.
2.7 Monitor Firmware Updates on Controller 2 Each update event will be held in either a Pending, In Progress, or Failed state. Contact Oracle Support if a Failed state is reported. These firmware updates can be monitored using the browser user interface or the command-line interface.
2.8 Issue Failback on Controller 1 If the controllers were in an Active / Active configuration before updating, perform a failback operation to return them to that state. This is not necessary if you want an Active / Passive configuration.


Step 3: Post-Upgrade


3.1 Final Health Check (both controllers) After both controllers have been upgraded, run the health check one more time on each controller to confirm that the storage controllers and disk shelves are in a healthy state before returning the system to production.


3.2 Apply Deferred Updates (optional) If “Upon request” was chosen during the initial system update sequence, deferred updates can be applied after the upgrade.
3.3 Restart Environment Data Services Regardless of whether you have exclusively disruptive or non-disruptive protocols in your environment, you should check each attached device for storage connectivity at the conclusion of an upgrade. It may be necessary to remount network shares and restart data services on these hosts.


Latest Exadata releases and updates

Last Update Date: 03/24/18

Hello All,

I thought it would be a good idea to create a dynamic post to keep everyone updated on Oracle Exadata releases, patches, and news. I will try my best to keep the following table current.


| Product | Version | Comments |
| --- | --- | --- |
| Exadata Machine | X7 | |
| Latest Bundle Patch | Jan 2018 – 12.2.0.1.0 | Patch 27011122 |
| Latest OEDA Utility | v180216 | Patch 27465661 |
| Database server bare metal | 18.1.4.0.0.180125.3 | Patch 27391002 |
| Database server dom0 ULN | 18.1.4.0.0.180125.3 | Patch 27391003 |
| Storage server software | 18.1.4.0.0.180125.3 | Patch 27347059 |
| InfiniBand switch software | 2.2.7-1 | Patch 27347059 |
| Latest Grid Infrastructure | Rel 18.0.0.0.0, Ver 18.1.0.0.0 | |
| Latest Database | Rel 18.0.0.0.0, Ver 18.1.0.0.0 | |
| Latest Disk drives | 1.2TB HP, 4TB HC | |
| Latest Opatch Utility | 12.2.0.1.12 | Patch 6880880 |
| Latest Exachk Version | 12.2.0.1.4_20171212 | |
| DB Server patch Utility | 5.180120 | |

Important Characteristics of Oracle Autonomous Data Warehouse Cloud

Oracle Autonomous Data Warehouse Cloud Service applies machine learning to automatically tune and optimize performance. It is built on next-generation Oracle Autonomous Database technology, using artificial intelligence to deliver reliability, performance, and highly elastic data management, enabling a data warehouse to be deployed in seconds. Here are some important characteristics of Oracle Autonomous Data Warehouse Cloud.

init.ora parameters

Autonomous Data Warehouse Cloud automatically configures the database initialization parameters based on the compute and storage capacity you provision. You do not need to set any initialization parameters to start using your service. But, you can modify some parameters if you need to.

  • Parameters optimized for DW workloads
  • Memory, parallelism, sessions configured based on number of CPUs
  • Users can modify a limited set of parameters, e.g. NLS settings
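For example, the NLS settings mentioned above can be adjusted with standard ALTER SESSION statements:

```sql
-- NLS settings are among the limited set of parameters users may modify
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';
ALTER SESSION SET NLS_COMP = LINGUISTIC;
ALTER SESSION SET NLS_SORT = BINARY_CI;
```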

Tablespace management

The default data and temporary tablespaces for the database are configured automatically. Adding, removing, or modifying tablespaces is not allowed.

  • Pre-defined data and temporary tablespaces
  • Users cannot create/modify tablespaces

Compression

Compression is enabled by default. Autonomous Data Warehouse Cloud uses Hybrid Columnar Compression for all tables by default; changing the compression method is not allowed.

  • All tables compressed using Hybrid Columnar Compression
  • Users cannot change compression method or disable compression

Optimizer stats gathering

Autonomous Data Warehouse Cloud gathers optimizer statistics automatically for tables loaded with direct-path load operations. For example, for loads using the DBMS_CLOUD package the database gathers optimizer statistics automatically.

  • Stats gathered automatically during direct load operations
  • Users can gather stats manually if they want
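Manual statistics gathering uses the standard DBMS_STATS package; the schema and table names below are hypothetical:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SALES_OWNER',  -- hypothetical schema
    tabname => 'SALES_FACT'    -- hypothetical table
  );
END;
/
```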

Optimizer hints

Autonomous Data Warehouse Cloud ignores optimizer hints and PARALLEL hints in SQL statements by default. If your application relies on hints you can enable optimizer hints by setting the parameter OPTIMIZER_IGNORE_HINTS to FALSE at the session or system level using ALTER SESSION or ALTER SYSTEM. You can also enable PARALLEL hints in your SQL statements by setting OPTIMIZER_IGNORE_PARALLEL_HINTS to FALSE at the session or system level using ALTER SESSION or ALTER SYSTEM.

  • Hints ignored by default
  • Users can enable hints explicitly
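Using the parameters described above, both kinds of hints can be re-enabled for the current session:

```sql
-- Honor regular optimizer hints again
ALTER SESSION SET OPTIMIZER_IGNORE_HINTS = FALSE;
-- Honor PARALLEL hints again
ALTER SESSION SET OPTIMIZER_IGNORE_PARALLEL_HINTS = FALSE;
```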

Result cache configuration

Oracle Database Result Cache is enabled by default for all SQL statements. Changing the result cache mode is not allowed.  

  • Result Cache is enabled by default
  • Changing the result cache mode is not allowed.

Parallelism enabled by default

Parallelism is enabled by default. Degree of parallelism for SQL statements is set based on the number of OCPUs in the system and the database service the user is connecting to.

  • Degree of parallelism for SQL statements = OCPU
  • Parallel DML is enabled by default
  • Users can disable parallel DML in their session
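Disabling and re-enabling parallel DML at the session level uses the standard syntax:

```sql
ALTER SESSION DISABLE PARALLEL DML;
-- ... DML issued here runs serially ...
ALTER SESSION ENABLE PARALLEL DML;
```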

Monitoring

The Overview and Activity tabs in the Service Console provide information about the performance of the service. The Activity tab also shows past and current monitored SQL statements and detailed information about each statement.

  • Simplified monitoring using the web-based service console
  • Historical and real-time performance charts
  • Real-Time SQL Monitoring to monitor running and past SQL statements
  • Historical data load monitoring

Data Loading

To migrate your existing Oracle Database schemas to Autonomous Data Warehouse Cloud, export them with Oracle Data Pump Export and then import them with Oracle Data Pump Import. During import:

  • Partitioned tables are converted into non-partitioned tables.
  • Storage attributes for tables are ignored.
  • Index-organized tables are converted into regular tables.
  • Constraints are converted into RELY DISABLE NOVALIDATE constraints.
  • Indexes, clusters, indextypes, materialized views, materialized view logs, and zone maps are excluded during Data Pump Import.
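A hedged sketch of such an import; the credentials, connect string, and dump-file name are placeholders, and the EXCLUDE list mirrors the object types noted above (Oracle's recommended parameter list may differ by service version):

```
impdp admin/<password>@<connect_string> \
  directory=data_pump_dir \
  dumpfile=export%u.dmp \
  parallel=4 \
  transform=segment_attributes:n \
  exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap
```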

Scaling Resources

You can scale your Autonomous Data Warehouse Cloud on demand by adding or removing CPU cores or storage capacity (TB). From Oracle Cloud My Services, access the Autonomous Data Warehouse Cloud instance you want to scale.

  • For the type of change (increase or decrease), select Scale Up
  • Enter a value for CPU Core Count Change. The default is 0, for no change
  • Enter a value for Storage Capacity (TB) Change. The default is 0, for no change

Backing Up and Restoring

Autonomous Data Warehouse Cloud automatically backs up your database for you. The retention period for backups is 60 days. You can restore and recover your database to any point in time within this retention period.

  • Weekly full backups and daily incremental backups are provided.
  • Autonomous Data Warehouse Cloud backs up your database automatically.
  • You can take manual backups using the cloud console.
  • You can initiate recovery for your ADWC instance.
  • ADWC automatically restores and recovers your database to the specified point in time.


Oracle Exadata OEM Plug-in 13.2.0.1.0 supports Patch Automation

The Oracle Exadata plug-in provides a consolidated view of the Exadata Database Machine within Oracle Enterprise Manager, including a consolidated view of all the hardware components and their physical location with indications of status. Oracle recently released the latest version of the Exadata plug-in, 13.2.0.1.0, which includes a variety of new features and bug fixes. The feature that caught my attention is support for additional patching capabilities covering the entire Exadata stack. Exadata plug-in 13.2.0.1.0 supports the following additional patching features to enhance the Exadata patching effort:

– A comprehensive overview of the maintenance status and needs.

– Proactive patch recommendations for the quarterly full stack patches.

– Auto patch download, with the ability to patch in either rolling or non-rolling mode.

– Ability to schedule runs.

– Proactive notification of the status updates.

– Granular step-level status tracking with real-time updates.

– Log monitoring and aggregation, supporting the quick filing of support issues with pre-packaged log dumps.