CLSRSC-180: An error occurred while executing the command ‘/bin/rpm -qf /sbin/init’

Hello,

I recently encountered the following error during an Exadata GI home upgrade from 12.1.0.1 to 12.2.0.1. The error occurred while running the rootupgrade.sh script on node 1.

2018/06/10 02:59:18 CLSRSC-180: An error occurred while executing the command '/bin/rpm -qf /sbin/init' 
Died at /u01/app/12.2.0.1/grid/crs/install/s_crsutils.pm line 2372. 
The command '/u01/app/12.2.0.1/grid/perl/bin/perl -I/u01/app/12.2.0.1/grid/perl/lib -I/u01/app/12.2.0.1/grid/crs/install /u01/app/12.2.0.1/grid/crs/install/rootcrs.pl -upgrade' execution failed 

To investigate further, I ran the failing command manually and got the errors below. The same errors were also logged in the installation logfile.

[root@dm01dbadm01 ~]# /bin/rpm -qf /sbin/init 
rpmdb: Thread/process 261710/140405403039488 failed: Thread died in Berkeley DB library 
error: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery 
error: cannot open Packages index using db3 - (-30974) 
error: cannot open Packages database in /var/lib/rpm 
rpmdb: Thread/process 261710/140405403039488 failed: Thread died in Berkeley DB library 
error: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery 
error: cannot open Packages database in /var/lib/rpm 
rpmdb: Thread/process 261710/140405403039488 failed: Thread died in Berkeley DB library 
error: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery 
error: cannot open Packages database in /var/lib/rpm 
file /sbin/init is not owned by any package 

The issue was caused by corruption of the OS-level RPM database. You can confirm the corruption by running the following command, which fails with the same Berkeley DB errors:

# /bin/rpm -qa | more

We fixed the RPM database corruption with the following steps so the Exadata upgrade could continue.

As root OS user run the following: 
# rm -f /var/lib/rpm/__* 
# /bin/rpm --rebuilddb 
# echo $?

 

After rebuilding the corrupted RPM database, validate it using the following command.

# /bin/rpm -qa | more
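
As an extra sanity check (a minimal sketch; the expected package count varies per system), you can confirm the rebuilt database opens cleanly:

# /bin/rpm -qa | wc -l
# /bin/rpm -V rpm

The first command should report a reasonable non-zero package count; the second verifies the rpm package itself and prints nothing when there are no discrepancies.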


CheckIP ERROR : 192.168.1.1 is responding to ping request

I recently ran into the following error while running the Exadata checkip script during the deployment process.

Processing section FACTORY
ERROR : 192.168.1.1 is responding to ping request

I checked and found that this IP was in use by another device on the network. Good news: per the Oracle Exadata manual, this is a factory default IP used by older Exadata machines, and the error can be safely ignored.

Per the Oracle Exadata manual (section 2.5, Default IP Addresses): in earlier releases, Oracle Exadata Database Machine had default IP addresses set at the factory, and the range of IP addresses was 192.168.1.1 to 192.168.1.203.
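
If you want to identify the conflicting device, a quick check with standard Linux tools (run from a host on the same subnet) is to ping the address and then inspect the responder's MAC address:

# ping -c 1 192.168.1.1
# arp -n 192.168.1.1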

./ggsci: error while loading shared libraries: libnnz11.so

This issue is caused by a missing library path in the GoldenGate user's environment. The error looks like this:

[ggate@dbadm01 oradb11]$ ./ggsci
./ggsci: error while loading shared libraries: libnnz11.so: cannot open shared object file: No such file or directory

Fix it by pointing LD_LIBRARY_PATH at the database home lib directory:

export LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0.4a/dbhome_1/lib

[ggate@dm02dbadm01 oradb11]$ export LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0.4a/dbhome_1/lib
[ggate@dm02dbadm01 oradb11]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (dm02dbadm01.abcfinancial.net) 1>

Alternatively, you can create a symbolic link to the library in the GoldenGate directory:

[ggate@dm02dbadm01 oradb11]$ ln -s /u01/app/oracle/product/11.2.0.4a/dbhome_1/lib/libnnz11.so libnnz11.so
[ggate@dm02dbadm01 oradb11]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (dm02dbadm01.abcfinancial.net) 1>
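
To make the fix permanent for the ggate user, the export can be added to the login profile (a minimal sketch, assuming a bash login shell and the same database home path):

[ggate@dm02dbadm01 ~]$ echo 'export LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0.4a/dbhome_1/lib' >> ~/.bash_profile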

 

ORA-46238: Database user or role does not exist during upgrade to 12c using DBUA

I recently ran into the “ORA-46238: Database user or role does not exist” error while trying to upgrade an Oracle database from 11g to 12c using DBUA.

You will see something like this in the logfile:

ERROR at line 1: 
ORA-46238: Database user or role '"BETADATASECURE"' does not exist 
ORA-06512: at "SYS.XS_ACL", line 93 
ORA-06512: at "SYS.XS_ADMIN_UTIL", line 53 
ORA-06512: at "SYS.XS_ACL_INT", line 126 
ORA-01403: no data found 
ORA-06512: at "SYS.XS_ACL_INT", line 122 
ORA-06512: at "SYS.XS_ACL_INT", line 493 
ORA-06512: at "SYS.XS_ACL", line 83 
ORA-06512: at "SYS.XS_OBJECT_MIGRATION", line 190 
ORA-06512: at "SYS.XS_OBJECT_MIGRATION", line 190 
ORA-06512: at line 56 
ORA-06512: at line 104

Reason: the user has been dropped, but some ACL privileges granted to it are still lingering. You can find them using the following query.

SQL> SELECT a.object_id ACL_ID, b.principal, b.privilege
  2    FROM xdb.xdb$acl a,
  3         xmltable(xmlnamespaces(DEFAULT 'http://xmlns.oracle.com/xdb/acl.xsd'),
  4                  '/acl/ace' passing a.object_value
  5                  columns
  6                    principal VARCHAR2(30) path '/ace/principal',
  7                    privilege xmltype path '/ace/privilege') b
  8   WHERE b.principal = 'BETADATASECURE';

ACL_ID PRINCIPAL
-------------------------------- ------------------------------
PRIVILEGE
--------------------------------------------------------------------------------
6013F2CBD4F65F5CE040007F01001457 BETADATASECURE
<privilege xmlns="http://xmlns.oracle.com/xdb/acl.xsd">
<plsql:connect xmlns:p
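
As a cross-check, the lingering grants can also be listed through the data dictionary view for network ACLs (available in 11g and later):

SQL> SELECT acl, principal, privilege
  2    FROM dba_network_acl_privileges
  3   WHERE principal = 'BETADATASECURE';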

Drop the lingering permission:

connect / as sysdba

BEGIN
  DBMS_NETWORK_ACL_ADMIN.delete_privilege (
    acl       => '/sys/acls/qualdatasecure.xml',
    principal => 'BETADATASECURE',
    is_grant  => TRUE,
    privilege => 'connect');
  COMMIT;
END;
/

You do not have sufficient permissions to access the inventory ‘/u01/app/oraInventory/locks’

Some time back I got the following error while trying to install Oracle GoldenGate on Exadata. It can be resolved by fixing permissions on the oraInventory locks directory and backing up the existing inventory.lock file, as shown below.

Error:

[ggate@dbadm01 Disk1]$ ./runInstaller
You do not have sufficient permissions to access the inventory ‘/u01/app/oraInventory/locks’. Installation cannot continue. It is required that the primary group of the install user is same as the inventory owner group. Make sure that the install user is part of the inventory owner group and restart the installer.: Permission denied

[ggate@dm02dbadm01 Disk1]$ ls -ltr
total 43
-rwxr-xr-x+ 1 oracle oinstall 918 Oct 22 2016 runInstaller
drwxr-xr-x+ 11 oracle oinstall 21 Oct 22 2016 stage
drwxr-xr-x+ 2 oracle oinstall 3 Oct 22 2016 response
drwxr-xr-x+ 4 oracle oinstall 11 Oct 22 2016 install

Solution:

As the install user (or root), fix the permissions on the locks directory, then move the existing lock file aside as a backup:

# chmod 770 /u01/app/oraInventory/locks
# cd /u01/app/oraInventory/locks
# mv inventory.lock inventory.lock_<date>

Restarting ./runInstaller with the response file then fixed the issue.

 

[ggate@dm02dbadm01 Disk1]$ ./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 120 MB. Actual 8704 MB Passed
Checking swap space: must be greater than 150 MB. Actual 23584 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-05-02_02-16-39PM. Please wait …[ggate@dbadm01 Disk1]$

 


Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2 or later.

As a workaround for “Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2 or later”, locate the prvtlmpg.plb script in the GoldenGate home installation directory and execute it as SYSDBA.
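
If you are unsure where the script lives, you can search the GoldenGate home for it (a quick sketch; the /u01/app/oracle/ogg path is hypothetical, substitute your own GoldenGate home):

[oracle@OGGR2-1 ~]$ find /u01/app/oracle/ogg -name prvtlmpg.plb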

[oracle@OGGR2-1 ogg]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Sep 20 12:00:42 2016

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> @prvtlmpg.plb

Oracle GoldenGate Workaround prvtlmpg

This script provides a temporary workaround for bug 17030189.
It is strongly recommended that you apply the official Oracle
Patch for bug 17030189 from My Oracle Support instead of using
this workaround.

This script must be executed in the mining database of Integrated
Capture. You will be prompted for the username of the mining user.
Use a double quoted identifier if the username is case sensitive
or contains special characters. In a CDB environment, this script
must be executed from the CDB$ROOT container and the mining user
must be a common user.

=========================== WARNING ==========================
You MUST stop all Integrated Captures that belong to this mining
user before proceeding!
================================================================

Enter Integrated Capture mining user: ggs

Installing workaround…
No errors.
No errors.
No errors.
Installation completed.

Flashback Oracle Database on Exadata Machine

There are times when you need to flash back an Oracle database running on an Exadata machine. A database restore point is commonly used during a database upgrade or for GoldenGate replication. You can use the following steps to flash back an Oracle database running on an Exadata machine.
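
For reference, the restore point used below can be created beforehand like this (a minimal sketch; a guaranteed restore point requires a configured fast recovery area):

SQL> create restore point upgrade guarantee flashback database;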

Step  1 : Check Database Status using srvctl
[oracle@dm02dba01 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node dm02dba01
Instance orcl2 is running on node dm02dba02
Step 2 : Stop database using srvctl
[oracle@dm02dba01 ~]$ srvctl stop database -d orcl
Step 3 : Start only 1 instance in mount mode
[oracle@dm02dba01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Thu May 10 12:40:27 2018

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1.0737E+11 bytes
Fixed Size 29888776 bytes
Variable Size 2.8369E+10 bytes
Database Buffers 7.8920E+10 bytes
Redo Buffers 55226368 bytes
Database mounted.

Step 4 : Check list of existing database restore points
SQL> select name,time from v$restore_point;

NAME
--------------------------------------------------------------------------------
TIME
---------------------------------------------------------------------------
upgrade
06-MAY-18 01.53.29.000000000 PM

Step 5 : Flashback database to target restore point
SQL> flashback database to restore point upgrade;

Flashback complete.
Step 6 : Open database instance with resetlogs
SQL> alter database open resetlogs;

Database altered.
Step 7: Shutdown database instance
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
Step 8 : Start database using srvctl
[oracle@dm02dba01 ~]$ srvctl start database -d orcl
[oracle@dm02dba01 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node dm02dba01
Instance orcl2 is running on node dm02dba02
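
Once the upgrade or replication change has been validated, you may want to drop the restore point to release fast recovery area space (using the restore point name from Step 4):

SQL> drop restore point upgrade;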

 

Extending the /u01 Filesystem on Exadata Machine

This post demonstrates how to extend the /u01 volume on an Exadata machine. The same procedure can be applied to the root (/) volume as long as you have space available in the volume group. Extending /u01 does not require any downtime. I strongly recommend extending /u01 to 500 GB right after deployment to avoid storage issues during patching or other maintenance activities.

Step 1 : df -h /u01  (Check Existing Mount)

[root@exa2 ~]# df -h /u01

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VGExaDb-LVDbOra1

                       99G   19G   75G  21% /u01

Step 2 : vgdisplay VGExaDb -s  ( Check Available Storage )

[root@exa2 ~]# vgdisplay VGExaDb -s

  "VGExaDb" 1.63 TiB  [185.00 GiB used / 1.45 TiB free]

Step 3 : lvextend -L +200G /dev/VGExaDb/LVDbOra1  ( Extend Volume ) 

[root@exa2 ~]# lvextend -L +200G /dev/VGExaDb/LVDbOra1

  Size of logical volume VGExaDb/LVDbOra1 changed from 100.00 GiB (25600 extents) to 300.00 GiB (76800 extents).

  Logical volume LVDbOra1 successfully resized.

Step 4 : resize2fs /dev/VGExaDb/LVDbOra1    ( Resize ) 

[root@exa2 ~]# resize2fs /dev/VGExaDb/LVDbOra1

resize2fs 1.43-WIP (20-Jun-2013)

Filesystem at /dev/VGExaDb/LVDbOra1 is mounted on /u01; on-line resizing required

old_desc_blocks = 7, new_desc_blocks = 19

The filesystem on /dev/VGExaDb/LVDbOra1 is now 78643200 blocks long.
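
Note: the resize2fs step assumes /u01 is ext3/ext4, as on this image. On newer Exadata images where /u01 is formatted as xfs, grow it with xfs_growfs instead (check the filesystem type with df -T first):

[root@exa2 ~]# xfs_growfs /u01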

Step 5 : Validate

[root@exa2 ~]# df -h /u01

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VGExaDb-LVDbOra1

                      296G   20G  264G   7% /u01

Upgrading Oracle ZFS Storage Appliance with Latest System Updates

A system update for Oracle ZFS Storage Appliance is a binary file that contains new management software as well as new hardware firmware for your storage controllers and disk shelves. Its purpose is to provide additional features, bug fixes, and security updates, allowing your storage environment to run at peak efficiency. Like Exadata, the ZFS Storage Appliance has quarterly updates, and it is recommended to apply system updates twice a year. Updating the ZFS Storage Appliance can be divided into the following three major steps.

Step 1 : Pre-Upgrade

1.1 Upload Latest System Update: Next to Software Updates, you can click “Check now,” or you can schedule the checks by selecting the checkbox and an interval of daily, weekly, or monthly. When a new update is found, “Update available for download” is displayed under STATUS, which is also a direct download link to My Oracle Support.

 

1.2 Remove Older System Updates: To avoid using too much space on the system disks, maintain no more than three updates at any given time.

 

1.3 Download Backup Configuration: In the event of an unforeseen failure, it may be necessary to factory-reset a storage controller. To minimize downtime, it is recommended to maintain an up-to-date backup copy of the management configuration.
1.4 Check Network Interfaces: It is recommended that all data interfaces for clustered controllers be open, or unlocked, prior to upgrading. This ensures these interfaces migrate to the peer controller during a takeover or reboot. Failure to do so will result in downtime.

 

1.5 Verify No Disk Events: To avoid unnecessary delays in the upgrade process, do not update your system while there are active disk resilvering events or scrub activities. Check whether these activities are occurring, and allow them to complete if they are in progress.
1.6 Run Health Check: Oracle ZFS Storage Appliance has a health check feature that examines the state of your storage controllers and disk shelves prior to upgrading. It is run automatically as part of the upgrade process, but should also be run independently to check storage health before entering a maintenance window (see the CLI sketch after this list).
1.7 Prepare Environment: It is recommended to schedule a maintenance window for upgrading your storage controllers. Inform your users that storage will be either offline or functioning in a limited capacity for the duration of the upgrade. The minimum window should be set at one hour, though this does not mean your storage will be offline for the entire hour.
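
The health check mentioned in 1.6 can also be launched from the appliance CLI ahead of the maintenance window. The following is a sketch from memory; the exact syntax can vary by software release and the update name shown is hypothetical, so confirm against the ZFSSA documentation for your release:

zfssa:> maintenance system updates
zfssa:maintenance system updates> select ak-nas@2018.06.05.2.0,1-1.1
zfssa:maintenance system updates ak-nas@2018.06.05.2.0,1-1.1> check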

 

Step 2: Upgrade

2.1 Upgrade Controller 1: A clustered Oracle ZFS Storage Appliance has two storage controllers, which ensures high availability during the upgrade process. Do not use the following procedures if you have a standalone controller.
2.2 Run Health Check on Controller 1: Run the health check on the first controller.
2.3 Monitor Firmware Updates on Controller 1: Each update event will be held in a Pending, In Progress, or Failed state. Contact Oracle Support if a Failed state is reported. These firmware updates can be monitored using the browser user interface or the command-line interface.
2.4 Issue Failback on Controller 2: If the controllers were in an Active/Active configuration before updating, perform a failback operation to return them to that state. This is not necessary if you want an Active/Passive configuration.
2.5 Upgrade Controller 2: Upgrade the second controller following the same procedure as the first.
2.6 Run Health Check on Controller 2: Run the health check on the second controller.
2.7 Monitor Firmware Updates on Controller 2: Each update event will be held in a Pending, In Progress, or Failed state. Contact Oracle Support if a Failed state is reported. These firmware updates can be monitored using the browser user interface or the command-line interface.
2.8 Issue Failback on Controller 1: If the controllers were in an Active/Active configuration before updating, perform a failback operation to return them to that state. This is not necessary if you want an Active/Passive configuration.

 

Step 3 : Post-Upgrade

 

3.1 Final Health Check (both controllers): Run the same health check independently on both controllers after the upgrade to confirm the storage controllers and disk shelves are healthy.

 

3.2 Apply Deferred Updates (optional): If “Upon request” was chosen during the initial system update sequence, deferred updates can be applied after the upgrade.
3.3 Restart Environment Data Services: Regardless of whether you have exclusively disruptive or non-disruptive protocols in your environment, you should check each attached device for storage connectivity at the conclusion of the upgrade. It may be necessary to remount network shares and restart data services on these hosts.


Latest Exadata releases and updates

Last Update Date: 03/24/18

Hello All,

I thought it would be a good idea to create a dynamic post to keep everyone updated on Oracle Exadata releases, patches, and news. I will try my best to keep the following table current.

 

Product                        Version                            Comments
Exadata Machine                X7
Latest Bundle Patch            Jan 2018 – 12.2.0.1.0              Patch 27011122
Latest OEDA Utility            v180216                            Patch 27465661
Database server bare metal     18.1.4.0.0.180125.3                Patch 27391002
Database server dom0 ULN       18.1.4.0.0.180125.3                Patch 27391003
Storage server software        18.1.4.0.0.180125.3                Patch 27347059
InfiniBand switch software     2.2.7-1                            Patch 27347059
Latest Grid Infrastructure     Rel 18.0.0.0.0, Ver 18.1.0.0.0
Latest Database                Rel 18.0.0.0.0, Ver 18.1.0.0.0
Latest Disk drives             1.2TB HP, 4TB HC
Latest OPatch Utility          12.2.0.1.12                        Patch 6880880
Latest Exachk Version          12.2.0.1.4_20171212
DB Server patch Utility        5.180120