How to clear Exadata Storage Alerts

There are times when you need to clear Exadata storage alerts. It is very important that you investigate and resolve the underlying issue before clearing any storage alerts. You should also make a note of each alert before you clear it (see the SPOOL example under Step 3). Follow the steps below to clear storage alerts on one or all storage cells.

Step 1 : Log in to the CellCLI utility

[root@cell01 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Wed Jun 27 19:32:28 EDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.


Step 2 : Validate cell configuration 

CellCLI> ALTER CELL VALIDATE CONFIGURATION ;
Cell exceladm01 successfully altered

Step 3 : List Exadata Storage Alerts 

CellCLI> list alerthistory
1 2018-06-13T11:09:48-04:00 critical "ORA-00700: soft internal error, arguments: [main_21], [11], [Not enough open file descriptors], [], [], [], [], [], [], [], [], []"
2 2018-06-13T11:35:06-04:00 critical "RS-700 [No IP found in Exadata config file] [Check cellinit.ora] [] [] [] [] [] [] [] [] [] []"
3_1 2018-06-25T13:26:17-04:00 critical "Configuration check discovered the following problems: Verify network configuration: 

3_2 2018-06-26T13:25:17-04:00 clear "The configuration check was successful."
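
Before dropping anything, save the full alert details for your records. One way to do this, assuming your CellCLI release supports the SPOOL command, is to spool the detailed alert history to a file (the path below is only an example):

CellCLI> spool /tmp/cell01_alerthistory.txt
CellCLI> list alerthistory detail
CellCLI> spool off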

Step 4 : Drop all Storage alerts 

CellCLI> drop alerthistory all
Alert 1 successfully dropped
Alert 2 successfully dropped
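
If you only want to clear a specific alert rather than the entire history, you can drop it by the ID shown in the first column of the LIST ALERTHISTORY output, for example:

CellCLI> drop alerthistory 1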

Step 5 : List storage alerts to validate they are gone

CellCLI> list alerthistory

CellCLI> exit
quitting

Step 6 : Repeat the above steps on all storage cells
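
Instead of logging in to each cell one by one, you can also run the same CellCLI commands on every storage cell at once with dcli. This is only a sketch: it assumes a group file (here called cell_group) that lists the storage cell hostnames, and that SSH user equivalency is already configured as described in the next section. The -e option tells cellcli to execute a single command and exit, which is what makes it usable under dcli.

[root@node01 ~]# dcli -g cell_group -l root "cellcli -e list alerthistory"
[root@node01 ~]# dcli -g cell_group -l root "cellcli -e drop alerthistory all"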

Enabling SSH User Equivalency on an Exadata Machine

Passwordless SSH configuration is a mandatory installation requirement. SSH is used during installation to configure cluster member nodes, and after installation by configuration assistants, Oracle Enterprise Manager, OPatch, and other features.

In the examples that follow, I use the root user, but the same steps can be performed as the oracle or grid user (see the example after Step 3 below).

Step 1 : Create the all_group file listing all database and storage node hostnames

[root@node01 oracle.SupportTools]# pwd
/opt/oracle.SupportTools

[root@node01 oracle.SupportTools]# cat all_group
node01
node02
cell01
cell02
cell03

Step 2 : Generate SSH keys

[root@node01 oracle.SupportTools]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
e1:51:4b:ba:7c:c3:48:e8:e9:5f:2b:f4:3c:11:ea:65 root@node01
The key's randomart image is:
+--[ RSA 2048]----+
| o |
| . + . |
| . = . |
| . = *. |
| o S.+. |
| . o.E. |
| .o =.. |
| .o.+. |
| .... |
+-----------------+
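
If you prefer to script the key generation, ssh-keygen can also be run non-interactively by passing the key file and an empty passphrase on the command line (shown here only as an optional alternative to the interactive prompts above):

[root@node01 oracle.SupportTools]# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa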

Step 3 : Copy the SSH key to all nodes

[root@node01 oracle.SupportTools]# dcli -g ./all_group -l root -k -s '-o StrictHostKeyChecking=no'
root@node01's password:
root@node02's password:
root@cell01's password:
root@cell02's password:
root@cell03's password:
node01: ssh key added
node02: ssh key added
cell01: ssh key added
cell02: ssh key added
cell03: ssh key added
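
As mentioned earlier, the same procedure works for the oracle or grid user: generate the keys as that user, then push them with dcli using the matching -l option. The sketch below assumes dcli is on that user's PATH and uses a hypothetical dbs_group file containing only the database nodes, since the oracle user does not exist on the storage cells:

[oracle@node01 ~]$ ssh-keygen -t rsa
[oracle@node01 ~]$ dcli -g dbs_group -l oracle -k -s '-o StrictHostKeyChecking=no'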

 

Step 4 : Validate passwordless SSH is working

[root@node01 oracle.SupportTools]# dcli -g all_group -l root hostname
node01: XXXXXXX
node02: XXXXXXX
cell01: XXXXXXX
cell02: XXXXXXX
cell03: XXXXXXX
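
You can also spot-check a single host directly; with user equivalency in place, the command below should run without any password prompt:

[root@node01 oracle.SupportTools]# ssh node02 hostname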