
How to Restore 10g Clusterware OCR Voting Disk

For 12c clusterware, you may check: How to Restore 12c Clusterware

Suppose that in a disaster we have lost the shared storage, along with the OCR data, the voting disk, and the ASM disk groups. Later, new shared storage for the 10g RAC database is provisioned.

It looks like the worst case, but we still have backups of the OCR and voting disk on the server, plus a full database backup. That is to say, we have to rebuild the RAC environment almost from the ground up; only the installed clusterware and database software survive.

In this post, I will start with OCR recovery, then voting disk recovery. Since the disk groups we created previously are all gone, we also need to recreate them.

  1. Restore OCR Data
  2. Restore Voting Disk
  3. Recreate Disk Groups

1. Restore OCR Data

First, we point the OCR at the new raw device by manually editing /etc/oracle/ocr.loc on both nodes.

Node 1:

[root@primary01 ~]# . /home/oracle/.bash_profile
[root@primary01 ~]# crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
[root@primary01 ~]# vi /etc/oracle/ocr.loc
#ocrconfig_loc=/dev/raw/raw1
ocrconfig_loc=/dev/raw/raw6
local_only=FALSE

Node 2:

[root@primary02 ~]# . /home/oracle/.bash_profile
[root@primary02 ~]# crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
[root@primary02 ~]# vi /etc/oracle/ocr.loc
#ocrconfig_loc=/dev/raw/raw1
ocrconfig_loc=/dev/raw/raw6
local_only=FALSE
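
It is also worth confirming that the new raw device is actually bound on both nodes before going further. A quick sanity check, assuming the standard raw utility is available (run on each node; output omitted here):

[root@primary01 ~]# raw -qa
[root@primary01 ~]# ls -l /dev/raw/raw6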

Then we restore the OCR by importing the backup dump on node 1.

[root@primary01 ~]# ocrconfig -import ocr_backup_20190612.dmp
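
Note that ocrconfig -import expects a logical export taken earlier with ocrconfig -export, which matches the .dmp file we have. If you only had one of the automatic physical backups that CRS keeps, the restore would instead look roughly like the sketch below (the backup path is a placeholder, not taken from this environment):

[root@primary01 ~]# ocrconfig -showbackup
[root@primary01 ~]# ocrconfig -restore /path/to/cdata/crs/backup00.ocr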

Let’s check the current configuration of the OCR.

[root@primary01 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :   18874264
         Used space (kbytes)      :       3788
         Available space (kbytes) :   18870476
         ID                       :  559990555
         Device/File Name         : /dev/raw/raw6
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded
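
Optionally, the Cluster Verification Utility can cross-check OCR integrity from both nodes. This is a hedged extra step; adjust the node list to your environment:

[oracle@primary01 ~]$ cluvfy comp ocr -n primary01,primary02 -verbose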

2. Restore Voting Disk

While CRS is down, we can add a new voting disk and delete the old one by using the -force option.

[root@primary01 ~]# crsctl add css votedisk /dev/raw/raw7 -force
Now formatting voting disk: /dev/raw/raw7
successful addition of votedisk /dev/raw/raw7.
[root@primary01 ~]# crsctl delete css votedisk /dev/raw/raw2 -force
successful deletion of votedisk /dev/raw/raw2.

Then we restore the voting disk with the dd command, the reverse of what we did when backing it up.

[root@primary01 ~]# dd if=vod_backup_20190612.dmp of=/dev/raw/raw7 bs=1k count=500k
512000+0 records in
512000+0 records out
524288000 bytes (524 MB) copied, 103.76 seconds, 5.1 MB/s
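
For reference, the dump being restored here would have been produced by the mirror-image dd command when the old voting disk /dev/raw/raw2 was backed up, something like:

[root@primary01 ~]# dd if=/dev/raw/raw2 of=vod_backup_20190612.dmp bs=1k count=500k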

Let’s check the current voting disk configuration on both nodes.

Node 1:

[root@primary01 ~]# crsctl query css votedisk
 0.     0    /dev/raw/raw7

located 1 votedisk(s).

Node 2:

[root@primary02 ~]# crsctl query css votedisk
 0.     0    /dev/raw/raw7

located 1 votedisk(s).

Restart both nodes to verify the results.

[root@primary01 ~]# init 6
[root@primary02 ~]# init 6
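
Once the nodes are back up, a quick check that the CRS daemons are healthy on each node before querying resources:

[oracle@primary01 ~]$ crsctl check crs
[oracle@primary02 ~]$ crsctl check crs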

Check all CRS resources.

[oracle@primary01 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.inst application    ONLINE    OFFLINE
ora....B2.inst application    ONLINE    OFFLINE
ora.PRIMDB.db  application    ONLINE    OFFLINE
ora....SM1.asm application    ONLINE    ONLINE    primary01
ora....01.lsnr application    ONLINE    ONLINE    primary01
ora....y01.gsd application    ONLINE    ONLINE    primary01
ora....y01.ons application    ONLINE    ONLINE    primary01
ora....y01.vip application    ONLINE    ONLINE    primary01
ora....SM2.asm application    ONLINE    ONLINE    primary02
ora....02.lsnr application    ONLINE    ONLINE    primary02
ora....y02.gsd application    ONLINE    ONLINE    primary02
ora....y02.ons application    ONLINE    ONLINE    primary02
ora....y02.vip application    ONLINE    ONLINE    primary02

So far, we have at least gotten CRS back. The database and instance resources are still OFFLINE because the disk groups they depend on no longer exist.

3. Recreate Disk Groups

We connect to the ASM instance on node 1.

[oracle@primary01 ~]$ export ORACLE_SID=+ASM1
[oracle@primary01 ~]$ sqlplus / as sysdba
...
SQL> column disk_path format a30;
SQL> select g.name diskgroup, d.path disk_path from v$asm_diskgroup g, v$asm_disk d where g.group_number = d.group_number;

no rows selected

We have nothing here. Let’s recreate the two disk groups.
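
Before creating them, we can optionally confirm that ASM sees the new raw devices as candidate disks (a quick hedged check; unused devices should report a HEADER_STATUS of CANDIDATE or PROVISIONED):

SQL> select path, header_status from v$asm_disk where group_number = 0;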

SQL> create diskgroup ora_data external redundancy disk '/dev/raw/raw8';

Diskgroup created.

SQL> create diskgroup fra_data external redundancy disk '/dev/raw/raw9';

Diskgroup created.

Check the disk groups again.

SQL> select g.name diskgroup, d.path disk_path from v$asm_diskgroup g, v$asm_disk d where g.group_number = d.group_number;

DISKGROUP                      DISK_PATH
------------------------------ ------------------------------
ORA_DATA                       /dev/raw/raw8
FRA_DATA                       /dev/raw/raw9

The disk groups are back. In fact, DBCA can also be used to create disk groups in 10g; it is somewhat equivalent to ASMCA in 11g.

[oracle@primary01 ~]$ srvctl config database
PRIMDB
[oracle@primary01 ~]$ srvctl config database -d primdb
primary01 PRIMDB1 /u01/app/oracle/product/10.2.0/db_1
primary02 PRIMDB2 /u01/app/oracle/product/10.2.0/db_1
[oracle@primary01 ~]$ srvctl status database -d primdb
Instance PRIMDB1 is not running on node primary01
Instance PRIMDB2 is not running on node primary02

Next, we should restore the entire 10g RAC database from its backup set.
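
The exact RMAN steps depend on where the backup pieces live and whether a controlfile autobackup is available, but the overall shape would be roughly the sketch below (the controlfile backup path is a placeholder, not taken from this environment):

RMAN> startup nomount;
RMAN> restore controlfile from '/path/to/controlfile_autobackup.bkp';
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;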
