How to Build Asymmetric Standby Database for Data Guard

Asymmetric Data Guard

Usually we build Data Guard environments with a symmetric architecture on the primary and standby servers, that is, single-instance to single-instance or RAC to RAC. In practice, we can downgrade the hardware specifications on the standby servers. For example, we may use 16 cores per instance on the primary side but only 8 cores per instance on the standby side.

In some extreme cases, we might see an even more downgraded design: a single-instance standby database for a primary RAC database. This asymmetric design is workable, and you'll see how to build it in this post. However, it is difficult to maintain, rebuild, and switch over.

Prerequisites

Before we start to build an asymmetric, single-instance standby database, there are some prerequisites in our case.

1. Prepare Standby Database

You should have created an empty standby database with the same database version and the same database name as the primary database.

As for the directory structure, you don't have to worry about matching it, because the file system layout on the standby server is inherently different from the ASM structure on the primary side, unless you use ACFS for the primary RAC.

2. Prepare a Consistent Full Backup

A consistent full backup of the primary database is needed to restore the data files on the standby database. I skip the detailed backup step to keep this post short.
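For reference, a minimal sketch of taking such a backup with RMAN from one of the primary nodes might look like the following; the /backup path and the channel format string are assumptions for this post, and the author's actual backup command may differ:

[oracle@primary01 ~]$ rman target /
...
RMAN> run {
  allocate channel d1 type disk format '/backup/%d_%I_%U';
  backup database plus archivelog;
  release channel d1;
}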

Build Asymmetric Data Guard

1. Modify /etc/hosts

You have to add an entry for the standby server to /etc/hosts on both primary servers, so that the TNS connect identifier added later can resolve its host name. For example:

[root@primary01 ~]# vi /etc/hosts
#Public
10.1.1.1    primary01.example.com   primary01
10.1.1.2    primary02.example.com   primary02
10.2.1.1    standby.example.com   standby

#Private
10.1.2.1    primary01-priv.example.com      primary01-priv
10.1.2.2    primary02-priv.example.com      primary02-priv

#Virtual
10.1.1.11   primary01-vip.example.com       primary01-vip
10.1.1.12   primary02-vip.example.com       primary02-vip

Then copy the file to primary02 and the standby server.
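For example, assuming root SSH access between the servers:

[root@primary01 ~]# scp /etc/hosts primary02:/etc/hosts
[root@primary01 ~]# scp /etc/hosts standby:/etc/hosts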

2. Add Entries to tnsnames.ora

On both primary servers, you should add one connect identifier to tnsnames.ora:

STANDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = compdb)(UR=A)
    )
  )

On the standby, you should add three connect identifiers:

PRIMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-cluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = compdb)(UR=A)
    )
  )

PRIMDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary01-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = compdb)(UR=A)
    )
  )

PRIMDB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary02-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = compdb)(UR=A)
    )
  )

You can use tnsping to test connectivity in both directions. If the standby's listener is offline, start it up first.
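For example (the lsnrctl command assumes the standby uses a stand-alone listener rather than one managed by Grid Infrastructure):

[oracle@primary01 ~]$ tnsping standb
[oracle@standby ~]$ tnsping primdb
[oracle@standby ~]$ lsnrctl start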

As for UR=A in the connect descriptor, it allows connections to an instance that is idle or in NOMOUNT or restricted mode.

As you may have noticed, our service name is the same as the database name, COMPDB.

3. Copy Password File out of ASM

You have to copy the password file out of ASM as the grid user.

[grid@primary01 ~]$ srvctl config database -d compdb
...
Password file: +DATA/COMPDB/PASSWORD/pwdcompdb.295.1012059079
...
[grid@primary01 ~]$ asmcmd cp +DATA/COMPDB/PASSWORD/pwdcompdb.295.1012059079 /tmp/orapwstandb
copying +DATA/COMPDB/PASSWORD/pwdcompdb.295.1012059079 -> /tmp/orapwstandb

4. Enable Force Logging

FORCE LOGGING mode prevents any load operation from being performed in a nonlogged manner. This can slow down the load process because the loaded data must be copied into the redo logs.

[oracle@primary01 ~]$ sqlplus / as sysdba
...
SQL> alter database force logging;

Database altered.

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES

5. Create Control File for Standby

You can create a control file for the standby database as the oracle user.

SQL> alter database create standby controlfile as '/tmp/standby.ctl';

Database altered.

6. Copy Necessary Files to Standby

The files that need to be copied to the standby server include the password file, the control file, and the full backup set.

The password file

[oracle@primary01 ~]$ scp /tmp/orapwstandb standby:/u01/app/oracle/product/12.1.0/db_1/dbs/
oracle@standby's password:
orapwstandb                                   100% 7680     7.5KB/s   00:00

The control file

[oracle@primary01 ~]$ scp /tmp/standby.ctl standby:/u01/app/oracle/oradata/compdb/control01.ctl
oracle@standby's password:
standby.ctl                                   100%   18MB  18.1MB/s   00:00
[oracle@primary01 ~]$ scp /tmp/standby.ctl standby:/u01/app/oracle/fast_recovery_area/compdb/control02.ctl
oracle@standby's password:
standby.ctl                                   100%   18MB  18.1MB/s   00:00

The backup set

[oracle@primary01 ~]$ scp /backup/* standby:/backup/
oracle@standby's password:
COMPDB_968055037_0cu653f3_1_1                 100% 1512MB  52.1MB/s   00:29
COMPDB_968055037_0du653h4_1_1                 100%   18MB  18.2MB/s   00:01
COMPDB_968055037_0eu653h7_1_1                 100% 6656     6.5KB/s   00:00

7. Modify Standby's Parameter File

Add the Data Guard parameters to the standby database's parameter file.

[oracle@standby ~]$ vi $ORACLE_HOME/dbs/initstandb.ora
...
# For Data Guard Configuration
LOG_ARCHIVE_CONFIG='DG_CONFIG=(standb,primdb)'
LOG_ARCHIVE_DEST_1='LOCATION="USE_DB_RECOVERY_FILE_DEST" VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=compdb'
LOG_ARCHIVE_DEST_2='SERVICE=primdb ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=compdb'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc

# For Standby Role Configuration
FAL_SERVER=primdb1,primdb2
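Besides the Data Guard parameters above, initstandb.ora still needs the basic instance parameters from the prerequisite step. As a minimal sketch, with values assumed to match the paths used in this post, the rest of the file might look like:

# basic instance parameters (values are assumptions for this post)
db_name='compdb'
control_files='/u01/app/oracle/oradata/compdb/control01.ctl','/u01/app/oracle/fast_recovery_area/compdb/control02.ctl'
db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
db_recovery_file_dest_size=10G
sga_target=2G
compatible='12.1.0'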

Then create an SPFILE from the modified PFILE.

[oracle@standby ~]$ sqlplus / as sysdba
...
Connected to an idle instance.

SQL> create spfile from pfile;

File created.

8. Restore Data Files on Standby

Start up the standby instance and mount it.
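First connect to the idle standby instance with RMAN; the ORACLE_SID value standb below is an assumption, inferred from the initstandb.ora file name:

[oracle@standby ~]$ export ORACLE_SID=standb
[oracle@standby ~]$ rman target /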

RMAN> startup mount;

Oracle instance started
database mounted

Total System Global Area    2181038080 bytes

Fixed Size                     2926568 bytes
Variable Size               1325402136 bytes
Database Buffers             838860800 bytes
Redo Buffers                  13848576 bytes

RMAN> select name, open_mode, database_role from v$database;

using target database control file instead of recovery catalog
NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
COMPDB    MOUNTED              PHYSICAL STANDBY

Catalog the full backup set.

RMAN> catalog start with '/backup/';

Starting implicit crosscheck backup at 11-JUL-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=29 device type=DISK
Crosschecked 7 objects
Finished implicit crosscheck backup at 11-JUL-19

Starting implicit crosscheck copy at 11-JUL-19
using channel ORA_DISK_1
Finished implicit crosscheck copy at 11-JUL-19

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fast_recovery_area/COMPDB/archivelog/2019_07_04/o1_mf_1_8_gkw3xloh_.arc
File Name: /u01/app/oracle/fast_recovery_area/COMPDB/archivelog/2019_07_04/o1_mf_1_7_gkw3w73z_.arc

searching for all files that match the pattern /backup/

List of Files Unknown to the Database
=====================================
File Name: /backup/COMPDB_968055037_0bu5t6ir_1_1
File Name: /backup/COMPDB_968055037_0du5t6lg_1_1
File Name: /backup/COMPDB_968055037_0cu5t6l7_1_1

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /backup/COMPDB_968055037_0bu5t6ir_1_1
File Name: /backup/COMPDB_968055037_0du5t6lg_1_1
File Name: /backup/COMPDB_968055037_0cu5t6l7_1_1

Restore and recover the data files.

RMAN> run {
  allocate channel d1 type disk;
  allocate channel d2 type disk;
  set newname for datafile 5 to '/u01/app/oracle/oradata/compdb/example.dbf';
  set newname for datafile 1 to '/u01/app/oracle/oradata/compdb/system.dbf';
  set newname for datafile 3 to '/u01/app/oracle/oradata/compdb/sysaux.dbf';
  set newname for datafile 4 to '/u01/app/oracle/oradata/compdb/undotbs1.dbf';
  set newname for datafile 2 to '/u01/app/oracle/oradata/compdb/undotbs2.dbf';
  set newname for datafile 6 to '/u01/app/oracle/oradata/compdb/users.dbf';
  restore database;
  recover database;
  release channel d1;
  release channel d2;
}
2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13> 14>
released channel: ORA_DISK_1
allocated channel: d1
channel d1: SID=29 device type=DISK

allocated channel: d2
channel d2: SID=272 device type=DISK

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 11-JUL-19

channel d1: starting datafile backup set restore
channel d1: specifying datafile(s) to restore from backup set
channel d1: restoring datafile 00001 to /u01/app/oracle/oradata/compdb/system.dbf
channel d1: restoring datafile 00002 to /u01/app/oracle/oradata/compdb/undotbs2.dbf
channel d1: restoring datafile 00003 to /u01/app/oracle/oradata/compdb/sysaux.dbf
channel d1: restoring datafile 00004 to /u01/app/oracle/oradata/compdb/undotbs1.dbf
channel d1: restoring datafile 00005 to /u01/app/oracle/oradata/compdb/example.dbf
channel d1: restoring datafile 00006 to /u01/app/oracle/oradata/compdb/users.dbf
channel d1: reading from backup piece /backup/COMPDB_968055037_0bu5t6ir_1_1
channel d1: piece handle=/backup/COMPDB_968055037_0bu5t6ir_1_1 tag=TAG20190705T142739
channel d1: restored backup piece 1
channel d1: restore complete, elapsed time: 00:03:45
Finished restore at 11-JUL-19

Starting recover at 11-JUL-19
datafile 1 switched to datafile copy
input datafile copy RECID=8 STAMP=1013370705 file name=/u01/app/oracle/oradata/compdb/system.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=10 STAMP=1013370705 file name=/u01/app/oracle/oradata/compdb/undotbs2.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=12 STAMP=1013370706 file name=/u01/app/oracle/oradata/compdb/sysaux.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=14 STAMP=1013370706 file name=/u01/app/oracle/oradata/compdb/undotbs1.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=16 STAMP=1013370706 file name=/u01/app/oracle/oradata/compdb/example.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=18 STAMP=1013370706 file name=/u01/app/oracle/oradata/compdb/users.dbf

starting media recovery

media recovery complete, elapsed time: 00:00:00
channel d1: starting archived log restore to default destination
channel d1: restoring archived log
archived log thread=2 sequence=15
channel d1: restoring archived log
archived log thread=1 sequence=15
channel d1: reading from backup piece /backup/COMPDB_968055037_0du5t6lg_1_1
channel d1: piece handle=/backup/COMPDB_968055037_0du5t6lg_1_1 tag=TAG20190705T142903
channel d1: restored backup piece 1
channel d1: restore complete, elapsed time: 00:00:01
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/COMPDB/archivelog/2019_07_11/o1_mf_1_15_glg8zsxx_.arc RECID=2 STAMP=1013371001
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/COMPDB/archivelog/2019_07_11/o1_mf_2_15_glg8zsxr_.arc RECID=1 STAMP=1013371001
media recovery complete, elapsed time: 00:00:00
Finished recover at 11-JUL-19

released channel: d1

released channel: d2

In the above run block, I use SET NEWNAME to remap the original ASM paths of the data files to the local file system of our single-instance standby database.

If you'd prefer to use DUPLICATE to restore the data files, here is an example run block.

run {
  allocate channel d1 type disk;
  allocate channel d2 type disk;
  allocate auxiliary channel a1 type DISK;
  allocate auxiliary channel a2 type DISK;
  set newname for datafile 5 to '/u01/app/oracle/oradata/compdb/example.dbf';
  set newname for datafile 1 to '/u01/app/oracle/oradata/compdb/system.dbf';
  set newname for datafile 3 to '/u01/app/oracle/oradata/compdb/sysaux.dbf';
  set newname for datafile 4 to '/u01/app/oracle/oradata/compdb/undotbs1.dbf';
  set newname for datafile 2 to '/u01/app/oracle/oradata/compdb/undotbs2.dbf';
  set newname for datafile 6 to '/u01/app/oracle/oradata/compdb/users.dbf';
  set newname for tempfile 1 to '/u01/app/oracle/oradata/compdb/temp01.dbf';
  DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER;
  release channel d1;
  release channel d2;
  release channel a1;
  release channel a2;
}
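Note that DUPLICATE ... FOR STANDBY FROM ACTIVE DATABASE requires RMAN to be connected to the primary as TARGET and to the standby instance (started in NOMOUNT) as AUXILIARY, usually through net service names, so you may need a STANDB entry in the standby's tnsnames.ora as well. The connection would look something like this, with the password as a placeholder:

[oracle@standby ~]$ rman target sys/password@primdb auxiliary sys/password@standb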

9. Modify Parameters on Primary

Add the Data Guard parameters on the primary database.

[oracle@primary01 ~]$ sqlplus / as sysdba
...
SQL> alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(primdb,standb)' scope=both sid='*';

System altered.

SQL> alter system set LOG_ARCHIVE_DEST_1='LOCATION="USE_DB_RECOVERY_FILE_DEST" VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=compdb' scope=both sid='*';

System altered.

SQL> alter system set LOG_ARCHIVE_DEST_2='SERVICE=standb ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=compdb' scope=both sid='*';

System altered.

SQL> alter system set fal_server=standb scope=both sid='*';

System altered.

10. Add Standby Redo Logs on Both Sides

Although this step is optional, it's better to do it on both the primary and standby databases.

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 SIZE 50M;

Database altered.
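You can verify the result with v$standby_log; a common rule of thumb is to create one more standby redo log group per thread than the number of online redo log groups, sized the same as the online logs.

SQL> select group#, thread#, bytes/1024/1024 mb, status from v$standby_log;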

11. Start MRP on Standby

Start the managed recovery process (MRP) to apply changes from the primary database.

SQL> alter database recover managed standby database disconnect from session;

Database altered.

After starting MRP, we may have to wait several minutes for the Data Guard communication to be established on both sides.

SQL> select process, status, thread#, sequence#, block#, blocks from v$managed_standby where process in ('RFS','LNS','MRP0');

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
MRP0      WAIT_FOR_LOG          1         10          0          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                  2         10       6433          1
RFS       IDLE                  0          0          0          0
RFS       IDLE                  0          0          0          0

The communication is established.

12. Perform Log Switch on Both Primary Nodes

We switch the log files on the primary nodes; ALTER SYSTEM ARCHIVE LOG CURRENT archives the current redo of all enabled threads.

SQL> alter system archive log current;

System altered.

13. Check Data Guard Status

Check the receiving (RFS) and applying (MRP0) processes.

SQL> select process, status, thread#, sequence#, block#, blocks from v$managed_standby where process in ('RFS','LNS','MRP0');

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
MRP0      WAIT_FOR_LOG          2         12          0          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                  2         12         12          1
RFS       IDLE                  0          0          0          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                  1         12         12          1

7 rows selected.

SQL> /

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
MRP0      WAIT_FOR_LOG          2         12          0          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                  2         12         24          1
RFS       IDLE                  0          0          0          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                  1         12         24          1

7 rows selected.

Please note that, since we have standby redo logs, the BLOCK# of the RFS processes keeps changing dynamically as redo is received.

Now we check apply lag.

SQL> select name, value, datum_time, time_computed from v$dataguard_stats where name like 'apply lag';

NAME       VALUE           DATUM_TIME           TIME_COMPUTED
---------- --------------- -------------------- --------------------
apply lag  +00 00:00:00    07/11/2019 21:18:04  07/11/2019 21:18:38

The apply lag is 0 seconds. Mission accomplished.

As I said earlier, such an asymmetric architecture is hard to maintain, especially if you're using Oracle Managed Files (OMF). For example, if you add a data file on the primary database, DB_FILE_NAME_CONVERT cannot automatically convert the data file path for the standby database. That's why we left STANDBY_FILE_MANAGEMENT at its default value, MANUAL.
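In that situation, redo apply on the standby stops and the new data file is registered under a dummy UNNAMED name in $ORACLE_HOME/dbs. A hedged sketch of the manual fix on the standby, with the file number and paths assumed for illustration only, might look like:

SQL> select file#, name from v$datafile where name like '%UNNAMED%';
SQL> alter database create datafile '/u01/app/oracle/product/12.1.0/db_1/dbs/UNNAMED00007' as '/u01/app/oracle/oradata/compdb/users02.dbf';
SQL> alter database recover managed standby database disconnect from session;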

Some better designs that I can think of so far are:

  • ACFS: Create a primary RAC database on ACFS, then create a symmetric directory structure for the standby database to pair with it. That could mitigate some of the risks.

  • Single-Node RAC: Create a single-node standby RAC database to pair with a two-node primary RAC database. It's a lot easier to build Data Guard this way than with a file-system-based standby database. Please note that a single-node RAC is not RAC One Node; they are different things.

  • GoldenGate: Use GoldenGate for synchronization; this avoids the asymmetry problems because GoldenGate doesn't care about physical structures.

4 thoughts on “How to Build Asymmetric Standby Database for Data Guard”

  1. Hi Ed,
    Can you please explain why the DB_UNIQUE_NAME on primary and standby can be the same (compdb)?

    LOG_ARCHIVE_CONFIG='DG_CONFIG=(standb,primdb)'
    LOG_ARCHIVE_DEST_1='LOCATION="USE_DB_RECOVERY_FILE_DEST" VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=compdb'
    LOG_ARCHIVE_DEST_2='SERVICE=primdb ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=compdb'

    1. As far as I can remember, the old-school Data Guard configuration (i.e., without the broker) allows the same DB_UNIQUE_NAME on both sides as long as the service names are different. But this is not a good practice; we should have made them different.
