Written by KH Wissem 

Introduction

In the previous article, Oracle Database backup using AWS Volume Snapshots, we introduced the leading cloud provider, Amazon Web Services (AWS), explained its main components, and covered the AWS client installation and commands. We also showed how to back up Oracle databases using volume snapshots.

In this article, we provide the steps to restore a production Oracle 12c database from AWS volume snapshots to a new AWS instance (a new VM). This scenario is useful when the DBA has to restore an Oracle database whose AWS instance is completely lost. We briefly explain the AWS components and nomenclature as we go, but for a detailed explanation please refer to my previous article: Oracle Database backup using AWS Volume Snapshots.

AWS Snapshot Backup Overview

We have an Oracle 12c physical standby database backed up using AWS volume snapshots:

  • We have some available Oracle ASM disk snapshots.
  • We have disk snapshots of both the Oracle database home and the Oracle grid infrastructure 12c home. Both homes are installed under a single /jv01 mount point.

All the steps can be reproduced on a Red Hat Linux operating system.

The volume snapshots are tagged correctly. Tags are key/value pairs of text that let you add a description or label so you can easily identify your AWS resources; for example, you might tag your instances to indicate whether they run Linux or Windows. Here we tag the volumes belonging to the database server and its files; tagging them ensures they are included in the snapshot jobs. We use the tag key tag4snap-<DBNAME> and, as the value, the database name concatenated with the mount or disk name. In this example, dbstby is the name of our physical standby database and DB_DG_0000 is the ASM disk containing the control files, data files and all other database related files. For the Oracle Home snapshot, the value is the jv01 mount point prefixed with "OracleHome"; this makes it easy to identify the snapshot IDs in the AWS repository. We use the following AWS CLI commands to tag the volumes:

aws ec2 create-tags --region eu-west-1 --resources vol-3etsaf34 --profile PVolSnapshot --tags Key=tag4snap-dbstby,Value='"{\"dbstby_DB_DG_0000\"}"'

aws ec2 create-tags --region eu-west-1 --resources vol-2dbwsf21 --profile PVolSnapshot --tags Key=tag4snap-dbstby,Value='"{\"dbstby_ARCHIVE_DG_0000\"}"'

aws ec2 create-tags --region eu-west-1 --resources vol-9fgdaf69 --profile PVolSnapshot --tags Key=tag4snap-dbstby,Value='"{\"OracleHome_jv01\"}"'
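
If you want to confirm that the tags were applied, you can list the volume IDs that carry the tag key. This is just a quick check, not part of the backup procedure itself; a sketch, using the same region and profile as above:

# List the volume IDs carrying the backup tag key (sketch)
aws ec2 describe-volumes --region eu-west-1 --profile PVolSnapshot \
    --filters Name=tag-key,Values=tag4snap-dbstby \
    --query 'Volumes[].VolumeId' --output text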

 

PS: refer to Oracle Database backup using AWS Volume Snapshots for more details about profiles and the AWS command line tool.

When we want to retrieve the snapshot volumes, we can query the AWS snapshot repository using the following command:

aws ec2 describe-snapshots   --region eu-west-1 --profile PVolSnapshot --output text   --filters Name=tag-value,Values="tag4snap-dbstby"

In the following sections of the article, we will work exclusively on a new AWS instance. This is to simulate a complete loss of the production database server.

New AWS Server Instance Prerequisites

In order to follow the steps correctly, some prerequisites have to be met on the new server.

  • Install the oracle-rdbms-server-12cR1-preinstall.x86_64 rpm along with the required Oracle rpm packages.
  • Install the Oracle ASM rpm packages:

 

Install kmod-oracleasm package with YUM:

yum install kmod-oracleasm

Install oracleasm-support package with YUM:

yum install oracleasm-support

The oracleasmlib package is not available in the YUM repository; download it from the Oracle website instead:

http://www.oracle.com/technetwork/server-storage/linux/asmlib/ol6-1709075.html

wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.4-1.el6.x86_64.rpm

rpm -iv oracleasmlib-2.0.4-1.el6.x86_64.rpm
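
After installing the packages, the oracleasm kernel driver normally has to be configured and loaded before it can discover any disks. A minimal sketch, assuming the oracle user and dba group used elsewhere in this article:

# Configure the oracleasm driver interactively (answer with user oracle, group dba, load on boot)
oracleasm configure -i
# Load the kernel module and mount the oracleasm filesystem
oracleasm init
# Check that the driver is loaded
oracleasm status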

AWS Volume Creation and Restore

1.     Checking Server Hardware and Memory Configuration:

We need the swap space configured correctly on the new server. Since the instance exposes more than 16GB of RAM, we need at least 16GB of swap configured. Let's create a 20GB AWS volume and attach it to the new AWS instance. Follow these steps:

  • Run the following command to get the AWS instance id:

curl -s http://169.254.169.254/latest/meta-data/instance-id

► i-3216f0dc

  • Get the AWS availability zone:

curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone

► eu-west-1c

  • Get the AWS region:

curl http://169.254.169.254/latest/dynamic/instance-identity/document|grep region|awk -F\" '{print $4}'

► eu-west-1

  • Create a 20GB AWS volume for the swap space:

aws ec2 create-volume \
    --size 20 \
    --availability-zone eu-west-1c --region eu-west-1

{
    "AvailabilityZone": "eu-west-1c",
    "Encrypted": false,
    "VolumeType": "standard",
    "VolumeId": "vol-43abc11b",
    "State": "creating",
   "SnapshotId": "",
    "CreateTime": "2016-07-05T14:53:15.433Z",
    "Size": 20
}
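
Before attaching the volume, it is worth waiting until it leaves the "creating" state; a small sketch using the AWS CLI wait command:

# Block until the new volume reaches the "available" state (sketch)
aws ec2 wait volume-available --region eu-west-1 --volume-ids vol-43abc11b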

  • List the attached block devices to check which device names are already in use:

[root@LRESIPO1000DIT bin]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  20G  0 disk
├─xvda1 202:1    0   1M  0 part
└─xvda2 202:2    0  20G  0 part /
[root@LRESIPO1000DIT bin]#

  • Attach the AWS volume:

aws ec2 attach-volume \
    --volume-id vol-43abc11b \
    --instance-id i-3216f0dc \
    --device /dev/xvdb

{
    "AttachTime": "2016-07-05T14:54:54.995Z",
    "InstanceId": "i-4716f0cc",
    "VolumeId": "vol-98abc11b",
    "State": "attaching",
    "Device": "/dev/xvdb"
}

  • Create the Swap space:

pvcreate /dev/xvdb
vgcreate vg_misc /dev/xvdb
lvcreate -n swap -L19G vg_misc
mkswap /dev/vg_misc/swap
free -g
swapon /dev/vg_misc/swap
free -g

[root@LRESIPO1000DIT bin]# grep SwapTotal /proc/meminfo
SwapTotal:     19922940 kB
[root@LRESIPO1000DIT bin]#
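
Note that swapon does not survive a reboot. To make the swap space persistent, you would also add it to /etc/fstab; a sketch:

# Make the swap persistent across reboots (sketch)
echo '/dev/vg_misc/swap swap swap defaults 0 0' >> /etc/fstab
swapon -a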

 2.     Identify which snapshots to restore:

In this step, we identify the snapshots to restore from the AWS snapshot repository. Run the following command to get the snapshot list.

aws ec2 describe-snapshots   --region eu-west-1 --profile PVolSnapshot --output text   --filters Name=tag-value,Values="tag4snap-dbstby"

The output looks like the following:

 SNAPSHOTS       Created by WISSEM       True     arn:aws:kms:eu-west-1:*********:key/**********   224421145216   100%     snap-038f6357   2016-06-22T07:30:42.000Z       completed       vol-2dbwsf21   50

TAGS   Name     ACTIVE

TAGS   tag-key tag4snap-dbstby

SNAPSHOTS       Created by WISSEM       True     arn:aws:kms:eu-west-1:*********:key/*********   224421145216   100%     snap-45d7a365   2016-06-24T07:31:16.000Z       completed       vol-3etsaf34   250

TAGS   tag-key tag4snap-dbstby

TAGS   Name    ACTIVE

SNAPSHOTS       Created by WISSEM       True     arn:aws:kms:eu-west-1:*********:key/**********   224421145216   100%     snap-b922a954   2016-06-22T07:30:48.000Z       completed       vol-9fgdaf69   40

TAGS   Name     ACTIVE

TAGS   tag-key tag4snap-dbstby

We got the following snapshot identifiers:

  • snap-45d7a365
  • snap-038f6357
  • snap-b922a954
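
If you script the restore, you can extract the snapshot identifiers directly instead of reading them from the text output. A minimal sketch using a JMESPath query with the same tag filter as above:

# Return only the snapshot IDs carrying our backup tag (sketch)
aws ec2 describe-snapshots --region eu-west-1 --profile PVolSnapshot \
    --filters Name=tag-value,Values="tag4snap-dbstby" \
    --query 'Snapshots[].SnapshotId' --output text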

 3.     Restore the Oracle database and grid infrastructure 12c homes:

We already mentioned that both the Oracle database and grid infrastructure homes are under the single /jv01 mount point. We first need to create the /jv01 mount point on the new server.

mkdir -p /jv01

  • Create an AWS volume from the jv01 snapshot:

aws ec2 create-volume --region eu-west-1 --availability-zone eu-west-1c   --snapshot-id snap-b922a954

{
    "AvailabilityZone": "eu-west-1c",
    "Encrypted": true,
    "VolumeType": "standard",
    "VolumeId": "vol-f8a1cb7b",
    "State": "creating",
    "SnapshotId": "snap-b922a954",
    "CreateTime": "2016-07-05T15:13:08.616Z",
    "Size": 40
}

  • Attach the AWS volume:

aws ec2 attach-volume     --volume-id vol-f8a1cb7b --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id)     --device /dev/xvdc

{
    "AttachTime": "2016-07-11T12:44:33.474Z",
    "InstanceId": "i-4716f0cc",
    "VolumeId": "vol-f8a1cb7b",
    "State": "attaching",
    "Device": "/dev/xvdc"
}
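
The /jv01 file system lives on an LVM logical volume inside the restored volume. If /dev/vg_jv01 does not show up automatically after attaching it, you may need to rescan and activate the volume group first; a sketch:

# Detect and activate the restored LVM volume group (sketch)
pvscan
vgscan
vgchange -ay vg_jv01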

  • Mount /jv01 and change its ownership to the Oracle software owner:

[root@LRESIPO1000DIT ~]# ls /dev/vg_jv01/
jv01

[root@LRESIPO1000DIT ~]#
[root@LRESIPO1000DIT ~]# mount /dev/vg_jv01/jv01 /jv01

[root@LRESIPO1000DIT ~]# df -h
Filesystem               Size Used Avail Use% Mounted on

/dev/xvda2                 20G 3.9G     17G 20% /
devtmpfs                 3.8G     0   3.8G   0% /dev

tmpfs                     3.7G     0   3.7G   0% /dev/shm
tmpfs                     3.7G   33M   3.7G   1% /run

tmpfs                     3.7G     0   3.7G   0% /sys/fs/cgroup
tmpfs                    757M     0   757M   0% /run/user/1008

/dev/mapper/vg_jv01-jv01   40G     18G   20G 48% /jv01
[root@LRESIPO1000DIT ~]# chown -R oracle:dba /jv01
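
As with the swap volume, you can add an /etc/fstab entry so /jv01 is mounted automatically at boot. A sketch, assuming an xfs file system (check the actual type with blkid and adjust accordingly):

# Mount /jv01 automatically at boot (sketch; file system type assumed)
echo '/dev/vg_jv01/jv01 /jv01 xfs defaults 0 0' >> /etc/fstab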

 4.     Restore the ASM disks:

You may notice that we use almost the same steps to restore the ASM disks from the AWS snapshots; here are the steps to follow:

  • Create an AWS volume from the AWS snapshot:
aws ec2 create-volume --region eu-west-1 --availability-zone eu-west-1c  --snapshot-id snap-45d7a365
{
    "AvailabilityZone": "eu-west-1c",
    "Encrypted": true,
    "VolumeType": "standard",
    "VolumeId": "vol-c3dfa040",
    "State": "creating",
    "SnapshotId": "snap-45d7a365",
    "CreateTime": "2016-07-11T12:59:12.855Z",
    "Size": 50
}

aws ec2 create-volume --region eu-west-1 --availability-zone eu-west-1c  --snapshot-id snap-038f6357
{
    "AvailabilityZone": "eu-west-1c",
    "Encrypted": true,
    "VolumeType": "standard",
    "VolumeId": "vol-f7dfa074",
    "State": "creating",
    "SnapshotId": "snap-038f6357",
    "CreateTime": "2016-07-11T12:59:19.251Z",
    "Size": 250
}
  • Attach the AWS volumes:
aws ec2 attach-volume     --volume-id  vol-c3dfa040 --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id)     --device /dev/xvdd
{
    "AttachTime": "2016-07-11T13:01:44.130Z",
    "InstanceId": "i-4716f0cc",
    "VolumeId": "vol-c3dfa040",
    "State": "attaching",
    "Device": "/dev/xvdd"
}


aws ec2 attach-volume     --volume-id  vol-f7dfa074 --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id)     --device /dev/xvde

{
    "AttachTime": "2016-07-11T13:02:22.823Z",
    "InstanceId": "i-4716f0cc",
    "VolumeId": "vol-f7dfa074",
    "State": "attaching",
    "Device": "/dev/xvde"
}

 5.     Reconfigure the grid infrastructure home:

[root@LRESIPO1000DIT ~]# /jv01/app/oracle/product/12.1.0/grid/perl/bin/perl /jv01/app/oracle/product/12.1.0/grid/crs/install/roothas.pl -deconfig -force

[root@LRESIPO1000DIT ~]#  /jv01/app/oracle/product/12.1.0/grid/perl/bin/perl -I /jv01/app/oracle/product/12.1.0/grid/perl/lib -I /jv01/app/oracle/product/12.1.0/grid/crs/install  /jv01/app/oracle/product/12.1.0/grid/crs/install/roothas.pl

As the Oracle software owner, run:

srvctl add listener -l LISTENER -s -p 1521
srvctl start listener
srvctl status listener
srvctl add asm -d '/dev/oracleasm/disks/*'
srvctl start asm
srvctl status asm
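
These srvctl commands assume the grid infrastructure environment is set for the session. A minimal sketch of the environment, using the /jv01 grid home path shown in the roothas.pl commands above:

# Grid infrastructure environment for the oracle user (sketch; path taken from the roothas.pl command above)
export ORACLE_HOME=/jv01/app/oracle/product/12.1.0/grid
export ORACLE_SID=+ASM
export PATH=$ORACLE_HOME/bin:$PATH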

 6.     Discover the ASM disks:

[root@LRESIPO1000DIT ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ARCHIVE_0"
Instantiating disk "DB_0"
[root@LRESIPO1000DIT ~]#

 7.     Find out the device ASM disk mapping:

[root@LRESIPO1000DIT ~]# oracleasm listdisks
ARCHIVE_0
DB_0
[root@LRESIPO1000DIT ~]#

[root@LRESIPO1000DIT ~]# oracleasm querydisk -d  ARCHIVE_0
Disk "ARCHIVE_0" is a valid ASM disk on device [202,49]
[root@LRESIPO1000DIT ~]# oracleasm querydisk -d  DB_0
Disk "DB_0" is a valid ASM disk on device [202,65]
[root@LRESIPO1000DIT ~]# ls -l /dev/* | grep 202, | grep 49
brw-rw---- 1 root disk    202,  49 Jul 11 09:11 /dev/xvdd1
[root@LRESIPO1000DIT ~]# ls -l /dev/* | grep 202, | grep 65
brw-rw---- 1 root disk    202,  65 Jul 11 09:11 /dev/xvde1
[root@LRESIPO1000DIT ~]#

 8.     Find out the disk group name:

[root@LRESIPO1000DIT ~]# strings -a /dev/xvde1|head -4
ORCLDISKDB_0
DB_DG_0000
DB_DG
DB_DG_0000
[root@LRESIPO1000DIT ~]# strings -a /dev/xvdd1 |head -4
ORCLDISKARCHIVE_0
ARCHIVE_DG_0000
ARCHIVE_DG
ARCHIVE_DG_0000
[root@LRESIPO1000DIT ~]#

 9.     Mount the Oracle ASM disk groups:

alter diskgroup DB_DG mount;
alter diskgroup ARCHIVE_DG mount;
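
These statements are executed in the ASM instance. A minimal sketch of the session, reusing the grid environment from step 5:

# Connect to the ASM instance and mount the restored disk groups (sketch)
export ORACLE_SID=+ASM
sqlplus / as sysasm <<EOF
alter diskgroup DB_DG mount;
alter diskgroup ARCHIVE_DG mount;
select name, state from v\$asm_diskgroup;
EOF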

 10.   Open the physical standby database in read write mode:

startup mount
alter database activate standby database;
alter database open;
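
These commands are run from the database home as SYSDBA. A minimal sketch of the session; the ORACLE_SID value comes from the database name used in this article, while the exact database home path is an assumption and should be adjusted to your installation:

# Activate the restored standby and open it read write (sketch; home path assumed, SID from this article)
export ORACLE_HOME=/jv01/app/oracle/product/12.1.0/dbhome_1   # assumed path, adjust to your installation
export ORACLE_SID=dbstby
export PATH=$ORACLE_HOME/bin:$PATH
sqlplus / as sysdba <<EOF
startup mount
alter database activate standby database;
alter database open;
EOF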

 Conclusion

In this article, we have seen all the steps to restore a production Oracle 12c physical standby database to a new AWS instance using AWS volume snapshots. The restore procedure is straightforward and can easily be scripted; in our environments, we scripted all the restore steps in Linux shell to automate the task and make it available at any time. Keep in mind that AWS volume snapshots do not provide incremental backups and do not allow single data file restores; the best practice is to use them alongside Oracle Recovery Manager (RMAN). For more details about backing up an Oracle database in the cloud, check out the article Oracle Database Backup to Amazon Simple Storage Service published on Toad World.