Tuesday, 2 May 2017

Transportable Tablespace

Transporting Tablespaces


You can use the transportable tablespaces feature to move a subset of an Oracle Database and "plug" it in to another Oracle Database, essentially moving tablespaces between the databases. The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the transported tablespaces are not required to be of the same block size as the target database standard block size.

Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are simply copied to the destination location, and you use an import utility to transfer only the metadata of the tablespace objects to the new database.

Starting with Oracle Database 10g, you can transport tablespaces across platforms. This functionality can be used to migrate a database from one platform to another. However, not all platforms are supported. To see which platforms are supported, issue the following query.

SQL> COLUMN PLATFORM_NAME FORMAT A30
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT
----------- ---------------------------------------- --------------
          1 Solaris[tm] OE (32-bit)                  Big
          2 Solaris[tm] OE (64-bit)                  Big
          7 Microsoft Windows IA (32-bit)            Little
         10 Linux IA (32-bit)                        Little
          6 AIX-Based Systems (64-bit)               Big
          3 HP-UX (64-bit)                           Big
          5 HP Tru64 UNIX                            Little
          4 HP-UX IA (64-bit)                        Big
         11 Linux IA (64-bit)                        Little
         15 HP Open VMS                              Little
          8 Microsoft Windows IA (64-bit)            Little

PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT
----------- ---------------------------------------- --------------
          9 IBM zSeries Based Linux                  Big
         13 Linux x86 64-bit                         Little
         16 Apple Mac OS                             Big
         12 Microsoft Windows x86 64-bit             Little
         17 Solaris Operating System (x86)           Little
         18 IBM Power Based Linux                    Big
         19 HP IA Open VMS                           Little
         20 Solaris Operating System (x86-64)        Little
         21 Apple Mac OS (x86-64)                    Little

20 rows selected.

Operating system platforms for transporting tablespaces

If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

Important: Before a tablespace can be transported to a different platform, the datafile header must identify the platform to which it belongs. In an Oracle Database with compatibility set to 10.0.0 or higher, you can accomplish this by making the datafile read/write at least once.

SQL> alter tablespace test read only;

Then,

SQL> alter tablespace test read write;

Procedure for transporting tablespaces

To move or copy a set of tablespaces, perform the following steps.

For cross-platform transport, check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view.
If you are transporting the tablespace set to a platform different from the source platform, then determine whether the source and target platforms are supported and check their endian formats. If both platforms have the same endian format, no conversion is necessary. Otherwise you must convert the tablespace set either at the source or the target database.

Ignore this step if you are transporting your tablespace set to the same platform.

Pick a self-contained set of tablespaces.
Generate a transportable tablespace set.
A transportable tablespace set consists of datafiles for the set of tablespaces being transported and an export file containing structural information for the set of tablespaces.

If you are transporting the tablespace set to a platform with different endianness from the source platform, you must convert the tablespace set to the endianness of the target platform. You can perform a source-side conversion at this step in the procedure, or you can perform a target-side conversion as part of step 4.

Transport the tablespace set.
Copy the datafiles and the export file to the target database. You can do this using any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs).

If you have transported the tablespace set to a platform with different endianness from the source platform, and you have not performed a source-side conversion to the endianness of the target platform, you should perform a target-side conversion now.

Plug in the tablespace.
Invoke the Import utility to plug the set of tablespaces into the target database.

Transporting Tablespace Example

These steps are illustrated more fully in the example that follows, where it is assumed the following datafiles and tablespaces exist:

Tablespace    Datafile
test_1        /u01/oracle/oradata/testdb/test_101.dbf
test_2        /u01/oracle/oradata/testdb/test_201.dbf


Step 1: Determine if Platforms are Supported and Endianness

This step is only necessary if you are transporting the tablespace set to a platform different from the source platform. If test_1 and test_2 are being transported to a different platform, you can execute the following query on each platform to determine whether the platforms are supported and what their endian formats are:


SELECT d.PLATFORM_NAME, ENDIAN_FORMAT 
    FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
       WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
The following is the query result from the source platform:

PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Solaris[tm] OE (32-bit)   Big

The following is the result from the target platform:

PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Microsoft Windows NT      Little

You can see that the endian formats are different and thus a conversion is necessary for transporting the tablespace set.

Step 2: Pick a Self-Contained Set of Tablespaces

There may be logical or physical dependencies between objects in the transportable set and objects outside of the set. You can only transport a set of tablespaces that is self-contained: it must not contain tables with foreign keys that reference primary keys of tables in other tablespaces, and it must not contain tables with some partitions in other tablespaces. To find out whether a tablespace set is self-contained, do the following:

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('test_1,test_2', TRUE);

After executing the above, issue the following query to see whether any violations exist.

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

VIOLATIONS
---------------------------------------------------------------------------
no rows selected
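DBMS_TTS.TRANSPORT_SET_CHECK also accepts an optional third argument that requests a fuller two-way dependency check. A minimal sketch (TRUE, TRUE asks it to include referential integrity constraints and to perform the full check):

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('test_1,test_2', TRUE, TRUE);
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;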

Step 3: Generate a Transportable Tablespace Set

After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by performing the following actions:

Make all tablespaces in the set you are copying read-only.

SQL> ALTER TABLESPACE test_1 READ ONLY;

Tablespace altered.

SQL> ALTER TABLESPACE test_2 READ ONLY;

Tablespace altered.

Invoke the Data Pump export utility on the host system and specify which tablespaces are in the transportable set.

SQL> HOST

$ expdp system/password DUMPFILE=expdat.dmp DIRECTORY=data_pump_dir
TRANSPORT_TABLESPACES=test_1,test_2

If test_1 and test_2 are being transported to a different platform, and the endianness of the platforms is different, and if you want to convert before transporting the tablespace set, then convert the datafiles composing the test_1 and test_2 tablespaces. You must use the RMAN utility to convert the datafiles.

$ RMAN TARGET /

Recovery Manager: Release 10.1.0.0.0 
Copyright (c) 1995, 2003, Oracle Corporation.  All rights reserved.

connected to target database: testdb (DBID=3295731590)

Convert the datafiles into a temporary location on the source platform. In this example, assume that the temporary location, directory /temp, has already been created. The converted datafiles are assigned names by the system.

RMAN> CONVERT TABLESPACE test_1,test_2  
       TO PLATFORM 'Microsoft Windows NT' FORMAT '/temp/%U';

Starting backup at 08-APR-03
using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=/u01/oracle/oradata/testdb/test_101.dbf
converted datafile=/temp/data_D-TESTDB_I-3295731590_TS-TEST_1_FNO-5_05ek24v5
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00004 name=/u01/oracle/oradata/testdb/test_201.dbf
converted datafile=/temp/data_D-TESTDB_I-3295731590_TS-TEST_2_FNO-4_06ek24vl

channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45

Finished backup at 08-APR-03

Step 4: Transport the Tablespace Set

Transport both the datafiles and the export file of the tablespaces to a place accessible to the target database. You can use any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs).
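If you deferred the endianness conversion to the target side, RMAN on the target platform can convert the copied datafiles before they are plugged in. A minimal sketch, assuming the copied files sit in /testdb and the converted copies should land in the database's datafile directory (both paths are illustrative):

RMAN> CONVERT DATAFILE '/testdb/test_101.dbf', '/testdb/test_201.dbf'
      FROM PLATFORM 'Solaris[tm] OE (32-bit)'
      DB_FILE_NAME_CONVERT '/testdb/','/u01/oracle/oradata/orcl/';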

Step 5: Plug In the Tablespace Set

Plug in the tablespaces and integrate the structural information using the Data Pump import utility, impdp:

$ impdp system/password DUMPFILE=expdat.dmp DIRECTORY=data_pump_dir
  TRANSPORT_DATAFILES=/testdb/test_101.dbf,/testdb/test_201.dbf
  REMAP_SCHEMA=smith:sami REMAP_SCHEMA=williams:john

The REMAP_SCHEMA parameter changes the ownership of database objects. If you do not specify REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the target database. If they do not exist, then the import utility returns an error. In this example, objects in the tablespace set owned by smith in the source database will be owned by sami in the target database after the tablespace set is plugged in. Similarly, objects owned by williams in the source database will be owned by john in the target database. In this case, the target database is not required to have users smith and williams, but must have users sami and john.

After the import completes successfully, all tablespaces in the set being copied remain in read-only mode. Check the import log to ensure that no errors occurred.

Now, put the tablespaces into read/write mode as follows:

ALTER TABLESPACE test_1 READ WRITE;

ALTER TABLESPACE test_2 READ WRITE;

Friday, 28 April 2017

Creating and Managing Pluggable Databases

Creating and Managing Pluggable Databases using SQL Statements

We can create pluggable databases by any one of the following methods
1. Creating Pluggable Database from Seed database
2. Cloning an Existing Pluggable Database
3. Unplugging and Plugging a database from one CDB to another CDB

Creating Pluggable Database from Seed.


Let’s create a pluggable database pdb1. We will place the datafiles in the /u02/oracle/cdb1/pdb1 directory

Create the directory
$mkdir /u02/oracle/cdb1/pdb1

$sqlplus
Enter User:/ as sysdba
SQL>
Give the following command
SQL>CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1adm IDENTIFIED BY tiger
STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M)
DEFAULT TABLESPACE users
DATAFILE '/u02/oracle/cdb1/pdb1/users01.dbf'
SIZE 250M AUTOEXTEND ON
FILE_NAME_CONVERT=('/u02/oracle/cdb1/pdbseed/','/u02/oracle/cdb1/pdb1/');
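
The newly created pluggable database is left in MOUNTED mode, so open it before use and verify with show pdbs:

SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN;
SQL> show pdbs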

Cloning an Existing Pluggable Database


Let us clone the local pluggable database pdb1 to pdb3.

First, create the directory to hold the pdb3 datafiles
$mkdir /u02/oracle/cdb1/pdb3
Start SQL*Plus and connect to the root container
$ sqlplus
Enter Username: / as sysdba
SQL>
First we need to close the source pluggable database pdb1 and open it in read-only mode.
SQL> alter pluggable database  pdb1 close;
Pluggable database altered.
SQL> alter pluggable database  pdb1 open read only;
Pluggable database altered.
Now give the following command
SQL>CREATE PLUGGABLE DATABASE  pdb3 FROM  pdb1
FILE_NAME_CONVERT = ('/u02/oracle/cdb1/pdb1/',
'/u02/oracle/cdb1/pdb3/')
STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M);

Pluggable database created.
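
Once the clone is created, the source PDB can be returned to read/write mode and the new clone opened:

SQL> alter pluggable database pdb1 close;
SQL> alter pluggable database pdb1 open;
SQL> alter pluggable database pdb3 open;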

Unplugging and Plugging a database from one CDB to another CDB


Let us see an example of unplugging a database from one CDB and plugging it into another CDB.

Let us assume we have a pluggable database pdb3 in the cdb1 database; its files are located in the ‘/u02/oracle/cdb1/pdb3’ directory.

Now we want to unplug this database from the cdb1 database and plug it into another CDB whose SID is ‘orcl’.
We want to move the files from ‘/u02/oracle/cdb1/pdb3’ to ‘/u01/oracle/pdb_moved’ directory.
Step 1:- Connect to the cdb1 database
$ export ORACLE_SID=cdb1
$ sqlplus
Enter Username: / as sysdba
SQL>
Step 2:- Close the pluggable database pdb3
SQL> alter pluggable database  pdb3 close;

Pluggable database altered.
Step 3:- Unplug the database by specifying an XML file. This XML file will be used to plug the database into the target CDB
SQL> alter pluggable database  pdb3 unplug into '/u02/oracle/pdb3.xml';

Pluggable database altered.
Step 4:- Create target directory
$mkdir /u01/oracle/pdb_moved
Step 5:- Connect to Target CDB ‘orcl’
$ export ORACLE_SID=orcl
$ sqlplus
Enter User: / as sysdba
Step 6:- Start the target CDB
SQL> startup
Step 7:- Give the following command to plug in the database
 CREATE PLUGGABLE DATABASE pdb_moved
USING '/u02/oracle/pdb3.xml' MOVE
FILE_NAME_CONVERT = ('/u02/oracle/cdb1/pdb3/',
'/u01/oracle/pdb_moved/')
STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M);

Pluggable database created.

SQL> show pdbs

CON_ID CON_NAME                       OPEN MODE RESTRICTED
------ ------------------------------ ---------- ----------
2      PDB$SEED                        READ ONLY NO
3      MYSALESPDB                      MOUNTED
4      PDB_MOVED                       MOUNTED
Step 8:- Open the pluggable database pdb_moved
SQL> alter pluggable database pdb_moved open;

Pluggable database altered.
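
Since the datafiles were moved, the unplugged pdb3 still appears in the source CDB's metadata and should be dropped there. A short cleanup sketch, run while connected to cdb1:

SQL> drop pluggable database pdb3 keep datafiles;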

Wednesday, 26 April 2017


11gR1 RAC Administration (OCR,Votedisks)


Checking CRS Status:


The two commands below are generally used to check the status of CRS. The first command lists the status of CRS on the local node, whereas the second shows the CRS status across all the nodes in the cluster.

crsctl check crs <<-- for the local node

crsctl check cluster <<-- for remote nodes in the cluster



[root@node1-pub ~]# crsctl check crs

Cluster Synchronization Services appears healthy

Cluster Ready Services appears healthy

Event Manager appears healthy

[root@node1-pub ~]#



For the command below to run, CSS must be running on the local node. The "ONLINE" status for a remote node indicates that CSS is running on that node; when CSS is down on a remote node, "OFFLINE" is displayed for that node.

[root@node1-pub ~]# crsctl check cluster

node1-pub    ONLINE

node2-pub    ONLINE

Viewing Cluster name:


The cluster name can be retrieved from an OCR dump.

ocrdump -stdout -keyname SYSTEM | grep -A 1 clustername | grep ORATEXT | awk '{print $3}'


OR


ocrconfig -export /tmp/ocr_exp.dat -s online

for i in `strings /tmp/ocr_exp.dat | grep -A 1 clustername` ; do if [ $i != 'SYSTEM.css.clustername' ]; then echo $i; fi; done

OR

Oracle also creates a directory with the same name as the cluster under $ORA_CRS_HOME/cdata.
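
If your release ships the cemutlo utility (under $ORA_CRS_HOME/bin), it can also print the cluster name directly:

$ cemutlo -n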

No. Of Nodes configured in Cluster:

The command below can be used to find out the number of nodes registered in the cluster. It also displays each node's public name, private name and virtual (VIP) name along with its node number.


olsnodes -n -p -i


[root@node1-pub ~]# olsnodes -n -p -i

node1-pub       1       node1-prv       node1-vip

node2-pub       2       node2-prv       node2-vip




Viewing Votedisk Information:

The command below displays the number of voting disks configured in the cluster, along with their paths.


crsctl query css votedisk


Viewing OCR Information:


The ocrcheck command displays the number of OCR files configured in the cluster. It is primarily used to check the integrity of the OCR files. It also displays the OCR version as well as storage space information. You can have at most two OCR files (the OCR and its mirror).

[root@node1-pub ~]# ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          2

         Total space (kbytes)     :     262120

         Used space (kbytes)      :       3848

         Available space (kbytes) :     258272

         ID                       :  744414276

         Device/File Name         : /u02/ocfs2/ocr/OCRfile_0

                                    Device/File integrity check succeeded

         Device/File Name         : /u02/ocfs2/ocr/OCRfile_1

                                    Device/File integrity check succeeded



         Cluster registry integrity check succeeded


Various Timeout Settings in Cluster:


Disktimeout: disk latency in seconds from node to votedisk. Default value is 200 seconds. (Disk I/O)

Misscount:     network latency in seconds from node to node (interconnect). Default value is 60 seconds on Linux and 30 seconds on Unix platforms. (Network I/O) Misscount < Disktimeout


IF

  (Disk IO Time > Disktimeout) OR (Network IO time > Misscount)

THEN

   REBOOT NODE

ELSE

   DO NOT REBOOT

END IF


crsctl get css disktimeout

crsctl get css misscount

crsctl get css  reboottime



[root@node1-pub ~]# crsctl get css disktimeout

200

[root@node1-pub ~]# crsctl get css misscount

Configuration parameter misscount is not defined.

 


The above message indicates that Misscount has not been set manually, so it takes its default value, which is 60 seconds on Linux. It can be changed as below.



[root@node1-pub ~]# crsctl set css misscount 100

Configuration parameter misscount is now set to 100.

[root@node1-pub ~]# crsctl get css misscount

100



The below command sets the value of misscount back to its default value.


crsctl unset css misscount

[root@node1-pub ~]# crsctl unset css misscount

[root@node1-pub ~]# crsctl get css  reboottime

 

Add/Remove OCR file in Cluster:

Removing OCR File

 

(1) Get the existing OCR file information by running the ocrcheck utility.


[root@node1-pub ~]# ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          2

         Total space (kbytes)     :     262120

         Used space (kbytes)      :       3852

         Available space (kbytes) :     258268

         ID                       :  744414276

         Device/File Name         : /u02/ocfs2/ocr/OCRfile_0 <-- OCR

                                    Device/File integrity check succeeded

         Device/File Name         : /u02/ocfs2/ocr/OCRfile_1 <-- OCR Mirror

                                    Device/File integrity check succeeded



         Cluster registry integrity check succeeded


(2) The first command removes the OCR mirror (/u02/ocfs2/ocr/OCRfile_1). If you want to remove the OCR file itself (/u02/ocfs2/ocr/OCRfile_0), run the next command.

ocrconfig -replace ocrmirror

ocrconfig -replace ocr



[root@node1-pub ~]# ocrconfig -replace ocrmirror

[root@node1-pub ~]# ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          2

         Total space (kbytes)     :     262120

         Used space (kbytes)      :       3852

         Available space (kbytes) :     258268

         ID                       :  744414276

         Device/File Name         : /u02/ocfs2/ocr/OCRfile_0 <<-- OCR File

                                    Device/File integrity check succeeded



                                    Device/File not configured  <-- OCR mirror no longer exists



         Cluster registry integrity check succeeded


Adding OCR



You need to add an OCR or OCR mirror file in cases where you want to move the existing OCR to a different device. The command below adds the OCR mirror file if an OCR file already exists.


(1) Get the Current status of OCR:


[root@node1-pub ~]# ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          2

         Total space (kbytes)     :     262120

         Used space (kbytes)      :       3852

         Available space (kbytes) :     258268

         ID                       :  744414276

         Device/File Name         : /u02/ocfs2/ocr/OCRfile_0 <<-- OCR File

                                    Device/File integrity check succeeded



                                    Device/File not configured  <-- OCR Mirror does not exist



         Cluster registry integrity check succeeded

 


As can be seen, there is only one OCR file and no second file (OCR mirror). The command below adds the second OCR file.


ocrconfig -replace ocrmirror <File name>


[root@node1-pub ~]# ocrconfig -replace ocrmirror /u02/ocfs2/ocr/OCRfile_1

[root@node1-pub ~]# ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          2

         Total space (kbytes)     :     262120

         Used space (kbytes)      :       3852

         Available space (kbytes) :     258268

         ID                       :  744414276

         Device/File Name         : /u02/ocfs2/ocr/OCRfile_0

                                    Device/File integrity check succeeded

         Device/File Name         : /u02/ocfs2/ocr/OCRfile_1

                                    Device/File integrity check succeeded



         Cluster registry integrity check succeeded



You can have at most two OCR devices (the OCR itself and a single mirror) in a cluster. Adding an extra mirror gives the error message below.


[root@node1-pub ~]# ocrconfig -replace ocrmirror /u02/ocfs2/ocr/OCRfile_2

PROT-21: Invalid parameter

[root@node1-pub ~]#


Add/Remove Votedisk file in Cluster:

Adding Votedisk:

Get the existing voting disks associated with the cluster. To be safe, bring the CRS stack down on all nodes except the one from which you are going to add the voting disk.


(1)    Stop CRS on all the nodes in the cluster but one.

[root@node2-pub ~]# crsctl stop crs

(2)    Get the list of Existing Vote Disks


crsctl query css votedisk


[root@node1-pub ~]# crsctl query css votedisk

 0.     0    /u02/ocfs2/vote/VDFile_0

 1.     0    /u02/ocfs2/vote/VDFile_1

 2.     0    /u02/ocfs2/vote/VDFile_2

Located 3 voting disk(s).


(3)    Backup the Votedisk file



Back up the existing votedisks as below, as the oracle user:


dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/vd/VDFile_0


[root@node1-pub ~]# su - oracle

[oracle@node1-pub ~]$ dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/vd/VDFile_0

41024+0 records in

41024+0 records out

[oracle@node1-pub ~]$



(4)     Add an Extra Votedisk into the Cluster:



  If the voting disk is on OCFS, then touch the file as the oracle user. On raw devices, initialize the raw device using the "dd" command.



touch /u02/ocfs2/vote/VDFile_3 <<-- as oracle

crsctl add css votedisk /u02/ocfs2/vote/VDFile_3 <<-- as oracle

crsctl query css votedisk



[root@node1-pub ~]# su - oracle

[oracle@node1-pub ~]$ touch /u02/ocfs2/vote/VDFile_3

[oracle@node1-pub ~]$ crsctl add css votedisk /u02/ocfs2/vote/VDFile_3

Now formatting voting disk: /u02/ocfs2/vote/VDFile_3.

Successful addition of voting disk /u02/ocfs2/vote/VDFile_3.



(5)     Confirm that the file has been added successfully:



[root@node1-pub ~]# ls -l /u02/ocfs2/vote/VDFile_3

-rw-r-----  1 oracle oinstall 21004288 Oct  6 16:31 /u02/ocfs2/vote/VDFile_3

[root@node1-pub ~]# crsctl query css votedisk

 0.     0    /u02/ocfs2/vote/VDFile_0

 1.     0    /u02/ocfs2/vote/VDFile_1

 2.     0    /u02/ocfs2/vote/VDFile_2

 3.     0    /u02/ocfs2/vote/VDFile_3

Located 4 voting disk(s).

Removing Votedisk:



Removing a votedisk from the cluster is very simple. The command below removes the given votedisk from the cluster configuration.


crsctl delete css votedisk /u02/ocfs2/vote/VDFile_3


[root@node1-pub ~]# crsctl delete css votedisk /u02/ocfs2/vote/VDFile_3

Successful deletion of voting disk /u02/ocfs2/vote/VDFile_3.

[root@node1-pub ~]#


[root@node1-pub ~]# crsctl query css votedisk

 0.     0    /u02/ocfs2/vote/VDFile_0

 1.     0    /u02/ocfs2/vote/VDFile_1

 2.     0    /u02/ocfs2/vote/VDFile_2

Located 3 voting disk(s).

[root@node1-pub ~]#


Backing up OCR:



Oracle performs a physical backup of the OCR every 4 hours under the default backup directory $ORA_CRS_HOME/cdata/<CLUSTER_NAME> and then rolls those forward to daily and weekly backups. You can get the backup information by executing the command below.


ocrconfig -showbackup


[root@node1-pub ~]# ocrconfig -showbackup

node2-pub     2007/09/03 17:46:47     /u03/app/crs/cdata/test-crs/backup00.ocr

node2-pub     2007/09/03 13:46:45     /u03/app/crs/cdata/test-crs/backup01.ocr

node2-pub     2007/09/03 09:46:44     /u03/app/crs/cdata/test-crs/backup02.ocr

node2-pub     2007/09/03 01:46:39     /u03/app/crs/cdata/test-crs/day.ocr

node2-pub     2007/09/03 01:46:39     /u03/app/crs/cdata/test-crs/week.ocr

[root@node1-pub ~]#



Manually backing up the OCR



ocrconfig -manualbackup <<--Physical Backup of OCR




The above command backs up the OCR under the default backup directory. You can export the contents of the OCR using the command below (a logical backup).


ocrconfig -export /tmp/ocr_exp.dat -s online <<-- Logical Backup of OCR

Restoring OCR:


The command below is used to restore the OCR from a physical backup. Shut down CRS on all nodes before restoring.

ocrconfig -restore <file name>


Locate the available Backups

[root@node1-pub ~]# ocrconfig -showbackup

node2-pub     2007/09/03 17:46:47     /u03/app/crs/cdata/test-crs/backup00.ocr

node2-pub     2007/09/03 13:46:45     /u03/app/crs/cdata/test-crs/backup01.ocr

node2-pub     2007/09/03 09:46:44     /u03/app/crs/cdata/test-crs/backup02.ocr

node2-pub     2007/09/03 01:46:39     /u03/app/crs/cdata/test-crs/day.ocr

node2-pub     2007/09/03 01:46:39     /u03/app/crs/cdata/test-crs/week.ocr

node1-pub     2007/10/07 13:50:41     /u03/app/crs/cdata/test-crs/backup_20071007_135041.ocr



Perform Restore from previous Backup



[root@node2-pub ~]# ocrconfig -restore /u03/app/crs/cdata/test-crs/week.ocr



The logical backup of the OCR (taken using the export option) can be imported using the command below.


ocrconfig -import /tmp/ocr_exp.dat
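
As with a physical restore, the import should be performed while the cluster stack is down. A hedged outline of the sequence:

crsctl stop crs                      <<-- on all nodes
ocrconfig -import /tmp/ocr_exp.dat
crsctl start crs                     <<-- on all nodes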

Restoring Votedisks

·         Shutdown CRS on all the nodes in Cluster.

·         Locate the current location of the Votedisks

·         Restore each of the votedisks using "dd" command from the previous good backup of Votedisk taken using the same "dd" command.

·         Start CRS on all the nodes.


crsctl stop crs

crsctl query css votedisk

dd if=<backup of Votedisk> of=<Votedisk file> <<-- do this for all the votedisks

crsctl start crs



Friday, 21 April 2017

Oracle GoldenGate Technology and Architecture

1. GoldenGate Architecture

Oracle GoldenGate can be configured for the following purposes

1.  A static extraction of data records from one database and the loading of those
records to another database.

2.  Continuous extraction and replication of transactional DML operations and
DDL changes (for supported databases) to keep source and target data
consistent.

3. Extraction from a database and replication to a file outside the database.




 
Oracle GoldenGate is composed of the following components:
  •   Extract
  •   Data pump
  •   Replicat
  •   Trails or extract files
  •   Checkpoints
  •   Manager
  •   Collector


2. Overview of Components

 Overview of Extract

The Extract process runs on the source system and is the extraction (capture)
mechanism of Oracle GoldenGate. You can configure Extract in one of the following ways:

 Initial loads:

For initial data loads, Extract extracts (captures) a current, static set of data directly from the source objects.

 Change synchronization:

To keep source data synchronized with another set of data, Extract captures DML and DDL operations after the initial synchronization has taken place.
    Extract captures from a data source that can be one of the following:
1. Source tables, if the run is an initial load. 
2. The database recovery logs or transaction logs (such as the Oracle redo logs or SQL/MX audit trails). The actual method of capturing from the logs varies depending on the database type. 
3. A third-party capture module. This method provides a communication layer that passes data and metadata from an external API to the Extract API. The database vendor or a third-party vendor provides the components that extract the data operations and pass them to Extract.
                      
When configured for change synchronization, Extract captures the DML and DDL operations that are performed on objects in the Extract configuration. Extract stores these operations until it receives commit records or rollbacks for the transactions that  contain them. When a rollback is received, Extract discards the operations for that transaction. When a commit is received, Extract persists the transaction to disk in a series of files called a trail, where it is queued for propagation to the target system.
 All of the operations in each transaction are written to the trail as a sequentially organized transaction unit. This design ensures both speed and data integrity.

Multiple Extract processes can operate on different objects at the same time. For example, two Extract processes can extract and transmit in parallel to two Replicat processes (with two persistence trails) to minimize target latency when the databases are large. To differentiate among different Extract processes, you assign each one a group name.
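
For illustration, an Extract group and the local trail it writes to are typically created in GGSCI; a minimal sketch in which the group name ext1 and the trail path are assumptions:

GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL /ggs/dirdat/aa, EXTRACT ext1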

 Overview of data pumps

A data pump is a secondary Extract group within the source Oracle GoldenGate configuration. If a data pump is not used, Extract must send the captured data operations to a remote trail on the target. In a typical configuration with a data pump, however, the primary Extract group writes to a trail on the source system.

The data pump reads this trail and sends the data operations over the network to a remote trail on the target. The data pump adds storage flexibility and also serves to isolate the primary Extract process from TCP/IP activity.

In general, a data pump can perform data filtering, mapping, and conversion, or it can be configured in pass-through mode, where data is passively transferred as-is, without manipulation. Pass-through mode increases the throughput of the data pump, because all of the functionality that looks up object definitions is bypassed. In most business cases, you should use a data pump.

 Some reasons for using a data pump include the following:

1. Protection against network and target failures:
In a basic Oracle GoldenGate configuration, with only a trail on the target system, there is nowhere on the source system to store the data operations that Extract continuously extracts into memory. If the network or the target system becomes unavailable, Extract could run out of memory and abend. However, with a trail and data pump on the source system, captured data can be moved to disk, preventing the abend of the primary Extract. When connectivity is restored, the data pump captures the data from the source trail and sends it to the target system(s).

2. You are implementing several phases of data filtering or transformation.
When using complex filtering or data transformation configurations, you can configure a data pump to perform the first transformation either on the source system or on the target system, or even on an intermediary system, and then use another data pump or the Replicat group to perform the second transformation.

3. Consolidating data from many sources to a central target.
When synchronizing multiple source databases with a central target
database, you can store extracted data operations on each source
system and use data pumps on each of those systems to send the data
to a trail on the target system. Dividing the storage load between the
source and target systems reduces the need for massive amounts of
space on the target system to accommodate data arriving from multiple
sources.

4. Synchronizing one source with multiple targets. When sending data to
multiple target systems, you can configure data pumps on the source
system for each target. If network connectivity to any of the targets
fails, data can still be sent to the other targets.
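
As an illustration of pass-through mode, a data pump's parameter file might look like the following sketch; the group name, remote host, and trail paths are assumptions:

EXTRACT pump1
PASSTHRU
RMTHOST target01, MGRPORT 7809
RMTTRAIL /ggs/dirdat/ra
TABLE hr.*;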

 Overview of Replicat

The Replicat process runs on the target system, reads the trail on that system, and then reconstructs the DML or DDL operations and applies them to the target database. You can configure Replicat in one of the following ways:

Initial loads:

For initial data loads, Replicat can apply a static data copy to target objects or route it to a high-speed bulk-load utility.

Change synchronization:

When configured for change synchronization, Replicat applies the replicated source operations to the target objects using a native database interface or ODBC, depending on the database type.

To preserve data integrity, Replicat applies the replicated operations in the same order as they were committed to the source database.

You can use multiple Replicat processes with multiple Extract processes in parallel to increase throughput. To preserve data integrity, each set of processes handles a different set of objects. To differentiate among Replicat processes, you assign each one a group name.

You can delay Replicat so that it waits a specific amount of time before applying the replicated operations to the target database. A delay may be desirable, for example, to prevent the propagation of errant SQL, to control data arrival across different time zones, or to allow time for other planned events to occur. The length of the delay is controlled by the DEFERAPPLYINTERVAL parameter.
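
For example, a Replicat parameter file that waits ten minutes before applying changes might look like this sketch (the group name, credentials, and mappings are assumptions):

REPLICAT rep1
USERID ggadmin, PASSWORD ggadmin
DEFERAPPLYINTERVAL 10 MINS
ASSUMETARGETDEFS
MAP hr.*, TARGET hr.*;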

 Overview of trails

To support the continuous extraction and replication of database changes, Oracle GoldenGate stores records of the captured changes temporarily on disk in a series of files called a trail.

A trail can exist on the source system, an intermediary system, the target system, or any combination of those systems, depending on how you configure Oracle GoldenGate.

On the local system it is known as an extract trail (or local trail). On a remote system it is known as a remote trail. By using a trail for storage, Oracle GoldenGate supports data accuracy and fault tolerance. The use of a trail also allows extraction and replication activities to occur independently of each other. With these processes separated, you have more choices for how data is processed and delivered. For example, instead of extracting and replicating changes continuously, you could extract changes continuously but store them in the trail for replication to the target later, whenever the target application needs them.

 Processes that write to, and read, a trail

The primary Extract and the data-pump Extract write to a trail. Only one Extract process can write to a trail, and each Extract must be linked to a trail.

Processes that read the trail are:

1. Data-pump Extract: Extracts DML and DDL operations from a local trail that is linked to a previous Extract (typically the primary Extract), performs further processing if needed, and transfers the data to a trail that is read by the next Oracle GoldenGate process downstream (typically Replicat, but could be another data pump if required)

2. Replicat: Reads the trail and applies replicated DML and DDL operations
 to the target database.

Trail creation and maintenance :

The trail files themselves are created as needed during processing, but you specify a two-character name for the trail when you add it to the Oracle GoldenGate configuration with the ADD RMTTRAIL or ADD EXTTRAIL command. By default, trails are stored in the dirdat sub-directory of the Oracle GoldenGate directory.

Full trail files are aged automatically to allow processing to continue without interruption for file maintenance. As each new file is created, it inherits the two-character trail name appended with a unique, six-digit sequence number from 000000 through 999999 (for example c:\ggs\dirdat\tr000001).

When the sequence number reaches 999999, the numbering starts over at 000000. You can create more than one trail to separate the data from different objects or applications. You link the objects that are specified in a TABLE or SEQUENCE parameter to a trail that is specified with an EXTTRAIL or RMTTRAIL parameter in the Extract parameter file. Aged trail files can be purged by using the Manager parameter
PURGEOLDEXTRACTS.
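
For instance, purging is configured in the Manager parameter file (mgr.prm); a minimal sketch, with the trail path and retention period as assumptions:

PORT 7809
PURGEOLDEXTRACTS /ggs/dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3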

To maximize throughput, and to minimize I/O load on the system, extracted data is sent into and out of a trail in large blocks. Transactional order is preserved. By default, Oracle GoldenGate writes data to the trail in canonical format, a proprietary format which allows it to be exchanged rapidly and accurately among heterogeneous databases.
However, data can be written in other formats that are compatible with different
applications.


 Overview of extract files

In some configurations, Oracle GoldenGate stores extracted data in an extract file instead of a trail. The extract file can be a single file, or it can be configured to roll over into multiple files in anticipation of limitations on file size that are imposed by the operating system. In this sense, it is similar to a trail, except that checkpoints are not recorded. The file or files are created automatically during the run. The same versioning features that apply to trails also apply to extract files.

 Overview of checkpoints

Checkpoints store the current read and write positions of a process to disk for recovery purposes. Checkpoints ensure that data changes that are marked for synchronization actually are captured by Extract and applied to the target by Replicat, and they prevent redundant processing.

They provide fault tolerance by preventing the loss of data should the system, the network, or an Oracle GoldenGate process need to be restarted. For complex synchronization configurations, checkpoints enable multiple Extract or Replicat processes to read from the same set of trails.

Checkpoints work with inter-process acknowledgments to prevent messages from being lost in the network. Oracle GoldenGate has a proprietary guaranteed-message delivery technology.

Extract creates checkpoints for its positions in the data source and in the trail. Because Extract only captures committed transactions, it must keep track of operations in all open transactions, in the event that any of them are committed. This requires Extract to record a checkpoint where it is currently reading in a transaction log, plus the position of the start of the oldest open transaction, which can be in the current or any preceding log.

To control the amount of transaction log that must be re-processed after an outage, Extract persists the current state and data of processing to disk at specific intervals, including the state and data (if any) of long-running transactions. If Extract stops after one of these intervals, it can recover from a position within the previous interval or at the last checkpoint, instead of having to return to the log position where the oldest open long-running transaction first appeared.

Replicat creates checkpoints for its position in the trail. Replicat stores its checkpoints in a checkpoint table in the target database to couple the commit of its transaction with its position in the trail file. The checkpoint table guarantees consistency after a database recovery by ensuring that a transaction will only be applied once, even if there is a failure of the Replicat process or the database process. For reporting purposes, Replicat also has a checkpoint file on disk in the dirchk sub-directory of the Oracle GoldenGate directory. Checkpoints are not required for non-continuous types of configurations that can be re-run from a start point if needed, such as initial loads.
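
The checkpoint table itself is created from GGSCI before the Replicat group is added; a minimal sketch in which the ggadmin schema and table name are assumptions:

GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
GGSCI> ADD CHECKPOINTTABLE ggadmin.ggs_checkpoint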

 Overview of Manager

Manager is the control process of Oracle GoldenGate. Manager must be running on each system in the Oracle GoldenGate configuration before Extract or Replicat can be started, and Manager must remain running while those processes are running so that resource management functions are performed.
Manager performs the following functions:

1. Start Oracle GoldenGate processes
2. Start dynamic processes
3. Maintain port numbers for processes
4. Perform trail management
5. Create event, error, and threshold reports

One Manager process can control many Extract or Replicat processes.

 Overview of Collector

Collector is a process that runs in the background on the target system when
continuous, online change synchronization is active.

Collector does the following:

Upon a connection request from a remote Extract to Manager, Collector scans for and binds to an available port, and then sends the port number to Manager for assignment to the requesting Extract process. Collector also receives the extracted database changes that are sent by Extract and writes them to a trail file. Manager starts Collector automatically when a network connection is required, so Oracle GoldenGate users do not interact with it.

Collector can receive information from only one Extract process, so there is one
Collector for each Extract that you use. Collector terminates when the associated Extract process terminates.

Collector can be run manually, if needed. This is known as a static Collector (as opposed to the regular, dynamic Collector). Several Extract processes can share one static Collector; however, a one-to-one ratio is optimal. A static Collector can be used to ensure that the process runs on a specific port.

By default, Extract initiates TCP/IP connections from the source system to Collector on the target, but Oracle GoldenGate can be configured so that Collector initiates connections from the target. Initiating connections from the target might be required if, for example, the target is in a trusted network zone, but the source is in a less trusted
zone.


 Overview of process types

Depending on the requirement, Oracle GoldenGate can be configured with the
following processing types.

1. An online Extract or Replicat process runs until stopped by a user. Online processes maintain recovery checkpoints in the trail so that processing can resume after interruptions. You use online processes to continuously extract
and replicate DML and DDL operations (where supported) to keep source and target objects synchronized. The EXTRACT and REPLICAT parameters apply to this process type.


2. A source-is-table Extract process extracts a current set of static data directly from the source objects in preparation for an initial load to another database. This process type does not use checkpoints. The SOURCEISTABLE parameter applies to this process type.

3. A special-run Replicat process applies data within known begin and end points. You use a special replicat run for initial data loads, and it also can be used with an online Extract to apply data changes from the trail in batches, such as once a day rather than continuously. This process type does not maintain checkpoints, because the run can be started over with the same begin and end points. The SPECIALRUN parameter applies to this process type.

4. A remote task is a special type of initial-load process in which Extract communicates directly with Replicat over TCP/IP. Neither a Collector process nor temporary disk storage in a trail or file is used. The task is defined in the Extract parameter file with the RMTTASK parameter.

 Overview of groups

To differentiate among multiple Extract or Replicat processes on a system, you define processing groups. For example, to replicate different sets of data in parallel, you would create two Replicat groups.

A processing group consists of a process (either Extract or Replicat), its parameter file, its checkpoint file, and any other files associated with the process. For Replicat, a group also includes the associated checkpoint table.

You define groups by using the ADD EXTRACT and ADD REPLICAT commands in the Oracle GoldenGate command interface, GGSCI.

All files and checkpoints relating to a group share the name that is assigned to the group itself. Any time that you issue a command to control or view processing, you supply a group name or multiple group names by means of a wildcard.
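
For example, a Replicat group can be created against a remote trail and then all groups inspected with a wildcard (the names and trail path are assumptions):

GGSCI> ADD REPLICAT rep1, EXTTRAIL /ggs/dirdat/ra, CHECKPOINTTABLE ggadmin.ggs_checkpoint
GGSCI> INFO EXTRACT *
GGSCI> INFO REPLICAT *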

 Overview of the Commit Sequence Number (CSN)

When working with Oracle GoldenGate, you might need to refer to a Commit Sequence Number, or CSN. A CSN is an identifier that Oracle GoldenGate constructs to identify a transaction for the purpose of maintaining transactional consistency and data integrity. It uniquely identifies a point in time in which a transaction commits to the database. The CSN can be required to position Extract in the transaction log, to reposition Replicat in the trail, or for other purposes. It is returned by some conversion functions and is included in reports and certain GGSCI output.
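
For instance, Replicat can be positioned to start applying at a specific CSN; the group name and CSN value here are purely illustrative:

GGSCI> START REPLICAT rep1, ATCSN 6488359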


 Supported Databases