Transportable Tablespaces
Transportable tablespaces were introduced in Oracle 8i to allow whole tablespaces to be copied between databases in the time it takes to copy the datafiles. In Oracle 8i one of the restrictions was that the block size of both databases had to be the same. The introduction of multiple block sizes in Oracle 9i removed this restriction. In this article I will run through a simple example of transporting a tablespace between two databases.
Related articles.
- Oracle Data Pump in Oracle Database 10g (expdp and impdp)
- Data Pump Enhancements in Oracle Database 11g Release 1
- SQL Developer 3.1 Data Pump Wizards (expdp, impdp)
Setup
For this example I'm going to create a new tablespace, user and table to work with in the source database.
CONN / AS SYSDBA

CREATE TABLESPACE test_data
  DATAFILE '/u01/app/oracle/oradata/DB11G/test_data01.dbf'
  SIZE 1M AUTOEXTEND ON NEXT 1M;

CREATE USER test_user IDENTIFIED BY test_user
  DEFAULT TABLESPACE test_data
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON test_data;

GRANT CREATE SESSION, CREATE TABLE TO test_user;

CONN test_user/test_user

CREATE TABLE test_tab (
  id          NUMBER,
  description VARCHAR2(50),
  CONSTRAINT test_tab_pk PRIMARY KEY (id)
);

INSERT /*+ APPEND */ INTO test_tab (id, description)
SELECT level,
       'Description for ' || level
FROM   dual
CONNECT BY level <= 10000;
COMMIT;
Source Database
For a tablespace to be transportable it must be totally self-contained. This can be checked using the DBMS_TTS.TRANSPORT_SET_CHECK procedure. The TS_LIST parameter accepts a comma separated list of tablespace names and the INCL_CONSTRAINTS parameter indicates if constraints should be included in the check.
CONN / AS SYSDBA

EXEC DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'TEST_DATA', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SQL>
The TRANSPORT_SET_VIOLATIONS view is used to check for any violations.
SELECT * FROM transport_set_violations;

no rows selected

SQL>
Assuming no violations are produced, we are ready to proceed by switching the tablespace to read only mode.
SQL> ALTER TABLESPACE test_data READ ONLY;

Tablespace altered.

SQL>
Next we export the tablespace metadata using the export (expdp or exp) utility. If you are using 10g or above you should use the expdp utility. This requires a directory object pointing to a physical directory with the necessary permissions on the database server.
CONN / AS SYSDBA

CREATE OR REPLACE DIRECTORY temp_dir AS '/tmp/';
GRANT READ, WRITE ON DIRECTORY temp_dir TO system;
We can now export the tablespace metadata.
$ expdp userid=system/password directory=temp_dir transport_tablespaces=test_data dumpfile=test_data.dmp logfile=test_data_exp.log
If you are using a version prior to 10g, you do not need the directory object and your command would look something like this.
$ exp userid='system/password as sysdba' transport_tablespace=y tablespaces=test_data file=test_data.dmp log=test_data_exp.log
Copy the datafile to the appropriate location on the destination database server. Also copy the dump file to a suitable place on the destination database server. You may use binary FTP or SCP to perform this copy.
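For example, the copy with scp might look something like the commands below. The host name "dest-server" and the OS user are assumptions for illustration only; the destination datafile directory matches the transport_datafiles path used later in the import.

$ scp /u01/app/oracle/oradata/DB11G/test_data01.dbf oracle@dest-server:/u01/app/oracle/oradata/DB11GB/
$ scp /tmp/test_data.dmp oracle@dest-server:/tmp/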
The source tablespace can now be switched back to read/write mode.
ALTER TABLESPACE test_data READ WRITE;

Tablespace altered.

SQL>
Destination Database
Create any users in the destination database that owned objects within the tablespace being transported, assuming they do not already exist.
CONN / AS SYSDBA

CREATE USER test_user IDENTIFIED BY test_user;
GRANT CREATE SESSION, CREATE TABLE TO test_user;
Now we import the metadata into the destination database. If you are using 10g or above you should use the impdp utility. This requires a directory object pointing to a physical directory with the necessary permissions on the database server.
CONN / AS SYSDBA

CREATE OR REPLACE DIRECTORY temp_dir AS '/tmp/';
GRANT READ, WRITE ON DIRECTORY temp_dir TO system;
We can now import the tablespace metadata.
$ impdp userid=system/password directory=temp_dir dumpfile=test_data.dmp logfile=test_data_imp.log transport_datafiles='/u01/app/oracle/oradata/DB11GB/test_data01.dbf'
If you are using a version prior to 10g, you do not need the directory object and your command would look something like this.
$ imp userid='system/password as sysdba' transport_tablespace=y datafiles='/u01/app/oracle/oradata/DB11GB/test_data01.dbf' tablespaces=test_data file=test_data.dmp log=test_data_imp.log
Switch the new tablespace into read/write mode.
SQL> ALTER TABLESPACE test_data READ WRITE;

Tablespace altered.

SQL>
The tablespace is now available in the destination database.
SELECT tablespace_name, plugged_in, status
FROM   dba_tablespaces
WHERE  tablespace_name = 'TEST_DATA';

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
TEST_DATA                      YES ONLINE

1 row selected.

SQL>
How to transport a tablespace in Oracle 10g
Posted by Zahid on July 21, 2009.
The tablespace being transported must be self-contained, which means the objects in the tablespace shouldn't have dependencies on objects in other tablespaces.
$ sqlplus / as sysdba
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> create tablespace ts_tts
2 datafile '/d01/apps/oradata/oraxpo/ts_tts01.dbf'
3 size 10m autoextend on next 5m
4 extent management local
5 segment space management auto;
Tablespace created.
SQL> alter user scott quota unlimited on ts_tts;
User altered.
SQL> conn scott/tiger
Connected.
SQL> create table p_t1 (name varchar2(30) primary key)
2 tablespace users;
Table created.
SQL> insert into p_t1 values ('ABC');
1 row created.
SQL> insert into p_t1 values ('XYZ');
1 row created.
SQL> commit;
Commit complete.
SQL> create table t1 (name varchar2(30))
2 tablespace ts_tts;
Table created.
SQL> insert into t1 values ('ABC');
1 row created.
SQL> insert into t1 values ('XYZ');
1 row created.
SQL> commit;
Commit complete.
SQL> alter table t1 add constraint t1_fk
2 foreign key (name) references p_t1 (name);
Table altered.
/*
We have created a parent-child relationship, where the parent table
is in tablespace users and the child table is in tablespace ts_tts.
We want to transport tablespace ts_tts. Let's check whether this
tablespace violates any of the transport rules.
*/
SQL> conn / as sysdba
Connected.
SQL> begin
2 dbms_tts.transport_set_check(ts_list=>'TS_TTS',
3 incl_constraints=>TRUE);
4 end;
5 /
PL/SQL procedure successfully completed.
SQL> select * from transport_set_violations;
VIOLATIONS
-------------------------------------------------------------
Constraint T1_FK between table SCOTT.P_T1 in tablespace USERS
and table SCOTT.T1 in tablespace TS_TTS
/*
dbms_tts.transport_set_check is complaining that the child table
is in ts_tts and the parent is in the users tablespace. Let's go
and fix this.
NOTE: The fix is whatever suits you. All you have to do is remove
these violations.
*/
SQL> conn scott/tiger
Connected.
SQL> alter table p_t1 move tablespace ts_tts;
Table altered.
SQL> conn / as sysdba
Connected.
SQL> begin
2 dbms_tts.transport_set_check(ts_list=>'TS_TTS',
3 incl_constraints=>TRUE);
4 end;
5 /
PL/SQL procedure successfully completed.
SQL> select * from transport_set_violations;
VIOLATIONS
------------------------------------------------------------
Index SCOTT.SYS_C005398 in tablespace USERS enforces primary
constraints of table SCOTT.P_T1 in tablespace TS_TTS
/*
Check again, and this time it says that the index behind the
parent table's primary key is still in the users tablespace.
*/
SQL> conn scott/tiger
Connected.
SQL> alter index SYS_C005398 rebuild tablespace ts_tts;
Index altered.
SQL> conn / as sysdba
Connected.
SQL> begin
2 dbms_tts.transport_set_check(ts_list=>'TS_TTS',
3 incl_constraints=>TRUE);
4 end;
5 /
PL/SQL procedure successfully completed.
SQL> select * from transport_set_violations;
no rows selected
/* This time we are good to go with the transport. */
SQL> select a.platform_name , b.endian_format
2 from v$database a , v$transportable_platform b
3 where a.platform_name = b.platform_name;
PLATFORM_NAME               ENDIAN_FORMAT
--------------------------- --------------
Linux IA (32-bit)           Little
/*
This query tells us the platform our database is running on and
its endian format.
*/
SQL> alter tablespace ts_tts read only;
Tablespace altered.
SQL> exit

Put the tablespace in read only mode and then export the metadata of the tablespace. Not the data, just the metadata.
$ expdp system/system dumpfile=ts_tts_metadata.dmp transport_tablespaces=ts_tts
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g
Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":
system/******** dumpfile=ts_tts_metadata.dmp transport_tablespaces=ts_tts
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully
loaded/unloaded
*********************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/d01/apps/oracle/rdbms/log/ts_tts_metadata.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at
03:39:40
$

Now come over to the target platform and let's run some checks.
Start -> Run -> cmd
C:\>sqlplus / as sysdba
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application
Testing options
SQL> select directory_name , directory_path
2 from dba_directories
3 where directory_name='DATA_PUMP_DIR';
DIRECTORY_NAME DIRECTORY_PATH
---------------- -------------------------------------------
DATA_PUMP_DIR C:\oracle\product\10.2.0\admin\orcl\dpdump\
/*
We will put our dump file in this directory to use it with the
metadata import.
*/
SQL> select name from v$datafile;
NAME
------------------------------------------------------------
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOTBS01.DBF
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSAUX01.DBF
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\EXAMPLE01.DBF
5 rows selected.
/* Existing files on the target database . */
SQL> select a.platform_name , b.endian_format
2 from v$database a , v$transportable_platform b
3 where a.platform_name = b.platform_name;
PLATFORM_NAME ENDIAN_FORMAT
------------------------------------ --------------
Microsoft Windows x86 64-bit Little
/*
The platform and the endian format of the target database.
NOTE: If the endian format of the target and source database are
different then the source file needs to be converted to the
target platform format before transporting.
*/
SQL> exit

In our case the endian format of the source and target is the same, so we can simply copy the data files for the tablespace from source to target and import the metadata using Data Pump. I will give it a go some other time with a scenario where the target and source endian formats are different.
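If the endian formats had differed, the datafile would need to be converted with RMAN before being copied. A minimal sketch, run on the source while the tablespace is still read only, might look like the following; the target platform name used here ('Solaris[tm] OE (64-bit)') is only an example, so substitute the exact string reported by V$TRANSPORTABLE_PLATFORM for your target.

$ rman target /

RMAN> # Write a converted copy of the ts_tts datafile(s) to /tmp
RMAN> CONVERT TABLESPACE ts_tts
      TO PLATFORM 'Solaris[tm] OE (64-bit)'
      FORMAT '/tmp/%U';

The converted copy written to /tmp is then what gets shipped to the target, instead of the original datafile.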
Copy the data files and the dump file ts_tts_metadata.dmp to the target system and then put the tablespace back into read write mode.
$ sqlplus / as sysdba
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> ALTER TABLESPACE ts_tts READ WRITE;
Tablespace altered.
SQL>

Now come to the target platform and start Data Pump to import the tablespace metadata into the target database.
Before the import, make sure that the user who owned the objects in the tablespace in the source database also exists in the target. If that is not acceptable, you will need to remap the schema of the objects to an existing user schema during the import.
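As a quick sanity check on the target (the password and privileges below are illustrative placeholders, not values from this example), you could confirm the owner exists and create it if necessary:

SQL> SELECT username FROM dba_users WHERE username = 'SCOTT';

SQL> -- Only needed if the query above returns no rows.
SQL> CREATE USER scott IDENTIFIED BY tiger;
SQL> GRANT CREATE SESSION, CREATE TABLE TO scott;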
I have copied the data file to the location where the other data files of this database reside, and the dump file to "C:\oracle\product\10.2.0\admin\orcl\dpdump\", which is where the DATA_PUMP_DIR directory points.
C:\>impdp system/system dumpfile=ts_tts_metadata.dmp directory=data_pump_dir
    transport_datafiles='C:\oracle\product\10.2.0\oradata\orcl\ts_tts01.dbf'
    [remap_schema=SCOTT:HR]

-- The optional remap_schema parameter would import the
-- objects of SCOTT into the HR schema.
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition
Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application
Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully
loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":
system/******** dumpfile=ts_tts_metadata.dmp
directory=data_pump_dir
transport_datafiles=
'C:\oracle\product\10.2.0\oradata\orcl\ts_tts01.dbf'
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at
04:13:32

If you see the "successfully completed" message for the import, the tablespace has been imported. We can check it like this:
C:\>sqlplus scott/scott
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application
Testing options
SQL> select * from p_t1;
NAME
------------------------------
ABC
XYZ
SQL> select * from t1;
NAME
------------------------------
ABC
XYZ
SQL> conn / as sysdba
Connected.
SQL> select tablespace_name , file_name from dba_data_files;
TABLESPACE_NAME FILE_NAME
--------------- ---------------------------------------------------
USERS C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
SYSAUX C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSAUX01.DBF
UNDOTBS1 C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOTBS01.DBF
SYSTEM C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF
EXAMPLE C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\EXAMPLE01.DBF
TS_TTS C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\TS_TTS01.DBF
6 rows selected.

Sometimes the target and source databases have different block sizes, for example the source database uses an 8K block size while the target uses 16K. The transported tablespace keeps its original block size, so in that case the target database needs a buffer cache for that block size, configured with the matching DB_nK_CACHE_SIZE parameter, before the tablespace can be plugged in.
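As a rough sketch, suppose the transported tablespace uses an 8K block size and the target database's standard block size is 16K; before plugging the tablespace in you would allocate a buffer cache for 8K blocks on the target (the 16M value here is just an illustrative size):

SQL> -- Target database with a 16K standard block size:
SQL> -- allocate a buffer cache for the 8K-block tablespace being plugged in.
SQL> ALTER SYSTEM SET db_8k_cache_size = 16M;

System altered.

SQL> -- The 8K-block tablespace can now be imported with impdp as shown above.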
See also:
Oracle 10g db_nk_cache_size Parameters