Oracle Data Pump was introduced in Oracle Database 10g as a replacement for the original Export and Import utilities. From Oracle Database 10g onward, new Data Pump Export (expdp) and Import (impdp) clients that use this interface are provided. Oracle recommends that customers use these Data Pump Export and Import clients rather than the original Export and Import clients, since the new utilities have vastly improved performance and greatly enhanced functionality.
Oracle Data Pump provides high-speed, parallel, bulk movement of data and metadata for Oracle database contents. It is a server-side replacement for the original Export and Import utilities. A public interface package, DBMS_DATAPUMP, provides the server-side infrastructure for fast data and metadata movement.
Data Pump makes use of the STREAMS_POOL_SIZE initialization parameter.
Data Pump is available in Standard Edition, Enterprise Edition and Personal Edition; however, the parallel capability is available only in the Enterprise Editions of Oracle 10g and 11g. Data Pump is included on all platforms supported by Oracle 10g and 11g.
Original Export and Import Limitations
- Design not scalable to large databases
- Slow unloading and loading of data, no parallelism
- Difficult to monitor job progress
- Limited flexibility in object selection
- No callable API
- Difficult to maintain
- Non-restartable
- Client-side, single-threaded execution
- Limited tuning mechanisms
- Limited object filtering
The Data Pump system requirements are the same as the standard Oracle Database 10g requirements. Data Pump does not need many additional system or database resources, but the time to extract and process the information depends on the CPU and memory available on each machine. If system resource consumption becomes an issue while a Data Pump job is executing, the job can be dynamically throttled to reduce the number of execution threads.
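For instance (a hedged sketch: the job name FULL_EXP and the system/manager credentials are assumptions), the DBA can attach to the running job from another terminal and lower the number of workers:
$ expdp system/manager ATTACH=FULL_EXP
Export> PARALLEL=2
Export> CONTINUE_CLIENT
PARALLEL=2 reduces the active worker count for the job; CONTINUE_CLIENT returns the session to logging mode.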
Using the Direct Path method of unloading, a single stream of data unload is about 2 times faster than normal Export because the Direct Path API has been modified to be even more efficient. Depending on the level of parallelism, the level of improvement can be much more.
A single stream of data load is 15-45 times faster than normal Import. The reason it is so much faster is that Conventional Import uses only conventional mode inserts, whereas Data Pump Import uses the Direct Path method of loading. As with Export, the job can be parallelized for even more improvement.
Data Pump Features
1. Writes either
– Direct Path unloads
– External tables (when the table is part of a cluster, has a LOB column, etc.)
2. Command line interface
3. Writing to external tables
4. DBMS_DATAPUMP – Data Pump API
5. DBMS_METADATA – Metadata API
6. Checkpoint / Job Restart
- Job progress recorded in the Master Table - All stopped Data Pump jobs can be restarted without loss of data as long as the master table and dump file set remain undisturbed while the job is stopped. It does not matter whether the job was stopped voluntarily by a user or whether the stoppage was involuntary due to a crash, power outage, etc.
- May be stopped and restarted later
- An abnormally terminated job is also restartable
- Problematic objects can be skipped on restart
7. Better Job Monitoring and Control
- Can detach from and attach to running jobs from any location - Multiple clients can attach to a job to see what is going on. Clients may also detach from an executing job without affecting it.
- Initial job space estimate and overall percent done - At Export time, the approximate size of the job is estimated before it gets underway. The default method for determining this is to estimate the size of a partition by counting the number of blocks currently allocated to it. If tables have been analyzed, statistics can also be used, which should provide a more accurate estimate. The user gets an estimate of how much dump file space will be consumed by the operation.
- Job state and description - Once the Export begins, the user can get the status of the job as a percentage of how far along it is, and can then extrapolate the time required to complete the job.
- Per-worker status showing current object and percent done
- Enterprise Manager interface available - Jobs can be monitored from any location
8. Interactive Mode for expdp and impdp clients
- PARALLEL: add or remove workers
- ADD_FILE: add dump files and wildcard specs
- STATUS: get detailed per-worker status
- STOP_JOB{=IMMEDIATE}: stop the job, leaving it restartable; IMMEDIATE does not wait for workers to finish their current work items
- START_JOB: restart a previously stopped job; can change the reporting interval
- KILL_JOB: stop the job and delete all its resources, leaving it unrestartable; the master table and dump files are deleted
- CONTINUE: leave interactive mode, continue logging
- EXIT: exit the client, leaving the job running
All modes of operation are supported: full, schema, table, tablespace, and transportable tablespace.
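A minimal interactive-mode sketch (the job name SCHEMA_EXP and the credentials are assumptions): press Control-C in the running expdp session, stop the job, and pick it up again later by attaching to it.
Export> STOP_JOB=IMMEDIATE
$ expdp system/manager ATTACH=SCHEMA_EXP
Export> START_JOB
Export> STATUS=60
STATUS=60 then reports job progress every 60 seconds.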
9. Dumpfile Set Management
- Directory based: e.g., DMPDIR:export01.dmp, where DMPDIR is an external directory object
- Can specify the maximum size of each dumpfile
- Can dynamically add dumpfiles to jobs - If a job runs out of space, use the ADD_FILE command and specify a FILESIZE value
- Wildcard file specs supported - Wildcard file support makes it easy to spread the I/O load over multiple spindles, e.g.: DUMPFILE=dmp1dir:full1%u.dmp, dmp2dir:full2%u.dmp
- Dump file compression of metadata - Metadata is compressed before being written to the dumpfile set: COMPRESSION=METADATA_ONLY
- In Oracle Database 11g, this compression capability has been extended so that table data can also be compressed on export. Data Pump compression is an inline operation, so the reduced dumpfile size means a significant saving in disk space.
- Automatically uncompressed during Import - Data Pump compression is fully inline on the import side as well, so there is no need to uncompress a dumpfile before importing it.
- Dumpfile set coherency automatically maintained
- Data Pump supplies encryption options for more flexible and robust security.
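A hedged example that pulls these options together (the directory objects dmp1dir and dmp2dir and the hr schema are assumptions): spread the dump over two directories with wildcard file names, cap each file, and compress the metadata.
$ expdp system/manager SCHEMAS=hr DUMPFILE=dmp1dir:hr1%U.dmp,dmp2dir:hr2%U.dmp FILESIZE=4G COMPRESSION=METADATA_ONLY LOGFILE=dmp1dir:hr_exp.log
(The FILESIZE suffix syntax varies slightly between releases; a plain byte value always works.)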
10. Network Mode
Data Pump Export and Import both support a network mode in which the job’s source is a remote Oracle instance. This is an overlap of unloading the data, using Export, and loading the data, using Import, so those processes don’t have to be serialized. A database link is used for the network. We don’t have to worry about allocating file space because there are no intermediate dump files.
Network Export
– Unload a remote database to a local dump file set
– Allows export of read-only databases for archiving
Network Import
– Overlap execution of extract and load
– No intermediate dump files
Because Data Pump maintains a Master Control Table and must perform database writes, Data Pump can’t directly Export a Read-only database. Network mode allows the user to export Read-Only databases: The Data Pump Export job runs locally on a read/write instance and extracts the data and metadata from the remote read-only instance. Both network export and import use database links to communicate with the remote source.
First level parallelism is supported for both network export and import. I/O servers do not operate remotely, so second level, intra-partition parallelism is not supported in network operations.
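As a sketch (the database link name src_db and its connect string are assumptions), a network-mode import pulls a schema straight from the source instance with no intermediate dump file:
SQL> CREATE DATABASE LINK src_db CONNECT TO hr IDENTIFIED BY password USING 'srcdb_tns';
$ impdp system/manager NETWORK_LINK=src_db SCHEMAS=hr LOGFILE=dp_dir:hr_net_imp.log
Only a log file is written locally; the data and metadata travel over the database link.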
11. Fine-Grained Object Selection
- All object types are supported - With the EXCLUDE and INCLUDE parameters, a Data Pump job can include or exclude any type of object and any subset of objects within a type.
- EXCLUDE parameter: the specified object types are excluded from the operation
- INCLUDE parameter: only the specified object types are included
- Both take an optional name filter for even finer granularity, e.g.:
INCLUDE=PACKAGE:"LIKE 'PAYROLL%'"
EXCLUDE=TABLE:"IN ('FOO','BAR')"
For example:
EXCLUDE=function
EXCLUDE=procedure
EXCLUDE=package:"like 'PAYROLL%'"
would exclude all functions, all procedures, and all packages whose names start with 'PAYROLL' from the job.
Using INCLUDE instead of EXCLUDE above would include only the functions, procedures, and packages whose names start with 'PAYROLL'.
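Because these filters contain quotes that usually need escaping on the command line, they are often easier to keep in a parameter file. A small sketch, assuming an HR schema and a directory object dp_dir:
$ cat hr_tables.par
SCHEMAS=hr
DIRECTORY=dp_dir
DUMPFILE=hr_tables.dmp
INCLUDE=TABLE:"IN ('EMPLOYEES','DEPARTMENTS')"
INCLUDE=INDEX
$ expdp system/manager PARFILE=hr_tables.par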
12. DDL Transformations
DDL transformations are easy with XML: because object metadata is stored as XML in the dump file set, it is simple to apply transformations (via XSL-T) when the DDL is being formed during import.
- REMAP_SCHEMA -> provides the old FROMUSER/TOUSER capability to change object ownership.
- REMAP_TABLESPACE -> allows objects to be moved from one tablespace to another; this changes the tablespace definition as well.
- REMAP_DATAFILE -> useful when moving databases across platforms that have different file-system semantics.
- Segment and storage attributes can be suppressed -> The TRANSFORM parameter can be used so that storage clauses are not generated in the DDL. This is useful if the storage characteristics of the target instance are very different from those of the source.
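A hedged example combining these transformations on import (the schema, tablespace, and file names are assumptions):
$ impdp system/manager DIRECTORY=dp_dir DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=users:hr_ts TRANSFORM=STORAGE:N LOGFILE=hr_remap.log
TRANSFORM=STORAGE:N suppresses the storage clauses in the generated DDL, while REMAP_TABLESPACE redirects the objects into the hr_ts tablespace.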
Data Pump Benefits (advantages over original Export and Import)
- Restartable
- Improved control
- Files are created on the server, not on the client side
- Parallel execution
- Automated performance tuning
- Simplified monitoring
- Improved object filtering
- Dumps can be compressed
- Data can be encrypted (in Oracle 11g or later)
- Remapping of data during export or import (in 11g or later)
- One or more partitions of a table can be exported without having to move the entire table (in 11g or later)
- XML schemas and XMLType columns are supported for both export and import (in 11g or later)
- Using the Direct Path method of unloading or loading data, a single stream of Data Pump export (unload) is approximately 2 times faster than original Export, because the Direct Path API has been modified to be even more efficient. Depending on the level of parallelism, the level of improvement can be much more.
- Original Import uses only conventional-mode inserts, so a single stream of Data Pump Import is 10-45 times faster than original Import. As with Export, the job's single stream can be changed to parallel streams for even more improvement.
- With Data Pump, it is much easier for the DBA to manage and monitor jobs. During a long-running job, the DBA can monitor the job from multiple locations and know how far along it is, how much is left to go, what objects are being worked on, etc. The DBA can also affect the job's operation, i.e. abort it, adjust its resource consumption, or stop it for later restart.
- Since jobs complete much more quickly than before, production systems have less downtime.
- Data Pump is publicly available as a PL/SQL package (DBMS_DATAPUMP), so customers can write their own data-movement utilities if desired. The metadata capabilities of Data Pump are also available as a separate PL/SQL package, DBMS_METADATA.
- While importing, if the destination schema does not exist, Data Pump will create the user and import the objects.
Data Pump requires no special tuning. Data Pump runs optimally “out of the box”. Original Export and (especially) Import require careful tuning to achieve optimum results. There are no Data Pump performance tuning parameters other than the ability to dynamically adjust the degree of parallelism.
We can dynamically throttle the number of threads of execution throughout the lifetime of the job. There is an interactive command mode where we can adjust the level of parallelism. For example, we can start up a job during the day with a PARALLEL=2, and then increase it at night to a higher level.
All the Oracle database data types are supported via Data Pump’s two data movement mechanisms, Direct Path and External Tables.
With Data Pump, there is much more flexibility in selecting objects for unload and load operations. We can now unload any subset of database objects (such as functions, packages, and procedures) and reload them on the target platform. Almost all database object types can be excluded or included in an operation using the new Exclude and Include parameters.
We can either use the Command line interface or the Oracle Enterprise Manager web-based GUI interface.
Data Pump handles all the necessary compatibility issues between hardware platforms and operating systems.
Oracle Data Pump supports Oracle Apps 11i.
We can use the ESTIMATE_ONLY parameter to see how much disk space is required for the job's dump file set before we start the operation.
Jobs can be monitored from any location to see what is going on. Clients may also detach from an executing job without affecting it.
Every Data Pump job creates a Master Table in which the entire record of the job is maintained. The Master Table is the directory to the job, so if a job is stopped for any reason, it can be restarted at a later point in time without losing any data. Whenever a Data Pump export or import is running, Oracle creates a table named after the JOB_NAME, which is deleted once the job is done. From this table, Oracle determines how much of the job has been completed and from where to continue.
With Data Pump, it is now possible to change the definition of some objects as they are created at import time. For example, we can remap the source datafile name to the target datafile name in all DDL statements where the source datafile is referenced. This is really useful if we are moving across platforms with different file system syntax.
Data Pump supports the Flashback infrastructure, so we can perform an export and get a dumpfile set that is consistent with a specified point in time or SCN.
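A sketch of a consistent export as of a point in time (the timestamp and schema are assumptions; FLASHBACK_SCN works the same way with an SCN number):
$ expdp system/manager SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_asof.dmp FLASHBACK_TIME=\"TO_TIMESTAMP('15-01-2011 08:00:00','DD-MM-YYYY HH24:MI:SS')\"
The quoting usually needs shell escaping as shown, or the parameter can be placed in a parameter file instead.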
Data Pump Vs SQL*Loader
We can use SQL*Loader to load data from external files into tables of an Oracle database. Many customers use SQL*Loader on a daily basis to load files (e.g. financial feeds) into their databases. Data Pump Export and Import may be used less frequently, but for very important tasks, such as migrating between platforms, moving data between development, test, and production databases, logical database backup, and for application deployment throughout a corporation.
Data Pump Vs Transportable Tablespaces
We can use Transportable Tablespaces when we want to move an entire tablespace of data from one Oracle database to another. Transportable Tablespaces allows Oracle data files to be unplugged from a database, moved or copied to another location, and then plugged into another database. Moving data using Transportable Tablespaces can be much faster than performing either an export or import of the same data, because transporting a tablespace only requires the copying of datafiles and integrating the tablespace dictionary information. Even when transporting a tablespace, Data Pump Export and Import are still used to handle the extraction and recreation of the metadata for that tablespace.
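A minimal transportable-tablespace sketch (the tablespace name tts_data, directory object dp_dir, and datafile path are assumptions; the datafile copy step depends on the platform and storage):
SQL> ALTER TABLESPACE tts_data READ ONLY;
$ expdp system/manager TRANSPORT_TABLESPACES=tts_data DIRECTORY=dp_dir DUMPFILE=tts_data.dmp
(copy the tablespace datafiles and the dump file to the target system, then import there)
$ impdp system/manager DIRECTORY=dp_dir DUMPFILE=tts_data.dmp TRANSPORT_DATAFILES='/u02/oradata/tts_data01.dbf'
SQL> ALTER TABLESPACE tts_data READ WRITE;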
The various background processes related to Data Pump are:
•Data Pump Master Process (DMnn): Creates and deletes the master table at the time of export and import. The master table contains the job state and object information. The master (control) process coordinates the Data Pump job tasks performed by the worker processes and handles client interactions. It is started during job creation, establishes all job contexts, coordinates all worker process activities on behalf of the job, and creates the worker processes.
•Data Pump Worker Process (DWnn): Performs the actual heavy-duty work of loading and unloading data and maintains the information in the master table. The worker process is responsible for performing the tasks assigned by the master process, such as the loading and unloading of metadata and data.
•Shadow Process: When a client logs in to the Oracle server, the database creates an Oracle process to service the Data Pump API.
•Client Process: The client process calls the Data Pump API.
Data Pump Disadvantages
•Can’t use UNIX pipes
•Can't run as SYS (/ as sysdba)
Related Views
DBA_DATAPUMP_JOBS
USER_DATAPUMP_JOBS
DBA_DIRECTORIES
DATABASE_EXPORT_OBJECTS
SCHEMA_EXPORT_OBJECTS
TABLE_EXPORT_OBJECTS
DBA_DATAPUMP_SESSIONS
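These views can be queried to monitor running jobs; a small sketch (requires DBA privileges):
SQL> SELECT owner_name, job_name, operation, job_mode, state, degree FROM dba_datapump_jobs;
SQL> SELECT s.sid, s.serial#, d.job_name FROM v$session s, dba_datapump_sessions d WHERE s.saddr = d.saddr;
The second query maps Data Pump sessions to database sessions so they can be traced or monitored like any other session.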
Data Pump Export & Import utilities in Oracle
expdp utility
The Data Pump Export utility provides a mechanism for transferring data objects between Oracle databases.
With Data Pump, we can do all exp/imp activities, except incremental backups.
Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
USERID must be the first parameter on the command line. This user must have read & write permissions on DIRECTORY.
$ expdp help=y
Keyword | Description (Default)
ATTACH | Attach to an existing job, e.g. ATTACH[=job name].
COMPRESSION | Reduce the size of a dumpfile. Valid keyword values are: ALL, (METADATA_ONLY), DATA_ONLY and NONE.
CONTENT | Specifies data to unload. Valid keyword values are: (ALL), DATA_ONLY, and METADATA_ONLY.
DATA_OPTIONS | Data layer flags. Valid value is: XML_CLOBS - write XML datatype in CLOB format.
DIRECTORY | Directory object to be used for dumpfiles and logfiles (DATA_PUMP_DIR). e.g. create directory extdir as '/path/';
DUMPFILE | List of destination dump files (EXPDAT.DMP), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION | Encrypt part or all of a dump file. Valid keyword values are: ALL, DATA_ONLY, METADATA_ONLY, ENCRYPTED_COLUMNS_ONLY, or NONE.
ENCRYPTION_ALGORITHM | Specify how encryption should be done. Valid keyword values are: (AES128), AES192, and AES256.
ENCRYPTION_MODE | Method of generating encryption key. Valid keyword values are: DUAL, PASSWORD, and (TRANSPARENT).
ENCRYPTION_PASSWORD | Password key for creating encrypted data within a dump file.
ESTIMATE | Calculate job estimates. Valid keyword values are: (BLOCKS) and STATISTICS.
ESTIMATE_ONLY | Calculate job estimates without performing the export.
EXCLUDE | Exclude specific object types. e.g. EXCLUDE=TABLE:EMP
FILESIZE | Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN | SCN used to reset session snapshot.
FLASHBACK_TIME | Time used to find the closest corresponding SCN value.
FULL | Export entire database (N). To use this option the user must have the EXP_FULL_DATABASE role.
HELP | Display help messages (N).
INCLUDE | Include specific object types. e.g. INCLUDE=TABLE_DATA.
JOB_NAME | Name of export job (default name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE).
LOGFILE | Specify log file name (EXPORT.LOG).
NETWORK_LINK | Name of remote database link to the source system.
NOLOGFILE | Do not write logfile (N).
PARALLEL | Change the number of active workers for current job.
PARFILE | Specify parameter file name.
QUERY | Predicate clause used to export a subset of a table. e.g. QUERY=emp:"WHERE dept_id > 10".
REMAP_DATA | Specify a data conversion function. e.g. REMAP_DATA=EMP.EMPNO:SCOTT.EMPNO
REUSE_DUMPFILES | Overwrite destination dump file if it exists (N).
SAMPLE | Percentage of data to be exported.
SCHEMAS | List of schemas to export (login schema).
SOURCE_EDITION | Edition to be used for extracting metadata (from Oracle 11g Release 2).
STATUS | Frequency (secs) job status is to be monitored where the default (0) will show new status when available.
TABLES | Identifies a list of tables to export. e.g. TABLES=HR.EMP,SH.SALES:SALES_1995.
TABLESPACES | Identifies a list of tablespaces to export.
TRANSPORTABLE | Specify whether transportable method can be used. Valid keyword values are: ALWAYS, (NEVER).
TRANSPORT_FULL_CHECK | Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES | List of tablespaces from which metadata will be unloaded.
VERSION | Version of objects to export. Valid keywords are: (COMPATIBLE), LATEST, or any valid database version.
The following commands are valid while in interactive mode.
Note: abbreviations are allowed
Data Pump Export interactive mode
While exporting is going on, press Control-C to go to interactive mode, it will stop the displaying of the messages on the screen, but not the export process itself.
Export> [[here you can use the below interactive commands]]
Command | Description
ADD_FILE | Add dumpfile to dumpfile set.
CONTINUE_CLIENT | Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT | Quit client session and leave job running.
FILESIZE | Default filesize (bytes) for subsequent ADD_FILE commands.
HELP | Summarize interactive commands.
KILL_JOB | Detach and delete job.
PARALLEL | Change the number of active workers for current job. PARALLEL=number of workers
REUSE_DUMPFILES | Overwrite destination dump file if it exists (N).
START_JOB | Start/resume current job. Valid value is: SKIP_CURRENT.
STATUS | Frequency (secs) job status is to be monitored where the default (0) will show new status when available. STATUS[=interval]
STOP_JOB | Orderly shutdown of job execution and exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of the Data Pump job.
Note: values within parentheses are the default values.
The options in sky blue color are the enhancements in Oracle 11g Release1.
The options in blue color are the enhancements in Oracle 11g Release2.
Whenever a Data Pump export or import is running, Oracle creates a table named after the JOB_NAME, which is deleted once the job is done. From this table, Oracle determines how much of the job has been completed and from where to continue.
Data Pump Export Examples
SQL> CREATE DIRECTORY dp_dir AS '/u02/dpdata';
SQL> GRANT READ, WRITE ON DIRECTORY dp_dir TO user_name;
==> creating an external directory and granting privileges on it.
$ expdp DUMPFILE=liv_full.dmp LOGFILE=liv_full.log FULL=y PARALLEL=4
==> exporting the whole database, with the help of 4 processes.
$ expdp DUMPFILE=master.dmp LOGFILE=master.log SCHEMAS=satya
(or)
$ expdp system/manager SCHEMAS=hr DIRECTORY=data_pump_dir LOGFILE=example1.log FILESIZE=300000 DUMPFILE=example1.dmp JOB_NAME=example1
==> exporting all the objects of a schema.
$ expdp ATTACH=EXAMPLE1
==> continuing or attaching to the job running in the background.
$ expdp DUMPFILE=search.dmp LOGFILE=search.log SCHEMAS=search,own,tester
==> exporting all the objects of multiple schemas.
$ expdp anand/coffee TABLES=kick DIRECTORY=ext_dir DUMPFILE=expkick_%U.dmp PARALLEL=4 JOB_NAME=kick_export
==> exporting all the rows in a table.
$ expdp DUMPFILE=t5.dmp LOGFILE=t5.log SCHEMAS=ym ESTIMATE_ONLY=Y
(or)
$ expdp LOGFILE=t5.log SCHEMAS=manage ESTIMATE_ONLY=Y
==> estimating export time and size.
$ expdp DUMPFILE=extdir:avail.dmp LOGFILE=extdir:avail.log
==> exporting without the DIRECTORY option, by specifying the external directory name within the file names.
$ expdp SCHEMAS=u1,u6 .... COMPRESSION=metadata_only
==> exporting two schemas and compressing the metadata.
$ expdp SCHEMAS=cpp,java .... COMPRESSION=all
==> exporting two schemas and compressing the data (valid in 11g or later).
$ expdp username/password FULL=y DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=GRANT INCLUDE=INDEX CONTENT=ALL
==> exporting an entire database to a dump file with all GRANTS, INDEXES and data.
$ expdp DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=PROCEDURE
==> exporting all the procedures.
$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=PROCEDURE:\"=\'PROC1\'\",FUNCTION:\"=\'FUNC1\'\"
==> exporting procedure PROC1 and function FUNC1.
$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=TABLE:"LIKE 'TAB%'"
(or)
$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir EXCLUDE=TABLE:"NOT LIKE 'TAB%'"
==> exporting only those tables whose names start with TAB.
$ expdp TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp
==> exporting data with a VERSION setting. Data Pump Import can always read dump file sets created by older versions of Data Pump Export.
$ expdp TABLES=holder,activity REMAP_DATA=holder.cardno:hidedata.newcc REMAP_DATA=activity.cardno:hidedata.newcc DIRECTORY=dpump_dir DUMPFILE=hremp2.dmp
==> exporting and remapping of data.
Exporting using Data Pump API (DBMS_DATAPUMP package)
declare
  handle number;
begin
  handle := dbms_datapump.open('EXPORT', 'SCHEMA');                     -- create a schema-mode export job
  dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');                -- dump file in directory object EXTDIR
  dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', '=''SCOTT''');   -- export only the SCOTT schema
  dbms_datapump.set_parallel(handle, 4);                                -- four worker processes
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);                                         -- job keeps running in the background
exception
  when others then
    dbms_output.put_line(substr(sqlerrm, 1, 254));
end;
/
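If the session should wait for the job to finish instead of detaching, DBMS_DATAPUMP.WAIT_FOR_JOB can be used; a hedged variation of the block above (the dump file name scott2.dmp is an assumption):
declare
  handle number;
  job_state varchar2(30);
begin
  handle := dbms_datapump.open('EXPORT', 'SCHEMA');
  dbms_datapump.add_file(handle, 'scott2.dmp', 'EXTDIR');
  dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', '=''SCOTT''');
  dbms_datapump.start_job(handle);
  dbms_datapump.wait_for_job(handle, job_state);   -- blocks until the job completes or stops
  dbms_output.put_line('Job finished with state: ' || job_state);
end;
/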
impdp utility
The Data Pump Import utility provides a mechanism for transferring data objects between Oracle databases.
Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
USERID must be the first parameter on the command line. This user must have read & write permissions on DIRECTORY.
$ impdp help=y
Keyword | Description (Default)
ATTACH | Attach to an existing job, e.g. ATTACH[=job name].
CONTENT | Specifies data to load. Valid keywords are: (ALL), DATA_ONLY, and METADATA_ONLY.
DATA_OPTIONS | Data layer flags. Valid value is: SKIP_CONSTRAINT_ERRORS - constraint errors are not fatal.
DIRECTORY | Directory object to be used for dump, log, and sql files (DATA_PUMP_DIR).
DUMPFILE | List of dumpfiles to import from (EXPDAT.DMP), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD | Password key for accessing encrypted data within a dump file. Not valid for network import jobs.
ESTIMATE | Calculate job estimates. Valid keywords are: (BLOCKS) and STATISTICS.
EXCLUDE | Exclude specific object types. e.g. EXCLUDE=TABLE:EMP
FLASHBACK_SCN | SCN used to reset session snapshot.
FLASHBACK_TIME | Time used to find the closest corresponding SCN value.
FULL | Import everything from source (Y). To use this option (full import of the database) the user must have the IMP_FULL_DATABASE role.
HELP | Display help messages (N).
INCLUDE | Include specific object types. e.g. INCLUDE=TABLE_DATA.
JOB_NAME | Name of import job (default name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE).
LOGFILE | Log file name (IMPORT.LOG).
NETWORK_LINK | Name of remote database link to the source system.
NOLOGFILE | Do not write logfile.
PARALLEL | Change the number of active workers for current job.
PARFILE | Specify parameter file name.
PARTITION_OPTIONS | Specify how partitions should be transformed. Valid keywords are: DEPARTITION, MERGE and (NONE).
QUERY | Predicate clause used to import a subset of a table. e.g. QUERY=emp:"WHERE dept_id > 10".
REMAP_DATA | Specify a data conversion function. e.g. REMAP_DATA=EMP.EMPNO:SCOTT.EMPNO
REMAP_DATAFILE | Redefine datafile references in all DDL statements.
REMAP_SCHEMA | Objects from one schema are loaded into another schema.
REMAP_TABLE | Table names are remapped to another table. e.g. REMAP_TABLE=EMP.EMPNO:SCOTT.EMPNO.
REMAP_TABLESPACE | Tablespace objects are remapped to another tablespace.
REUSE_DATAFILES | Tablespace will be initialized if it already exists (N).
SCHEMAS | List of schemas to import.
SKIP_UNUSABLE_INDEXES | Skip indexes that were set to the Index Unusable state.
SOURCE_EDITION | Edition to be used for extracting metadata (from Oracle 11g Release 2).
SQLFILE | Write all the SQL DDL to a specified file.
STATUS | Frequency (secs) job status is to be monitored where the default (0) will show new status when available.
STREAMS_CONFIGURATION | Enable the loading of streams metadata.
TABLE_EXISTS_ACTION | Action to take if imported object already exists. Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES | Identifies a list of tables to import. e.g. TABLES=HR.EMP,SH.SALES:SALES_1995.
TABLESPACES | Identifies a list of tablespaces to import.
TARGET_EDITION | Edition to be used for loading metadata (from Oracle 11g Release 2).
TRANSFORM | Metadata transform to apply to applicable objects. Valid keywords: SEGMENT_ATTRIBUTES, STORAGE, OID and PCTSPACE.
TRANSPORTABLE | Options for choosing transportable data movement. Valid keywords: ALWAYS and (NEVER). Only valid in NETWORK_LINK mode import operations.
TRANSPORT_DATAFILES | List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK | Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES | List of tablespaces from which metadata will be loaded. Only valid in NETWORK_LINK mode import operations.
VERSION | Version of objects to export. Valid keywords are: (COMPATIBLE), LATEST, or any valid database version. Only valid for NETWORK_LINK and SQLFILE.
The following commands are valid while in interactive mode.
Note: abbreviations are allowed
Data Pump Import interactive mode
While importing is going on, press Control-C to go to interactive mode.
Import> [[here you can use the below interactive commands]]
Command | Description (Default)
CONTINUE_CLIENT | Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT | Quit client session and leave job running.
HELP | Summarize interactive commands.
KILL_JOB | Detach and delete job.
PARALLEL | Change the number of active workers for current job. PARALLEL=number of workers
START_JOB | Start/resume current job. START_JOB=SKIP_CURRENT will start the job after skipping any action which was in progress when the job was stopped.
STATUS | Frequency (secs) job status is to be monitored where the default (0) will show new status when available. STATUS[=interval]
STOP_JOB | Orderly shutdown of job execution and exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of the Data Pump job.
Note: values within parentheses are the default values.
The options in sky blue color are the enhancements in Oracle 11g Release1.
The options in blue color are the enhancements in Oracle 11g Release2.
The order of importing objects is:
Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views
Data Pump Import Examples
$ impdp DUMPFILE=aslv_full.dmp LOGFILE=aslv_full.log PARALLEL=4
==> importing all the exported data, with the help of 4 processes.
$ impdp system/manager DUMPFILE=testdb_emp.dmp LOGFILE=testdb_emp_imp.log TABLES=tester.employee
==> importing all the records of a table (employee table records in the tester schema).
$ impdp DUMPFILE=visi.dmp LOGFILE=ref1imp.log TABLES=(brand, mba)
==> importing all the records of a couple of tables.
$ impdp system DUMPFILE=example2.dmp REMAP_TABLESPACE=system:example2 LOGFILE=example2imp.log JOB_NAME=example2
==> importing the data of one tablespace into another tablespace.
$ impdp DUMPFILE=prod.dmp LOGFILE=prod.log REMAP_TABLESPACE=FRI:WED TABLE_EXISTS_ACTION=REPLACE PARALLEL=4
==> importing data and replacing already existing tables.
$ impdp user1/user1 DUMPFILE=btw:avail.dmp INCLUDE=PROCEDURE
==> importing only procedures from the dump file.
$ impdp username/password DIRECTORY=dpump_dir DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim
==> importing tables from scott's account to jim's account.
$ impdp DIRECTORY=dpump_dir FULL=Y DUMPFILE=db_full.dmp REMAP_DATAFILE="'C:\DB1\HR\PAYROLL\tbs6.dbf':'/db1/hr/payroll/tbs6.dbf'"
==> importing data by remapping one datafile to another.
$ impdp username/password DIRECTORY=dpump DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX
==> will create a sqlfile with the DDL that could be executed in another database/schema to create the tables and indexes.
$ impdp DIRECTORY=dpump_dir DUMPFILE=emps.dmp REMAP_DATA=emp.empno:fixusers.newempid REMAP_DATA=card.empno:fixusers.newempi TABLE_EXISTS_ACTION=append
==> importing and remapping of data.
Importing using Data Pump API (DBMS_DATAPUMP package)
declare
  handle number;
begin
  handle := dbms_datapump.open('IMPORT', 'SCHEMA');                          -- create a schema-mode import job
  dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');                     -- dump file to read from directory EXTDIR
  dbms_datapump.set_parameter(handle, 'TABLE_EXISTS_ACTION', 'REPLACE');     -- replace pre-existing tables
  dbms_datapump.set_parallel(handle, 4);
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);
exception
  when others then
    dbms_output.put_line(substr(sqlerrm, 1, 254));
end;
/
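A schema can also be remapped through the API with DBMS_DATAPUMP.METADATA_REMAP, called before the job is started; a sketch assuming the same dump file and the schemas SCOTT and JIM:
declare
  handle number;
begin
  handle := dbms_datapump.open('IMPORT', 'SCHEMA');
  dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');
  dbms_datapump.metadata_remap(handle, 'REMAP_SCHEMA', 'SCOTT', 'JIM');   -- load SCOTT's objects into JIM
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);
end;
/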
Here is a general guideline for using the PARALLEL parameter:
- Set the degree of parallelism to two times the number of CPUs, then tune from there.
- For Data Pump Export, the PARALLEL parameter value should be less than or equal to the number of dump files.
- For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set.
- A PARALLEL greater than one is only available in Enterprise Edition.
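Putting those guidelines together, a sketch with four workers and a matching wildcard dump file specification (the directory object and schema are assumptions):
$ expdp system/manager SCHEMAS=sales DIRECTORY=dp_dir DUMPFILE=sales_%U.dmp PARALLEL=4 LOGFILE=sales_exp.log
$ impdp system/manager SCHEMAS=sales DIRECTORY=dp_dir DUMPFILE=sales_%U.dmp PARALLEL=4 LOGFILE=sales_imp.log
The %U wildcard lets the export create as many dump files as there are workers, which keeps PARALLEL from exceeding the number of files on both the export and the import.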
Original Export is desupported from 10g release 2.
Original Import will be maintained and shipped forever, so that Oracle Version 5.0 through Oracle9i dump files will be able to be loaded into Oracle 10g and later. Data Pump Import can only read Oracle Database 10g (and later) Data Pump Export dump files. Oracle recommends that customers convert to use the Oracle Data Pump.