Monday, August 3, 2015

Export Import Oracle

Executing EXPDP and IMPDP
To perform data pump export and import, we use the utilities EXPDP and IMPDP respectively from the operating system command prompt.
The data pump export utility (EXPDP) queries the database, including the data dictionary, and saves the output in a dump file (internally, the metadata is stored in XML format). This is the export dump file. This dump file is then used to import the data using the data pump import utility (IMPDP).
Data pump uses a directory to store its dump files and log files. We can use an already existing directory or create a new one. This directory exists on the server where the DB resides. Before using data pump, we must create a directory pointer that points to this external directory. This is achieved by executing the CREATE DIRECTORY command from the SQL*Plus prompt.
Example
Let us assume that we want to use the external directory - /data pump/DPUMP_1 for storing the dump files and log files. First we need to create a directory object from the SQL prompt that points to this particular location. To achieve this, we execute the following command –
SQL> CREATE DIRECTORY DPMP AS '/data pump/DPUMP_1';
Here we are creating a directory pointer named DPMP referring to the location /data pump/DPUMP_1.
The user who will use the data pump utility must have the read and write privileges on this directory. Suppose we want user SCOTT to perform the data pump export/import, but SCOTT does not have the necessary privileges on the directory; we can then grant these privileges using the following commands -
SQL> GRANT READ ON DIRECTORY DPMP TO SCOTT;
SQL> GRANT WRITE ON DIRECTORY DPMP TO SCOTT;
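To confirm that the directory pointer exists and that the grants took effect, we can check the data dictionary. A minimal sketch using the standard DBA_DIRECTORIES and DBA_TAB_PRIVS views (grants on directory objects are recorded in DBA_TAB_PRIVS) –
SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'DPMP';
SQL> SELECT grantee, privilege FROM dba_tab_privs WHERE table_name = 'DPMP';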
Now to start an export job, we can execute the following command from the OS command prompt –
expdp scott/tiger dumpfile=dmp1.dmp full=y directory=dpmp logfile=log1.log
When the above expdp command completes successfully, an export dump file named dmp1.dmp and a log file named log1.log will be created in the /data pump/DPUMP_1 location.
Later we can perform an import using this dump file as input for the import job –
impdp scott/tiger dumpfile=dmp1.dmp full=y directory=dpmp logfile=log2.log
Points to note –
We can use the same external directory/directory pointer for multiple export/import jobs. So creating the directory pointer (and creating the external directory) is a one-time process, unless we decide to use a new pointer or directory.
The user who will use data pump utility must have the read and write privilege on the directory.
Before using EXPDP and IMPDP, we must first execute a script known as catexp.sql or catalog.sql, which is available under the … /rdbms/admin/ directory. This script creates all the necessary views, roles etc. that are required to perform export and import. The script needs to be run only once (see the example after this list).
One must have the DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE roles (EXP_FULL_DATABASE and IMP_FULL_DATABASE in older releases) in order to perform a full export or import.
While performing an export we can choose the entire database or a subset of it. It is also possible to include essential data dictionary information like grants, indexes, constraints etc.
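For illustration, the script mentioned above can be run once from SQL*Plus as a privileged user. This sketch assumes the standard location under ORACLE_HOME (in SQL*Plus the ? character expands to ORACLE_HOME) –
SQL> CONNECT / AS SYSDBA
SQL> @?/rdbms/admin/catexp.sql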
Data pump vs traditional EXP/IMP
Data pump is a server process, i.e. it runs at the server level. We can start a data pump export/import job and then disconnect from the job; it will keep running in the background even if the client process is disconnected, and we can reconnect to the job later. Data pump also allows us to start, stop, restart, and check the status of running jobs. These features are not possible using the original EXP/IMP.
Because it runs as a server process, data pump is much faster compared to the original export and import (exp/imp) utilities.
Data pump files are not compatible with those generated from the traditional EXP/IMP utility.
Basic Data pump export (EXPDP) parameters
FULL
 export all data and metadata in the database.
SCHEMAS
 export specific database schemas.
TABLES
 export specific database tables.
TABLESPACES
 export specific tablespaces.
TRANSPORT_TABLESPACES
 specifies a transportable tablespace mode.
DUMPFILE
 specifies the export dump file
LOGFILE
 specifies the name of the logfile.
PARFILE
 specifies the parameter file to use, if any.
NOLOGFILE
 suppress logfile creation (flag - Y/N)
DIRECTORY
 specifies the destination directory for dump and log files.
QUERY
 filter rows from tables during export.
INCLUDE
 specifies objects to be exported.
EXCLUDE
 specifies objects not to be exported.
FILESIZE
 specifies the maximum size of each dump file.
ATTACH
 connect to a running export job.
CONTENT
 filters what needs to be exported (DATA_ONLY, METADATA_ONLY or ALL)
JOBNAME
 specifies a name for the export job. Default is system generated.
HELP
 lists available commands and options.
ESTIMATE
 specifies the method (BLOCKS or STATISTICS) used to estimate how much disk space the export will consume.
STATUS
 displays detailed status of the export job.
ESTIMATE_ONLY
 estimates the space the export job would consume, without actually performing the export (flag - Y/N).
FLASHBACK_SCN
 specifies the export job to be consistent to a specific SCN
FLASHBACK_TIME
 specifies the export job to be consistent to a specific timestamp
TRANSPORT_FULL_CHECK
 specifies whether or not to verify that a tablespace being transported has no dependencies on objects outside the transportable set.
VERSION
 specifies version of database objects to be exported (COMPATIBLE, LATEST or version no.)
NETWORK_LINK
 specifies the source DB link (when exporting data from a remote DB)
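As an illustration of combining several of these parameters, the following hypothetical command exports only the metadata of the SCOTT schema, splitting the dump into files of at most 1 GB. The %U substitution variable lets data pump generate numbered dump files as each one reaches the size limit (the file names here are made up for the example) –
expdp scott/tiger schemas=scott content=metadata_only filesize=1G dumpfile=meta%U.dmp directory=dpmp logfile=meta_exp.log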

Basic Data pump import (IMPDP) parameters
FULL
 Import full export file (flag - Y/N).
SCHEMAS
 Import specific database schemas.
TABLES
 Import specific database tables.
TABLESPACES
 import specific tablespaces.
TRANSPORT_TABLESPACES
 specifies a transportable tablespace mode.
DUMPFILE
 specifies the dump file to import from.
LOGFILE
 specifies the name of the logfile.
PARFILE
 specifies the parameter file to use, if any.
NOLOGFILE
 suppress logfile creation (flag - Y/N)
DIRECTORY
 specifies the destination directory for dump and log files.
QUERY
 filter rows from tables during import.
INCLUDE
 specifies objects to be imported.
EXCLUDE
 specifies objects not to be imported.
REMAP_SCHEMA
 import data from one schema to another
REMAP_TABLESPACE
 import data from one tablespace to another
REUSE_DATAFILES
 specifies whether existing datafiles should be reused during a FULL mode import.
TABLE_EXISTS_ACTION
 specifies what to do if the table being imported already exists. Values: SKIP, APPEND, TRUNCATE, REPLACE. If CONTENT is DATA_ONLY, the default value is APPEND; otherwise the default is SKIP.
PARALLEL
 specifies the number of threads for the import job.
ATTACH
 connect to a running import job.
CONTENT
 filters what needs to be imported (DATA_ONLY, METADATA_ONLY or ALL)
JOBNAME
 specifies a name for the job. Default is system generated.
HELP
 lists available commands and options.
ESTIMATE
 specifies the method (BLOCKS or STATISTICS) used to estimate how much disk space the import will consume.
STATUS
 displays detailed status of the import job.
FLASHBACK_SCN
 specifies the import job to be consistent to a specific SCN
FLASHBACK_TIME
specifies the import job to be consistent to a specific timestamp
TRANSPORT_FULL_CHECK
 specifies whether or not to verify a tablespace for dependency.
VERSION
 specifies the version of database objects to be imported (COMPATIBLE, LATEST or version no.)
NETWORK_LINK
 specifies the source DB link (when importing data from a remote DB)
SKIP_UNUSABLE_INDEXES
 specifies whether to load data into tables whose indexes are set to unusable state (Flag Y/N).
SQLFILE
 specifies the file to which DDL (for the import) will be written. Data will not be loaded to target DB.
STREAMS_CONFIGURATION
 specifies whether stream configuration information should be imported (flag Y/N).
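For illustration, some of these import-only parameters can simply be appended to a normal impdp command. The sketch below (the tablespace names are hypothetical) would load objects that lived in tablespace USERS into tablespace APPDATA –
impdp scott/tiger dumpfile=dmp1.dmp directory=dpmp logfile=remap_ts.log remap_tablespace=USERS:APPDATA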

Export/Import Full Database
The FULL parameter specifies that we want to export the entire database in the dumpfile.
This will extract all the data in the database, including the metadata.
In order to perform a full database export, we can execute the following EXPDP command –
expdp scott/tiger dumpfile=dmp1.dmp full=y directory=dpmp logfile=log1.log
If we want to perform a full database import, we can execute the following IMPDP command –
impdp scott/tiger dumpfile=dmp1.dmp full=y directory=dpmp logfile=log2.log
Export/Import specific Tables
If we want to export specific tables in the database, we can use the TABLES parameter.  The below expdp command will create a dumpfile containing data and metadata for the tables PRODUCT and ORDER –
expdp scott/tiger dumpfile=dmp1.dmp tables=product,order directory=dpmp logfile=lg1.log
To import these two tables (PRODUCT and ORDER) into a database, we can use the following IMPDP command –
impdp scott/tiger dumpfile=dmp1.dmp tables=product,order directory=dpmp logfile=lg2.log
Export/Import specific Schemas
If we want to export specific schemas in the database, we can use the SCHEMAS parameter.  The below expdp command will create a dumpfile containing data and metadata for the schemas SMITH and JOHN –
expdp scott/tiger dumpfile=dmp1.dmp schemas=smith,john directory=dpmp logfile=logfile01.log
To import these two schemas (SMITH and JOHN) into a database, we can use the following IMPDP command –
impdp scott/tiger dumpfile=dmp1.dmp schemas=smith,john directory=dpmp logfile=logfile02.log
Using a parameter file in export/import
We can use a parameter file for our export/import jobs. This is achieved using the PARFILE parameter.
We save the export/import parameters in a text file; later, when we run the export or import job, instead of writing the parameter values on the command line we can simply give the parameter file name.
Suppose we created a file named par1.txt with the following contents –
directory=dpmp
dumpfile=dmp1.dmp
schemas=smith,john
logfile=log10.log
We can use this file named par1.txt as input for our export/import job –
expdp scott/tiger parfile=par1.txt
impdp scott/tiger parfile=par1.txt
Performing export/import using INCLUDE
We can limit the objects to be exported/imported using the INCLUDE parameter. When we use the INCLUDE parameter, only those objects specified in the INCLUDE clause will be exported or imported.
expdp scott/tiger dumpfile=dmp1.dmp schemas=SCOTT include=TABLE:"IN ('PRODUCT', 'ORDER')" directory=dpmp logfile=log_123.log
This will export only tables PRODUCT and ORDER from the SCOTT schema
impdp scott/tiger dumpfile=dmp1.dmp schemas=SCOTT include=TABLE:"IN ('PRODUCT', 'SALES')" directory=dpmp logfile=log_x02.log
This will import only tables PRODUCT and SALES from the SCOTT schema.
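INCLUDE is not limited to tables. For illustration, a hypothetical command that exports only the procedures and functions of the SCOTT schema could look like this (the INCLUDE parameter can be repeated) –
expdp scott/tiger dumpfile=dmp1.dmp schemas=SCOTT include=PROCEDURE include=FUNCTION directory=dpmp logfile=log_inc.log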
Performing export/import using EXCLUDE
We can limit the objects to be exported/imported using the EXCLUDE parameter. When we use the EXCLUDE parameter, those objects specified in the EXCLUDE clause will not be exported or imported.
expdp scott/tiger dumpfile=dmp1.dmp schemas=SCOTT exclude=TABLE:"IN ('DEPT')" directory=dpmp logfile=log_123.log
This will export all data from SCOTT schema except the DEPT table.
impdp scott/tiger dumpfile=dmp1.dmp schemas=SCOTT exclude=TABLE:"IN ('DEPT', 'JOBS')" directory=dpmp logfile=log_123.log
This will import all data from the SCOTT schema except the DEPT and JOBS tables.
Performing export/import using QUERY
The QUERY parameter allows us to filter records that need to be exported or imported.
expdp scott/tiger dumpfile=dmp1.dmp schemas=SCOTT query=EMP:'"where salary > 1000000"' directory=dpmp logfile=log_123.log
This will export only those records from EMP table (belonging to SCOTT schema) whose salary value is greater than 1000000
impdp scott/tiger dumpfile=dmp1.dmp schemas=SCOTT query=EMP:'"where salary > 1500000"' directory=dpmp logfile=log_123.log
This will import only those records from EMP table (belonging to SCOTT schema) whose salary value is greater than 1500000
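Because operating system shells interpret quotes and the > character, it is usually easier to put QUERY in a parameter file, where no extra escaping is needed. A minimal sketch (the file and log names here are hypothetical) – suppose par_qry.txt contains:
directory=dpmp
dumpfile=dmp1.dmp
schemas=SCOTT
query=EMP:"where salary > 1000000"
logfile=log_qry.log
Then the export can be run simply as:
expdp scott/tiger parfile=par_qry.txt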
Importing to a new Schema
The import parameter REMAP_SCHEMA allows us to import objects from one schema to another.
Suppose we exported all data from the SCOTT schema and want to import these objects into the HR schema. This can be achieved using the REMAP_SCHEMA parameter.
impdp scott/tiger schemas=SCOTT directory=dpmp dumpfile=dmp1.dmp logfile=impbonus.log remap_schema=SCOTT:HR
Importing data into existing tables
The import parameter TABLE_EXISTS_ACTION specifies what needs to be done when a table being imported already exists. There are four options here – we can append data from the dump file to the already existing table, we can replace the old table with the imported one, we can truncate the existing table and load the data into it, or we can skip importing this table.
impdp smith/smi0123 tables=emp_history directory=dpmp dumpfile=dtmp1.dmp table_exists_action=replace
impdp smith/smi0123 tables=emp_history directory=dpmp dumpfile=dtmp1.dmp table_exists_action=append
impdp smith/smi0123 tables=emp_history directory=dpmp dumpfile=dtmp1.dmp table_exists_action=skip
The default value for this parameter is SKIP.
The REPLACE option is not allowed if the CONTENT parameter is set to DATA_ONLY.
Generating SQL for import jobs
Instead of importing the data and objects, it is also possible to generate a file which will contain the DDL related to the import job. This file will contain SQL statements for the objects, and it is saved on the server file system. This is achieved using the SQLFILE parameter.
impdp scott/tiger dumpfile=dmp1.dmp directory=dpmp sqlfile=sql_dp1.txt
Data pump commands/options with running export/import jobs
Data pump allows us to apply certain commands to a running export/import job.
After starting an export or import job, we can come out of the data pump interface by pressing CTRL+C. If we press CTRL+C while running an export job, we will come to the Export prompt.
If we press CTRL+C while running an import job, we will come to the Import prompt.
Pressing CTRL+C will not terminate the job; the job will still keep running in the background.
Once we are on the export/import prompt, we can apply certain options or commands to the already running export/import job.
Following are the options that can be applied to a running export/import job –
•        ADD_FILE (Applicable for export only)
•        EXIT_CLIENT
•        CONTINUE_CLIENT
•        STOP_JOB
•        START_JOB
•        KILL_JOB
•        STATUS
•        PARALLEL
•        HELP
To get a clear picture of what these commands do, let us consider that we executed an export job with the following command –
expdp scott/tiger parfile=par1.txt
Now pressing CTRL+C will bring us to the Export prompt –
Export>
We can execute various data pump commands from this prompt. To come out of the data pump (export/import) prompt, we can execute the EXIT_CLIENT command –
Export> EXIT_CLIENT
This will bring us to the operating system command prompt.
To reconnect to the running job, we can use the ATTACH parameter with the job name. The job name is system generated (for example, SYS_EXPORT_SCHEMA_01 for a schema-mode export) unless we set it explicitly with the JOBNAME parameter –
expdp scott/tiger attach=SYS_EXPORT_SCHEMA_01
This will take us back to the export prompt –
Export>
After we attach to a running job, we will see the basic configuration of the job. To see the log entries, we can use the CONTINUE_CLIENT command from this prompt. Example -
Export> CONTINUE_CLIENT
To get detailed status of this job, we can run the STATUS command. Example -
Export> STATUS
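The job state can also be checked from SQL. For instance, the standard DBA_DATAPUMP_JOBS data dictionary view lists the owner, name, and state of every data pump job –
SQL> SELECT owner_name, job_name, operation, state FROM dba_datapump_jobs;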
To stop the job, we can use the STOP_JOB command. Example -
Export>STOP_JOB
While attached to an export job, we can add additional dump files to it using the ADD_FILE command.
Export> ADD_FILE=dmp_file1.dmp,dmp_file2.dmp
ADD_FILE is applicable only for export jobs. It is not supported for Import jobs.
To restart the job, we can issue the START_JOB command. Example –
Export> START_JOB
To alter the number of threads associated with a job, we can use the PARALLEL command. Example –
Export>PARALLEL 2
To terminate this job, we can issue the KILL_JOB command. Example –
Export>KILL_JOB
To get help on data pump Export/Import options, we can use the HELP command -
Export>HELP

Please note that in the example above we are considering a scenario with a running export job. All the above-mentioned commands except ADD_FILE can also be applied to a running import job.
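Putting it together, a typical stop-and-resume sequence might look like the following sketch (the job name here is assumed to be the system-generated SYS_EXPORT_SCHEMA_01) –
expdp scott/tiger parfile=par1.txt
(press CTRL+C once the job is running)
Export> STOP_JOB
expdp scott/tiger attach=SYS_EXPORT_SCHEMA_01
Export> START_JOB
Export> CONTINUE_CLIENT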
