Oracle Data Pump Features
Features of Data Pump Jobs
1. You can detach from and reattach to a running Data Pump job.
2. You can parallelize a Data Pump job [PARALLEL=4 with the %U dump file template].
3. You can import directly over the network, with no export dump file, using the NETWORK_LINK parameter.
4. You can calculate the export dump file size in advance using ESTIMATE_ONLY.
5. The expdp utility is a client that calls the DBMS_DATAPUMP package.
6. Remapping capabilities during import [REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLESPACE, REUSE_DATAFILES].
7. TABLE_EXISTS_ACTION=REPLACE/TRUNCATE controls what happens when importing into existing tables.
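As a sketch, the commands below illustrate a few of these features. The credentials, the schema SCOTT, the directory object DPDIR, the job name SCOTT_EXP, and the database link REMOTE_DB are illustrative placeholders, not fixed names:

```shell
# Feature 4: estimate the dump size without writing any data
expdp scott/tiger directory=DPDIR schemas=SCOTT estimate_only=y

# Feature 2: parallel export; %U expands to a per-worker file sequence
expdp scott/tiger directory=DPDIR dumpfile=scott_%U.dmp parallel=4 \
      schemas=SCOTT job_name=SCOTT_EXP

# Feature 1: reattach to the running job from another session
expdp scott/tiger attach=SCOTT_EXP

# Feature 3 and 6: import straight from a remote database (no dump file)
# while remapping the schema
impdp scott/tiger network_link=REMOTE_DB remap_schema=SCOTT:SCOTT_COPY
```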
expdp help=y
A good place to start is by identifying the most frequently used Data Pump export (expdp) command-line parameters:
ATTACH = [SCHEMA.]JOB_NAME
    Name of an already existing and executing job to attach to. Requires the EXP_FULL_DATABASE privilege for other schemas.

COMPRESSION = METADATA_ONLY | ALL | NONE | DATA_ONLY
    Compress the dump file contents for the specified criteria.

CONSISTENT = Y | N
    Starting in release 11.2, this legacy-mode option gives a read-consistent export, just like FLASHBACK_TIME.

CONTENT = ALL | METADATA_ONLY | DATA_ONLY
    Filter the dump file contents to the specified criteria.

DIRECTORY = DATA_PUMP_DIR | DIRECTORY_NAME
    Name of a directory object pointing to a valid server directory.

DUMPFILE = [DIRECTORY_NAME:]FILE_NAME [, ...]
    The name (and optionally the directory) of the export dump file.

ESTIMATE = BLOCKS | STATISTICS
    Method the export uses to calculate the disk space required for the data.

ESTIMATE_ONLY = N | Y
    Do not export; just estimate the disk space required.

EXCLUDE = OBJECT_TYPE[:NAME_FILTER_EXPRESSION] [, ...]
    Database object types, as a whole or filtered by object name, to exclude from the export.

FILESIZE = INTEGER [B | K | M | G]
    The maximum file size permitted for any export dump file.

FLASHBACK_TIME = SYSTIMESTAMP | DATE
    Allows a read-consistent export in expdp.

FULL = N | Y
    Whether to perform a full database export. Requires the EXP_FULL_DATABASE privilege.

INCLUDE = OBJECT_TYPE[:NAME_FILTER_EXPRESSION] [, ...]
    Database object types, as a whole or filtered by object name, to include in the export.

JOB_NAME = SYS_EXPORT_<mode>_NN | JOB_NAME
    Name by which the export job can be referenced (e.g. by ATTACH).

LOGFILE = EXPORT.LOG | [DIRECTORY_NAME:]FILE_NAME
    The name (and optionally the directory) of the export log file.

NOLOGFILE = N | Y
    Whether or not to suppress creation of the export log file.

PARALLEL = 1 | INTEGER
    The maximum number of concurrent worker processes for the export.

PARFILE = [DIRECTORY_SPECIFICATION]FILE_NAME
    Name of the operating-system-specific parameter file.

QUERY = [[SCHEMA.]TABLE_NAME:]FILTER_EXPRESSION
    Data filter condition applied to all tables, or per schema and table, during the export.

REUSE_DUMPFILES = N | Y
    Whether or not to overwrite pre-existing export dump files.

SAMPLE = [[SCHEMA.]TABLE_NAME:]N, where .000001 <= N < 100
    Percentage of rows to sample and export.

SCHEMAS = SCHEMA [, ...]
    The schema or schemas to export. Requires the EXP_FULL_DATABASE privilege for other schemas.

STATUS = 0 | INTEGER
    The frequency, in seconds, at which the job displays client feedback.

TABLES = [SCHEMA.]TABLE_NAME[:PARTITION_NAME] [, ...]
    List of tables for a table-mode export. Restricted to a single schema.

TABLESPACES = TABLESPACE_NAME [, ...]
    List of tablespaces for a tablespace-mode export.

TRANSPORT_FULL_CHECK = N | Y
    Whether or not dependencies are verified between transportable tablespace objects in the transport set and those outside the set.

TRANSPORT_TABLESPACES = TABLESPACE_NAME [, ...]
    List of tablespaces for a transportable-tablespace-mode export (target database version must be >= source version).
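Many of these parameters are easier to manage in a parameter file passed via PARFILE. A minimal sketch, assuming the directory object DATA_PUMP_DIR exists and the schema SCOTT is the one to export (both names are illustrative):

```
# scott_exp.par -- invoke with: expdp system parfile=scott_exp.par
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=scott_%U.dmp
LOGFILE=scott_exp.log
SCHEMAS=SCOTT
PARALLEL=4
FILESIZE=2G
COMPRESSION=METADATA_ONLY
JOB_NAME=SCOTT_EXP
```

Keeping the options in a file avoids shell quoting problems (especially with QUERY and EXCLUDE filter expressions) and makes the export repeatable.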
Moreover, to use the stop and restart capabilities of a Data Pump job, run Data Pump in interactive mode so as to get the Data Pump prompt (press Ctrl+C in a running client session, or ATTACH from a new one). The following commands are then quite frequently useful:
CONTINUE_CLIENT
    Reconnect the client to the currently executing job, restarting the job if it was stopped, and resume logging mode (i.e. status output).

EXIT_CLIENT
    Disconnect the client from the currently executing job and terminate the client process, but leave the server job running.

KILL_JOB
    Detach all client processes connected to this Data Pump job and then terminate (i.e. kill) the currently running job.

START_JOB
    Start or resume the current Data Pump job.

STOP_JOB [=IMMEDIATE]
    Detach all client processes connected to this Data Pump job and then perform an orderly shutdown of the currently running job.
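Putting these together, a long-running export can be stopped and later resumed roughly as follows. The parameter file, job name, and credentials are the illustrative ones used above:

```
$ expdp scott/tiger parfile=scott_exp.par
...
^C                              <-- Ctrl+C drops to the interactive prompt
Export> stop_job=immediate
Are you sure you wish to stop this job ([yes]/no): yes

-- later, possibly from a different terminal --
$ expdp scott/tiger attach=SCOTT_EXP

Export> start_job
Export> continue_client
```

Because the job state lives in a master table in the database, not in the client, the client machine can even be rebooted between STOP_JOB and the later ATTACH.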