Datapump
Revision as of 13:07, 2 January 2013
Contents
- 1 Use the flashback parameter to export and import a consistent dump
- 2 Use of a par file to create a consistent parallel export
- 3 Import into schemas and tablespaces other than those contained in the export dump file
- 4 Monitor Datapump sessions
- 5 How long will the export take?
- 6 Enable trace mode
- 7 Enabling SQL trace
Use the flashback parameter to export and import a consistent dump
This performs a consistent export of two schemas.
Keeping the parameters in a file allows them to be reused for the import:
expdp system/***** parfile=db_params.par
where db_params.par would look like this:
flashback_time=systimestamp
directory=DATAPUMP_DIR
dumpfile=db_schema.dmp
schemas=(apps,test)
The SCN that most closely matches the specified time(stamp) is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent as of this SCN.
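If you need the export pinned to an exact SCN rather than a timestamp, Data Pump also accepts the FLASHBACK_SCN parameter instead of FLASHBACK_TIME. A sketch: fetch the current SCN with the standard DBMS_FLASHBACK package, then put it in the par file (the SCN value shown is a placeholder):
select dbms_flashback.get_system_change_number from dual;
and in the par file:
flashback_scn=<SCN from the query above>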
Use of a par file to create a consistent parallel export
Note the %U substitution variable, which generates the file names for the parallel dump files:
SCHEMAS=(CHEOPSFIX,CHEOPSCIVIL,CHEOPSTECH,CHEOPSPENAL)
DUMPFILE=expdp_prdmach_F_230311_%U.dmp
FILESIZE=25G
LOGFILE=expdp_prdmach_F_230311.log
DIRECTORY=expdp_dir
FLASHBACK_TIME="TO_TIMESTAMP('23-03-2011 19:30:00', 'DD-MM-YYYY HH24:MI:SS')"
PARALLEL=8
#TABLES=(CHEOPSTECH.DRANSWR)
#CONTENT='METADATA_ONLY'
#FULL=Y
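Assuming the parameters above are saved in a file — the name expdp_prdmach.par below is illustrative — the export would be started with:
expdp system/***** parfile=expdp_prdmach.par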
Import into schemas and tablespaces other than those contained in the export dump file
The target schemas and tablespaces will have to be pre-created manually, for example:
CREATE USER MACHPENAL PROFILE "DEFAULT" IDENTIFIED BY ******
  DEFAULT TABLESPACE "ACCMACH"
  TEMPORARY TABLESPACE "MACH_TEMP"
  QUOTA UNLIMITED ON "ACCMACH"
  QUOTA UNLIMITED ON "MACHX_TBS"
  ACCOUNT UNLOCK;
GRANT ALTER SESSION TO "MACHPENAL";
GRANT CREATE JOB TO "MACHPENAL";
GRANT CREATE MATERIALIZED VIEW TO "MACHPENAL";
GRANT MANAGE SCHEDULER TO "MACHPENAL";
GRANT RESUMABLE TO "MACHPENAL";
GRANT UNLIMITED TABLESPACE TO "MACHPENAL";
GRANT EXECUTE ON "SYS"."DBMS_LOCK" TO "MACHPENAL";
GRANT "CONNECT" TO "MACHPENAL";
GRANT "MACHP_ROLE" TO "MACHPENAL";
impdp system/****** parfile=impdp_prd_accmach.par
where parfile looks something like this:
SCHEMAS=(CHEOPSPENAL,CHEOPSCIVIL,CHEOPSTECH,CHEOPSFIX)
#INCLUDE=POST_TABLE_ACTION
#INCLUDE=TRIGGER
#INCLUDE=PROCACT_SCHEMA
EXCLUDE=STATISTICS
REMAP_TABLESPACE=CHEOPSCIVIL_AXYLIS:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_DEFAULT:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPABL:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL1:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL2:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPAND:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPARL:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPATH:ACCMACH
...
REMAP_TABLESPACE=CHEOPSCIVIL_VGVRN:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGVVD:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLB:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLW:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWRG:ACCMACH
REMAP_SCHEMA=CHEOPSFIX:MACHFIX
REMAP_SCHEMA=CHEOPSCIVIL:MACHCIVIL
REMAP_SCHEMA=CHEOPSTECH:MACHTECH
REMAP_SCHEMA=CHEOPSPENAL:MACHPENAL
DUMPFILE=expdp_prdmach_F_230311_%U.dmp
LOGFILE=impdp_refresh_240311.log
DIRECTORY=expdp_dir
PARALLEL=8
Monitor Datapump sessions
select s.sid , s.serial# from v$session s , dba_datapump_sessions dps where s.saddr = dps.saddr;
Once the session is identified, you can see the parallel query slave sessions spawned off by the main DataPump session by querying the V$PX_SESSION view:
select sid from v$px_session where qcsid = <DP Session ID>;
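The overall state of the jobs themselves (EXECUTING, STOPPED, and so on) can be checked in the DBA_DATAPUMP_JOBS view:
select owner_name , job_name , operation , state from dba_datapump_jobs;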
How long will the export take?
DataPump reports its progress to the V$SESSION_LONGOPS view, which can be consulted like this:
select sid
     , serial#
     , sofar
     , totalwork
from v$session_longops
where 1=1
and opname = '<DP Job Name>'
and sofar != totalwork
/
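The same view can also give a rough percentage complete; a sketch (the SOFAR and TOTALWORK columns are estimates, so treat the figure as indicative only):
select sid , serial# , round(sofar/totalwork*100, 2) pct_done from v$session_longops where opname = '<DP Job Name>' and totalwork > 0 and sofar != totalwork;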
Enable trace mode
Two trace possibilities are available, standard and full:
impdp trace=0480300 ...
provides standard tracing. Trace files will be in user_dump_dest.
impdp trace=1FF0300 ...
provides full tracing. Trace files will be in user_dump_dest.
Enabling SQL trace
You cannot simply use:
alter session set sql_trace=true;
because DataPump does its work in separate worker sessions, so the event has to be set in those sessions instead. First identify them:
select sid
     , serial#
     , username
     , program
from v$session
where 1=1
and (upper(program) like '%(DW%)' or upper(program) like '%(DM%)')
/
then use the sid and serial# to set the event:
exec dbms_system.set_ev(<sid>, <serial#>, 10046, 12, '');
Trace files will be in user_dump_dest.
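To find where user_dump_dest points on your instance, query V$PARAMETER:
select value from v$parameter where name = 'user_dump_dest';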
For the full beans on what DataPump can do, visit Arup Nanda's DataPump page