
Datapump

Use the flashback parameter to export and import a consistent dump

This performs a consistent dump of two schemas.
Keeping the parameters in a file allows them to be reused for the import:

expdp system/* parfile=db_params.par

where db_params.par would look like this:

flashback_time=systimestamp
directory=DATA_PUMP_DIR
dumpfile=db_schemas.dmp
logfile=db_schemas.log
schemas=(apps,test)

The SCN that most closely matches the specified timestamp is found, and that SCN is used to enable the Flashback utility.
The export operation is performed with data that is consistent as of this SCN.
By default, expdp is consistent only for the table it is currently exporting. Use flashback_time=systimestamp to emulate the old exp behaviour of consistent=y.

Typical export with Data Pump

This method is useful for dynamically building a script. Note the %U substitution variable used to generate the parallel file names…

0@@
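The script itself is missing from this revision; a minimal sketch of the idea (the schema list, file names and parallel degree below are illustrative, not the author's):

```shell
#!/bin/sh
# Build the parfile dynamically; %U in the dumpfile name expands to
# 01, 02, ... giving one dump file per parallel worker.
SCHEMAS="apps,test"
PARFILE=/tmp/exp_schemas.par
cat > "$PARFILE" <<EOF
directory=DATA_PUMP_DIR
dumpfile=exp_schemas_%U.dmp
logfile=exp_schemas.log
schemas=${SCHEMAS}
parallel=4
flashback_time=systimestamp
EOF
cat "$PARFILE"
# expdp system parfile=$PARFILE    # run when ready
```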

Reduce the amount of data exported/imported with clauses like these…

1@@
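The original clauses are missing here; typical data-reducing parameters look like the following (the table and predicate names are hypothetical):

```
# export only rows matching a predicate
query=scott.emp:"where deptno = 10"
# export metadata only, no table data
content=metadata_only
# leave out optimizer statistics
exclude=statistics
```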

Could be made neater by using a parfile…

2@@

and the parfile could look like this

3@@

Datapump pitfalls/gotchas

  • Exclude a single table from a full export

If you have a par file like this and expect it to work, you will be disappointed:

4@@

The emp table will be exported!
To get this to work, you need to use a trick…

5@@
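The original trick is missing from this revision. One frequent cause of the surprise (an assumption on my part, not necessarily the author's point) is that the name_clause comparison is case sensitive and object names are stored in uppercase, so the table name must be quoted in uppercase:

```
# in a full=y parfile: note the uppercase, quoted table name
exclude=TABLE:"= 'EMP'"
```

Note that in a full export this excludes every table called EMP, in any schema; the name_clause matches the object name alone and cannot be qualified with an owner.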


  • exclude and include are conceptually different…
6@@

is not the same as

7@@

The first one does what you think it will do: it exports all objects except a function called raise_sal owned by scott.
The second one, convoluted though it is, will not do what you might think. It exports all functions (and only functions) except raise_sal owned by scott.
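The two missing examples can be reconstructed from the description above (treat these as illustrative, for a schemas=scott export):

```
# exports all of scott's objects except the function raise_sal
exclude=FUNCTION:"= 'RAISE_SAL'"
```

```
# exports only functions -- all of them except raise_sal
include=FUNCTION:"<> 'RAISE_SAL'"
```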

Find partitioned tables in the database

Before converting a database from Enterprise Edition to Standard Edition (Metalink: 139642.1), partitioned tables will need to be merged, as Standard Edition does not support partitioning.

8@@
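The query itself is absent from this revision; a straightforward version uses dba_part_tables:

```sql
-- list partitioned tables that will need attention before the conversion
select owner, table_name, partition_count
  from dba_part_tables
 where owner not in ('SYS', 'SYSTEM')
 order by owner, table_name;
```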

Get more detailed information while Data pump is working

To get an idea of how long it takes to export each object, add this to the parfile (or command line)

9@@
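The missing snippet is presumably the metrics parameter:

```
metrics=y
```

With this set, the logfile reports the number of objects and the elapsed seconds for each object type.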

Keep the master table after a successful data pump session

To see what went on in the master table, add this to the parfile (or command line)

10@@
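The missing snippet is presumably the keep_master parameter:

```
keep_master=y
```

The master table (named after the job, e.g. SYS_EXPORT_SCHEMA_01) is then left in the invoking schema for querying instead of being dropped on successful completion.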

The master table contains all the Data Pump log messages.
It is used to track the detailed progress of a Data Pump job, which is more than the log messages do.
Amongst other things it contains:

  • Completed rows of a table.
  • Total number of errors during data pump operation.
  • Elapsed time for each table to do data pump export/import operation.
  • The current set of dump files.
  • The current state of every object exported or imported and their locations in the dump file set.
  • The job's user-supplied parameters.
  • The status of every worker process.
  • The current job state and restart information.
  • The dump file location, the directory name information.

Trace the Data pump job

Tracing can be enabled by specifying a 7-digit hexadecimal mask in the TRACE parameter of expdp or impdp.
The first three digits enable tracing for a specific Data Pump component, while the last four digits are usually 0300.
Any leading zeros can be omitted, and the value specified for the TRACE parameter is not case sensitive.
TRACE does not add anything to the output of Data Pump; it creates additional trace files.
Add TRACE=<hex digits below> to the parfile (or command line)

11@@

To trace an already running export job

12@@

Check to see if the objects can be recovered from the recyclebin

This would save time restoring from tape/dumps

13@@
14@@
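The missing queries presumably resembled the following sketch (the table and schema names are hypothetical):

```sql
-- is the dropped object still in the recyclebin?
select owner, original_name, object_name, type, droptime
  from dba_recyclebin
 where original_name = 'EMP';

-- if so, restore it without touching tape or dumps
flashback table scott.emp to before drop;
```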

Check the indexes and restore as necessary…

15@@

Import tables from a dump file into a schema

16@@

where import_tables.par looks something like this

17@@
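This revision has lost the example; a minimal import_tables.par (all names are illustrative) could be:

```
tables=(emp,dept)
directory=dmp_dir
dumpfile=exp_schemas.dmp
logfile=import_tables.log
```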

where directory has already been created with

18@@
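A sketch of the directory creation (the path and grantee are illustrative):

```sql
create or replace directory dmp_dir as '/u01/app/oracle/dumps';
grant read, write on directory dmp_dir to system;
```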

without a par file and with a transform to ignore table creation attributes

19@@
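A sketch of such a command, with hypothetical object and file names; transform=segment_attributes:n drops the storage, tablespace and logging clauses from the DDL so the tables pick up the target defaults:

```
impdp system tables=scott.emp,scott.dept directory=dmp_dir dumpfile=exp_schemas.dmp transform=segment_attributes:n
```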

replace existing tables in an existing schema

20@@
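The relevant parameter is table_exists_action; for example (parfile or command line):

```
table_exists_action=replace
```

Other values are skip (the default), append and truncate.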

import existing tables into a different schema

21@@
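A sketch using remap_schema (the schema names are hypothetical):

```
remap_schema=scott:scott_copy
table_exists_action=replace
```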

import existing partitioned tables into a different schema

22@@

Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX

The import is taking a long time and appears stuck on this line. Is it waiting for something, or just slow?
Data Pump does not import indexes from the dump; it recreates them using create index statements. This can be slow, especially as all the metadata is processed by just one worker.

Check the status of the job

23@@

Check to see if the impdp session is waiting for resources

24@@

Check dba_resumable for data pump jobs

25@@

Check alert log for possible error/warning messages

Find out what Datapump jobs are running

26@@
27@@
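The usual starting point is dba_datapump_jobs (the missing query was presumably something like this):

```sql
select owner_name, job_name, operation, job_mode, state, attached_sessions
  from dba_datapump_jobs;
```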

Kill a datapump job with PL/SQL

28@@
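A sketch using dbms_datapump (the job name is hypothetical; take it from dba_datapump_jobs):

```sql
declare
  h number;
begin
  -- attach to the running job by name and owner
  h := dbms_datapump.attach(job_name  => 'SYS_EXPORT_SCHEMA_01',
                            job_owner => 'SYSTEM');
  -- stop it immediately and drop the master table
  dbms_datapump.stop_job(h, immediate => 1, keep_master => 0, delay => 0);
end;
/
```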

Connect to a running export Datapump job from command line

29@@

or

30@@
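Both variants boil down to the attach parameter; the job name (hypothetical here) comes from dba_datapump_jobs:

```
expdp system attach=SYS_EXPORT_SCHEMA_01
```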

Suspend a running data pump

31@@

Restart a previously suspended data pump job

32@@

Terminate a data pump job

Also kills the operating system processes

33@@
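From the interactive prompt reached via attach (the job name is hypothetical):

```
expdp system attach=SYS_EXPORT_SCHEMA_01
Export> kill_job
```

kill_job asks for confirmation, then detaches all sessions and removes the master table and the dump files.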

Drop erroneous Data Pump master tables

34@@

and

35@@

Did the import work?

Moving a database with Data Pump? Check the number of rows imported against those exported!
I have noticed that sometimes (perhaps with tables containing BLOB/LOB columns?) the import does not bring across all rows…
eg:

36@@

Run this on the import logfile to check for this condition

37@@
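The original one-liner is missing; as one possibility (an assumption, not the author's script), here is a small shell function that compares per-table row counts between the export and import logfiles, assuming the standard ". . exported/imported … n rows" log lines:

```shell
row_diff() {
  # $1 = export logfile, $2 = import logfile
  exp_log=$1; imp_log=$2
  # pull "table-name row-count" pairs out of the ". . exported/imported" lines
  awk '$1 == "." && $2 == "." { print $4, $(NF-1) }' "$exp_log" | sort > /tmp/exp_rows.$$
  awk '$1 == "." && $2 == "." { print $4, $(NF-1) }' "$imp_log" | sort > /tmp/imp_rows.$$
  if diff /tmp/exp_rows.$$ /tmp/imp_rows.$$ > /dev/null; then
    echo "row counts match"
  else
    echo "MISMATCH:"
    diff /tmp/exp_rows.$$ /tmp/imp_rows.$$
  fi
  rm -f /tmp/exp_rows.$$ /tmp/imp_rows.$$
}
```

Usage: row_diff exp_schemas.log imp_schemas.log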

Or take a more thorough approach: count the number of records in all tables both locally and remotely for comparison.
Change the value of l_remote_tns. A file in csv format is created in DATA_PUMP_DIR.

38@@

Import into schemas and tablespaces other than those contained in the export dump file

These will have to be manually pre-created:

CREATE USER MACHPENAL PROFILE "DEFAULT" IDENTIFIED BY
  DEFAULT TABLESPACE "ACCMACH"
  TEMPORARY TABLESPACE "MACH_TEMP"
  QUOTA UNLIMITED ON "ACCMACH"
  QUOTA UNLIMITED ON "MACHX_TBS"
  ACCOUNT UNLOCK;
GRANT ALTER SESSION TO "MACHPENAL";
GRANT CREATE JOB TO "MACHPENAL";
GRANT CREATE MATERIALIZED VIEW TO "MACHPENAL";
GRANT MANAGE SCHEDULER TO "MACHPENAL";
GRANT RESUMABLE TO "MACHPENAL";
GRANT UNLIMITED TABLESPACE TO "MACHPENAL";
GRANT EXECUTE ON "SYS"."DBMS_LOCK" TO "MACHPENAL";
GRANT "CONNECT" TO "MACHPENAL";
GRANT "MACHP_ROLE" TO "MACHPENAL";

impdp system/ parfile=impdp_prd_accmach.par

where the parfile looks something like this:

SCHEMAS=(CHEOPSPENAL,CHEOPSCIVIL,CHEOPSTECH,CHEOPSFIX)
#INCLUDE=POST_TABLE_ACTION
#INCLUDE=TRIGGER
#INCLUDE=PROCACT_SCHEMA
EXCLUDE=STATISTICS
REMAP_TABLESPACE=CHEOPSCIVIL_AXYLIS:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_DEFAULT:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPABL:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL1:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL2:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPAND:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPARL:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPATH:ACCMACH
…
REMAP_TABLESPACE=CHEOPSCIVIL_VGVRN:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGVVD:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLB:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLW:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWRG:ACCMACH
REMAP_SCHEMA=CHEOPSFIX:MACHFIX
REMAP_SCHEMA=CHEOPSCIVIL:MACHCIVIL
REMAP_SCHEMA=CHEOPSTECH:MACHTECH
REMAP_SCHEMA=CHEOPSPENAL:MACHPENAL
DUMPFILE=expdp_prdmach_F_230311_%U.dmp
LOGFILE=impdp_refresh_240311.log
DIRECTORY=expdp_dir
PARALLEL=8

Change parallelism after expdp has started

Export started with 1 worker but taking a long time? Change the number of parallel workers!

39@@
40@@
41@@
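The interactive commands (reached via attach; the job name is hypothetical) would be:

```
expdp system attach=SYS_EXPORT_SCHEMA_01
Export> parallel=8
Export> status
Export> continue_client
```

Extra workers can only write to extra files, so this helps only if the dumpfile specification contains the %U wildcard.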

Monitor Data Pump sessions

select s.sid, s.serial#
  from v$session s, dba_datapump_sessions dps
 where s.saddr = dps.saddr;

Once the session is identified, you can see the parallel query slave sessions spawned off by the main Data Pump session by querying the V$PX_SESSION view:

select sid from v$px_session where qcsid = <DP Session ID>;

How long will the export take?

Data Pump sends information about its progress to the v$session_longops view, which can be consulted thus:

42@@
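A typical query (the opname filter is an assumption; adjust it for import jobs):

```sql
select sid, serial#, opname, sofar, totalwork,
       round(sofar / totalwork * 100, 1) as pct_done,
       time_remaining
  from v$session_longops
 where opname like '%EXPORT%'
   and totalwork > 0
   and sofar <> totalwork;
```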

or

43@@

or

44@@

Debugging a Data Pump session

While exporting or importing, a Data Pump job can sometimes appear to hang. What is it doing?
Firstly, we can add METRICS=Y to the parameter file; this gives a bit more logging info.
Next, we can dig deeper into the session using the session id.

45@@
46@@
47@@
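A sketch of the missing steps using dbms_monitor (the sid/serial# values are placeholders to be taken from the first query):

```sql
-- find the Data Pump sessions
select s.sid, s.serial#
  from v$session s, dba_datapump_sessions d
 where s.saddr = d.saddr;

-- trace one of them (waits on, binds off)
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => true, binds => false);

-- and switch it off again afterwards
exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
```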

then go to the user dump directory and use tkprof to read the trace file.

Enable logging during a datapump impdp/expdp

As of 12c, much better logging is available:

48@@
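The missing snippet is presumably the logtime parameter:

```
logtime=all
```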

This prefixes all messages in the logfile with a timestamp.

Enabling SQL trace

Cannot simply use:

49@@

as Data Pump starts another session. An event has to be set:

50@@

then use the sid and serial# to set the event:

51@@

Trace files will be in user_dump_dest

For the full beans on what Data Pump can do, visit Arup Nanda's Data Pump page

Restore a schema from a dumpfile into a new schema so that the user can pick out their objects

  • Restore relevant file from TSM
  • Connect to the database as sysdba and
52@@

Create a new schema in which to put the recovered data

53@@

Create a par file to remap the schema and tablespace

54@@

Run the import

55@@

Restore schemas from dump. Clean out schema objects first.

You could also drop the schema, but then you lose all grants.
Check we have the dump files…

56@@

Backup schemas just in case…

57@@

Clean out schemas to be imported…

58@@

Import the required data…

59@@

Alternatively, drop and recreate the schema / user using dbms_metadata get_ddl and get_granted_ddl

Run this before dropping the schema to generate DDL necessary to recreate it

60@@


datapump.1544130326.txt.gz · Last modified: 2018/12/06 21:05 by 91.177.234.129
