====== Datapump ======

  * [[http://jensenmo.blogspot.be/2012/10/optimising-data-pump-export-and-import.html|Morten Jensen's tips for Optimising Data Pump Export and Import Performance]]

==== gzip the output of expdp directly using a named pipe ====
This saves space, as the dump file does not need to be created before being separately gzipped.

Create a pipe
<code>
mknod expdp.pipe p
</code>

Setup the pipe in the background so that gunzip picks up what is sent to it
<code>
gunzip >
</code>

Feed the pipe
<code>
expdp pfile=expdp_full_db.par dumpfile=expdp.pipe
</code>

The pipe should already have gone
<code>
rm -f expdp.pipe
</code>
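The same idea can be sketched end to end with a stand-in for expdp, so it can be tried anywhere. Everything below (the temp directory, the `echo` writer, the file names) is a hypothetical stand-in, not part of the original recipe:

```shell
#!/bin/sh
# Sketch of the named-pipe technique with `echo` standing in for expdp.
# Paths and file names are throwaway examples.
d=$(mktemp -d)
mkfifo "$d/expdp.pipe"            # same effect as: mknod expdp.pipe p

# reader in the background: compress whatever arrives on the pipe
gzip < "$d/expdp.pipe" > "$d/full_db.dmp.gz" &

# writer: expdp would write dumpfile=expdp.pipe; echo stands in here
echo "pretend this is a dump file" > "$d/expdp.pipe"
wait                              # let gzip drain the pipe and exit

out=$(gunzip -c "$d/full_db.dmp.gz")
echo "$out"                       # prints: pretend this is a dump file
rm -rf "$d"
```

Note the ordering: the reader must already be waiting on the pipe (in the background) before the writer opens it, otherwise the writer blocks.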
| + | |||
| + | ==== Use create table as select with Datapump driver ==== | ||
| + | If you have a complex query that would mean using the query like this | ||
| + | < | ||
| + | userid=' | ||
| + | dumpfile=exp_bgaap%U_20200101_to_20200723.dmp | ||
| + | reuse_dumpfiles=y | ||
| + | logfile=exp_bgaap_20200101_to_20200723.log | ||
| + | directory=data_pump_dir | ||
| + | parallel=5 | ||
| + | filesize=6G | ||
| + | exclude=statistics | ||
| + | metrics=y | ||
| + | query=' | ||
| + | query=ads_dba.fah_day_cba_bgaap:" | ||
| + | </ | ||
| + | Consider instead something like this | ||
| + | < | ||
| + | create table ads_archive_to_20200724 | ||
| + | organization external | ||
| + | (type oracle_datapump default directory data_pump_dir location(' | ||
| + | | ||
| + | select | ||
| + | union all | ||
| + | select * from ads_dba.fah_day_cba_bgaap f where f.treatment_date >= to_date(' | ||
| + | </ | ||
| + | or just create a table as select... and use datapump with %U to export it into several files | ||
| + | |||
==== Use the flashback parameter to export and import a consistent dump ====

This performs a consistent dump of 2 schemas.
Keeping the parameters in a file allows it to be reused for the import:
<code>
expdp system/
</code>
where db_params.par would look like this:
<code>
</code>
The SCN that most closely matches the specified time(stamp) is found, and this SCN is used to enable the Flashback utility.
The export operation is performed with data that is consistent as of this SCN.
By default expdp is consistent only for the table it is currently exporting. Use 'flashback_time' (or 'flashback_scn') to make the whole export consistent.
==== Typical export with Data Pump ====
This method is useful to dynamically build a script.
Note the %u to generate the parallel file names...
<code>
SID=$1
TIMESTAMP=`date +'
FLASHBACK_TO="

expdp "/"
dumpfile=expdp_${SID}_D_FULL_${TIMESTAMP}_%u.dmp
logfile=expdp_${SID}_D_FULL_${TIMESTAMP}.log
job_name=expdp_${SID}_D_FULL_${TIMESTAMP}
flashback_time=${FLASHBACK_TO}
directory=data_pump_dir
reuse_dumpfiles=y
parallel=8
full=y
exclude=statistics
</code>

Sometimes I notice that flashback_time gives errors related to date_format (maybe a bug in 12.1.0.2). If this is the case, find the scn for the timestamp with
<code>
select timestamp_to_scn(to_timestamp('
</code>
and use
<code>
flashback_scn=<scn>
</code>
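The dynamic naming used by the script above can be exercised with a plain shell sketch. The SID value and /tmp path below are hypothetical stand-ins, and the expdp invocation is left as a comment since it needs a real database:

```shell
#!/bin/sh
# Sketch: build a timestamped parfile the way the script above builds its
# command line. SID and paths are hypothetical stand-ins.
SID=ORCL
TIMESTAMP=$(date +%Y%m%d%H%M%S)
PARFILE=/tmp/expdp_${SID}_D_FULL_${TIMESTAMP}.par

cat > "$PARFILE" <<EOF
dumpfile=expdp_${SID}_D_FULL_${TIMESTAMP}_%u.dmp
logfile=expdp_${SID}_D_FULL_${TIMESTAMP}.log
job_name=expdp_${SID}_D_FULL_${TIMESTAMP}
directory=data_pump_dir
reuse_dumpfiles=y
parallel=8
full=y
exclude=statistics
EOF

# The export itself would then be:
#   expdp \"/ as sysdba\" parfile=$PARFILE
cat "$PARFILE"
```

Keeping the generated file around also documents exactly which parameters a given dump was taken with.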
Reduce the amount of data exported/imported with exclude and query clauses, for example:
<code>
exclude=table:"
exclude=table:"
query=prd1.op_log:'"
QUERY=SIM_HEADER:"
QUERY=SIM_BLOB:"
QUERY=SIM_HEADER_DETAILS:
QUERY=SIM_DETAILS_BLOB:
</code>
Could be made neater by using a parfile...
<code>
expdp / parfile=/
</code>
and the parfile could look like this
<code>
full=y
reuse_dumpfiles=y
compression=all
exclude=statistics
parallel=6
</code>
==== Datapump pitfalls/gotchas ====

  * Exclude a single table from a full export

If you have a par file like this and expect it to work, you will be disappointed:
<code>
exclude=table:"
full=y
</code>
The emp table will be exported!
To get this to work, you need to use a trick...
<code>
query=scott.emp:"
full=y
</code>

  * exclude and include are conceptually different...

<code>
exclude=function:"
</code>
is not the same as
<code>
include=function:"
</code>
the first one does what you think it will do... it exports all objects except a function called raise_sal owned by scott.
the second one, convoluted though it is, will not do what you might think. It exports all functions (and only functions) except raise_sal owned by scott.
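A related gotcha, noted here as an aside to the above: these filters contain quote characters, so when they are passed on the command line (rather than in a parfile) the shell strips the quotes unless they are escaped. A small sketch, using a hypothetical RAISE_SAL filter:

```shell
#!/bin/sh
# In a parfile no shell is involved, so the filter can be written as-is:
#   exclude=function:"= 'RAISE_SAL'"
# On the command line the same filter must be escaped from the shell.
# RAISE_SAL is a hypothetical function name.
arg=exclude=function:\"=\ \'RAISE_SAL\'\"

# After shell processing, expdp would receive the quotes intact:
echo "$arg"                 # prints: exclude=function:"= 'RAISE_SAL'"
```

This is one more reason to prefer a parfile for anything involving exclude, include, or query.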
==== Get more detailed information while Data pump is working ====
To get an idea of how long it takes to export each object, add this to the parfile (or command line)
<code>
metrics=y
</code>
==== Keep the master table after a successful data pump session ====
To see what went on in the master table, add this to the parfile (or command line)
<code>
keep_master=y
</code>
The master table contains all the data pump log messages.
It is used to track the detailed progress of a Data Pump job - which is more than the log messages.
Amongst other things it contains:
  * Completed rows of a table.
  * Total number of errors during the data pump operation.
  * Elapsed time for each table to do data pump export/import.
  * The current set of dump files.
  * The current state of every object exported or imported and their locations in the dump file set.
  * The job's user-supplied parameters.
  * The status of every worker process.
  * The state of current job status and restart information.
  * The dump file location, the directory name information.
==== Trace the Data pump job ====
  * More info here: [[https://
Tracing can be enabled by specifying a 7 digit hexadecimal mask in the TRACE parameter of expdp or impdp.
The first three digits enable tracing for a specific Data Pump component, while the last four digits are usually: 0300.
Any leading zeros can be omitted, and the value specified for the TRACE parameter is not case sensitive.
TRACE does not add anything to the output of Data Pump, it creates additional trace files.
Add TRACE=<mask> to the command line or parfile.
<code>
Here is a summary of the Data Pump trace levels:
  10300 SHDW: To trace the Shadow process
  20300 KUPV: To trace Fixed table
  40300 'div' To trace Process services
  80300 KUPM: To trace Master Control Process
 100300 KUPF: To trace File Manager
 200300 KUPC: To trace Queue services
 400300 KUPW: To trace Worker process(es)
 800300 KUPD: To trace Data Package
1000300 META: To trace Metadata Package
1FF0300 'all' To trace all components
</code>
=== To trace an already running export job ===
<code>
expdp system/
Export> stop
expdp system/
Export> cont
</code>
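Component masks can be combined by summing their leading digits while keeping the 0300 suffix; a commonly cited combination is 480300, i.e. Master Control Process (80300) plus Worker processes (400300). A hedged parfile sketch (the dump and log file names are hypothetical):

```
# hypothetical parfile fragment
directory=data_pump_dir
dumpfile=trace_demo_%u.dmp
logfile=trace_demo.log
# 80300 (KUPM, Master Control) + 400300 (KUPW, Workers) = 480300
trace=480300
```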
==== Check to see if the objects can be recovered from the recyclebin ====
This would save time restoring from tape/dumps
<code>
select original_name,
</code>
<code>
flashback table proddta.F5642026 to before drop;
</code>
Check the indexes and restore as necessary...
<code>
select index_name from dba_indexes where owner = '
alter index "
</code>
==== Export tables from a schema ====
<code>
expdp \'\/ as sysdba\'
</code>
==== Import tables from a dump file into a schema ====
<code>
impdp \'\/ as sysdba\'
</code>
where import_tables.par looks something like this
<code>
directory=DATA_PUMP_DIR_JDBEOP1
dumpfile=expdp_JDBEOP1_D_PRODDTA_20131007200000_%U.dmp
logfile=RESTORE_20131012.log
job_name=RESTORE_20131012
tables=(PRODDTA.F47032,
remap_schema=PRODDTA:
remap_tablespace=PRODDTAT:
remap_tablespace=PRODDTAI:
table_exists_action=replace
</code>
where the directory has already been created with
<code>
create or replace directory DATA_PUMP_DIR_JDBEOP1 as '/
grant read,write on <
</code>
=== without a par file and with a transform to ignore table creation attributes ===
<code>
impdp \'/ as sysdba\' \
tables=SY812.F00950, \
transform=segment_attributes:n \
parallel=5 \
remap_schema=SY812: \
remap_tablespace=SY812T: \
dumpfile=expdp_JDB_20140531200001_%u.dmp \
job_name=impdp_SY812 \
logfile=impdp_JDBEOP1_SY812.log
</code>
=== replace existing tables in an existing schema ===
<code>
impdp \'/ as sysdba\' \
transform=segment_attributes:n \
parallel=5 \
dumpfile=expdp_JDBEOP1_D_PRODDTA_20140725210000_%u.dmp \
logfile=impdp_JDBEOP1_D_PRODDTA_F5642026_F596111.log
</code>
=== import existing tables into a different schema ===
<code>
impdp \'/ as sysdba\' \
directory=data_pump_dir \
tables=DWH.FACT_OPEX, \
remap_schema=DWH: \
remap_tablespace=USERS: \
dumpfile=backup_export_10g_CODWHP1_20151104210000_%u.dmp \
job_name=import_DWH_TABLES \
logfile=import_DWH_TABLES.log
</code>
=== import existing partitioned tables into a different schema ===
<code>
impdp \'/ as sysdba\' \
directory=data_pump_dir \
tables=TIP_PRD1.EODMARKETPNL: \
remap_schema=DWH: \
remap_tablespace=USERS: \
dumpfile=backup_export_10g_CODWHP1_20151104210000_%u.dmp \
job_name=import_DWH_TABLES \
logfile=import_DWH_TABLES.log
</code>

Export a single table partition
<code>
expdp \"/ as sysdba\"
</code>
And import it into another database remapping the schema
<code>
impdp \"/ as sysdba\"
</code>

==== Processing object type SCHEMA_EXPORT/... ====
Import is taking a long time and stuck on this line. Is it waiting for something or just slow?
Datapump does not import indexes, it recreates them using create index statements. This can be slow, especially as all the metadata uses just 1 worker.
=== Check the status of the job ===
<code>
select * from dba_datapump_jobs;
</code>
=== Check to see if the impdp session is waiting for resources ===
<code>
select vsw.*
from   dba_datapump_sessions dds
,      v$session vs
,      v$session_wait vsw
where  dds.saddr = vs.saddr
and    vs.sid    = vsw.sid
/
</code>
=== Check dba_resumable for data pump jobs ===
<code>
select * from dba_resumable;
</code>
=== Check the alert log for possible error/resumable messages ===

==== Find out what Datapump jobs are running ====
<code>
SET lines 200
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state

SELECT job_name
,      owner_name
,      operation
,      job_mode
,      state
,      attached_sessions
FROM   dba_datapump_jobs
WHERE  1=1
and    job_name NOT LIKE 'BIN$%'
ORDER BY 2,1
/
</code>
<code>
JOB_NAME             OWNER_NAME OPERATION   JOB_MODE    STATE       ATTACHED_SESSIONS
-------------------- ---------- ----------- ----------- ----------- -----------------
EXP_20131018100328

SQL>
</code>
==== Kill a datapump job with PL/SQL ====
<code>
SET serveroutput on
SET lines 100
DECLARE
  l_handle NUMBER;
BEGIN
  l_handle := DBMS_DATAPUMP.ATTACH('
  DBMS_DATAPUMP.STOP_JOB (l_handle,
END;
/
</code>
==== Connect to a running export Datapump job from command line ====
<code>
expdp / attach="
</code>
or
<code>
expdp \'/ as sysdba\'
</code>
==== Suspend a running data pump ====
<code>
stop_job
</code>
==== Restart a previously suspended data pump job ====
<code>
start_job
</code>
==== Terminate a data pump job ====
Also kills the operating system processes
<code>
kill_job
</code>
==== Drop erroneous Data Pump master tables ====
<code>
select 'DROP TABLE '
from
where  1=1
and    table_name like '
/
</code>
and
<code>
set pages 100 lines 200
select 'drop table "'||o.owner||'"."'||o.object_name||'";'
from   dba_objects o
,      dba_datapump_jobs j
where  1=1
and    o.owner       = j.owner_name
and    o.object_name = j.job_name
and    j.state       = 'NOT RUNNING'
/
</code>
==== Did the import work? ====
Moving a database with Data Pump? Check the number of rows imported against those exported!
I have noticed that sometimes (maybe with tables with BLOB/LOB columns?) the import does not import all rows...
eg:
<code>
Processing object type DATABASE_EXPORT/
Processing object type DATABASE_EXPORT/
. . imported "
</code>
Run this on the import logfile to check for this condition
<code>
cat import_WM712P.log | perl -ne 'if (m/(\d+) out of (\d+) rows/) { print if $1 != $2; }'
</code>
or a more thorough approach... count the number of records in all tables both locally and remotely for comparison.
Change the value of l_remote_tns. A file in csv format is created in DATA_PUMP_DIR.
<code>
set serveroutput on
declare
  cursor c_tables is
    select owner
    ,      table_name
    from   dba_tables
    where  1=1
    and    tablespace_name not in ('
  ;
  l_remote_tns
  l_sql              varchar2(240) := null;
  l_local_row_count
  l_remote_row_count number
  f_file
begin
  dbms_output.enable(null);
  f_file := utl_file.fopen('
  utl_file.put_line (f_file, '
  for r_tables in c_tables
  loop
    l_sql := '
    execute immediate l_sql into l_local_row_count;
    l_sql := '
    l_remote_row_count := 0;
    begin
      execute immediate l_sql into l_remote_row_count;
    exception
      when others then
        null;
    end;
    --dbms_output.put_line (r_tables.owner||'
    utl_file.put_line (f_file, r_tables.owner||'
  end loop;
  utl_file.fclose (f_file);
exception
  when others then
    dbms_output.put_line('
end;
/
</code>
==== Import into schemas and tablespaces other than those contained in the export dump file ====
These will have to be manually pre-created
<code>
GRANT ALTER SESSION TO "
GRANT "
GRANT "
</code>
<code>
impdp system/
</code>
where the parfile looks something like this:
<code>
#
#
...
</code>
==== Change parallelism after expdp has started ====
Export started with 1 worker but taking a long time? Change the number of parallel workers!
<code>
expdp \'\/ as sysdba\' attach=SYS_EXPORT_TABLE_01

Export: Release 12.1.0.2.0 - Production on Mon Oct 1 15:44:41 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning,

Job: SYS_EXPORT_TABLE_01
  Owner: SYS
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 772B5F839E8000ECE0530A085A3842B1
  Start Time: Monday, 01 October, 2018 14:59:20
  Mode: TABLE
  Instance: clnt
  Max Parallelism: 1
  Timezone: +02:00
  Timezone version: 18
  Endianness: BIG
  NLS character set: AL32UTF8
  NLS NCHAR character set: AL16UTF16
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
  State: EXECUTING
  Bytes Processed: 73,
  Percent Done: 50
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /
    bytes written: 73,

Worker 1 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW00
  State: EXECUTING
  Object Schema: TPK
  Object Name: EID01
  Object Type: TABLE_EXPORT/
  Completed Objects: 5
  Total Objects: 192
  Completed Rows: 835,298
  Worker Parallelism: 1
</code>
<code>
Export> parallel=4

Export> status
</code>
<code>
Job: SYS_EXPORT_TABLE_01
  Operation: EXPORT
  Mode: TABLE
  State: EXECUTING
  Bytes Processed: 73,
  Percent Done: 50
  Current Parallelism: 4
  Job Error Count: 0
  Dump File: /
    bytes written: 73,


Worker 1 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW00
  State: EXECUTING
  Object Schema: TPK
  Object Name: CCD01
  Object Type: TABLE_EXPORT/
  Completed Objects: 10
  Total Objects: 192
  Completed Rows: 317,968
  Worker Parallelism: 1

Worker 2 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW01
  State: EXECUTING
  Object Schema: TPK
  Object Name: BCD18
  Object Type: TABLE_EXPORT/
  Completed Objects: 4
  Total Objects: 192
  Worker Parallelism: 1

Worker 3 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW02
  State: EXECUTING
  Object Schema: TPK
  Object Name: FCD06
  Object Type: TABLE_EXPORT/
  Completed Objects: 4
  Total Objects: 192
  Worker Parallelism: 1

Worker 4 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW03
  State: EXECUTING
  Object Schema: TPK
  Object Name: CBD10
  Object Type: TABLE_EXPORT/
  Completed Objects: 4
  Total Objects: 192
  Worker Parallelism: 1

Export>
</code>

==== Monitor Data Pump sessions ====
<code>
select
from   v$session s
,      dba_datapump_sessions dps
where  s.saddr = dps.saddr;
</code>
Once the session is identified, you can see the parallel query slave sessions spawned off by the main Data Pump session by querying the V$PX_SESSION view:
<code>
select *
from   v$px_session
where  qcsid = <DP Session ID>;
</code>
==== How long will the export take? ====
Data Pump sends information about its progress to the longops view and can be consulted thus:
<code>
select sid
,      serial#
,      sofar
,      totalwork
from   v$session_longops
where  1=1
and    opname = '<job_name>'
and    sofar != totalwork
/
</code>
or
<code>
select vsl.*
from   v$session_longops vsl
where  1=1
and    (vsl.sid,
and    vsl.sofar != vsl.totalwork
/
</code>
or
<code>
set serveroutput on
DECLARE
  ind NUMBER;
  h1 NUMBER;
  percent_done NUMBER;
  job_state VARCHAR2(30);
  js  ku$_JobStatus;
  ws  ku$_WorkerStatusList;
  sts ku$_Status;
BEGIN
  h1 := DBMS_DATAPUMP.attach('
  dbms_datapump.get_status(h1,
  js := sts.job_status;
  ws := js.worker_status_list;
  dbms_output.put_line('
  dbms_output.put_line('
  ind := ws.first;
  while ind is not null loop
    dbms_output.put_line('
    ind := ws.next(ind);
  end loop;
  DBMS_DATAPUMP.detach(h1);
end;
/
</code>
==== Debugging a Data Pump session ====
While exporting/importing, we can dig deeper into the session using the session id.
<code>
set lines 150 pages 100
col program for a30
col username for a15
col spid for a7
col job_name for a25
select to_char (sysdate, '
,      s.program
,      s.sid
,      s.status
,      s.username
,      dds.job_name
,      p.spid
,      s.serial#
,      p.pid
from   v$session s
,      v$process p
,      dba_datapump_sessions dds
where  p.addr  = s.paddr
and    s.saddr = dds.saddr
/
</code>
<code>
select sw.sid
,      sw.event
,      sw.seconds_in_wait
from   v$session s
,      dba_datapump_sessions dds
,      v$session_wait sw
where  dds.saddr = s.saddr
and    s.sid = sw.sid
/
</code>
<code>
execute sys.dbms_system.set_ev (&
</code>
then go to the user dump directory and use tkprof to read the trace file.
==== Enable logging during a datapump impdp/expdp ====
As of 12c, much better logging is available:
<code>
expdp / full=y dumpfile=test.dmp logfile=test.log logtime=all
</code>
This prefixes all messages in the logfile with a timestamp.
==== Enabling SQL trace ====
Cannot simply use:
<code>
alter session set sql_trace=true;
</code>
as Data Pump starts another session. An event has to be set:
<code>
select sid
,      serial#
,      username
,      program
from   v$session
where  1=1
and    upper(program) like '
or
/
</code>
then use the sid and serial# to set the event:
<code>
exec dbms_system.set_ev(<
</code>
Trace files will be in user_dump_dest.
For the full beans on what Data Pump can do, visit [[http://
==== Restore a schema from a dumpfile into a new schema so that the user can pick his objects ====
  * Restore the relevant file from [[TSM]]
  * Connect to the database as sysdba and create a tablespace to hold the recovered data
<code>
create tablespace temp_recovery
  datafile '<path_to_datafile>/temp_recovery_01.dbf'
  size 100M autoextend on next 100M maxsize 20000M
  extent management local
  segment space management auto;
</code>
Create a new schema to put the recovered data in
<code>
create user temp_recovery identified by temp_recovery default tablespace temp_recovery temporary tablespace temp;
grant connect, resource to temp_recovery;
</code>
Create a par file to remap the schema and tablespace
<code>
vi impdp_schema.par

directory=DATA_PUMP_DIR
dumpfile=<dump_file_name>.dmp
logfile=impdp_schema.log
job_name=impdp_schema
schemas=SCHEMA_TO_RECOVER
remap_schema=SCHEMA_TO_RECOVER:TEMP_RECOVERY
remap_tablespace=USERS:TEMP_RECOVERY
table_exists_action=skip
</code>
Run the import
<code>
impdp \'/ as sysdba\' parfile=impdp_schema.par
</code>
==== Restore schemas from dump. Clean out schema objects first. ====
You can also drop the schema, but then you lose all grants.

Check we have the dump files...
<code>
ll
-rw-r----- ...
-rw-r----- ...
-rw-r----- ...
</code>
Backup the schemas first, just in case...
<code>
cat expdp_just_in_case.ksh
expdp \'/ as sysdba\' \
DIRECTORY=DATA_PUMP_DIR \
schemas=GPCOMP4,GPCOMP6 \
dumpfile=expdp_GPTASKP2_JUST_IN_CASE_201310181500.dmp \
job_name=impdp_GPTASKP2_GPCOMP4_GPCOMP6 \
logfile=expdp_GPTASKP2_GPCOMP4_GPCOMP6.log
</code>
Clean out the schemas to be imported...
<code>
#!/bin/ksh
# ==============================================================================
# Name        : userman_clean_out_schema.ksh
# Description : Generates drop statements for every object owned by a schema,
#               optionally executing them, so the schema can be reloaded
#               without losing its grants.
#
# Parameters  : 1 - -u <username> of the schema to clean out
#               2 - -e if you want the schema to be wiped automatically
#
# Notes       : Generates an SQL file and an execution spool file
#
# Modification History
# ====================
# When      Who               What
# ========= ================= ==================================================
# 09-NOV-13 Stuart Barkley    ...
# 06-JAN-14 Stuart Barkley    ...
# 23-JAN-14 Stuart Barkley    ...
# 25-APR-14 Stuart Barkley    ...
# ==============================================================================

ts=`date +"%Y%m%d%H%M%S"`

# ----------------------------
# get any command line options
# ----------------------------
unset USERNAME

DOIT="N"
while getopts "u:e" opt
do
  case "$opt" in
    u) USERNAME=$OPTARG;;
    e) DOIT="Y";;
  esac
done
shift $((OPTIND-1))


# --------------------------
# check mandatory parameters
# --------------------------
if [[ -z $USERNAME ]]; then
  echo ""
  echo "You must supply the schema to clean out with -u <username>"
  echo "Usage: userman_clean_out_schema.ksh -u <username> [-e]"
  echo ""
  exit 1
fi

sqlplus -s / as sysdba <<EOSQL
set pages 0
set feedb off
set verif off
spool userman_clean_out_${USERNAME}.sql
select 'drop ' || object_type || ' ' || owner || '.' || object_name || ';'
from   dba_objects
where  1=1
and    owner = upper('${USERNAME}')
order by object_type
,        object_name
/
spool off
EOSQL

if [[ "$DOIT" = "Y" ]]; then
  echo "wiping schema ${USERNAME}..."
  sqlplus -s / as sysdba <<EOSQL
spool userman_clean_out_${USERNAME}_${ts}.lst
set termo on echo on feedb on
start userman_clean_out_${USERNAME}.sql

purge recyclebin;
spool off
EOSQL
  echo "done wiping user"
else
  echo "To execute this automatically, rerun with the -e flag,"
  echo "or you can manually run the sql file I just generated: userman_clean_out_${USERNAME}.sql"
fi
</code>
Import the required data...
<code>
cat impdp_GPTASKP2_GPCOMP4_GPCOMP6.ksh
impdp \'/ as sysdba\' \
DIRECTORY=DATA_PUMP_DIR \
schemas=GPCOMP4,GPCOMP6 \
dumpfile=expdp_GPTASKP2_D_FULL_20131017210003_%u.dmp \
job_name=impdp_GPTASKP2_GPCOMP4_GPCOMP6 \
logfile=impdp_GPTASKP2_GPCOMP4_GPCOMP6.log
</code>
=== Alternatively, generate the DDL to recreate the schema ===
Run this **before** dropping the schema to generate the DDL necessary to recreate it (replace ''<SCHEMA>'' with the schema name)
<code>
sqlplus -s /nolog <<EOSQL
connect / as sysdba
whenever sqlerror continue
set hea off echo off pages 0 newp none long 99999 lin 2000 trims on feed off veri off

execute dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SQLTERMINATOR',true);
spool recreate_schema.sql

select to_char( dbms_metadata.get_ddl         ( 'USER',             '<SCHEMA>' ) ) from dual;
select to_char( dbms_metadata.get_granted_ddl ( 'SYSTEM_GRANT',     '<SCHEMA>' ) ) from dual;
select to_char( dbms_metadata.get_granted_ddl ( 'ROLE_GRANT',       '<SCHEMA>' ) ) from dual;
select to_char( dbms_metadata.get_granted_ddl ( 'OBJECT_GRANT',     '<SCHEMA>' ) ) from dual;
select to_char( dbms_metadata.get_granted_ddl ( 'TABLESPACE_QUOTA', '<SCHEMA>' ) ) from dual;


-- db_links do not work properly with dbms_metadata if they are for a different schema
select 'create database link "' || l.name   || '"'  ||
       ' connect to "'          || l.userid || '"'  ||
       ' identified by "'       || '<password>'     || '"' ||  -- password cannot be recovered
       ' using '''              || l.host   || '''' ||
       ';'
from   sys.link$ l
,      sys.user$ u
where  1=1
and    l.owner# = u.user#
and    u.name   = '<SCHEMA>';

spool off
EOSQL
</code>

==== Export database links for a schema ====
<code>
expdp userid=\'/ as sysdba\' directory=DATA_PUMP_DIR schemas=<SCHEMA> include=DB_LINK dumpfile=expdp_db_links.dmp logfile=expdp_db_links.log
</code>
The dump and log file names here are illustrative; the key parameter is ''include=DB_LINK'' in schema mode.

=== References ===
  * Master Note for Data Pump [[ID 1264715.1]]
  * Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) [[ID 453895.1]]
  * Parallel Capabilities of Oracle Data Pump [[ID 365459.1]]
  * [[http://...]]
