====== Datapump ======

  * [[http://jensenmo.blogspot.be/2012/10/optimising-data-pump-export-and-import.html|Morten Jensen's tips for Optimising Data Pump Export and Import Performance]]
  * [[http://oradbastuff.blogspot.co.uk/2011/11/expdp-slow-after-upgrade-from-11106-to.html|expdp slow after upgrade]]
  * [[https://medium.com/@FranckPachot/passwordless-data-pump-19c-b21cd1e00c16|Passwordless Data Pump 19c using Oracle Wallet]]
  
==== gzip the output of expdp directly using a named pipe ====
This saves space, as the dump file never has to exist uncompressed on disk before being gzipped.

Create a pipe
<code>
mknod expdp.pipe p
</code>

Set up the pipe in the background so that gzip compresses whatever is sent to it
<code>
gzip < expdp.pipe > expdp_database.dmp.gz &
</code>

Feed the pipe
<code>
expdp pfile=expdp_full_db.par dumpfile=expdp.pipe
</code>

Once the export has finished, remove the pipe
<code>
rm -f expdp.pipe
</code>

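The same plumbing can be demonstrated without Oracle at all; in this sketch, echo stands in for expdp as the writer, and the file names are made up:

```shell
rm -f demo.pipe demo.dmp.gz      # start clean
mkfifo demo.pipe                 # mkfifo is the modern equivalent of 'mknod ... p'

# reader side: gzip compresses whatever arrives on the pipe,
# so the uncompressed data never lands on disk
gzip > demo.dmp.gz < demo.pipe &

# writer side: in real life this is expdp writing its dump
echo "dump file contents" > demo.pipe
wait                             # let the background gzip drain the pipe and exit

gunzip -c demo.dmp.gz            # prints "dump file contents"
rm -f demo.pipe
```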
==== Use create table as select with Datapump driver ====
If you have a complex extract, that would mean using the query parameter like this
<code>
userid='sys/sys as sysdba'
dumpfile=exp_bgaap%U_20200101_to_20200723.dmp
reuse_dumpfiles=y
logfile=exp_bgaap_20200101_to_20200723.log
directory=data_pump_dir
parallel=5
filesize=6G
exclude=statistics
metrics=y
query='ads_archive.fah_day_cba_bgaap_arc:"where ku$.treatment_date >= to_date(''01-JAN-2020'',''DD-MON-YYYY'')"'
query=ads_dba.fah_day_cba_bgaap:"where fah_day_cba_bgaap.treatment_date >= to_date('01-JAN-2020','DD-MON-YYYY')"
</code>
Consider instead something like this
<code>
create table ads_archive_to_20200724
  organization external
    (type oracle_datapump default directory data_pump_dir location ('exp_bgaap_20200101_to_20200723.dmp'))
  as
  select * from ads_archive.fah_day_cba_bgaap_arc f where f.treatment_date >= to_date('01-JAN-2020','DD-MON-YYYY')
  union all
  select * from ads_dba.fah_day_cba_bgaap f where f.treatment_date >= to_date('01-JAN-2020','DD-MON-YYYY');
</code>
or just create a table as select... and use Data Pump with %U to export it into several files.

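A sketch of that last alternative (all object and file names here are made up for illustration): stage the filtered rows once with a plain CTAS, then export the staging table in parallel pieces:

```sql
-- stage the filtered rows once
create table ads_dba.bgaap_to_20200724 as
select * from ads_dba.fah_day_cba_bgaap f
where  f.treatment_date >= to_date('01-JAN-2020','DD-MON-YYYY');

-- then export the staging table; %U numbers the parallel files
-- expdp userid=... tables=ads_dba.bgaap_to_20200724 directory=data_pump_dir \
--       dumpfile=exp_bgaap_staged_%U.dmp parallel=4 logfile=exp_bgaap_staged.log
```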
==== Use the flashback parameter to export and import a consistent dump ====

This performs a consistent dump of 2 schemas.
Keeping the parameters in a file allows them to be reused for the import:

<code>
expdp system/***** parfile=db_params.par
</code>

where db_params.par would look like this:
<code>
flashback_time=systimestamp
directory=DATA_PUMP_DIR
...
logfile=db_schemas.log
schemas=(apps,test)
</code>

The SCN that most closely matches the specified time(stamp) is found, and this SCN is used to enable the Flashback utility.
The export operation is performed with data that is consistent as of this SCN.
By default expdp is consistent only for the table it is currently exporting. Use 'flashback_time=systimestamp' to simulate the old export 'consistent=y'.
  
==== Typical export with Data Pump ====
This method is useful to dynamically build a script.
Note the %u to generate the parallel file names...
<code>
SID=$1
TIMESTAMP=`date +'%Y%m%d%H%M%S'`
FLASHBACK_TO="to_timestamp('13-02-2018 13:35:00', 'DD-MM-YYYY HH24:MI:SS')"

expdp "/"                                                  \
dumpfile=expdp_${SID}_D_FULL_${TIMESTAMP}_%u.dmp           \
logfile=expdp_${SID}_D_FULL_${TIMESTAMP}.log               \
job_name=expdp_${SID}_D_FULL_${TIMESTAMP}                  \
flashback_time="${FLASHBACK_TO}"                           \
directory=data_pump_dir                                    \
reuse_dumpfiles=y                                          \
parallel=8                                                 \
full=y                                                     \
exclude=statistics
</code>

Sometimes I notice that flashback_time gives errors related to the date format (maybe a bug in 12.1.0.2). If this is the case, find the SCN for the timestamp with
<code>
select timestamp_to_scn(to_timestamp('12-08-2021 13:35:00', 'DD-MM-YYYY HH24:MI:SS')) scn from dual;
</code>
and use
<code>
flashback_scn=<scn>
</code>
  
Reduce the amount of data exported/imported with clauses like these...
<code>
exclude=table:"in (select table_name from dba_tables where owner = 'eucvdta'  and table_name like 'F4%')"
exclude=table:"in (select table_name from dba_tables where owner = 'eucrpdta' and table_name like 'F4%')"
query=prd1.op_log:'"where trunc(creation_date) > trunc(sysdate-7)"'
QUERY=SIM_HEADER:"WHERE run_time > (SELECT TRUNC(MAX(run_time))-7 FROM sim_header) OR run_time IN (SELECT TRUNC(run_time, 'YEAR')-1 FROM sim_header)"
QUERY=SIM_BLOB:"WHERE sim_run_id > (SELECT MIN(sim_run_id) FROM sim_header WHERE run_time > (SELECT TRUNC(MAX(run_time))-7 FROM sim_header)) OR sim_run_id IN (SELECT sim_run_id FROM sim_header WHERE run_time IN (SELECT TRUNC(run_time, 'YEAR')-1 FROM sim_header))"
QUERY=SIM_HEADER_DETAILS:"WHERE sim_run_id > (SELECT MIN(sim_run_id) FROM sim_header WHERE run_time > (SELECT TRUNC(MAX(run_time))-7 FROM sim_header)) OR sim_run_id IN (SELECT sim_run_id FROM sim_header WHERE run_time IN (SELECT TRUNC(run_time, 'YEAR')-1 FROM sim_header))"
QUERY=SIM_DETAILS_BLOB:"WHERE unique_sim_key IN (SELECT unique_sim_key FROM sim_header_details WHERE sim_run_id > (SELECT MIN(sim_run_id) FROM sim_header WHERE run_time > (SELECT TRUNC(MAX(run_time))-7 FROM sim_header)) OR sim_run_id IN (SELECT sim_run_id FROM sim_header WHERE run_time IN (SELECT TRUNC(run_time, 'YEAR')-1 FROM sim_header)))"
</code>
  
Could be made neater by using a parfile...
<code>
expdp / parfile=/home/ibmtools/etc/oracle/nhapplp1_full.parfile
</code>
and the parfile could look like this
<code>
full=y
reuse_dumpfiles=y
compression=all
exclude=statistics
parallel=6
</code>
  
==== Datapump pitfalls/gotchas ====
  * Exclude a single table from a full export
If you have a par file like this and expect it to work, you will be disappointed:
<code>
exclude=table:"in ('scott.emp')"
full=y
</code>
The emp table will be exported!
To get this to work, you need to use a trick...
<code>
query=scott.emp:"where rownum < 1"
full=y
</code>

  * exclude and include are conceptually different...
<code>
exclude=function:"in ('scott.raise_sal')"
</code>
is not the same as
<code>
include=function:"not in ('scott.raise_sal')"
</code>
The first one does what you think it will do... it exports all objects except a function called raise_sal owned by scott.
The second one, convoluted though it is, will not do what you might think. It exports all functions (and only functions) except raise_sal owned by scott.
  
==== Get more detailed information while Data Pump is working ====
To get an idea of how long it takes to export each object, add this to the parfile (or command line)
<code>
metrics=y
</code>
==== Keep the master table after a successful data pump session ====
To see what went on in the master table, add this to the parfile (or command line)
<code>
keep_master=y
</code>
The master table contains all the data pump log messages.
It is used to track the detailed progress of a Data Pump job - which is more than the log messages.
Amongst other things it contains:
  * Completed rows of a table.
  * Total number of errors during the data pump operation.
  * Elapsed time for each table in the data pump export/import operation.
  * The current set of dump files.
  * The current state of every object exported or imported and their locations in the dump file set.
  * The job's user-supplied parameters.
  * The status of every worker process.
  * The current job state and restart information.
  * The dump file location and directory name information.
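With keep_master=y the master table survives under the job name, in the schema of the user that ran the job. A sketch of peeking into it (the job name sys_export_full_01 is just an example, and the column names can vary a little between versions):

```sql
-- one row per object handled by the job, with its processing state
select object_schema, object_name, object_type, process_order, processing_state
from   system.sys_export_full_01
where  process_order > 0
order  by process_order;
```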
==== Trace the Data Pump job ====
  * More info here: [[https://dbasolutions.wikispaces.com/Expdp+-+Trace|https://dbasolutions.wikispaces.com/Expdp+-+Trace]]
Tracing can be enabled by specifying a 7-digit hexadecimal mask in the TRACE parameter of expdp or impdp.
The first three digits enable tracing for a specific Data Pump component, while the last four digits are usually: 0300.
Any leading zeros can be omitted, and the value specified for the TRACE parameter is not case sensitive.
TRACE does not add anything to the output of Data Pump, it creates additional trace files.
Add TRACE=<hex digits below> to the parfile (or command line)
<code>
Here is a summary of the Data Pump trace levels:
  10300 SHDW: To trace the Shadow process
  20300 KUPV: To trace Fixed table
  40300 'div' To trace Process services
  80300 KUPM: To trace Master Control Process
 100300 KUPF: To trace File Manager
 200300 KUPC: To trace Queue services
 400300 KUPW: To trace Worker process(es)
 800300 KUPD: To trace Data Package
1000300 META: To trace Metadata Package
1FF0300 'all' To trace all components, full tracing
</code>
=== To trace an already running export job ===
<code>
expdp system/manager attach=sys_export_full_01
Export> stop_job
expdp system/manager attach=sys_export_full_01 trace=480300
Export> continue_client
</code>
  
==== Check to see if the objects can be recovered from the recyclebin ====
This would save time restoring from tape/dumps
<code>
select original_name,ts_name,operation,droptime,can_undrop from dba_recyclebin where owner='PRODDTA' order by droptime;
</code>
<code>
flashback table proddta.F5642026 to before drop;
</code>
Check the indexes and restore as necessary...
<code>
select index_name from dba_indexes where owner = 'PRODDTA' and table_name = 'F5642026';
alter index proddta."bin$dbo9uchtzsbgqfemiadccq==$0" rename to F5642026_IX;
</code>
  
==== Export tables from a schema ====
<code>
expdp \'/ as sysdba\' directory=data_pump_dir tables="applsys.fnd_user, applsys.fnd_oracle_userid" dumpfile=fnd_passwords.dmp logfile=fnd_passwords.log exclude=statistics
</code>
==== Import tables from a dump file into a schema ====
<code>
impdp \'/ as sysdba\' parfile=import_tables.par
</code>
where import_tables.par looks something like this
<code>
directory=DATA_PUMP_DIR_JDBEOP1
dumpfile=expdp_JDBEOP1_D_PRODDTA_20131007200000_%U.dmp
logfile=RESTORE_20131012.log
job_name=RESTORE_20131012
tables=(PRODDTA.F47032,PRODDTA.F47031,PRODDTA.F47022,PRODDTA.F47021,PRODDTA.F4311Z1,PRODDTA.F4301Z1)
remap_schema=PRODDTA:DMIRROR
remap_tablespace=PRODDTAT:RESTORE_TS_2013
remap_tablespace=PRODDTAI:RESTORE_TS_2013
table_exists_action=replace
</code>
where directory has already been created with
<code>
create or replace directory DATA_PUMP_DIR_JDBEOP1 as '/oracle/export/ubenoa26_jdbeop1';
grant read,write on directory DATA_PUMP_DIR_JDBEOP1 to OPS$IMPDP;
</code>
=== without a par file and with a transform to ignore table creation attributes ===
<code>
impdp \'/ as sysdba\'                        \
    directory=DATA_PUMP_DIR                  \
    tables=SY812.F00950,SY812.F95921         \
    transform=segment_attributes:n           \
    parallel=5                               \
    remap_schema=SY812:QAC_REPORT_TEST       \
    remap_tablespace=SY812T:QAC_REPORT_TEST  \
    dumpfile=expdp_JDB_20140531200001_%u.dmp \
    job_name=impdp_SY812                     \
    logfile=impdp_JDBEOP1_SY812.log
</code>
=== replace existing tables in an existing schema ===
<code>
impdp \'/ as sysdba\'                                      \
    directory=data_pump_dir                                \
    tables=proddta.F5642026,proddta.F596111                \
    transform=segment_attributes:n                         \
    parallel=5                                             \
    table_exists_action=replace                            \
    dumpfile=expdp_JDBEOP1_D_PRODDTA_20140725210000_%u.dmp \
    job_name=impdp_JDBEOP1_D_PRODDTA_F5642026_F596111      \
    logfile=impdp_JDBEOP1_D_PRODDTA_F5642026_F596111.log
</code>
=== import existing tables into a different schema ===
<code>
impdp \'/ as sysdba\'                                                          \
directory=data_pump_dir                                                        \
tables=DWH.FACT_OPEX,DWH.FACT_OPEX_PTC,DWH.FACT_AFIN_SCEN,DWH.FACT_AFIN_PTC    \
remap_schema=DWH:_kantal                                                       \
remap_tablespace=USERS:TOOLS,INDX:TOOLS                                        \
dumpfile=backup_export_10g_CODWHP1_20151104210000_%u.dmp                       \
job_name=import_DWH_TABLES                                                     \
logfile=import_DWH_TABLES.log
</code>
=== import existing partitioned tables into a different schema ===
<code>
impdp \'/ as sysdba\'                                                          \
directory=data_pump_dir                                                        \
tables=TIP_PRD1.EODMARKETPNL:SYS_P1751,TIP_PRD1.PHYSICALPOSITION:SYS_P16591    \
remap_schema=DWH:_kantal                                                       \
remap_tablespace=USERS:TOOLS,INDX:TOOLS                                        \
dumpfile=backup_export_10g_CODWHP1_20151104210000_%u.dmp                       \
job_name=import_DWH_TABLES                                                     \
logfile=import_DWH_TABLES.log
</code>

Export a single table partition
<code>
expdp \"/ as sysdba\" directory=data_pump_dir dumpfile=CGD30IOT_P20_%U.dmp logfile=CGD30IOT_P20.log parallel=4 exclude=statistics tables=EPK.CGD30IOT:P20
</code>
And import it into another database remapping the schema
<code>
impdp \"/ as sysdba\" directory=data_pump_dir dumpfile=CGD30IOT_P20_%U.dmp logfile=impdp_CGD30IOT_P20.log parallel=4 tables=EPK.CGD30IOT:P20 remap_schema=EPK:APK
</code>

==== Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX ====
The import is taking a long time and seems stuck on this line. Is it waiting for something, or is it just slow?
Data Pump does not import indexes; it recreates them using create index statements. This can be slow, especially as all the metadata is handled by just 1 worker.
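One way to see whether that single worker is actually making progress is v$session_longops; a sketch (to narrow the output to the Data Pump session, join to dba_datapump_sessions):

```sql
-- long-running operations (such as index builds) report their progress here
select sl.sid, sl.opname, sl.target, sl.sofar, sl.totalwork,
       round(100 * sl.sofar / sl.totalwork) pct_done
from   v$session_longops sl
where  sl.totalwork > 0
and    sl.sofar < sl.totalwork;
```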
=== Check the status of the job ===
<code>
select * from dba_datapump_jobs;
</code>
=== Check to see if the impdp session is waiting for resources ===
<code>
select vsw.*
from   dba_datapump_sessions dds
,      v$session vs
,      v$session_wait vsw
where  dds.saddr = vs.saddr
and    vs.sid    = vsw.sid
;
</code>
=== Check dba_resumable for data pump jobs ===
<code>
select * from dba_resumable;
</code>
=== Check alert log for possible error/warning messages ===

==== Find out what Datapump jobs are running ====
<code>
SET lines 200
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state

SELECT job_name
,      owner_name
,      operation
,      job_mode
,      state
,      attached_sessions
FROM   dba_datapump_jobs
WHERE  1=1
and    job_name NOT LIKE 'BIN$%'
ORDER  BY 2,1
;
</code>
<code>
JOB_NAME             OWNER_NAME OPERATION   JOB_MODE    STATE       ATTACHED_SESSIONS
-------------------- ---------- ----------- ----------- ----------- -----------------
EXP_20131018100328   OPS$ORADP  EXPORT      FULL        EXECUTING                   1

SQL>
</code>
  
==== Kill a datapump job with PL/SQL ====
<code>
SET serveroutput on
SET lines 100
DECLARE
    l_handle NUMBER;
BEGIN
    l_handle := DBMS_DATAPUMP.ATTACH('EXP_20131018100328','OPS$ORADP');
    DBMS_DATAPUMP.STOP_JOB (l_handle,1,0);
END;
/
</code>
==== Connect to a running export Datapump job from command line ====
<code>
expdp / attach="expdp_JDBEOT21_D_FULL_20130228143835"
</code>
or
<code>
expdp \'/ as sysdba\' attach="expdp_JDBEOT21_D_FULL_20130228143835"
</code>
  
==== Suspend a running data pump job ====
At the interactive Export> prompt (after attaching):
<code>
stop_job
</code>
==== Restart a previously suspended data pump job ====
<code>
start_job
</code>
==== Terminate a data pump job ====
Also kills the operating system processes
<code>
kill_job
</code>
  
==== Drop erroneous Data Pump master tables ====
<code>
select 'DROP TABLE '||owner||'.'||table_name||' purge;'
from   dba_tables
where  1=1
and    table_name like '%SYS%EXPORT%'
;
</code>
and
<code>
set pages 100 lines 200
select 'drop table "' || o.owner||'"."'||object_name || '" purge;'
from   dba_objects o
,      dba_datapump_jobs j
where  1=1
and    o.owner       = j.owner_name
and    o.object_name = j.job_name
and    j.state      != 'EXECUTING'
;
</code>
  
==== Did the import work? ====
Moving a database with Data Pump? Check the number of rows imported against those exported!
I noticed that sometimes - maybe with tables with BLOB/LOB columns? - the import does not import all rows...
eg:
<code>
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "WM712"."BIZDOCCONTENT"                     104.8 GB       0 out of 10389777 rows
</code>
Run this on the import logfile to check for this condition
<code>
perl -ne 'if (m/(\d+) out of (\d+) rows/) { print if $1 != $2; }' import_WM712P.log
</code>
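The same check can be done with awk alone; a sketch against a made-up sample logfile:

```shell
# Hypothetical import logfile content, for demonstration only
cat > sample_import.log <<'EOF'
. . imported "WM712"."BIZDOCCONTENT"   104.8 GB        0 out of 10389777 rows
. . imported "WM712"."SMALLTABLE"        1.2 MB      500 out of 500 rows
EOF

# Print only the lines where the imported row count differs from the source count
awk '/out of .* rows/ {
  for (i = 1; i <= NF; i++)
    if ($i == "out" && $(i+1) == "of" && $(i-1) != $(i+2)) { print; break }
}' sample_import.log
```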
or a more thorough approach... count the number of records in all tables both locally and remotely for comparison.
Change the value of l_remote_tns. A file in csv format is created in DATA_PUMP_DIR.
<code>
set serveroutput on
declare
    cursor c_tables is
    select owner
    ,      table_name
    from   dba_tables
    where  1=1
    and    tablespace_name not in ('SYSTEM','SYSAUX','TEMP')
    ;
    l_remote_tns       varchar2(100) := 'wm712windows';
    l_sql              varchar2(240) := null;
    l_local_row_count  number        := 0;
    l_remote_row_count number        := 0;
    f_file             utl_file.file_type;
begin
    dbms_output.enable(null);
    f_file := utl_file.fopen('DATA_PUMP_DIR','table_row_count.csv','W');
    utl_file.put_line (f_file, 'Table;Local;Remote');
    for r_tables in c_tables
    loop
        l_sql := 'select count(1) from '||r_tables.owner||'.'||r_tables.table_name;
        execute immediate l_sql into l_local_row_count;
        l_sql := 'select count(1) from '||r_tables.owner||'.'||r_tables.table_name||'@'||l_remote_tns;
        l_remote_row_count := 0;
        begin
            execute immediate l_sql into l_remote_row_count;
        exception
        when others then
            null;
        end;
        utl_file.put_line (f_file, r_tables.owner||'.'||r_tables.table_name||';'||l_local_row_count||';'||l_remote_row_count);
    end loop;
    utl_file.fclose (f_file);
exception
when others then
    dbms_output.put_line('Problem: '||sqlerrm);
end;
/
</code>
  
==== Import into schemas and tablespaces other than those contained in the export dump file ====
These will have to be manually pre-created
<code>
CREATE USER MACHPENAL PROFILE "DEFAULT" IDENTIFIED BY ****** DEFAULT TABLESPACE "ACCMACH" TEMPORARY TABLESPACE "MACH_TEMP" QUOTA UNLIMITED ON "ACCMACH" QUOTA UNLIMITED ON "MACHX_TBS" ACCOUNT UNLOCK;
GRANT ALTER SESSION TO "MACHPENAL";
...
GRANT "CONNECT" TO "MACHPENAL";
GRANT "MACHP_ROLE" TO "MACHPENAL";
</code>

<code>
impdp system/****** parfile=impdp_prd_accmach.par
</code>
where parfile looks something like this:
<code>
SCHEMAS=(CHEOPSPENAL,CHEOPSCIVIL,CHEOPSTECH,CHEOPSFIX)
#INCLUDE=POST_TABLE_ACTION
...
#INCLUDE=PROCACT_SCHEMA
EXCLUDE=STATISTICS
REMAP_TABLESPACE=CHEOPSCIVIL_AXYLIS:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_DEFAULT:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPABL:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL1:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL2:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPAND:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPARL:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPATH:ACCMACH
...
REMAP_TABLESPACE=CHEOPSCIVIL_VGVRN:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGVVD:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLB:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLW:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWRG:ACCMACH
REMAP_SCHEMA=CHEOPSFIX:MACHFIX
...
DIRECTORY=expdp_dir
PARALLEL=8
</code>
  
-=====Change parallelism after expdp has started=====+==== Change parallelism after expdp has started ====
 Export started with 1 worker but taking a long time? Change the number of parallel workers! Export started with 1 worker but taking a long time? Change the number of parallel workers!
-<code>39@@</code> +<code> 
-<code>40@@</code> +expdp \\'\\as sysdba\\' attach=SYS_EXPORT_TABLE_01
-<code>41@@</code>+
  
-=====Monitor Data Pump sessions=====+Export: Release 12.1.0.2.0 - Production on Mon Oct 1 15:44:41 2018 

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Job: SYS_EXPORT_TABLE_01
  Owner: SYS
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 772B5F839E8000ECE0530A085A3842B1
  Start Time: Monday, 01 October, 2018 14:59:20
  Mode: TABLE
  Instance: clnt
  Max Parallelism:
  Timezone: +02:00
  Timezone version: 18
  Endianness: BIG
  NLS character set: AL32UTF8
  NLS NCHAR character set: AL16UTF16
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        /******** AS SYSDBA parfile=/oracle/scripts/expdp_clnt_tables.par
  State: EXECUTING
  Bytes Processed: 73,116,744,528
  Percent Done: 50
  Current Parallelism:
  Job Error Count: 0
  Dump File: /cln/tst/ora_data1/clnt/archivelog/clnt_tables_20180926.dmp
    bytes written: 73,117,818,880

Worker 1 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW00
  State: EXECUTING
  Object Schema: TPK
  Object Name: EID01
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 5
  Total Objects: 192
  Completed Rows: 835,298
  Worker Parallelism:
</code>
<code>
Export> parallel=4

Export> status
</code>
<code>
Job: SYS_EXPORT_TABLE_01
  Operation: EXPORT
  Mode: TABLE
  State: EXECUTING
  Bytes Processed: 73,116,744,528
  Percent Done: 50
  Current Parallelism:
  Job Error Count: 0
  Dump File: /cln/tst/ora_data1/clnt/archivelog/clnt_tables_20180926.dmp
    bytes written: 73,117,818,880


Worker 1 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW00
  State: EXECUTING
  Object Schema: TPK
  Object Name: CCD01
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 10
  Total Objects: 192
  Completed Rows: 317,968
  Worker Parallelism:

Worker 2 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW01
  State: EXECUTING
  Object Schema: TPK
  Object Name: BCD18
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 4
  Total Objects: 192
  Worker Parallelism:

Worker 3 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW02
  State: EXECUTING
  Object Schema: TPK
  Object Name: FCD06
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 4
  Total Objects: 192
  Worker Parallelism:

Worker 4 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW03
  State: EXECUTING
  Object Schema: TPK
  Object Name: CBD10
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 4
  Total Objects: 192
  Worker Parallelism:

Export>
</code>
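Once attached, the interactive-mode prompt accepts several other commands besides "status" and "parallel". These are the standard expdp interactive-mode commands:
<code>
Export> status=60
Export> stop_job=immediate
Export> start_job
Export> kill_job
Export> continue_client
Export> exit_client
</code>
"stop_job=immediate" pauses the job so it can be resumed later with "start_job" after re-attaching; "kill_job" aborts the job and drops its master table; "exit_client" leaves the job running in the background, while "continue_client" switches back to logging mode. "status=60" refreshes the status display every 60 seconds.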

==== Monitor Data Pump sessions ====
<code>
select s.sid
,      s.serial#
from   v$session s
,      dba_datapump_sessions dps
where  s.saddr = dps.saddr;
</code>

Once the session is identified, you can see the parallel query slave sessions spawned off by the main Data Pump session by querying the V$PX_SESSION view:
<code>
select sid from v$px_session
where  qcsid = <DP Session ID>;
</code>
  
==== How long will the export take? ====
Data Pump sends information about its progress to the v$session_longops view, which can be consulted thus:
<code>
select sid
,      serial#
,      sofar
,      totalwork
from   v$session_longops
where  1=1
and    opname = '<DP Job Name>'
and    sofar != totalwork
;
</code>
or
<code>
select vsl.*
from   v$session_longops vsl
where  1=1
and    (vsl.sid,vsl.serial#) in (select sid,serial# from dba_datapump_sessions)
and    vsl.sofar != vsl.totalwork
;
</code>
or
<code>
set serveroutput on
DECLARE
  ind NUMBER;              -- Loop index
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  js ku$_JobStatus;        -- The job status from get_status
  ws ku$_WorkerStatusList; -- Worker status
  sts ku$_Status;          -- The status object returned by get_status
BEGIN
  h1 := DBMS_DATAPUMP.attach('&JOB_NAME', '&OWNER'); -- job name and owner
  dbms_datapump.get_status(h1,
             dbms_datapump.ku$_status_job_error +
             dbms_datapump.ku$_status_job_status +
             dbms_datapump.ku$_status_wip, 0, job_state, sts);
  js := sts.job_status;
  ws := js.worker_status_list;
  dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done));
  dbms_output.put_line('restarts - ' || js.restart_count);
  ind := ws.first;
  while ind is not null loop
    dbms_output.put_line('rows completed - ' || ws(ind).completed_rows);
    ind := ws.next(ind);
  end loop;
  DBMS_DATAPUMP.detach(h1);
END;
/
</code>
  
==== Debugging a Data Pump session ====
While exporting/importing, the Datapump job can sometimes appear to hang. What is it doing?
Firstly, we can add "METRICS=Y" to the parameter file. This gives a bit more logging info.
Next, we can dig deeper into the session using the session id.
<code>
set lines 150 pages 100
col program for a30
col username for a15
col spid for a7
col job_name for a25
select to_char (sysdate, 'DD-MON-YY HH24:MI:SS') timenow
,      s.program
,      s.sid
,      s.status
,      s.username
,      dds.job_name
,      p.spid
,      s.serial#
,      p.pid
from   v$session s
,      v$process p
,      dba_datapump_sessions dds
where  p.addr  = s.paddr
and    s.saddr = dds.saddr
;
</code>
<code>
select sw.sid
,      sw.event
,      sw.seconds_in_wait
from   v$session s
,      dba_datapump_sessions dds
,      v$session_wait sw
where  dds.saddr = s.saddr
and    s.sid     = sw.sid
;
</code>
To trace what a given session is doing, set event 10046 on it:
<code>
execute sys.dbms_system.set_ev (&sid, &serial, 10046, 8, '');
</code>
Then go to the user dump directory and use tkprof to read the trace file.
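For example (the trace file name below is a placeholder; Data Pump worker traces are named after the DW background processes):
<code>
tkprof <ORACLE_SID>_dw00_<spid>.trc dw00_trace.prf sort=exeela,fchela sys=no
</code>
"sort=exeela,fchela" puts the statements with the longest execute and fetch elapsed times first, and "sys=no" suppresses recursive SYS statements.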
  
==== Enable logging during a Datapump impdp/expdp ====
As of 12c, much better logging is available:
<code>
expdp / full=y dumpfile=test.dmp logfile=test.log logtime=all
</code>
This prefixes all messages in the logfile with a timestamp.
  
==== Enabling SQL trace ====
Cannot simply use:
<code>
alter session set sql_trace=true;
</code>
as Data Pump starts another session. An event has to be set. First find the worker (DW) and master (DM) sessions:
<code>
select sid
,      serial#
,      username
,      program
from   v$session
where  1=1
and    (   upper(program) like '%(DW%)'
        or upper(program) like '%(DM%)'
       )
;
</code>
then use the sid and serial# to set the event:
<code>
exec dbms_system.set_ev(<sid>, <serial#>, 10046, 12, '');
</code>
Trace files will be in user_dump_dest.
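On 11g and later the trace files actually land in the ADR; the exact directory can be confirmed with:
<code>
select value
from   v$diag_info
where  name = 'Diag Trace';
</code>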
  
For the full beans on what Data Pump can do, visit [[http://www.rampant-books.com/art_nanda_datapump.htm|Arup Nanda's Data Pump page]]
  
==== Restore a schema from a dumpfile into a new schema so that user can pick his objects ====
  * Restore the relevant file from [[TSM]]
  * Connect to the database as sysdba and create a tablespace to hold the recovered data
<code>
create tablespace temp_recovery
       datafile '/oracle/<ORACLE_SID>/oradata1/temp_recovery.dbf'
       size 100M autoextend on next 100M maxsize 20000M
       extent management local
       segment space management auto;
</code>
Create a new schema to hold the recovered data
<code>
create user temp_recovery identified by temp_recovery default tablespace temp_recovery temporary tablespace temp;
grant connect, resource to temp_recovery;
</code>
  
Create a par file to remap the schema and tablespace
<code>
vi impdp_schema.par

directory=DATA_PUMP_DIR
dumpfile=/oracle/export/<ORACLE_SID>/expdp_<ORACLE_SID>_D_FULL_20120918213006_%U.dmp
logfile=impdp_schema.log
job_name=impdp_schema
schemas=SCHEMA_TO_RECOVER
remap_schema=SCHEMA_TO_RECOVER:temp_recovery
remap_tablespace=USERS:temp_recovery
table_exists_action=skip
</code>
  
Run the import
<code>
impdp \'/ as sysdba\' parfile=impdp_schema.par
</code>
==== Restore schemas from dump. Clean out schema objects first. ====
You can also drop the schema, but then you lose all grants.
Check we have the dump files...
<code>
ll
-rw-r-----   1 GPTASKP2 dba      8540041216 Oct 17 21:17 expdp_GPTASKP2_D_FULL_20131017210003_01.dmp
-rw-r-----   1 GPTASKP2 dba      9391812608 Oct 17 21:17 expdp_GPTASKP2_D_FULL_20131017210003_02.dmp
-rw-r-----   1 GPTASKP2 dba        32161792 Oct 17 21:17 expdp_GPTASKP2_D_FULL_20131017210003_03.dmp
</code>
Backup the schemas just in case...
<code>
cat expdp_just_in_case.ksh
expdp \'/ as sysdba\'                                      \
    DIRECTORY=DATA_PUMP_DIR                                \
    schemas=GPCOMP4,GPCOMP6                                \
    dumpfile=expdp_GPTASKP2_JUST_IN_CASE_201310181500.dmp  \
    job_name=expdp_GPTASKP2_GPCOMP4_GPCOMP6                \
    logfile=expdp_GPTASKP2_GPCOMP4_GPCOMP6.log
</code>
Clean out the schemas to be imported...
<code>
#!/usr/bin/ksh
# ==============================================================================
# Name         : userman_clean_out_schema.ksh
# Description  : Rids a schema of all objects. An empty shell ready for import.
#
# Parameters   : 1 - -u <schema name to clean out>
#                2 - -e if you want the schema to be wiped automatically
#
# Notes        : Generates an SQL file and an execution spool file
#
# Modification History
# ====================
# When      Who               What
# ========= ================= ==================================================
# 09-NOV-13 Stuart Barkley    Created
# 06-JAN-14 Stuart Barkley    Added SYNONYM
# 23-JAN-14 Stuart Barkley    Added -e option and tidied
# 25-APR-14 Stuart Barkley    Create files in /tmp to avoid permissions errors
# ==============================================================================

ts=`date +"%Y%m%d"`

# ----------------------------
# get any command line options
# ----------------------------
unset USERNAME

DOIT="N"
while getopts "u:e" OPT
do
    case "$OPT" in
    u) USERNAME=$OPTARG;;
    e) DOIT="Y";;
    esac
done
shift $((OPTIND-1))


# --------------------------
# check mandatory parameters
# --------------------------
if [[ -z $USERNAME ]]; then
    echo ""
    echo "Usage:  `basename $0` [-e] -u <schemaname to wipe>"
    echo "specifying -e makes the script go ahead and do the wipe automatically"
    echo ""
    exit 1
fi

sqlplus -s / as sysdba <<EOSQL
set pages 0
set feedb off
set verif off
spool /tmp/userman_clean_out_${USERNAME}.sql
select 'drop '||object_type||' "'||owner||'"."'||object_name||'"'||decode(object_type,'TABLE',' CASCADE CONSTRAINTS PURGE;',';') text
from   dba_objects
where  1=1
and    owner       = '$USERNAME'
order  by object_type
,      object_name
;
spool off
EOSQL

if [[ "$DOIT" == "Y" ]]; then
    echo "wiping user..."
    sqlplus -s / as sysdba <<EOSQL
spool /tmp/userman_clean_out_${USERNAME}_run.log
set termo on echo on feedb on
start /tmp/userman_clean_out_${USERNAME}.sql

purge recyclebin;
spool off
EOSQL
    echo "done wiping user"
else
    echo "To execute this automatically, add '-e' as a parameter to the script"
    echo "or you can manually run the sql file I just generated: userman_clean_out_${USERNAME}.sql"
fi
</code>
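For example, to wipe the two schemas used in the import below (assuming the script is in the current directory):
<code>
./userman_clean_out_schema.ksh -u GPCOMP4 -e
./userman_clean_out_schema.ksh -u GPCOMP6 -e
</code>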
Import the required data...
<code>
cat impdp_GPTASKP2_GPCOMP4_GPCOMP6.ksh
impdp \'/ as sysdba\'                                     \
    DIRECTORY=DATA_PUMP_DIR                               \
    schemas=GPCOMP4,GPCOMP6                               \
    dumpfile=expdp_GPTASKP2_D_FULL_20131017210003_%u.dmp  \
    job_name=impdp_GPTASKP2_GPCOMP4_GPCOMP6               \
    logfile=impdp_GPTASKP2_GPCOMP4_GPCOMP6
</code>
=== Alternatively, drop and recreate the schema / user using dbms_metadata get_ddl and get_granted_ddl ===
Run this **before** dropping the schema to generate the DDL necessary to recreate it
<code>
sqlplus -s /nolog <<EOSQL 2>/tmp/results.$$
connect / as sysdba
whenever sqlerror continue
set hea off echo off pages 0 newp none long 99999 lin 2000 trims on feed off veri off

execute dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SQLTERMINATOR',true);
spool /tmp/recreate_schemae_$$.sql

select to_char( dbms_metadata.get_ddl         ( 'USER',         du.username ) ) from dba_users du where du.username in upper('${SCHEMAE}');
select to_char( dbms_metadata.get_granted_ddl ( 'ROLE_GRANT',   du.username ) ) from dba_users du where du.username in upper('${SCHEMAE}');
select to_char( dbms_metadata.get_granted_ddl ( 'DEFAULT_ROLE', du.username ) ) from dba_users du where du.username in upper('${SCHEMAE}');
select to_char( dbms_metadata.get_granted_ddl ( 'SYSTEM_GRANT', du.username ) ) from dba_users du where du.username in upper('${SCHEMAE}');
select to_char( dbms_metadata.get_granted_ddl ( 'OBJECT_GRANT', du.username ) ) from dba_users du where du.username in upper('${SCHEMAE}');


-- db_links do not work properly with dbms_metadata if they are for a different schema
select 'create '                                      ||
       decode (u.name, 'PUBLIC', 'public ')           ||
       'database link "'                              ||
       decode (u.name, 'PUBLIC', null, u.name || '.') ||
       l.name                                         ||
       '" connect to "'                               ||
       l.userid                                       ||
       '" identified by values "'                     ||
       l.passwordx                                    ||
       '" using "'                                    ||
       l.host                                         ||
       '"'                                            ||
       ';' text
from   sys.link$ l
,      sys.user$ u
where  1=1
and    l.owner# = u.user#
and    u.name   in upper('${SCHEMAE}');

spool off
EOSQL
</code>

==== Export database links for a schema ====
<code>
expdp userid=\'sys/sys as sysdba\' dumpfile=dbsizing_dblinks.dmp logfile=expdp_dbsizing_dblinks.log content=metadata_only include=db_link schemas=db_sizing directory=data_pump_dir
</code>
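To sanity-check what such a dump contains without importing anything, impdp can write the DDL to a script instead via the SQLFILE parameter (file names as above):
<code>
impdp userid=\'sys/sys as sysdba\' dumpfile=dbsizing_dblinks.dmp sqlfile=dbsizing_dblinks.sql directory=data_pump_dir
</code>
The generated dbsizing_dblinks.sql can then be reviewed, or edited and run by hand.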

=== References ===
  * Master Note for Data Pump [[ID 1264715.1]]
  * Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) [[ID 453895.1]]
  * Parallel Capabilities of Oracle Data Pump [[ID 365459.1]]
  * [[http://jensenmo.blogspot.be/2012/10/optimising-data-pump-export-and-import.html|Morten Jensen's Optimising Data Pump Export and Import Performance / investigation into speeding up LOB import/export]]