Essentially, use log_file_name_convert='<name>','<name>' along with ALTER SYSTEM SET standby_file_management='AUTO';
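As a minimal sketch of what that looks like in an spfile-managed instance (the paths here are hypothetical examples, not from this system; log_file_name_convert is static, so it needs SCOPE=SPFILE and a restart):

```sql
-- Sketch only: hypothetical primary/standby paths.
-- Map primary redo log paths to their standby equivalents (static parameter):
ALTER SYSTEM SET log_file_name_convert='/u01/oradata/PRIM/','/u01/oradata/STBY/' SCOPE=SPFILE;
-- Let the standby add/drop datafiles automatically as the primary does (dynamic):
ALTER SYSTEM SET standby_file_management='AUTO';
```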
Don't know where else to put this. But this query, executed by Commvault before starting a backup, either takes 10 minutes or more to return, or just does not return at all!
SELECT 'D,' || c.NAME || ':' || t.NAME || ':' || c.dbId || ',' || d.STATUS || ',' || d.ENABLED || ',' ||
       TO_CHAR(d.BYTES) || ',' || TO_CHAR(d.BYTES - NVL(ff.fbytes,0)) || ',' ||
       TRIM(' ' FROM d.NAME) || ',' || TRIM(' ' FROM d.FILE#) AS PDB_TS_DF
FROM   v$datafile d, v$tablespace t, v$CONTAINERS c,
       (SELECT f.CON_ID CON_ID, f.file_id file_id, SUM(f.bytes) fbytes
        FROM   CDB_FREE_SPACE f
        GROUP  BY f.file_id, f.CON_ID) ff
WHERE  c.CON_ID = d.CON_ID
AND    d.CON_ID = t.CON_ID
AND    d.TS# = t.TS#
AND    ff.file_id (+)= d.FILE#
ORDER  BY PDB_TS_DF;
To be clear, it is not a Commvault issue. It is a database issue and it is the same old thing that has plagued queries like this for years.
If a query hits dba_free_space (or cdb_free_space in this case) and there are too many objects in the recyclebin, say goodbye to any select on that view returning in reasonable time.
SQL> select count(*) from cdb_recyclebin;
COUNT(*)
----------
3519
SQL> set timi on
SQL> purge dba_recyclebin;
DBA Recyclebin purged.
Elapsed: 00:04:18.42
Reran the query and it took less than 1 second.
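To stop this recurring, one option is a scheduled purge. This is only a sketch, assuming you are happy to lose recyclebin contents on a schedule; the job name and timing are made up:

```sql
-- Hypothetical job name; purges the DBA recyclebin every Sunday at 03:00.
-- Needs a suitably privileged account (PURGE DBA_RECYCLEBIN requires SYSDBA
-- or the PURGE DBA_RECYCLEBIN system privilege).
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'PURGE_RECYCLEBIN_WEEKLY',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN EXECUTE IMMEDIATE ''PURGE DBA_RECYCLEBIN''; END;',
    repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN;BYHOUR=3',
    enabled         => TRUE);
END;
/
```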
duplicate target database for standby from active database nofilenamecheck;

or

restore standby controlfile from service 'primary';
alter database mount;
restore database from service 'primary';

or

restore standby controlfile from service 'primary';
alter database mount;
recover database from service 'primary';

or

dbca -silent -createDuplicateDB -gdbName standby -dbUniquename standby -primaryDBConnectionString <primary_host>:<primary_port>/<primary_service_name> -sysPassword sys -sid <primary_SID> -createAsStandby
As mentioned in Manual Completion of a Failed RMAN Backup Based Duplicate (Doc ID 360962.1)
Talk about a bad hair day. They say a disaster is never one thing but a chain of events in succession. Let's see!
Checked the alert log on appli and found:
2021-11-09T18:57:05.160356+01:00
alter tablespace SYSAUX offline immediate
Completed: alter tablespace SYSAUX offline immediate
alter tablespace SYSTEM offline immediate
ORA-1541 signalled during: alter tablespace SYSTEM offline immediate ...
2021-11-09T18:57:30.540087+01:00
Errors in file /cln/exp/ora_bin1/app/oracle/diag/rdbms/appli/appli/trace/appli_mz00_49219024.trc:
ORA-01110: data file 2: '/cln/exp/ora_data3/appli/sysaux01.dbf'
2021-11-09T18:57:31.542538+01:00
Checker run found 2 new persistent data failures
2021-11-09T18:58:30.800526+01:00
Errors in file /cln/exp/ora_bin1/app/oracle/diag/rdbms/appli/appli/trace/appli_mz00_55838908.trc:
ORA-01110: data file 2: '/cln/exp/ora_data3/appli/sysaux01.dbf'
2021-11-09T18:59:30.731775+01:00
Errors in file /cln/exp/ora_bin1/app/oracle/diag/rdbms/appli/appli/trace/appli_mz00_40699784.trc:
ORA-01110: data file 2: '/cln/exp/ora_data3/appli/sysaux01.dbf'
2021-11-09T19:00:30.849986+01:00
Errors in file /cln/exp/ora_bin1/app/oracle/diag/rdbms/appli/appli/trace/appli_mz00_27331146.trc:
ORA-01110: data file 2: '/cln/exp/ora_data3/appli/sysaux01.dbf'
The duplicate database command has corrupted the sysaux datafile - Oops.
Attempting to restore/recover the sysaux datafile…
export NLS_DATE_FORMAT='DD-MON-YYYY HH24MISS'
TS="$(date '+%Y%m%d%H%M%S')"
rman debug all trace=srdc_rman_debug_${TS}.trc log=srdc_rman_output_${TS}.log
connect target /
connect catalog rman/*******@rman12
set echo on;
backup validate check logical tablespace SYSAUX;
restore datafile 2 preview;
restore datafile 2;
recover datafile 2;
This is the output of the restore:
RMAN> restore datafile 2;
RMAN-03090: Starting restore at 11-NOV-2021 134609
RMAN-12016: using channel ORA_SBT_TAPE_1
RMAN-12016: using channel ORA_SBT_TAPE_2
RMAN-12016: using channel ORA_DISK_1
RMAN-11007: new media label is "V_27422725_221829400" for piece "70651506_APPLI_470dpprq_1_1"
RMAN-08016: channel ORA_SBT_TAPE_1: starting datafile backup set restore
RMAN-08089: channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
RMAN-08610: channel ORA_SBT_TAPE_1: restoring datafile 00002 to /cln/exp/ora_data3/appli/sysaux01.dbf
RMAN-08003: channel ORA_SBT_TAPE_1: reading from backup piece 70651506_APPLI_470dpprq_1_1
RMAN-08615: channel ORA_SBT_TAPE_1: ORA-19870: error while restoring backup piece 70651506_APPLI_470dpprq_1_1
ORA-19507: failed to retrieve sequential file, handle="70651506_APPLI_470dpprq_1_1", parms=""
ORA-27029: skgfrtrv: sbtrestore returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
sbtrestore: Job[0] thread[7931304]: InitializeCLRestore() failed.
RMAN-05016: failover to previous backup
...
RMAN-05016: failover to previous backup
RMAN-08016: channel ORA_SBT_TAPE_1: starting datafile backup set restore
RMAN-08089: channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
RMAN-08610: channel ORA_SBT_TAPE_1: restoring datafile 00002 to /cln/exp/ora_data3/appli/sysaux01.dbf
RMAN-08003: channel ORA_SBT_TAPE_1: reading from backup piece 69202456_APPLI_b70bfvc9_1_1
RMAN-08615: channel ORA_SBT_TAPE_1: ORA-19870: error while restoring backup piece 69202456_APPLI_b70bfvc9_1_1
ORA-19507: failed to retrieve sequential file, handle="69202456_APPLI_b70bfvc9_1_1", parms=""
ORA-27029: skgfrtrv: sbtrestore returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
sbtrestore: Job[0] thread[7931304]: InitializeCLRestore() failed.
RMAN-05016: failover to previous backup
RMAN-06064: creating datafile file number=2 name=/cln/exp/ora_data3/appli/sysaux01.dbf
RMAN-03091: Finished restore at 11-NOV-2021 135711
So even though it failed a few times it seems to have eventually worked. But…
This is the log of the recover:
RMAN> recover datafile 2;
RMAN-03090: Starting recover at 11-NOV-2021 135903
RMAN-12016: using channel ORA_SBT_TAPE_1
RMAN-12016: using channel ORA_SBT_TAPE_2
RMAN-12016: using channel ORA_DISK_1
RMAN-08054: starting media recovery
RMAN-06050: archived log for thread 1 with sequence 398377 is already on disk as file /cln/exp/ora_data3/archivelog/appli/log1_398377_821307724.arc
RMAN-06050: archived log for thread 1 with sequence 398378 is already on disk as file /cln/exp/ora_data3/archivelog/appli/log1_398378_821307724.arc
RMAN-06050: archived log for thread 1 with sequence 398379 is already on disk as file /cln/exp/ora_data3/archivelog/appli/log1_398379_821307724.arc
RMAN-06050: archived log for thread 1 with sequence 398380 is already on disk as file /cln/exp/ora_data3/archivelog/appli/log1_398380_821307724.arc
RMAN-08060: unable to find archived log
RMAN-08510: archived log thread=1 sequence=1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 11/11/2021 14:00:13
RMAN-03009: failure of restoring and applying logs command on default channel at 11/11/2021 14:00:13
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 1 and starting SCN of 2055
What has happened? The RMAN-06064 "creating datafile" at the end of the restore is the clue: after failing over through all the previous backups without finding a usable piece, RMAN created a brand-new, empty datafile 2. That file must now be recovered from its creation SCN of 2055 onwards, i.e. from sequence number 1 of 8 years ago! - Oops.
Archivelogs are only kept for a month which is more than enough as an incremental level 0 is taken every week.
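For reference, a retention setup consistent with "level 0 weekly, archivelogs kept a month" can be sketched in RMAN like this. This is an assumption-laden sketch: when Commvault drives the backups, retention is normally governed on the Commvault side, so treat these as illustrative defaults, not this site's actual configuration:

```
-- Sketch only: illustrative values, not this site's real policy.
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 31 DAYS;
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE SBT;
```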
So now we have a broken production database and a restored copy of it that is even worse.
SQL> select file#,name,status,checkpoint_change#,enabled from v$datafile;
FILE# NAME STATUS CHECKPOINT_CHANGE# ENABLED
---------------- --------------------------------------------------------------------------- ------- ------------------ ----------
1 /cln/exp/ora_data3/appli/system01.dbf SYSTEM 387010236550 READ WRITE
2 /cln/exp/ora_data3/appli/sysaux01.dbf RECOVER 386783040289 DISABLED
3 /cln/exp/ora_data3/appli/undotbs01.dbf ONLINE 387010236550 READ WRITE
4 /cln/exp/ora_data3/appli/users01.dbf ONLINE 387010236550 READ WRITE
5 /cln/exp/ora_data3/appli/ts_cre_data_01.dbf ONLINE 387010236550 READ WRITE
My thoughts now turn to getting the data out of appli because, if the database crashes, it may not come back up again.
Datapump to the rescue
expdp system/******** dumpfile=expdp_appli_20211116184514_%u.dmp logfile=expdp_appli_20211116184514.log job_name=expdp_appli_20211116184514 directory=data_pump_dir reuse_dumpfiles=y parallel=4 compression=all full=y exclude=statistics

Export: Release 19.0.0.0.0 - Production on Tue Nov 16 18:47:17 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

UDE-00604: operation generated ORACLE error 604
ORA-00604: error occurred at recursive SQL level 1
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: '/cln/exp/ora_data3/appli/sysaux01.dbf'
Ah, no. Datapump needs sysaux - Oops.
Ok, old school exp should do it…
exp system/******** file=expdp_appli_20211116184514.dmp log=expdp_appli_20211116184514.log full=y

Export: Release 19.0.0.0.0 - Production on Tue Nov 16 18:49:49 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

EXP-00008: ORACLE error 604 encountered
ORA-00604: error occurred at recursive SQL level 1
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: '/cln/exp/ora_data3/appli/sysaux01.dbf'
EXP-00000: Export terminated unsuccessfully
Also a no-go. For some reason exp now needs sysaux as well - Oops.
I won't put all the different combinations I tried here, but the only thing that worked was a table-level exp. Specifying every table in the database would work, and that would be only marginally preferable to CTAS on every table. There would be no packages, triggers, … a bad situation.
Back to RMAN. Up to now, I've been using RMAN scripts on the dev server. Let's try to get a restore out of Commvault from the console.
This is the pfile from the prod db, trimmed down and modified for the restore:
cat initappli.ora
*.audit_file_dest='/cln/acc/ora_bin1/app/oracle/admin/appli/adump'
*.audit_sys_operations=TRUE
*.audit_syslog_level='LOCAL0.INFO'
*.audit_trail='OS'
*.compatible='19.3.0'
*.control_file_record_keep_time=28
*.control_files='/cln/acc/ora_data3/appli/control01.ctl','/cln/acc/ora_data3/appli/control02.ctl','/cln/acc/ora_data3/appli/control03.ctl'
*.db_block_size=8192
*.db_name='appli'
*.diagnostic_dest='/cln/acc/ora_bin1/app/oracle'
*.filesystemio_options='SETALL'
*.job_queue_processes=8
*.log_archive_dest_1='LOCATION=/cln/acc/ora_data3/appli/archivelog'
*.log_archive_format='log%t_%s_%r.arc'
*.open_cursors=300
*.parallel_max_servers=60
*.parallel_servers_target=60
*.pga_aggregate_target=524288000
*.processes=200
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_max_size=3145728000
*.sga_target=3145728000
*.undo_tablespace='UNDOTBS1'
It doesn't need to be anywhere near this complex but it helps to keep it close to prod.
I'm not a genius with the Commvault console, so I got a colleague to try a restore. It failed. We tried some different options, went back a few days before the drop of the db links… and we got a restore.
I have no idea how but I'll get a copy of the script.
The database came up in mount mode but would not open.
SQL> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01666: control file is for a standby database

SQL> select name, open_mode, database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
APPLI     MOUNTED              PHYSICAL STANDBY
WTF? This customer does not use Dataguard and never has. How did this db get restored as a standby?
SQL> select name,dbid,controlfile_type,open_mode,log_mode,checkpoint_change#,archive_change# from v$database;
NAME DBID CONTROL OPEN_MODE LOG_MODE CHECKPOINT_CHANGE# ARCHIVE_CHANGE#
--------- ---------------- ------- -------------------- ------------ ------------------ ----------------
APPLI 3522785804 STANDBY MOUNTED ARCHIVELOG 386661399191 386702950024
1 row selected.
SQL> select name,dbid,current_scn,log_mode,open_mode from v$database;
NAME DBID CURRENT_SCN LOG_MODE OPEN_MODE
--------- ---------------- ---------------- ------------ --------------------
APPLI 3522785804 386702950077 ARCHIVELOG MOUNTED
1 row selected.
SQL> select * from v$database_incarnation;
INCARNATION# RESETLOGS_CHANGE# RESETLOGS_TIME PRIOR_RESETLOGS_CHANGE# PRIOR_RESETLOGS_TIME STATUS RESETLOGS_ID PRIOR_INCARNATION# FLASHBACK_DATABASE_ALLOWED
---------------- ----------------- -------------------- ----------------------- -------------------- ------- ---------------- ------------------ --------------------------
CON_ID
----------------
1 1 20-jul-2013 21:02:04 0 CURRENT 821307724 0 NO
0
SQL> select file#,name,status,checkpoint_change#,enabled from v$datafile;
FILE# NAME STATUS CHECKPOINT_CHANGE# ENABLED
---------------- --------------------------------------------------------------------------- ------- ------------------ ----------
1 /cln/acc/ora_data3/appli/data_D-APPLI_TS-SYSTEM_FNO-1 SYSTEM 386661399191 READ WRITE
2 /cln/acc/ora_data3/appli/data_D-APPLI_TS-SYSAUX_FNO-2 ONLINE 386661399191 DISABLED
3 /cln/acc/ora_data3/appli/data_D-APPLI_TS-UNDOTBS1_FNO-3 ONLINE 386661399191 READ WRITE
4 /cln/acc/ora_data3/appli/data_D-APPLI_TS-USERS_FNO-4 ONLINE 386661399191 READ WRITE
5 /cln/acc/ora_data3/appli/data_D-APPLI_TS-TS_CRE_DATA_FNO-5 ONLINE 386661399191 READ WRITE
6 /cln/acc/ora_data3/appli/data_D-APPLI_TS-TS_CRE_INDEX_FNO-6 ONLINE 386661399191 READ WRITE
7 /cln/acc/ora_data3/appli/data_D-APPLI_TS-TS_ORSYP_FNO-7 ONLINE 386661399191 READ WRITE
SQL> select TS#,NAME,INCLUDED_IN_DATABASE_BACKUP,FLASHBACK_ON from v$tablespace order by TS#;
TS# NAME INC FLA
---------------- --------------------------------------------------------------------------- --- ---
0 SYSTEM YES YES
1 SYSAUX YES YES
2 UNDOTBS1 YES YES
3 TEMP NO YES
4 USERS YES YES
7 TS_ORSYP YES YES
9 TS_KAUPTHING YES YES
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_ ERROR CHANGE# TIME CON_ID
---------------- ------- ------- ----------------------------------------------------------------- ---------------- -------------------- ----------------
1 ONLINE ONLINE 386469491991 07-nov-2021 05:00:20 0
2 ONLINE ONLINE 386469491991 07-nov-2021 05:00:20 0
3 ONLINE ONLINE 386469491991 07-nov-2021 05:00:20 0
4 ONLINE ONLINE 386469491991 07-nov-2021 05:00:20 0
5 ONLINE ONLINE 386469491991 07-nov-2021 05:00:20 0
6 ONLINE ONLINE 386469491991 07-nov-2021 05:00:20 0
7 ONLINE ONLINE 386469491991 07-nov-2021 05:00:20 0
8 ONLINE ONLINE 386469491991 07-nov-2021 05:00:20 0
SQL> select * from v$backup;
FILE# STATUS CHANGE# TIME CON_ID
---------------- ------------------ ---------------- -------------------- ----------------
1 NOT ACTIVE 0 0
2 NOT ACTIVE 0 0
3 NOT ACTIVE 0 0
4 NOT ACTIVE 0 0
5 NOT ACTIVE 0 0
6 NOT ACTIVE 0 0
7 NOT ACTIVE 0 0
SQL> select sequence#, first_change#, first_time, status from v$archived_log;
SEQUENCE# FIRST_CHANGE# FIRST_TIME S
---------------- ---------------- -------------------- -
398173 386469435044 07-nov-2021 03:11:42 A
398172 386469434667 07-nov-2021 03:05:51 A
398175 386469466044 07-nov-2021 04:05:41 A
398174 386469435065 07-nov-2021 03:11:44 A
398176 386469466068 07-nov-2021 04:05:44 A
SQL> select file#,name,recover,fuzzy,resetlogs_change#,checkpoint_change#,creation_change#,checkpoint_time,creation_time,RESETLOGS_TIME,status from v$datafile_header;
FILE# NAME REC FUZ RESETLOGS_CHANGE# CHECKPOINT_CHANGE# CREATION_CHANGE# CHECKPOINT_TIME
---------------- --------------------------------------------------------------------------- --- --- ----------------- ------------------ ---------------- --------------------
CREATION_TIME RESETLOGS_TIME STATUS
-------------------- -------------------- -------
1 /cln/acc/ora_data3/appli/data_D-APPLI_TS-SYSTEM_FNO-1 NO 1 386469491991 8 07-nov-2021 05:00:20
20-jul-2013 21:02:20 20-jul-2013 21:02:04 ONLINE
2 /cln/acc/ora_data3/appli/data_D-APPLI_TS-SYSAUX_FNO-2 NO 1 386469491991 2055 07-nov-2021 05:00:20
20-jul-2013 21:02:30 20-jul-2013 21:02:04 ONLINE
3 /cln/acc/ora_data3/appli/data_D-APPLI_TS-UNDOTBS1_FNO-3 NO 1 386469491991 3226 07-nov-2021 05:00:20
20-jul-2013 21:02:33 20-jul-2013 21:02:04 ONLINE
4 /cln/acc/ora_data3/appli/data_D-APPLI_TS-USERS_FNO-4 NO 1 386469491991 17626 07-nov-2021 05:00:20
20-jul-2013 21:02:48 20-jul-2013 21:02:04 ONLINE
That's the money shot. The checkpoint time on sysaux is the same as all the others, yay!
SQL> select status, to_char(checkpoint_change#), to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
  2  count(*) from v$datafile_header
  3  group by status, checkpoint_change#, checkpoint_time order by status, checkpoint_change#, checkpoint_time;

STATUS  TO_CHAR(CHECKPOINT_CHANGE#)              CHECKPOINT_TIME              COUNT(*)
------- ---------------------------------------- -------------------- ----------------
ONLINE  386469491991                             07-NOV-2021 05:00:20               28

1 row selected.
Trying to open the database
SQL> alter database open; ORA-10458: standby database requires recovery ORA-01152: file 1 was not restored from a sufficiently old backup ORA-01110: data file 1: '/cln/acc/ora_data3/appli/data_D-APPLI_TS-SYSTEM_FNO-1'
Ah, yes. This is normal trying to open a standby! Let's use the fact that it thinks it's a standby and recover it as such…
SQL> recover automatic standby database;
ORA-00279: change 386469492444 generated at 11/07/2021 05:04:41 needed for
thread 1
ORA-00289: suggestion :
/cln/acc/ora_data3/appli/archivelog/log1_398177_821307724.arc
ORA-00280: change 386469492444 for thread 1 is in sequence #398177
ORA-00278: log file
'/cln/acc/ora_data3/appli/archivelog/log1_398177_821307724.arc' no longer
needed for this recovery
ORA-00308: cannot open archived log
'/cln/acc/ora_data3/appli/archivelog/log1_398177_821307724.arc'
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 7
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1:
'/cln/acc/ora_data3/appli/data_D-APPLI_TS-SYSTEM_FNO-1'
Need to restore archivelogs from Commvault to the archivelog directory (we must not connect to the catalog, as the database name and DBID are the same as production).
RMAN> run {
2> allocate channel c1 type 'sbt_tape';
3> allocate channel c2 type 'sbt_tape';
4> allocate channel c3 type 'sbt_tape';
5> allocate channel c4 type 'sbt_tape';
6> allocate channel c5 type 'sbt_tape';
7> set archivelog destination to '/cln/acc/ora_data3/appli/archivelog';
8> restore archivelog from logseq 398176 until logseq 398377;
9> }
allocated channel: c1
channel c1: SID=233 device type=SBT_TAPE
channel c1: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
allocated channel: c2
channel c2: SID=268 device type=SBT_TAPE
channel c2: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
allocated channel: c3
channel c3: SID=299 device type=SBT_TAPE
channel c3: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
allocated channel: c4
channel c4: SID=332 device type=SBT_TAPE
channel c4: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
allocated channel: c5
channel c5: SID=365 device type=SBT_TAPE
channel c5: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
executing command: SET ARCHIVELOG DESTINATION
Starting restore at 15-NOV-21 17:11:08
released channel: c1
released channel: c2
released channel: c3
released channel: c4
released channel: c5
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 11/15/2021 17:11:10
RMAN-06026: some targets not found - aborting restore
RMAN-06102: no channel to restore a backup or copy of archived log for thread 1 with sequence 398377 and starting SCN of 386702950028
RMAN>
RMAN> **end-of-file**
RMAN> exit
Ok, try again with one sequence fewer.
RMAN> run {
2> allocate channel c1 type 'sbt_tape';
3> allocate channel c2 type 'sbt_tape';
4> allocate channel c3 type 'sbt_tape';
5> allocate channel c4 type 'sbt_tape';
6> allocate channel c5 type 'sbt_tape';
7> set archivelog destination to '/cln/acc/ora_data3/appli/archivelog';
8> restore archivelog from logseq 398176 until logseq 398376;
9> }
allocated channel: c1
channel c1: SID=233 device type=SBT_TAPE
channel c1: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
allocated channel: c2
channel c2: SID=268 device type=SBT_TAPE
channel c2: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
allocated channel: c3
channel c3: SID=299 device type=SBT_TAPE
channel c3: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
allocated channel: c4
channel c4: SID=332 device type=SBT_TAPE
channel c4: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
allocated channel: c5
channel c5: SID=365 device type=SBT_TAPE
channel c5: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
executing command: SET ARCHIVELOG DESTINATION
Starting restore at 15-NOV-21 17:12:28
new media label is "V_27413968_221539311" for piece "70513410_APPLI_ut0divu2_1_1"
new media label is "V_27423243_221542701" for piece "70515240_APPLI_uv0dj3g4_1_1"
new media label is "V_27413968_221546406" for piece "70517076_APPLI_v10dj6v9_1_1"
new media label is "V_27413968_221549903" for piece "70518806_APPLI_v30djafb_1_1"
new media label is "V_27413968_221552297" for piece "70520026_APPLI_v50djcop_1_1"
new media label is "V_27421252_221553623" for piece "70520598_APPLI_v70djdva_1_1"
new media label is "V_27423243_221557870" for piece "70522974_APPLI_v90djhoi_1_1"
new media label is "V_27421252_221563311" for piece "70525524_APPLI_vb0djl51_1_1"
new media label is "V_27421252_221566100" for piece "70526669_APPLI_vd0djngt_1_1"
new media label is "V_27413968_221568075" for piece "70528379_APPLI_vf0djooc_1_1"
...
new media label is "V_27429497_221850212" for piece "70661451_APPLI_4n0dq805_1_1"
new media label is "V_27429497_221853772" for piece "70663705_APPLI_4p0dqbhn_1_1"
channel c1: starting archived log restore to user-specified destination
archived log destination=/cln/acc/ora_data3/appli/archivelog
channel c1: restoring archived log
archived log thread=1 sequence=398176
channel c1: restoring archived log
archived log thread=1 sequence=398177
channel c1: reading from backup piece 70499475_APPLI_u70di3re_1_1
channel c2: starting archived log restore to user-specified destination
archived log destination=/cln/acc/ora_data3/appli/archivelog
channel c2: restoring archived log
archived log thread=1 sequence=398178
channel c2: restoring archived log
archived log thread=1 sequence=398179
channel c2: reading from backup piece 70500970_APPLI_u90di6tn_1_1
channel c3: starting archived log restore to user-specified destination
archived log destination=/cln/acc/ora_data3/appli/archivelog
channel c3: restoring archived log
...
channel c2: restoring archived log
archived log thread=1 sequence=398375
channel c2: restoring archived log
archived log thread=1 sequence=398376
channel c2: reading from backup piece 70663705_APPLI_4p0dqbhn_1_1
channel c3: piece handle=70658561_APPLI_4j0dq3t1_1_1 tag=ARCHIVELOG
channel c3: restored backup piece 1
channel c3: restore complete, elapsed time: 00:00:01
channel c4: piece handle=70644423_APPLI_3v0dpfp9_1_1 tag=ORACLE_HOURLY
channel c4: restored backup piece 1
channel c4: restore complete, elapsed time: 00:00:04
channel c5: piece handle=70659158_APPLI_4l0dq4fn_1_1 tag=ORACLE_HOURLY
channel c5: restored backup piece 1
channel c5: restore complete, elapsed time: 00:00:01
channel c1: piece handle=70661451_APPLI_4n0dq805_1_1 tag=ORACLE_HOURLY
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:00:01
channel c2: piece handle=70663705_APPLI_4p0dqbhn_1_1 tag=ORACLE_HOURLY
channel c2: restored backup piece 1
channel c2: restore complete, elapsed time: 00:00:07
Finished restore at 15-NOV-21 17:13:51
released channel: c1
released channel: c2
released channel: c3
released channel: c4
released channel: c5
RMAN>
RMAN> **end-of-file**
RMAN> exit
Disco! The apply now takes a while as there are around 200 archivelogs to get through. Watch the alert log from another session if necessary.
SQL*Plus: Release 19.0.0.0.0 - Production on Mon Nov 15 17:15:34 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> recover automatic standby database;
ORA-00279: change 386702950028 generated at 11/10/2021 08:05:10 needed for
thread 1
ORA-00289: suggestion :
/cln/acc/ora_data3/appli/archivelog/log1_398377_821307724.arc
ORA-00280: change 386702950028 for thread 1 is in sequence #398377
ORA-00278: log file
'/cln/acc/ora_data3/appli/archivelog/log1_398377_821307724.arc' no longer
needed for this recovery
ORA-00308: cannot open archived log
'/cln/acc/ora_data3/appli/archivelog/log1_398377_821307724.arc'
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 7
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
ORA-00279: change 386778260810 generated at 11/11/2021 06:55:08 needed for
thread 1
ORA-00289: suggestion :
/cln/acc/ora_data3/appli/archivelog/log1_398378_821307724.arc
ORA-00280: change 386778260810 for thread 1 is in sequence #398378
ORA-00278: log file
'/cln/acc/ora_data3/appli/archivelog/log1_398377_821307724.arc' no longer
needed for this recovery
...
ORA-00279: change 387010236550 generated at 11/15/2021 08:02:53 needed for
thread 1
ORA-00289: suggestion :
/cln/acc/ora_data3/appli/archivelog/log1_398389_821307724.arc
ORA-00280: change 387010236550 for thread 1 is in sequence #398389
ORA-00278: log file
'/cln/acc/ora_data3/appli/archivelog/log1_398388_821307724.arc' no longer
needed for this recovery
ORA-00308: cannot open archived log
'/cln/acc/ora_data3/appli/archivelog/log1_398389_821307724.arc'
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 7
SQL> alter database open;
Database altered.
SQL> select file#,name,status,checkpoint_change#,enabled from v$datafile;
FILE# NAME STATUS CHECKPOINT_CHANGE# ENABLED
---------------- --------------------------------------------------------------------------- ------- ------------------ ----------
1 /cln/acc/ora_data3/appli/data_D-APPLI_TS-SYSTEM_FNO-1 SYSTEM 387010236550 READ ONLY
2 /cln/acc/ora_data3/appli/data_D-APPLI_TS-SYSAUX_FNO-2 ONLINE 387010236550 READ ONLY
3 /cln/acc/ora_data3/appli/data_D-APPLI_TS-UNDOTBS1_FNO-3 ONLINE 387010236550 READ ONLY
4 /cln/acc/ora_data3/appli/data_D-APPLI_TS-USERS_FNO-4 ONLINE 387010236550 READ ONLY
5 /cln/acc/ora_data3/appli/data_D-APPLI_TS-TS_CRE_DATA_FNO-5 ONLINE 387010236550 READ ONLY
6 /cln/acc/ora_data3/appli/data_D-APPLI_TS-TS_CRE_INDEX_FNO-6 ONLINE 387010236550 READ ONLY
7 /cln/acc/ora_data3/appli/data_D-APPLI_TS-TS_ORSYP_FNO-7 ONLINE 387010236550 READ ONLY
8 /cln/acc/ora_data3/appli/data_D-APPLI_TS-TS_RATR_FNO-8 ONLINE 387010236550 READ ONLY
Looks good but notice the ENABLED flag is 'READ ONLY'! We are still a standby database.
SQL> select open_mode,database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ ONLY            PHYSICAL STANDBY

SQL> alter database close;

Database altered.

SQL> select open_mode,database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
MOUNTED              PHYSICAL STANDBY

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
--------------------
NOT ALLOWED

SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;

Database altered.

SQL> SHUTDOWN IMMEDIATE;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area  637483336 bytes
Fixed Size                  9058632 bytes
Variable Size             465567744 bytes
Database Buffers          150994944 bytes
Redo Buffers               11862016 bytes
Database mounted.
Database opened.

SQL> select open_mode,database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ WRITE           PRIMARY

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         OFFLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
USERS                          ONLINE
TS_ORSYP                       ONLINE
TS_KAUPTHING                   ONLINE

SQL> alter tablespace sysaux online;

Tablespace altered.

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
USERS                          ONLINE
TS_ORSYP                       ONLINE
TS_KAUPTHING                   ONLINE
Ok, we have a working database that we could copy back to production, but in the meantime production has been open and in use. So to recover completely, the whole exercise needs to be done again with appli on production closed to users. That's ok; we know it can be done now.
There is another way. Copy just the sysaux datafile over to production and recover the database. This may turn out not to be an option, as the incarnation is now different; the alter database open may have caused that. So we could do it all again, but without the open.
To be continued…
Update:
This is the script used to restore and recover the database:
run {
allocate channel c1 type 'sbt_tape';
trace 0;
restore controlfile;
sql 'alter database mount';
}
run {
set newname for database to '/cln/acc/ora_data3/appli/%U';
allocate channel c1 type 'sbt_tape';
allocate channel c2 type 'sbt_tape';
restore database until time = "to_date('08-11-2021 09:00:00','dd-mm-yyyy hh24:mi:ss')";
switch datafile all;
}
run {
allocate channel c1 type 'sbt_tape';
allocate channel c2 type 'sbt_tape';
recover database until time = "to_date('08-11-2021 09:00:00','dd-mm-yyyy hh24:mi:ss')";
}
Indeed, I did it all again without the open and it worked! Either opening the database with resetlogs or forcing a standby to become a primary with "activate physical standby" creates a new incarnation, so you can't move the datafiles over.
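A quick way to check whether two copies still share an incarnation before attempting to move a datafile between them is to run this on each side and compare the CURRENT row:

```sql
-- Run on both databases: the CURRENT row's RESETLOGS_CHANGE# and RESETLOGS_ID
-- must match, or a datafile cannot simply be copied across and recovered.
SELECT incarnation#, resetlogs_change#, resetlogs_id, status
FROM   v$database_incarnation
ORDER  BY incarnation#;
```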
You still obviously have the option of using this restored database in its entirety.
Anyway, just like this, the production database got recovered…
On prod db, stop user access and flush the redo
alter system set local_listener='';
alter system switch logfile;
shutdown immediate
On standby server, ship the archivelogs over so we can recover on the backup server
scp hn511:/cln/exp/ora_data3/archivelog/appli /cln/acc/ora_data3/appli/archivelog/
sqlplus / as sysdba
recover automatic standby database;
auto
shutdown immediate
With the database recovered and shut down (it only went as far as MOUNT, not OPEN):
scp data_D-APPLI_TS-SYSAUX_FNO-2 hn511:/cln/exp/ora_data3/appli/sysaux01.dbf
On prod db
startup mount
recover datafile 2;
set lines 1000 pages 100
col name for a70
select FILE#,NAME,ENABLED,STATUS from v$datafile;
FILE# NAME ENABLED STATUS
---------- ---------------------------------------------------------------------- ---------- -------
1 /cln/exp/ora_data3/appli/system01.dbf READ WRITE SYSTEM
2 /cln/exp/ora_data3/appli/sysaux01.dbf DISABLED OFFLINE
3 /cln/exp/ora_data3/appli/undotbs01.dbf READ WRITE ONLINE
4 /cln/exp/ora_data3/appli/users01.dbf READ WRITE ONLINE
5 /cln/exp/ora_data3/appli/ts_cre_data_01.dbf READ WRITE ONLINE
6 /cln/exp/ora_data3/appli/ts_cre_index_01.dbf READ WRITE ONLINE
7 /cln/exp/ora_data3/appli/ts_orsyp01.dbf READ WRITE ONLINE
8 /cln/exp/ora_data3/appli/ts_ratr_01.dbf READ WRITE ONLINE
9 /cln/exp/ora_data3/appli/ts_kaupthing_01.dbf READ WRITE ONLINE
alter database datafile 2 online;
select FILE#,NAME,ENABLED,STATUS from v$datafile;
FILE# NAME ENABLED STATUS
---------- ---------------------------------------------------------------------- ---------- -------
1 /cln/exp/ora_data3/appli/system01.dbf READ WRITE SYSTEM
2 /cln/exp/ora_data3/appli/sysaux01.dbf DISABLED ONLINE
3 /cln/exp/ora_data3/appli/undotbs01.dbf READ WRITE ONLINE
4 /cln/exp/ora_data3/appli/users01.dbf READ WRITE ONLINE
5 /cln/exp/ora_data3/appli/ts_cre_data_01.dbf READ WRITE ONLINE
6 /cln/exp/ora_data3/appli/ts_cre_index_01.dbf READ WRITE ONLINE
7 /cln/exp/ora_data3/appli/ts_orsyp01.dbf READ WRITE ONLINE
alter database open;
Database altered.
SQL> col file_name for a70
SQL> select file_name,status,online_status from dba_data_files;
FILE_NAME STATUS ONLINE_
------------------------------------------------------------------- --------- -------
/cln/exp/ora_data3/appli/system01.dbf AVAILABLE SYSTEM
/cln/exp/ora_data3/appli/sysaux01.dbf AVAILABLE ONLINE
/cln/exp/ora_data3/appli/undotbs01.dbf AVAILABLE ONLINE
/cln/exp/ora_data3/appli/users01.dbf AVAILABLE ONLINE
/cln/exp/ora_data3/appli/ts_cre_data_01.dbf AVAILABLE ONLINE
/cln/exp/ora_data3/appli/ts_cre_index_01.dbf AVAILABLE ONLINE
/cln/exp/ora_data3/appli/ts_orsyp01.dbf AVAILABLE ONLINE
/cln/exp/ora_data3/appli/ts_ratr_01.dbf AVAILABLE ONLINE
select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX OFFLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
TS_ORSYP ONLINE
alter tablespace sysaux online;
Tablespace altered.
select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
TS_ORSYP ONLINE
TS_KAUPTHING ONLINE
TS_DB_SIZING ONLINE
TS_RATR ONLINE
All these queries were provided by Oracle Support
spool /tmp/sos_current_database_status_info_&_CONNECT_IDENTIFIER..txt
set pagesize 20000
set linesize 180
set pause off
set serveroutput on
set feedback on
set echo on
set numformat 999999999999999
alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';
archive log list;
select INSTANCE_NAME, STATUS from v$instance;
select database_role from v$database;
select name,dbid,controlfile_type,open_mode,log_mode,checkpoint_change#,archive_change# from v$database;
select name,dbid,current_scn,log_mode,open_mode from v$database;
select * from v$database_incarnation;
col name for a75
select * from v$restore_point;
select flashback_on from v$database;
select parallel from v$instance;
select protection_level from v$database;
select * from dba_streams_administrator;
select file#,name,status,checkpoint_change#,enabled from v$datafile;
select file#,name,status,enabled from v$tempfile;
select TS#,NAME,INCLUDED_IN_DATABASE_BACKUP,FLASHBACK_ON from v$tablespace order by TS#;
select * from v$recover_file;
select * from v$backup;
select * from v$log;
select * from v$logfile;
select sequence#, first_change#, first_time, status from v$archived_log;
select file#,name,recover,fuzzy,resetlogs_change#,checkpoint_change#,creation_change#,checkpoint_time,creation_time,RESETLOGS_TIME,status from v$datafile_header;
select status, to_char(checkpoint_change#), to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time, count(*)
from v$datafile_header
group by status, checkpoint_change#, checkpoint_time
order by status, checkpoint_change#, checkpoint_time;
select count(*),fhsta from x$kcvfh group by fhsta;
select count(*),fhrba_seq from x$kcvfh group by fhrba_seq;
select count(*),fhscn from x$kcvfh group by fhscn;
select count(*),fhafs from x$kcvfh group by fhafs;
select min(FHSCN) "LOW FILEHDR SCN", max(FHSCN) "MAX FILEHDR SCN", max(FHAFS) "Min PITR ABSSCN" from X$KCVFH;
select fhdbn,fhdbi,hxfil,fhsta,fhscn,fhafs,fhrba_seq,fhtnm tbs_name from x$kcvfh;
select name, status, sequence#, thread#,
       TO_CHAR(first_change#, '999999999999999999') as first_change#,
       TO_CHAR(next_change#, '999999999999999999') next_change#,
       to_char(completion_time,'DD-MON-YYYY HH24:MI:SS') completion_time
from v$archived_log
where (select min(checkpoint_change#) from v$datafile_header) between first_change# and next_change#;
spool off
run {
allocate channel ch1 type 'sbt_tape'
PARMS="SBT_LIBRARY=/opt/commvault/Base64/libobk.a(shr.o), BLKSIZE=1048576 ENV=(CV_mmsApiVsn=2,CV_channelPar=ch1,ThreadCommandLine= -cn CI00031899-hn481 -vm Instance001)"
TRACE 0;
send "BACKUP -jm 32813 -a 2:43280 -cl 18194 -ins 6226 -at 22 -j 73162237 -jt 73162237:4:1:0:35762 -bal 0 -t 2 -ms 1 -data";
setlimit channel ch1 maxopenfiles 8;
backup
incremental level = 1
filesperset = 32
format='73162237_%d_%U'
tag = 'DAILY'
database
include current controlfile spfile ;
resync catalog;
}
To be able to allocate a channel without specifying the full PARMS, libobk needs to be set up correctly:
ln -s /opt/commvault/Base/libobk.a $ORACLE_HOME/lib/libobk.a
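A quick sanity check for the link can be scripted; this is a hedged sketch (the function name and paths are just illustrations, not part of any Commvault tooling):

```shell
# Sketch: verify that $ORACLE_HOME/lib/libobk.a is a symlink that resolves.
# The directory layout is only an example; adjust for your installation.
check_libobk() {
  lib="$1/lib/libobk.a"
  if [ -L "$lib" ] && [ -e "$lib" ]; then
    echo "OK: $lib -> $(readlink "$lib")"
    return 0
  fi
  echo "MISSING or broken: $lib"
  return 1
}
```

Run it as `check_libobk "$ORACLE_HOME"` before allocating channels.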
run {
allocate channel c1 type sbt_tape;
backup archivelog all not backed up 1 times delete all input;
}
run {
allocate channel c1 type sbt_tape;
sql 'alter tablespace &&tbs_name offline immediate';
restore tablespace &tbs_name;
recover tablespace &tbs_name;
sql 'alter tablespace &tbs_name online';
}
Set this at unix prompt before starting RMAN
export NLS_DATE_FORMAT="dd-mon-yyyy hh24:mi:ss"
change archivelog all crosscheck;
When only one RMAN session is active, the easiest method for determining the server session ID for an RMAN channel is to execute the following query on the target database while the RMAN job is executing:
COLUMN CLIENT_INFO FORMAT a30
COLUMN SID FORMAT 999
COLUMN SPID FORMAT 9999
SELECT s.SID, p.SPID, s.CLIENT_INFO
FROM V$PROCESS p, V$SESSION s
WHERE p.ADDR = s.PADDR
AND CLIENT_INFO LIKE 'rman%';
set pages 100 lines 200
col mb for 999,999,999
col completed for a11
col sortcol noprint
select vi.instance_name sid
, to_char(completion_time, 'DD-MON-YYYY') completed
, completion_time sortcol
, type
, round(sum(bytes)/1048576) mb
, round(sum(elapsed_seconds)/60) min
from (
  select case
           when s.backup_type = 'L' then 'ARCHIVELOG'
           when s.controlfile_included = 'YES' then 'CONTROLFILE'
           when s.backup_type = 'D' and s.incremental_level = 0 then 'LEVEL 0'
           when s.backup_type = 'I' and s.incremental_level = 1 then 'LEVEL 1'
           when s.backup_type = 'D' and s.incremental_level is null then 'FULL'
           else s.backup_type
         end type
  , trunc(s.completion_time) completion_time
  , p.bytes
  , s.elapsed_seconds
  from v$backup_piece p
  , v$backup_set s
  where p.status = 'A'
  and p.recid = s.recid
  union all
  select 'datafilecopy' type
  , trunc(completion_time) completion_time
  , output_bytes
  , 0 elapsed_seconds
  from v$backup_copy_details
)
, v$instance vi
group by vi.instance_name, to_char(completion_time, 'DD-MON-YYYY'), completion_time, type
order by 1,3,4
/
delete noprompt archivelog all backed up 1 times to sbt_tape;
delete noprompt archivelog until time 'sysdate-1' backed up 1 times to device type 'sbt_tape';
delete noprompt archivelog until time 'sysdate-(1*6/24)' backed up 1 times to device type 'sbt_tape';
delete noprompt archivelog until logseq 42000 backed up 1 times to device type 'sbt_tape';
delete noprompt archivelog from sequence 4250 until sequence 4546 like '/cln/acc/ora_data/fra/crelaa2/%';
delete expired archivelog all;
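For scripting, the until-time variants above can be composed from an hours value. This is a hypothetical helper (the function name is made up); the generated text matches the commands above:

```shell
# Sketch: build the "delete noprompt archivelog until time ..." command
# for a retention expressed in hours, matching the forms shown above.
archlog_delete_cmd() {
  hours="$1"
  echo "delete noprompt archivelog until time 'sysdate-(1*${hours}/24)' backed up 1 times to device type 'sbt_tape';"
}
```

The result can then be piped into RMAN, e.g. `archlog_delete_cmd 6 | rman target /`.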
rman<<EORMAN
set echo on
connect catalog ${RMANUSR}/${RMANPWD}@${RMANCAT}
connect target /
configure retention policy to recovery window of ${RMANRETENTION} days;
configure controlfile autobackup on;
# allocate channel for maintenance type 'disk';
allocate channel for maintenance type 'sbt_tape';
crosscheck archivelog all;
crosscheck backup;
crosscheck backup of controlfile;
delete noprompt force obsolete;
delete noprompt force obsolete orphan;
delete noprompt expired backup;
delete noprompt expired copy;
delete noprompt expired archivelog all;
delete noprompt expired backup of controlfile;
report obsolete orphan;
report obsolete;
report need backup;
EORMAN
Reference: https://blog.yannickjaquier.com/oracle/who-is-locking-your-accounts-ora-01017-and-ora-28000-errors.html How to find out where the failed logon attempts are coming from.
Check what is being audited
select * from dba_stmt_audit_opts;
audit network by access;
then
select * from dba_audit_session order by sessionid desc;
create table sys.logon_trigger ( username varchar2(30), userhost varchar2(128), timestamp date );
create or replace trigger sys.logon_trigger
after servererror on database
begin
if (is_servererror(1017)) then
insert into logon_trigger values(sys_context('USERENV', 'AUTHENTICATED_IDENTITY'), sys_context('USERENV', 'HOST'), SYSDATE);
commit;
end if;
end;
/
set lines 200
col userhost for a30
alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
select * from sys.logon_trigger order by timestamp desc;
run {
set nocfau; # this undocumented command will ensure that no controlfile autobackup is performed at end of the script
backup as copy database format '/u33/backup/prd1s1/rman/dbf/%U' tag='quick_copy_backup';
backup as copy archivelog all format '/u33/backup/prd1s1/rman/ctl/%U';
backup as copy current controlfile format '/u33/backup/prd1s1/rman/ctl/%U';
}
set lines 1000
select /*+ rule */ to_char(start_time,'DD-MON-YY HH24:MI') starttime
, to_char(end_time,'DD-MON-YY HH24:MI') endtime
, round((end_time-start_time)*24*60) mins
, input_type
, status
from v$rman_backup_job_details
where start_time > trunc(sysdate)-1
order by start_time desc
/
set echo on
connect target /;
connect catalog rman/rman@rman;
run
{
allocate channel t0 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
backup database including archivelog;
}
set echo on
connect target /;
connect catalog rman/rman@rman;
run
{
allocate channel t0 type sbt_tape parms 'ENV=(NSR_SERVER=hn6000.cln.be,NSR_CLIENT=hn511,NSR_DATA_VOLUME_POOL=DD1DAILY)';
backup database including archivelog;
}
More info on Commvault commandline and xml templates here. The library used in this example is for AIX…
set echo on
connect target /;
connect catalog rman/rman@rman;
run
{
allocate channel t0 type sbt_tape parms 'SBT_LIBRARY=/opt/commvault/Base64/libobk.a(shr.o),ENV=(cvsrcclientname=ci00031701-hn5219)';
backup database including archivelog;
}
…and this works on RHEL
set echo on
connect target /;
connect catalog rman/rman@rman;
run
{
allocate channel t0 type sbt_tape parms 'SBT_LIBRARY=/opt/commvault/Base64/libobk.so,ENV=(cvsrcclientname=ci00028834-hn1018)' ;
backup database including archivelog;
}
run {
allocate channel type 'sbt_tape' format 'LOG_d%d_t%t_s%s_u%u' parms 'env=(tdpo_optfile=/oracle/JDBEOP1/admin/tdpo.opt)';
backup archivelog all not backed up 1 times tag=bu20140808t170528p5685;
delete noprompt archivelog until time 'sysdate-(1 * 72 / 24)' backed up 1 times to device type 'sbt_tape';
}
run {
allocate channel t1 device type 'sbt_tape' format 'ARC_d%d_t%t_s%s_u%u' parms 'env=(tdpo_optfile=c:\\home\\ibmtools\\scripts\\rman\\tdpo_golfp1.opt)';
backup archivelog from logseq 224801 until logseq 224890;
}
In a space emergency, backup all the archivelogs to the configured tape and immediately delete them from disk
backup device type sbt_tape archivelog all delete all input;
or ensure the correct device…
run {
allocate channel c1 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
backup device type sbt_tape archivelog all delete all input;
}
restore archivelog from logseq=38400 until logseq=38499;
restore archivelog logseq=25444;
When you get RMAN-20207: UNTIL TIME or RECOVERY WINDOW is before RESETLOGS time.
RMAN> LIST INCARNATION;

DB Key  Inc Key DB Name  DB ID       STATUS  Reset SCN  Reset Time
------- ------- -------- ----------- ------- ---------- -------------------
(1..7 removed)
8       8       IANH     3640854324  PARENT  2892737    03-11-2009 10:21:42
9       9       IANH     3640854324  CURRENT 2896602    03-11-2009 11:03:05

RMAN> RESET DATABASE TO INCARNATION 8;
Now do the restore again
If the Flash (or Fast) Recovery Area is used for archived redo logs as well as flashback logs, we need to monitor its size.
col fsname heading "Filesystem" for a50
col space_limit heading "Allocated GB" for 999,999,990
col space_used heading "Used GB" for a20
col percused heading "% Used" for 90.0
col space_reclaimable heading "Reclaimable GB" for a20
col percrec heading "% Reclaimable" for 90.0
set linesize 1000
with maxsizes as (
select name
, floor(space_limit/1024/1024/1024) space_limit
from v$recovery_file_dest
)
, useds as (
select name
, ceil(space_used/1024/1024/1024) space_used
from v$recovery_file_dest
)
, reclaimables as (
select name
, (space_reclaimable/1024/1024/1024) space_reclaimable
from v$recovery_file_dest
)
select m.name fsname
, trunc(m.space_limit) space_limit
, trunc(u.space_used) ||' (' ||
round((u.space_used/m.space_limit)*100,2) ||'%)' space_used
, trunc(r.space_reclaimable) ||' (' ||
round((r.space_reclaimable/m.space_limit)*100,2) ||'%)' space_reclaimable
from maxsizes m
, useds u
, reclaimables r
where m.name = u.name
and u.name = r.name
order by m.name
/
and in detail…
set linesize 1000
select * from v$recovery_area_usage where percent_space_used > 0;
size taken up by the flashback logs
select estimated_flashback_size from v$flashback_database_log;
This method cuts down on restore time as the images are already present; they just need switching to with SWITCH DATABASE TO COPY;
Validate that the database components are free of corruption
backup <something> validate;
or validate that the database components on tape are free of corruption
restore <something> validate;
eg (using Commvault):
connect target /
connect catalog rman/rman@rman
run {
allocate channel c0 type sbt_tape parms 'SBT_LIBRARY=/opt/commvault/Base64/libobk.a(shr.o),ENV=(cvsrcclientname=ci00031669-hn491)' ;
set until time "to_date('2018-08-17 16:00:00','YYYY-MM-DD HH24:MI:SS')";
restore database validate;
}
or
backup database validate; backup archivelog all validate; backup current controlfile validate; backup tablespace users validate; backup datafile 1 validate;
or
restore database validate; restore archivelog all validate; restore spfile validate; restore tablespace users validate;
Use “preview” instead of “validate” to see the list of backup pieces that would be used in the restore process. “preview” only reports which backups are needed, without reading them; “validate” actually reads the backup pieces to check that they are usable.
export NLS_DATE_FORMAT='yyyymmddhh24miss'
echo "restore controlfile preview; restore database preview;" | rman target / | awk '
/Finished restore at /{timestamp=$4}
/Recovery must be done beyond SCN /{if ($7>scn) scn=$7 }
/^ *(Piece )?Name: / { sub(/^ *(Piece )?Name: /,"") ; files[$0]=1 }
END{ for (i in files) print i > "files-"timestamp"-SCN-"scn".txt" }
'
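The parsing logic can be checked without a database by feeding the same awk program a canned log excerpt (the sample piece names below are invented):

```shell
# Run the piece-name extraction from the awk program above against stdin
# and print the unique names. Input lines mimic RMAN preview output.
extract_pieces() {
  awk '
    /Finished restore at /{timestamp=$4}
    /Recovery must be done beyond SCN /{if ($7>scn) scn=$7}
    /^ *(Piece )?Name: / { sub(/^ *(Piece )?Name: /,""); files[$0]=1 }
    END{ for (i in files) print i }
  '
}
```

Duplicated names collapse to one entry because they are stored as array keys.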
If that approach does not work, try this
vi restval.rman
connect target /
connect catalog rman/rman@rman
run {
allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.cln.be, NSR_CLIENT=hn481, NSR_DATA_VOLUME_POOL=DD1DAILY)';
restore controlfile preview;
restore database preview;
release channel t1;
}
cat restval.rman | NLS_DATE_FORMAT='yyyy-mon-dd hh24:mi:ss' rman | awk '
/Finished restore at /{timestamp=$4}
/Recovery must be done beyond SCN /{if ($7>scn) scn=$7 }
/^ *(Piece )?Name: / { sub(/^ *(Piece )?Name: /,"") ; files[$0]=1 }
END{ for (i in files) print i > "files-"timestamp"-SCN-"scn".txt" }
'
Environment must be set before running…
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
rman <<EORMAN
set echo on
connect target;
restore database validate;
exit
EORMAN
if [[ $? != 0 ]]; then
echo "*** ERROR: RMAN restore validate of database failed"
exit 1;
fi
cmd=$(sqlplus -S "/ as sysdba" <<'EOSQL'
whenever sqlerror exit sql.sqlcode
whenever oserror exit 1
set newpage none pages 0 head off veri off feed off term off echo off pause off numw 32
select 'restore validate archivelog from scn '||min_first_change#||' until scn '||max_next_change# from v$backup_archivelog_summary;
exit
EOSQL
)
if [[ $? != 0 ]]; then
echo "*** ERROR: $cmd"
exit 1;
fi
rman <<EORMAN
set echo on
connect target;
${cmd};
exit
EORMAN
if [[ $? != 0 ]]; then
echo "*** ERROR: RMAN restore validate of archivelogs failed"
exit 1;
fi
Had a problem where the name of a datafile was created incorrectly: it contained non-printable characters because the backspace key was set up wrongly and someone made a typing mistake.
SQL> alter database datafile 67 offline;
rman target / catalog rman/rman@rman12d
run {
allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.cln.be,NSR_CLIENT=hn491,NSR_DATA_VOLUME_POOL=DD1DAILY)';
set newname for datafile 67 to '/cln/acc/ora_data3/adsa1/ADS_ARCHIVE_05.dbf';
restore datafile 67;
switch datafile 67;
recover datafile 67;
}
SQL> alter database datafile 67 online;
In version 12c file renaming is more like it should be!
alter database move datafile '/cln/exp/ora_data2/clne/data8/ts_thaler_cu_31.dbf' to '/cln/exp/ora_data2/clne/data9/ts_thaler_cu_31.dbf';
connect target /
backup validate datafile 00091;
run {
set maxcorrupt for datafile 91 to 32;
backup validate datafile 00091;
}
blockrecover datafile 91 block 889664;
Channel test
run {
allocate channel c1 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
}
rman nocatalog target /
run {
allocate channel c1 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
backup database;
}
/usr/bin/nsrdasv -z /ux/HN512/backup/nmda.config/nmda_oracle_hn512_11.2.0.4.cfg \
-s hn6000.company.be \
-g CTI_2_ARCLOG_HN6014_LBK \
-LL \
-m hn512 \
-a DIRECT_ACCESS=No \
-l full \
-q \
-W 78 \
-N RMAN:/ux/HN512/backup/rman/backup_scripts/backup-lbk_archivelog.rman
rman send 'NSR_ENV (NSR_CLIENT=hn512, NSR_DIRECT_ACCESS=No, NSR_GROUP=CTI_2_ARCLOG_HN6014_LBK, NSR_PARENT_JOBID=11667069, NSR_SERVER=hn6000.company.be, NSR_SAVESET_NAME=RMAN:/ux/HN512/backup/rman/backup_scripts/backup-lbk_archivelog.rman, NSR_MIN_DDBOOST_VERSION=2.5.1.1)' \
cmdfile '/ux/HN512/backup/rman/backup_scripts/backup-lbk_archivelog.rman'
export NLS_LANG=american
export NLS_DATE_FORMAT=YYYY-MM-DD:HH24:MI:SS
rman target sys/sys@${FROM_SID} catalog rman/rman@catalog_db auxiliary / | tee -a $HOME/dup_${FROM_SID}_to_${TO_SID}_$(date +'%Y%m%d').log
run {
allocate auxiliary channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=${TAPE_SERVER},NSR_CLIENT=${FROM_HOST},NSR_DATA_VOLUME_POOL=${TAPE_POOL})';
allocate auxiliary channel t2 type 'sbt_tape' parms 'ENV=(NSR_SERVER=${TAPE_SERVER},NSR_CLIENT=${FROM_HOST},NSR_DATA_VOLUME_POOL=${TAPE_POOL})';
allocate auxiliary channel t3 type 'sbt_tape' parms 'ENV=(NSR_SERVER=${TAPE_SERVER},NSR_CLIENT=${FROM_HOST},NSR_DATA_VOLUME_POOL=${TAPE_POOL})';
allocate auxiliary channel t4 type 'sbt_tape' parms 'ENV=(NSR_SERVER=${TAPE_SERVER},NSR_CLIENT=${FROM_HOST},NSR_DATA_VOLUME_POOL=${TAPE_POOL})';
set until time '${RECOVER_TO_TIME}';
duplicate target database to ${TO_SID}
nofilenamecheck
spfile
parameter_value_convert '/oracle/exp/ora_data2/','/oracle/kap/ora_data1/', '/oracle/exp/ora_bin2/','/oracle/kap/ora_bin1/'
set controlfiles='/oracle/kap/ora_data1/common/db/bilk/control01.ctl', '/oracle/kap/ora_data1/common/db/bilk/control02.ctl', '/oracle/kap/ora_data1/common/db/bilk/control03.ctl'
set db_file_name_convert='/oracle/exp/ora_data2/common/db/bile/','/oracle/kap/ora_data1/common/db/bilk/'
set log_file_name_convert='/oracle/exp/ora_data2/common/db/bile/','/oracle/kap/ora_data1/common/db/bilk/'
;
release channel t1;
release channel t2;
release channel t3;
release channel t4;
}
Also useful to understand the basic idea (without the incremental recovery):
There are several interesting reasons for using backup to copy instead of the default (backupset). Our scenario here is that:
A filesystem needs to be available with enough space to hold the (new) 3Tb database.
The %b format means just the filenames are used.
export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
run
{
allocate channel c1 type disk format '/path/to/backup/datafiles/%b';
allocate channel c2 type disk format '/path/to/backup/datafiles/%b';
allocate channel c3 type disk format '/path/to/backup/datafiles/%b';
allocate channel c4 type disk format '/path/to/backup/datafiles/%b';
allocate channel c5 type disk format '/path/to/backup/datafiles/%b';
allocate channel c6 type disk format '/path/to/backup/datafiles/%b';
allocate channel c7 type disk format '/path/to/backup/datafiles/%b';
allocate channel c8 type disk format '/path/to/backup/datafiles/%b';
backup as copy database;
release channel c1;
release channel c2;
release channel c3;
release channel c4;
release channel c5;
release channel c6;
release channel c7;
release channel c8;
}
This backup failed partially because someone in the past had added a few datafiles to a tablespace but appended a space to the end of the filename, so we needed this additional step. The %U format generates a unique filename for each file (which includes the file_id, so it can be reassociated with its original name).
run {
allocate channel c1 type disk format '/path/to/backup/datafiles/%U';
backup as copy datafile 170;
backup as copy datafile 171;
release channel c1;
}
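Outside the database, the same bogus names can be spotted in a plain list of filenames. This is a toy check (the function name is made up); it flags any whitespace in a name, not just trailing spaces:

```shell
# Sketch: given a file containing datafile names one per line,
# print (with line numbers) any name containing whitespace.
find_odd_names() {
  grep -n '[[:space:]]' "$1"
}
```

Inside the database, `select name from v$datafile where name like '% %';` (shown further down) does the same job.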
At intervention time, stop the applications accessing the database, stop the listeners and wait for any ongoing transactions to end.
Run an archivelog backup sending any new logs to tape but don't delete them from disk as they will be needed for recovery.
run {
allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.cln.be,NSR_CLIENT=hn512,NSR_DATA_VOLUME_POOL=DD2DAILY,NSR_SAVESET_BROWSE=1 MONTH,NSR_SAVESET_RETENTION=1 MONTH,NSR_END_ERROR_IGNORE=TRUE)';
crosscheck archivelog all;
backup format 'arc_%d_%I_%t_%s_%p' archivelog all not backed up 2 times;
release channel t1;
}
run {
allocate channel c1 type disk;
backup as copy current controlfile format '/home/oracle/control01.ctl';
release channel c1;
}
Take a text copy as well just in case…
alter database backup controlfile to trace as '$HOME/lbk_control01.sql';
create pfile from spfile;
Copy one member of each group just in case…
select 'cp -p '||member||' /path/to/backup/datafiles/ &' from v$logfile where member like '%a.log';
Change any parameters that have a directory path to reflect the location of the new files. At the very least, change the control_files parameter.
Startup the instance using nomount and specify the modified pfile
startup nomount pfile='path/to/pfile/initSID.ora'
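The path edits in the copied pfile can be done with a one-line sed; this is a sketch (the helper name is invented and the old/new paths are just examples):

```shell
# Sketch: rewrite one directory prefix throughout a pfile, writing the
# result to <pfile>.new and printing the new file's path.
rewrite_pfile_paths() {
  old="$1"; new="$2"; pfile="$3"
  sed "s|${old}|${new}|g" "$pfile" > "${pfile}.new"
  echo "${pfile}.new"
}
```

Check the result by eye (especially control_files) before the startup nomount.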
If all has been modified correctly, mounting the database should prove successful:
alter database mount;
set lines 1000 pages 0 trims on feed off
col txt for a500
spool rename_files_in_controlfile.sql
select 'alter database rename file '''||name||''' to '''||replace(name, '/cln/acc/ora_data2/common/db/bila','/cln/acc/ora_data2/bila') ||''';' txt from v$datafile;
spool off
host perl -n -i -e 'print if m/^alter/' rename_files_in_controlfile.sql
@rename_files_in_controlfile
host rm rename_files_in_controlfile.sql
select name from v$datafile where name not like '/path/to/original/datafiles/%';
select name from v$datafile where name like '% %';
alter database disable block change tracking;
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
run {
allocate channel c1 device type disk;
recover database;
release channel c1;
}
alter database open;
set lines 1000 pages 1000
col file_name for a60
select file_name,status,online_status from dba_data_files;
select * from v$tempfile;
select member from v$logfile;
This is a clever way of restoring a large database quickly with not too much downtime. It could be used as a permanent rolling forward of a database in case of a crash.
The backup is based on image copies not on backupsets.
run {
recover copy of database with tag 'incr_update';
backup incremental level 1 for recover of copy with tag 'incr_update'
database;
}
Merged incremental backups can reduce Oracle database recovery time and suit primarily a disk-based backup and recovery strategy, because the datafile image copies need to be updated in place.
The first time the RMAN script runs it creates a level 0 backup of the datafile.
On subsequent runs, it creates level 1 incremental backups of the datafiles, by using the System Change Number as a reference, and applies all level 1 backups taken since the last full backup to the image.
The RMAN script uses the tag to link the incremental backups to the image copy.
From the “Oracle Database Backup and Recovery Administrators Manual”:
The syntax used in the script does not, however, make it clear how the strategy works.
To understand the script and the strategy, it is necessary to understand the effects of these two commands when no datafile copies or incremental backups exist.
Note: Even when the BACKUP INCREMENTAL LEVEL 1 … FOR RECOVER OF COPY command is used with DEVICE TYPE SBT to create a backup on tape, the first time it is used it creates the image copy on disk, and does not write any backup on tape. Subsequent incremental level 1 backups can be created on tape once the image copy is on disk.
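The first-run/subsequent-run behaviour can be illustrated with a toy shell model. This is not RMAN, just the state logic: on the first run there is no copy, so a "level 0" image is created; on later runs the existing copy is rolled forward (the function name and SCN-in-a-file representation are invented for illustration):

```shell
# Toy model of the recover-copy cycle: a state file stands in for the
# image copy, holding the SCN it has been recovered to.
run_cycle() {
  state_file="$1"; current_scn="$2"
  if [ ! -f "$state_file" ]; then
    # first run: "no copy of datafile found to recover" -> create level 0
    echo "$current_scn" > "$state_file"
    echo "level0 created at SCN $current_scn"
  else
    # later runs: apply the level 1 incremental to roll the copy forward
    copy_scn=$(cat "$state_file")
    echo "$current_scn" > "$state_file"
    echo "recovered copy from SCN $copy_scn to SCN $current_scn"
  fi
}
```

This mirrors the "no copy of datafile ... found to recover" messages seen in the first-run log below.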
A 5Tb database needs to be moved to a different filesystem. Using this method, we can reduce the downtime to minutes instead of a day!
This is the script.
#!/usr/bin/env ksh
# ==============================================================================
# Name : incremental_restore_as_copy.ksh
# Description : Maintains an up-to-date clone copy of a database
#
# Parameters : -i <instance name>
# -f <filesystem where database files should be stored>
#
# Example : incremental_restore_as_copy.ksh -i adst -f /mnt/adst
#
# Modification History
# ====================
# When Who What
# ========= ================= ==================================================
# 19-FEB-19 Stuart Barkley Created
# ==============================================================================
PROGNAME="$(basename $0)"
#
# Reference:
# https://blog.zeddba.com/2019/05/30/rman-incremental-updating-backup-demo-part-1/
# https://yawod.wordpress.com/2020/04/24/rman-incrementally-updated-image-copy-backups/
#
function usage {
echo
echo "Usage: ${PROGNAME} -i <Instance name> -f <Filesystem to store db files>"
echo "eg: ${PROGNAME} -i adst -f /mnt/adst"
echo
exit 1
}
# ------------------
# get the parameters
# ------------------
while getopts "i:f:" OPT
do
case "$OPT" in
i) ORACLE_SID="${OPTARG}";
;;
f) FILESYSTEM="${OPTARG}";
;;
*) usage
;;
esac
done
shift $((OPTIND-1))
[[ "${FILESYSTEM}" == "" ]] && printf "\n%s\n" "Filesystem is a mandatory parameter." && usage
[[ ! -d "${FILESYSTEM}" ]] && printf "\n%s\n" "Filesystem ${FILESYSTEM} does not exist." && usage
export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
ORAENV_ASK=NO
grep "^${ORACLE_SID}:" /etc/oratab>/dev/null 2>&1
[[ $? -ne 0 ]] && echo "ERROR: ${ORACLE_SID} is not in oratab" && exit 1
. oraenv >/dev/null
[[ $? -ne 0 ]] && echo "ERROR: Unable to set environment for ${ORACLE_SID}" && exit 1
cat<<EOCAT>/tmp/incremental_restore_as_copy_${ORACLE_SID}.rman
run
{
allocate channel d1 type disk format '${FILESYSTEM}/%b';
allocate channel d2 type disk format '${FILESYSTEM}/%b';
allocate channel d3 type disk format '${FILESYSTEM}/%b';
allocate channel d4 type disk format '${FILESYSTEM}/%b';
allocate channel d5 type disk format '${FILESYSTEM}/%b';
allocate channel d6 type disk format '${FILESYSTEM}/%b';
allocate channel d7 type disk format '${FILESYSTEM}/%b';
allocate channel d8 type disk format '${FILESYSTEM}/%b';
recover copy of database with tag "Disk_Backup";
delete noprompt obsolete recovery window of 2 days device type disk;
backup incremental level 1
for recover of copy
with tag "Disk_Backup" as backupset
format '${FILESYSTEM}/rman_%d_%T_%U_incr'
database;
backup as compressed backupset
tag 'backup_disk_arch'
archivelog all not backed up 1 times
format '${FILESYSTEM}/rman_%d_%T_%U_arch';
backup as compressed backupset
tag 'backup_disk_ctrl'
current controlfile
format '${FILESYSTEM}/rman_%d_%T_%U_ctrl';
backup as compressed backupset
tag 'backup_disk_spfl'
spfile
format '${FILESYSTEM}/rman_%d_%T_%U_spfl';
release channel d1;
release channel d2;
release channel d3;
release channel d4;
release channel d5;
release channel d6;
release channel d7;
release channel d8;
}
EOCAT
rman target / cmdfile="/tmp/incremental_restore_as_copy_${ORACLE_SID}.rman"
This is the log of the first run…
rman target / cmdfile=/tmp/incremental_restore_as_copy_adst.rman
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Apr 21 10:47:15 2022
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: ADST (DBID=3477316030)
RMAN> run
2> {
3> allocate channel d1 type disk format '/mnt/adst/%b';
4> allocate channel d2 type disk format '/mnt/adst/%b';
5> allocate channel d3 type disk format '/mnt/adst/%b';
6> allocate channel d4 type disk format '/mnt/adst/%b';
7> allocate channel d5 type disk format '/mnt/adst/%b';
8> allocate channel d6 type disk format '/mnt/adst/%b';
9> allocate channel d7 type disk format '/mnt/adst/%b';
10> allocate channel d8 type disk format '/mnt/adst/%b';
11> recover copy of database with tag "Disk_Backup";
12> backup incremental level 1
13> for recover of copy
14> with tag "Disk_Backup" as backupset
15> format '/mnt/adst/rman_%d_%T_%U_incr.bkp'
16> database;
17> release channel d1;
18> release channel d2;
19> release channel d3;
20> release channel d4;
21> release channel d5;
22> release channel d6;
23> release channel d7;
24> release channel d8;
25> }
26>
using target database control file instead of recovery catalog
allocated channel: d1
channel d1: SID=771 device type=DISK
allocated channel: d2
channel d2: SID=820 device type=DISK
allocated channel: d3
channel d3: SID=962 device type=DISK
allocated channel: d4
channel d4: SID=1011 device type=DISK
allocated channel: d5
channel d5: SID=1108 device type=DISK
allocated channel: d6
channel d6: SID=1155 device type=DISK
allocated channel: d7
channel d7: SID=1203 device type=DISK
allocated channel: d8
channel d8: SID=1253 device type=DISK
Starting recover at 21-APR-22 10:47:20
no copy of datafile 1 found to recover
no copy of datafile 2 found to recover
no copy of datafile 3 found to recover
no copy of datafile 4 found to recover
no copy of datafile 5 found to recover
no copy of datafile 6 found to recover
no copy of datafile 7 found to recover
no copy of datafile 8 found to recover
no copy of datafile 9 found to recover
...
...
no copy of datafile 320 found to recover
no copy of datafile 321 found to recover
no copy of datafile 322 found to recover
no copy of datafile 323 found to recover
no copy of datafile 324 found to recover
no copy of datafile 325 found to recover
Finished recover at 21-APR-22 10:47:25
Starting backup at 21-APR-22 10:47:25
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 6 found
no parent backup or copy of datafile 7 found
no parent backup or copy of datafile 8 found
no parent backup or copy of datafile 10 found
no parent backup or copy of datafile 11 found
no parent backup or copy of datafile 14 found
no parent backup or copy of datafile 41 found
...
...
no parent backup or copy of datafile 319 found
no parent backup or copy of datafile 146 found
no parent backup or copy of datafile 147 found
no parent backup or copy of datafile 170 found
no parent backup or copy of datafile 12 found
channel d1: starting datafile copy
input datafile file number=00003 name=/cln/tst/ora_data3/adst/undotbs01.dbf
channel d2: starting datafile copy
input datafile file number=00006 name=/cln/tst/ora_data3/adst/ads_idx01.dbf
channel d3: starting datafile copy
input datafile file number=00007 name=/cln/tst/ora_data3/adst/ads_data02.dbf
channel d4: starting datafile copy
input datafile file number=00008 name=/cln/tst/ora_data3/adst/ads_idx02.dbf
channel d5: starting datafile copy
input datafile file number=00010 name=/cln/tst/ora_data3/adst/ads_data03.dbf
channel d6: starting datafile copy
input datafile file number=00011 name=/cln/tst/ora_data3/adst/ads_data04.dbf
channel d7: starting datafile copy
input datafile file number=00014 name=/cln/tst/ora_data3/adst/ads_archive01.dbf
channel d8: starting datafile copy
input datafile file number=00041 name=/cln/tst/ora_data3/adst/undotbs02.dbf
output file name=/mnt/adst/ads_archive01.dbf tag=DISK_BACKUP RECID=161 STAMP=1102589701
channel d7: datafile copy complete, elapsed time: 00:07:36
channel d7: starting datafile copy
input datafile file number=00087 name=/cln/tst/ora_data3/adst/undotbs03.dbf
output file name=/mnt/adst/undotbs01.dbf tag=DISK_BACKUP RECID=166 STAMP=1102589713
channel d1: datafile copy complete, elapsed time: 00:07:51
channel d1: starting datafile copy
input datafile file number=00181 name=/cln/tst/ora_data3/adst/ads_data71.dbf
output file name=/mnt/adst/ads_idx01.dbf tag=DISK_BACKUP RECID=162 STAMP=1102589713
channel d2: datafile copy complete, elapsed time: 00:07:51
channel d2: starting datafile copy
input datafile file number=00182 name=/cln/tst/ora_data3/adst/ads_data72.dbf
output file name=/mnt/adst/ads_data02.dbf tag=DISK_BACKUP RECID=163 STAMP=1102589713
channel d3: datafile copy complete, elapsed time: 00:07:51
channel d3: starting datafile copy
input datafile file number=00183 name=/cln/tst/ora_data3/adst/ads_data73.dbf
output file name=/mnt/adst/ads_idx02.dbf tag=DISK_BACKUP RECID=164 STAMP=1102589713
channel d4: datafile copy complete, elapsed time: 00:07:51
channel d4: starting datafile copy
input datafile file number=00184 name=/cln/tst/ora_data3/adst/ads_data74.dbf
output file name=/mnt/adst/ads_data03.dbf tag=DISK_BACKUP RECID=165 STAMP=1102589713
channel d5: datafile copy complete, elapsed time: 00:07:51
channel d5: starting datafile copy
input datafile file number=00185 name=/cln/tst/ora_data3/adst/ads_data75.dbf
output file name=/mnt/adst/ads_data04.dbf tag=DISK_BACKUP RECID=167 STAMP=1102589713
channel d6: datafile copy complete, elapsed time: 00:07:51
channel d6: starting datafile copy
input datafile file number=00192 name=/cln/tst/ora_data3/adst/ads_data76.dbf
output file name=/mnt/adst/undotbs02.dbf tag=DISK_BACKUP RECID=168 STAMP=1102589714
channel d8: datafile copy complete, elapsed time: 00:07:52
channel d8: starting datafile copy
input datafile file number=00193 name=/cln/tst/ora_data3/adst/ads_data77.dbf
output file name=/mnt/adst/undotbs03.dbf tag=DISK_BACKUP RECID=169 STAMP=1102590264
channel d7: datafile copy complete, elapsed time: 00:09:21
channel d7: starting datafile copy
input datafile file number=00200 name=/cln/tst/ora_data3/adst/ads_data79.dbf
output file name=/mnt/adst/ads_data72.dbf tag=DISK_BACKUP RECID=170 STAMP=1102590280
...
...
channel d4: datafile copy complete, elapsed time: 00:01:23
output file name=/mnt/adst/ts_credo_data_02.dbf tag=DISK_BACKUP RECID=321 STAMP=1102598253
channel d3: datafile copy complete, elapsed time: 00:01:43
output file name=/mnt/adst/ts_xbrl_02.dbf tag=DISK_BACKUP RECID=322 STAMP=1102598477
channel d2: datafile copy complete, elapsed time: 00:05:23
output file name=/mnt/adst/ads_idx16.dbf tag=DISK_BACKUP RECID=323 STAMP=1102598485
channel d1: datafile copy complete, elapsed time: 00:05:34
Finished backup at 21-APR-22 13:21:30
Starting Control File and SPFILE Autobackup at 21-APR-22 13:21:30
piece handle=/cln/tst/ora_bin1/app/oracle/product/12.1.0.2/dbhome_1/dbs/c-3477316030-20220421-0f comment=NONE
Finished Control File and SPFILE Autobackup at 21-APR-22 13:21:33
released channel: d1
released channel: d2
released channel: d3
released channel: d4
released channel: d5
released channel: d6
released channel: d7
released channel: d8
Recovery Manager complete.
It starts with the recover and reports “no copy of datafile n found to recover”. This is normal: it is the first run, so there are no datafile copies yet for a level 1 backup to be applied to.
It then starts the backup for recover. There are messages saying “no parent backup or copy of datafile n found”. This is also normal: it is the first run, so there is no level 0 backup to base a level 1 on. This causes a level 0 backup to be taken instead, in preparation for the next run.
The exact same shell script is run in all cases.
Recovery Manager: Release 12.1.0.2.0 - Production on Fri Apr 22 15:23:33 2022
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: ADST (DBID=3477316030)
RMAN> run
2> {
3> allocate channel d1 type disk format '/mnt/adst/%b';
4> allocate channel d2 type disk format '/mnt/adst/%b';
5> allocate channel d3 type disk format '/mnt/adst/%b';
6> allocate channel d4 type disk format '/mnt/adst/%b';
7> allocate channel d5 type disk format '/mnt/adst/%b';
8> allocate channel d6 type disk format '/mnt/adst/%b';
9> allocate channel d7 type disk format '/mnt/adst/%b';
10> allocate channel d8 type disk format '/mnt/adst/%b';
11> recover copy of database with tag "Disk_Backup";
12> backup incremental level 1
13> for recover of copy
14> with tag "Disk_Backup" as backupset
15> format '/mnt/adst/rman_%d_%T_%U_incr.bkp'
16> database;
17> release channel d1;
18> release channel d2;
19> release channel d3;
20> release channel d4;
21> release channel d5;
22> release channel d6;
23> release channel d7;
24> release channel d8;
25> }
26>
using target database control file instead of recovery catalog
allocated channel: d1
channel d1: SID=1253 device type=DISK
allocated channel: d2
channel d2: SID=1299 device type=DISK
allocated channel: d3
channel d3: SID=1348 device type=DISK
allocated channel: d4
channel d4: SID=1395 device type=DISK
allocated channel: d5
channel d5: SID=1443 device type=DISK
allocated channel: d6
channel d6: SID=1490 device type=DISK
allocated channel: d7
channel d7: SID=5 device type=DISK
allocated channel: d8
channel d8: SID=51 device type=DISK
Starting recover at 22-APR-2022 15:23:38
no copy of datafile 1 found to recover
no copy of datafile 2 found to recover
no copy of datafile 3 found to recover
no copy of datafile 4 found to recover
no copy of datafile 5 found to recover
no copy of datafile 6 found to recover
no copy of datafile 7 found to recover
no copy of datafile 8 found to recover
no copy of datafile 9 found to recover
no copy of datafile 10 found to recover
no copy of datafile 11 found to recover
no copy of datafile 12 found to recover
...
...
no copy of datafile 320 found to recover
no copy of datafile 321 found to recover
no copy of datafile 322 found to recover
no copy of datafile 323 found to recover
no copy of datafile 324 found to recover
no copy of datafile 325 found to recover
Finished recover at 22-APR-2022 15:23:43
Starting backup at 22-APR-2022 15:23:43
channel d1: starting incremental level 1 datafile backup set
channel d1: specifying datafile(s) in backup set
input datafile file number=00011 name=/crelan/tst/ora_data3/adst/ads_data04.dbf
input datafile file number=00185 name=/crelan/tst/ora_data3/adst/ads_data75.dbf
input datafile file number=00209 name=/crelan/tst/ora_data3/adst/ads_data84.dbf
input datafile file number=00249 name=/crelan/tst/ora_data3/adst/ads_data92.dbf
input datafile file number=00282 name=/crelan/tst/ora_data3/adst/ads_idx14.dbf
input datafile file number=00025 name=/crelan/tst/ora_data3/adst/ads_data13.dbf
input datafile file number=00033 name=/crelan/tst/ora_data3/adst/ads_data21.dbf
input datafile file number=00042 name=/crelan/tst/ora_data3/adst/ads_idx04.dbf
input datafile file number=00050 name=/crelan/tst/ora_data3/adst/ads_data34.dbf
input datafile file number=00058 name=/crelan/tst/ora_data3/adst/ads_data42.dbf
input datafile file number=00066 name=/crelan/tst/ora_data3/adst/ads_data50.dbf
input datafile file number=00112 name=/crelan/tst/ora_data3/adst/ads_data55.dbf
input datafile file number=00120 name=/crelan/tst/ora_data3/adst/ads_data63.dbf
input datafile file number=00296 name=/crelan/tst/ora_data3/adst/ads_data106.dbf
input datafile file number=00313 name=/crelan/tst/ora_data3/adst/ads_data119.dbf
input datafile file number=00323 name=/crelan/tst/ora_data3/adst/ads_data125.dbf
input datafile file number=00173 name=/crelan/tst/ora_data3/adst/ads_data67.dbf
input datafile file number=00285 name=/crelan/tst/ora_data3/adst/ads_data97.dbf
input datafile file number=00308 name=/crelan/tst/ora_data3/adst/ads_data114.dbf
input datafile file number=00301 name=/crelan/tst/ora_data3/adst/ads_idx16.dbf
channel d1: starting piece 1 at 22-APR-2022 15:23:48
channel d2: starting incremental level 1 datafile backup set
channel d2: specifying datafile(s) in backup set
input datafile file number=00007 name=/crelan/tst/ora_data3/adst/ads_data02.dbf
input datafile file number=00182 name=/crelan/tst/ora_data3/adst/ads_data72.dbf
input datafile file number=00202 name=/crelan/tst/ora_data3/adst/ads_data80.dbf
input datafile file number=00246 name=/crelan/tst/ora_data3/adst/ads_data89.dbf
input datafile file number=00261 name=/crelan/tst/ora_data3/adst/ads_data95.dbf
input datafile file number=00022 name=/crelan/tst/ora_data3/adst/ads_data10.dbf
input datafile file number=00030 name=/crelan/tst/ora_data3/adst/ads_data18.dbf
input datafile file number=00038 name=/crelan/tst/ora_data3/adst/ads_data26.dbf
input datafile file number=00047 name=/crelan/tst/ora_data3/adst/ads_data31.dbf
input datafile file number=00055 name=/crelan/tst/ora_data3/adst/ads_data39.dbf
input datafile file number=00063 name=/crelan/tst/ora_data3/adst/ads_data47.dbf
...
...
input datafile file number=00111 name=/crelan/tst/ora_data3/adst/ads_data54.dbf
input datafile file number=00119 name=/crelan/tst/ora_data3/adst/ads_data62.dbf
input datafile file number=00295 name=/crelan/tst/ora_data3/adst/ads_data105.dbf
input datafile file number=00312 name=/crelan/tst/ora_data3/adst/ads_data118.dbf
input datafile file number=00322 name=/crelan/tst/ora_data3/adst/ads_data124.dbf
input datafile file number=00172 name=/crelan/tst/ora_data3/adst/ads_data66.dbf
input datafile file number=00175 name=/crelan/tst/ora_data3/adst/ads_data69.dbf
input datafile file number=00305 name=/crelan/tst/ora_data3/adst/ads_data111.dbf
input datafile file number=00147 name=/crelan/tst/ora_data3/adst/ts_cre_index_01.dbf
input datafile file number=00012 name=/crelan/tst/ora_data3/adst/ts_precise.dbf
channel d8: starting piece 1 at 22-APR-2022 15:23:51
channel d2: finished piece 1 at 22-APR-2022 16:09:37
piece handle=/mnt/adst/rman_ADST_20220422_2f0rjfs4_1_1_incr.dbf tag=DISK_BACKUP comment=NONE
channel d2: backup set complete, elapsed time: 00:45:48
channel d1: finished piece 1 at 22-APR-2022 16:09:47
piece handle=/mnt/adst/rman_ADST_20220422_2e0rjfs4_1_1_incr.dbf tag=DISK_BACKUP comment=NONE
channel d1: backup set complete, elapsed time: 00:45:59
channel d3: finished piece 1 at 22-APR-2022 16:09:47
piece handle=/mnt/adst/rman_ADST_20220422_2g0rjfs5_1_1_incr.dbf tag=DISK_BACKUP comment=NONE
channel d3: backup set complete, elapsed time: 00:45:58
channel d5: finished piece 1 at 22-APR-2022 16:09:47
piece handle=/mnt/adst/rman_ADST_20220422_2i0rjfs5_1_1_incr.dbf tag=DISK_BACKUP comment=NONE
channel d5: backup set complete, elapsed time: 00:45:57
channel d4: finished piece 1 at 22-APR-2022 16:09:57
piece handle=/mnt/adst/rman_ADST_20220422_2h0rjfs5_1_1_incr.dbf tag=DISK_BACKUP comment=NONE
channel d4: backup set complete, elapsed time: 00:46:08
channel d6: finished piece 1 at 22-APR-2022 16:09:57
piece handle=/mnt/adst/rman_ADST_20220422_2j0rjfs6_1_1_incr.dbf tag=DISK_BACKUP comment=NONE
channel d6: backup set complete, elapsed time: 00:46:07
channel d8: finished piece 1 at 22-APR-2022 16:10:47
piece handle=/mnt/adst/rman_ADST_20220422_2l0rjfs7_1_1_incr.dbf tag=DISK_BACKUP comment=NONE
channel d8: backup set complete, elapsed time: 00:46:56
channel d7: finished piece 1 at 22-APR-2022 16:11:17
piece handle=/mnt/adst/rman_ADST_20220422_2k0rjfs6_1_1_incr.dbf tag=DISK_BACKUP comment=NONE
channel d7: backup set complete, elapsed time: 00:47:26
Finished backup at 22-APR-2022 16:11:17
Starting Control File and SPFILE Autobackup at 22-APR-2022 16:11:17
piece handle=/crelan/tst/ora_bin1/app/oracle/product/12.1.0.2/dbhome_1/dbs/c-3477316030-20220422-12 comment=NONE
Finished Control File and SPFILE Autobackup at 22-APR-2022 16:11:20
released channel: d1
released channel: d2
released channel: d3
released channel: d4
released channel: d5
released channel: d6
released channel: d7
released channel: d8
Recovery Manager complete.
The recover again starts with “no copy of datafile n found to recover”. This is still normal: it is only the second run. The first run created the level 0 image copies; this run creates a level 1 backup, so from run 3 onwards the recover command will have something to apply (the level 1 backup from the previous run).
It then starts the backup for recover. This time the level 1 backup works correctly because we created the level 0 copies last time.
From now on, the script runs cleanly. The recover applies the level 1 backup from the previous run to the datafile copies, bringing them up to date with the changes captured during that backup.
Then the next level 1 incremental backup runs. It will be applied tomorrow (or whenever the script is next run).
Note: when you want to use this set of files as a replacement for the originals, the outstanding incrementals still need to be applied using the first part of the script (the recover), but the backup step after that is not needed.
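Should the copies ever be needed in place of the originals, the idea is roughly as follows (a sketch, not tested here; the tag and the /mnt/adst location are taken from the script above):

```
run
{
# bring the copies fully up to date with the last level 1 backup
recover copy of database with tag "Disk_Backup";
# repoint the controlfile at the copies and recover forward
switch database to copy;
recover database;
}
alter database open;
```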
#!/usr/bin/ksh
# ==============================================================================
# Name        : db_space_calculation.ksh
# Description : Works out if there will be enough space to hold a clone of a
#               specified database on a particular filesystem
#
# Parameters  : none. Script asks questions
#
#
# Modification History
# ====================
# When       Who                What
# =========  =================  ==================================================
# 18-JUL-17  Stuart Barkley     Created
# 17-AUG-17  Stuart Barkley     Get control_files parameter
# 21-AUG-17  Stuart Barkley     Try to get Networker pool and rman stuff
# ==============================================================================
#
PROGNAME=$(basename $0)
# increment this every time it changes
VERSION=0.4.3
printf "\n%s\n" "${PROGNAME} version $VERSION"
printf "%s" "Enter name of source (clone from) database: "
read FROM_SID
printf "%s" "Enter sys password for ${FROM_SID} database: "
read FROM_SYS_PASSWORD
printf "%s" "Enter name of destination (clone to) database: "
read TO_SID
# ----------------------------------------------------------------------
# if destination database exists, we can use it to get datafile location
# ----------------------------------------------------------------------
grep "^${TO_SID}:" /etc/oratab >/dev/null 2>&1
RETVAL=$?
if [[ ${RETVAL} -eq 0 ]]; then
ORAENV_ASK=NO
export ORACLE_SID=${TO_SID}
. oraenv >/dev/null 2>&1
RETVAL=$((RETVAL+$?))
fi
if [[ ${RETVAL} -ne 0 ]]; then
# destination database does not yet exist, ask user
printf "%s" "Enter datafile directory/filesystem for ${TO_SID} database: "
read TO_DATA_DIR
fi
function calc { awk "BEGIN { print $* }"; }
# --------------------------------------
# get host name of the FROM_SID database
# --------------------------------------
FROM_HOST=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
select host_name
from v$instance
/
EOSQL
`
if [[ $? -ne 0 ]]; then
printf "\n%s\n" "ERROR: Failed to get host name of ${FROM_SID} database."
cat /tmp/results.$$ && rm -f /tmp/results.$$
exit 1
fi
sleep 1
# --------------------------------------------------------
printf "\n%s" "checking size of the ${FROM_SID} database"
# --------------------------------------------------------
DATABASE_SIZE=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
with used_space as
(
select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log
)
, free_space as
(
select sum(bytes) sum_bytes
from dba_free_space
)
select trim(round(sum(used_space.bytes)/1024) - round(free_space.sum_bytes/1024)) kb_used
from free_space
, used_space
group by free_space.sum_bytes
/
EOSQL
`
if [[ $? -ne 0 ]]; then
echo " ...NOK"
printf "\n%s\n" "ERROR: Failed to get size of ${FROM_SID} database."
cat /tmp/results.$$ && rm -f /tmp/results.$$
exit 1
else
echo " ...OK (${DATABASE_SIZE} Kb)" && rm -f /tmp/results.$$
fi
sleep 1
# ---------------------
# get datafile location
# ---------------------
if [[ "${TO_DATA_DIR}" == "" ]]; then
# it was not filled in manually, destination must exist
sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL' | read TO_DATA_DIR
set numwid 15 headi off newpa none feedb off
select distinct substr(name,1,(instr(name,'/',-1,1)-1)) path_name from v$datafile;
EOSQL
fi
# ----------------------------------------------------------------
printf "%s" "checking space available for the ${TO_SID} database"
# ----------------------------------------------------------------
if [[ "$(uname -s)" == "AIX" ]]; then
SPACE_AVAILABLE=$(df -k $TO_DATA_DIR | sed 1d | awk 'BEGIN {i=0} {i=i+$3} END {print i}')
else
# Linux, Darwin, Solaris
SPACE_AVAILABLE=$(df -k $TO_DATA_DIR | sed 1d | awk 'BEGIN {i=0} {i=i+$4} END {print i}')
fi
SPACE_USED=$(du -ks ${TO_DATA_DIR} | awk 'BEGIN {i=0;} {i=i+$1} END {print i}')
# do the calculations in the database to avoid scientific notation issues
sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<EOSQL | read SPACE_AFTER_INSTALL TOTAL_SPACE_AVAILABLE
set numwid 15 headi off newpa none feedb off
select sign(${SPACE_USED} + ${SPACE_AVAILABLE} - ${DATABASE_SIZE}), trim(${SPACE_USED} + ${SPACE_AVAILABLE})
from dual
/
EOSQL
if [[ ${SPACE_AFTER_INSTALL} -le 0 ]]; then
echo " ...NOK"
printf "\n%s\n" "ERROR: Insufficient space."
echo "Size of database ${FROM_SID} : ${DATABASE_SIZE} Kb"
echo "Space available for ${TO_SID}: ${TOTAL_SPACE_AVAILABLE} Kb"
exit 1
else
echo " ...OK, space is sufficient"
fi
sleep 1
printf "\n%s\n" "other bits useful for a sync..."
sleep 1
# ------------------------------------------------
# find the audit file area on source, if available
# ------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select value from v$parameter where name = 'audit_file_dest'
/
EOSQL
`
echo "FROM_ADUMP_DIR='${RETVAL}'"
# -------------------------------------------------
# find the archive log area on source, if available
# -------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select value from v$parameter where name = 'log_archive_dest_1'
/
EOSQL
`
RETVAL=$(echo ${RETVAL}|sed -e 's/LOCATION=//')
echo "FROM_ARCH_DIR='${RETVAL}'"
# -------------------------------------------------------
# work out the RMAN connection details and Networker pool
# -------------------------------------------------------
ssh ${FROM_HOST} "cat /ux/*/backup/rman/backup_scripts/backup-${FROM_SID}*arch*.rman" >/tmp/results.$$ 2>/dev/null
if [[ $? -eq 0 ]]; then
TAPE_POOL=$(cat /tmp/results.$$ | perl -ne 'print $1 if /NSR_DATA_VOLUME_POOL=(\w+),?/')
echo "TAPE_POOL=${TAPE_POOL}"
FROM_RMAN=$(cat /tmp/results.$$ | perl -ne 'print "FROM_RMANCAT_USR=$1\nFROM_RMANCAT_PWD=$2\nFROM_RMANCAT=$3\n" if /rcvcat\s+(\w+)\/(\w+)@(\w+)$/')
echo "${FROM_RMAN}"
else
echo "Could not connect to ${FROM_HOST} to get the RMAN catalog and Networker pool name. Please check manually"
fi
# -----------------------------------------------------
# find the audit file area on destination, if available
# -----------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select value from v$parameter where name = 'audit_file_dest'
/
EOSQL
`
echo "TO_ADUMP_DIR='${RETVAL}'"
# ------------------------------------------------------
# find the archive log area on destination, if available
# ------------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL'
set numwid 15 lines 1000 headi off newpa none feedb off
col value for a100
select value from v$parameter where name = 'log_archive_dest_1'
/
EOSQL
`
RETVAL=$(echo ${RETVAL}|sed -e 's/LOCATION=//')
echo "TO_ARCH_DIR='${RETVAL}'"
# --------------------------------------------------
# find out where the controlfiles are on destination
# --------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL'
set numwid 15 lines 1000 headi off newpa none feedb off
col value for a300
select value from v$parameter where name = 'control_files'
/
EOSQL
`
echo "CONTROL_FILES=\"${RETVAL}\""
# ------------------------------------------------
# work out the DB_FILE_NAME_CONVERT RMAN parameter
# ------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select distinct substr(name,1,(instr(name,'/',-1,1)-1)) path_name from v$datafile
/
EOSQL
`
DB_FILE_NAME_CONVERT="'${RETVAL}'"
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select distinct substr(name,1,(instr(name,'/',-1,1)-1)) path_name from v$datafile
/
EOSQL
`
echo DB_FILE_NAME_CONVERT="\"${DB_FILE_NAME_CONVERT},'${RETVAL}'\""
# -------------------------------------------------
# work out the LOG_FILE_NAME_CONVERT RMAN parameter
# -------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select distinct substr(member,1,(instr(member,'/',-1,1)-1)) path_name from v$logfile
/
EOSQL
`
LOG_FILE_NAME_CONVERT="'${RETVAL}'"
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select distinct substr(member,1,(instr(member,'/',-1,1)-1)) path_name from v$logfile
/
EOSQL
`
echo LOG_FILE_NAME_CONVERT="\"${LOG_FILE_NAME_CONVERT},'${RETVAL}'\""
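The small calc function in the script shells out to awk because ksh integer arithmetic cannot do floating point; as a standalone sketch:

```shell
#!/bin/sh
# floating-point arithmetic via awk, as used by the calc() helper above
calc() { awk "BEGIN { print $* }"; }

calc "1024 / 3"
calc "(2 + 3) * 1.5"   # prints 7.5
```

Anything awk can evaluate can be passed in, including parentheses, which `$(( ))` handles but only over integers in older ksh.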
On source host
sqlplus / as sysdba
create pfile='/backup/init_destsid.ora' from spfile;
exit
#!/usr/bin/ksh
ORAENV_ASK=NO
export ORACLE_SID=sourcesid
. oraenv
export NLS_DATE_FORMAT="DD-MM-YY HH24:MI:SS"
rman nocatalog target /<<EORMAN
configure controlfile autobackup on;
configure device type disk parallelism 4;
configure maxsetsize to unlimited;
run {
allocate channel c1 device type disk maxpiecesize 2048M;
allocate channel c2 device type disk maxpiecesize 2048M;
allocate channel c3 device type disk maxpiecesize 2048M;
allocate channel c4 device type disk maxpiecesize 2048M;
backup as compressed backupset database format "/backup/df_d%d_t%t_s%s_r%r";
backup as compressed backupset archivelog all format "/backup/al_d%d_t%t_s%s_r%r";
backup as compressed backupset current controlfile format "/backup/cf_%F";
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}
EORMAN
scp /backup/* dest_server:/big_directory/
On destination server
export NLS_DATE_FORMAT="DD-MM-YY HH24:MI:SS"
sqlplus / as sysdba
startup nomount pfile='/big_directory/init_destsid.ora';
Make sure there is only one set of backup files in the directory. RMAN could get confused about which controlfile to use even though the documentation says it will take the most recent.
rman auxiliary / duplicate database to destsid backup location '/big_directory' nofilenamecheck;
Maybe find out how many threads there are first and script it.
Find the archivelog sequence number before the backup starts.
select min(sequence#) from v$log where thread#=1;
MIN1=$(echo "select 'xX '||min(sequence#) from v\$log where thread#=1;" | sqlplus -s "/ as sysdba" | grep "xX" | awk '{print $2}')
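The 'xX' marker is just a way to pick the wanted value out of the sqlplus noise; the grep/awk pattern works on any output (the sketch below fakes the sqlplus result, since no database is assumed):

```shell
#!/bin/sh
# fake sqlplus output: a banner line, the marked row, a blank line
fake_sqlplus() {
  printf '%s\n' "SQL*Plus: Release 12.1.0.2.0" "xX 4711" ""
}
# keep only the marked line and take the second field
MIN1=$(fake_sqlplus | grep "xX" | awk '{print $2}')
echo "$MIN1"   # prints 4711
```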
Run the backup switching logfile before and after.
rman target / nocatalog log=autonomous_backup.log
run {
allocate channel...
sql "alter system archive log current";
backup database include current controlfile;
sql "alter system archive log current";
restore database preview;
}
Find the archivelog sequence number now that the backup has finished.
MAX1=$(echo "select 'xX '||max(sequence#) from v\$log where thread#=1;" | sqlplus -s "/ as sysdba" | grep "xX" | awk '{print $2}')
Backup just the archivelogs that will be necessary to recover.
rman target / nocatalog log=autonomous_backup_arch.log
run {
allocate channel...
backup archivelog sequence between ${MIN1} and ${MAX1} thread 1;
}
#!/bin/ksh
#
# Inspired by the book RMAN Recipes for Oracle Database 11g
export ORACLE_SID=JDBEOP1
echo `date`
HOSTNAME=`hostname`
TODAY=`date +%Y%m%d_%H%M`
ORAENV_ASK=NO
. oraenv
export NLS_DATE_FORMAT="dd-mon-yyyy hh24:mi:ss"
sqlplus / as sysdba<<EOSQL
set numwidth 20 lines 2000
col status for a12
select sysdate, current_scn from v\$database;
select group#,thread#,sequence#,archived,status,first_change#,first_time from v\$log;
EOSQL
rman target / nocatalog msglog rman_database_${ORACLE_SID}_${TODAY}.log<<EORMAN
run {
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
allocate channel d5 type disk;
allocate channel d6 type disk;
backup format '/JDBEOP1-CLONE/rman_db_t%t_s%s_p%p' database;
sql 'alter system archive log current';
backup format '/JDBEOP1-CLONE/rman_al_t%t_s%s_p%p' archivelog all;
backup format '/JDBEOP1-CLONE/spfile_t%t_s%s_p%p' spfile;
backup format '/JDBEOP1-CLONE/rman_ctl_t%t_s%s_p%p' current controlfile for standby;
release channel d1;
release channel d2;
release channel d3;
release channel d4;
release channel d5;
release channel d6;
}
EORMAN
echo `date`
ls -altr /JDBEOP1-CLONE
Copy over the backup files from source to destination server
scp -p /CLONEDISK/* bemauerp12:/CLONEDISK/
rsync -uav --progress /CLONEDISK/* bemauerp12:/CLONEDISK/
On source server unmount the disk and comment the appropriate line in /etc/vfstab
umount /CLONEDISK
vxdg deport clonedg
vi /etc/vfstab
#/dev/vx/dsk/clonedg/lv_clonedg_jdbeop1clone /dev/vx/rdsk/clonedg/lv_clonedg_jdbeop1clone /CLONEDISK vxfs 3 yes -
On destination server uncomment the appropriate line in /etc/vfstab and mount the disk
vi /etc/vfstab
/dev/vx/dsk/clonedg/lv_clonedg_jdbeop1clone /dev/vx/rdsk/clonedg/lv_clonedg_jdbeop1clone /CLONEDISK vxfs 3 yes -
vxdg import clonedg
mount /CLONEDISK
Standby database needs to be able to connect to the primary via tns
vi $TNS_ADMIN/tnsnames.ora
PRIMARY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = primary)(PORT = 1555))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JDBEOP1)
    )
  )
STANDBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = standby)(PORT = 1555))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JDBEOP1_DR)
    )
  )
#!/usr/bin/ksh
HOSTNAME=`hostname`
TODAY=`date +%Y%m%d_%H%M`; export TODAY
# ===================
# set the environment
# ===================
export ORACLE_SID=JDBEOP1
ORAENV_ASK=NO
. oraenv
echo $ORACLE_SID
echo $ORACLE_HOME
# ==================================
# kill any previous (failed) attempt
# ==================================
sqlplus / as sysdba <<EOSQL
shutdown abort
EOSQL
# ======================
# start a dummy instance
# ======================
sqlplus / as sysdba <<EOSQL
startup nomount pfile='/oracle/JDBEOP1/admin/change/init_clone.ora'
EOSQL
# =====================
# start the duplication
# =====================
echo `date`
export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
rman nocatalog cmdfile=duplicate4standby.cmd msglog=duplicate_${HOSTNAME}_${ORACLE_SID}_${TODAY}.log
echo `date`
Used for the initial start of the standby database in the above script
DB_NAME=JDBEOP1
DB_UNIQUE_NAME=JDBEOP1_DR
RMAN command file used in the above script
connect target sys/&syspasswordonprimary@JDBEOP1
connect auxiliary /
run {
allocate channel c1 device type disk;
allocate channel c2 device type disk;
allocate auxiliary channel a1 device type disk;
allocate auxiliary channel a2 device type disk;
set newname for tempfile 1 to '/oracle/JDBEOP1/oradata1/temp01.dbf';
set newname for tempfile 2 to '/oracle/JDBEOP1/oradata2/temp03.dbf';
set newname for datafile 518 to '/oracle/JDBEOP1/arcdata/ARCENG_05.dbf';
set newname for datafile 555 to '/oracle/JDBEOP1/arcdata/PRODARCT_09.dbf';
set newname for datafile 550 to '/oracle/JDBEOP1/arcdata/PRODARCI_08.dbf';
duplicate target database for standby
nofilenamecheck
dorecover
spfile set db_unique_name = 'JDBEOP1_DR'
set sga_max_size = '124748364800'
set sga_target = '124748364800'
set db_cache_size = '100000000000'
set fal_server = 'JDBEOP1'
set fal_client = 'JDBEOP1_DR'
set db_block_checksum = 'FULL'
set db_lost_write_protect = 'TYPICAL'
set db_recovery_file_dest = '/oracle/JDBEOP1/recovery_area'
set log_archive_dest_1 = 'LOCATION="USE_DB_RECOVERY_FILE_DEST"'
set standby_archive_dest = 'LOCATION="USE_DB_RECOVERY_FILE_DEST"'
;
}
This harnesses an RMAN image copy backup to generate clones that can be used for dev/QA etc.
Only the blocks that differ from those in the RMAN datafile copies are kept in the cloned datafiles, giving huge space savings.
Includes the following methods…
backup current controlfile;
backup as copy current controlfile format '/tmp/MOCMMSP2.ctl';
backup current controlfile to trace format '/tmp/MOCMMSP2.ctl';
or from SQL prompt…
alter database backup controlfile to trace as '/tmp/database_cf.sql';
Monitor progress of RMAN backups
select sid
     , serial#
     , context
     , sofar
     , totalwork
     , round(sofar/totalwork*100,2) "%_complete"
from v$session_longops
where 1=1
and opname like 'RMAN%'
and opname not like '%aggregate%'
and totalwork != 0
and sofar != totalwork
/
Run this one from sys on the database server
set pages 500 lines 200
col operation for a12
col backup for a27
col start_time for a17
col end_time for a17
col input_type for a12
col status for a12
col duration for 90.99
col days_ago for 90
select /*+ rule */ vrbsd.operation
     , vrbsd.command_id backup
     , to_char(vrbsd.start_time,'DD-MON-YY HH24:MI') start_time
     , to_char(vrbsd.end_time,'DD-MON-YY HH24:MI') end_time
     , vrbd.input_type type
     , vrbsd.status status
     , (vrbd.elapsed_seconds/3600) duration
     , trunc(sysdate)-trunc(vrbsd.start_time) days_ago
from v$rman_backup_job_details vrbd
   , v$rman_backup_subjob_details vrbsd
where 1=1
and vrbd.session_key = vrbsd.session_key
and vrbsd.start_time > sysdate-&days_back
order by vrbsd.start_time
/
Run from the RMAN catalog owner, this checks to see how long ago a full backup was taken from all databases in the catalog
set lines 300 pages 66
col status for a15
select s.db_name "Database"
, s.db_key dbid
, s.status "Status"
, max(s.start_time) "Start time"
, max(round(s.mbytes_processed/1024)) "Processed data (GB)"
, max(round((end_time-start_time)*60*24)) "Duration (mins)"
, trunc(sysdate)-trunc(s.start_time) "Days since last backup"
from rc_rman_status s
where 1=1
and s.operation = 'BACKUP'
and s.object_type = 'DB FULL'
and s.start_time =
(
select max(md.start_time)
from rc_rman_status md
where 1=1
and md.operation = 'BACKUP'
and md.object_type = 'DB FULL'
and md.db_name = s.db_name
)
group by s.db_name
, s.db_key
, s.object_type
, s.operation
, s.status
, trunc(sysdate)-trunc(s.start_time)
order by trunc(sysdate)-trunc(s.start_time) desc
/
Run from the RMAN catalog owner, this shows the backup history for a particular database in the catalog
set lines 300 pages 100
col status for a15
col objtype for a15
select s.db_name "Database"
, s.db_key dbid
, s.object_type objtype
, s.status "Status"
, s.start_time "Start time"
, round(s.mbytes_processed) "Processed data(Mb)"
, round((end_time-start_time)*60*24) "duration (min)"
, round((sysdate-s.start_time)*24) "Hours since last backup"
from rc_rman_status s
where 1=1
and db_name = '&db_name'
and s.operation = 'BACKUP'
and s.object_type not in ('CONTROLFILE', 'ARCHIVELOG')
order by trunc(sysdate)-trunc(s.start_time) desc
/
Run from the RMAN catalog owner, this shows the archivelog backup history for a particular database in the catalog
set lines 300 pages 100
col status for a15
col objtype for a15
select s.db_name "Database"
     , s.db_key dbid
     , s.object_type objtype
     , s.status "Status"
     , s.start_time "Start time"
     , round(s.mbytes_processed) "Processed data(Mb)"
     , round((end_time-start_time)*60*24) "duration (min)"
     , round((sysdate-s.start_time)*24) "Hours since last backup"
from rc_rman_status s
where 1=1
and db_name = '&db_name'
and s.operation = 'BACKUP'
and s.object_type = 'ARCHIVELOG'
order by trunc(sysdate)-trunc(s.start_time) desc
/
Sometimes the catalog needs cleaning up. Maybe the backup scripts are not doing a crosscheck and delete properly.
list backupset of database summary completed between 'SYSDATE-3000' and 'SYSDATE-${RETENTION_DAYS}';
If this produces results, records exist that go back before the retention period and may need deleting (depending on the policy).
change backupset ${key} delete;
If you get this error…
RMAN-06091: no channel allocated for maintenance (of an appropriate type)
Allocate a fake tape channel (but not in a run block!)
(Apparently you can also do it in a run block - it didn't work for me though.)
allocate channel for maintenance device type sbt parms 'SBT_LIBRARY=oracle.disksbt, ENV=(BACKUP_DIR=/tmp)';
configure retention policy to recovery window of 28 days;
delete force obsolete;
Not really necessary as RMAN can work it out but if you need to know…
select media
, to_char(completion_time,'DD-MON-YYYY HH24:MI:SS') backed_up
from rc_backup_piece
where 1=1
and db_id = 4053457814
and completion_time > sysdate - 2
order by 2;
Check / Set it
set lines 1000
select * from v$block_change_tracking;
show parameter db_create_file_dest
alter system set db_create_file_dest='/oracle/SID/oradata1' scope=both;
or check from the Operating System for the CTWR background process
Here we see 3 of the 8 databases on this machine have block change tracking enabled
hn5114 /nas/software/oracle/scripts/synchro $ps -ef | grep -i [c]twr
oracle 3016096 1 0 Jul 17 - 11:46 ora_ctwr_adst
oracle 132874 1 0 Aug 23 - 1:08 ora_ctwr_lbkrt
oraebst 62392598 1 0 Jul 17 - 10:12 ora_ctwr_EBST
Enable it
alter database enable block change tracking;
or, specifying the file explicitly:
alter database enable block change tracking using file '/oracle/SID/oradata1/change_track.dbf';
Disable it
alter database disable block change tracking;
Rename / Move it
alter database disable block change tracking;
alter database enable block change tracking using file '<something different>';
set lines 1000 pages 100
select file#
, used_change_tracking
, incremental_level
, datafile_blocks
, block_size
, blocks_read
, round((blocks_read/datafile_blocks) * 100,2) "% blocks read"
, blocks blocks_written
, round((blocks/datafile_blocks)*100,2) "% blocks written"
from v$backup_datafile
/
set lines 80
When block change tracking is off, you can see the whole datafile is read all the time
FNO BCT INCR BLKS BLKSZ READ %READ WRTN %WRTN
---------- --- ---------- ---------- ---------- ---------- ---------- ---------- ----------
5 NO 0 4194302 8192 4194302 100 4147400 98.88
9 NO 0 4194302 8192 4194302 100 4063231 96.88
14 NO 0 4194302 8192 4194302 100 4174935 99.54
15 NO 0 4194302 8192 4194302 100 3862303 92.08
6 NO 0 4194176 8192 4194176 100 4174875 99.54
7 NO 0 4194176 8192 4194176 100 4098595 97.72
8 NO 0 4194176 8192 4194176 100 4171089 99.45
10 NO 0 4194176 8192 4194176 100 4177569 99.6
16 NO 0 3801088 8192 3801088 100 3009977 79.19
11 NO 0 2726400 8192 2726400 100 2642237 96.91
12 NO 0 2700800 8192 2700800 100 2603729 96.41
13 NO 0 1958400 8192 1958400 100 1948633 99.5
3 NO 0 1132160 8192 1132160 100 1132159 100
When block change tracking is on, you can see the datafile is read minimally
FNO BCT INCR BLKS BLKSZ READ %READ WRTN %WRTN
---------- --- ---------- ---------- ---------- ---------- ---------- ---------- ----------
26 YES 1 2483584 8192 14717 .59 3489 .14
24 YES 1 1388160 8192 9501 .68 2187 .16
25 YES 1 1272576 8192 11225 .88 2348 .18
56 YES 1 831232 8192 14941 1.8 3986 .48
61 YES 1 816640 8192 9323 1.14 2160 .26
65 YES 1 778624 8192 14939 1.92 6513 .84
70 YES 1 754688 8192 10693 1.42 2513 .33
91 YES 1 576000 8192 1521 .26 292 .05
97 YES 1 448000 8192 3903 .87 3620 .81
41 YES 1 422400 8192 12731 3.01 3288 .78
44 YES 1 409600 8192 20113 4.91 5593 1.37
48 YES 1 396800 8192 19541 4.92 4374 1.1
89 YES 1 243200 8192 7025 2.89 2158 .89
93 YES 1 12800 8192 1 .01 1 .01
94 YES 1 12800 8192 1 .01 1 .01
If you still see 100% read even though BCT is YES, it probably means that a full incremental level 0 has not happened yet
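If that is the case, seeding the bitmap is just a matter of taking a level 0 first; a minimal RMAN sketch (the tag name is illustrative):

```
backup incremental level 0 database tag 'BCT_SEED';
backup incremental level 1 database;
```

Subsequent level 1 backups should then show low %READ figures in the query above.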
If the status of an RMAN backup is stuck in RUNNING, the next backup will not start up. Reset the view with this…
exec sys.dbms_backup_restore.resetcfilesection(28);
execute dbms_backup_restore.resetconfig;
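To see what is stuck before resetting, the repository can be checked from SQL*Plus - v$rman_status reads the controlfile, and the stuck rows show a status beginning with RUNNING:

```sql
select operation, status, object_type, start_time
from   v$rman_status
where  status like 'RUNNING%'
order  by start_time;
```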
select timestamp_to_scn(to_timestamp('08-JAN-2015 14:24:34','DD-MON-YYYY HH24:MI:SS')) scn from dual;
select scn_to_timestamp(1234567890) timestamp from dual;
COLUMN EVENT FORMAT a10
COLUMN SECONDS_IN_WAIT FORMAT 999
COLUMN STATE FORMAT a20
COLUMN CLIENT_INFO FORMAT a30
SELECT p.SPID
, EVENT
, SECONDS_IN_WAIT SEC_WAIT
, sw.STATE
, CLIENT_INFO
FROM V$SESSION_WAIT sw
, V$SESSION s
, V$PROCESS p
WHERE 1=1
and (sw.EVENT LIKE 'sbt%' or sw.EVENT LIKE '%MML%')
AND s.SID = sw.SID
AND s.PADDR = p.ADDR
;
create tablespace rmanTDPOTEST datafile '/oracle/RMANV11/oradata1/rmanTDPOTEST.dbf' size 5M autoextend on;
alter database datafile '/oracle/RMANV11/oradata1/rmanTDPOTEST.dbf' autoextend on next 5M maxsize unlimited;
create user rmanTDPOTEST identified by threeguesses
temporary tablespace temp
default tablespace rmanTDPOTEST
quota unlimited on rmanTDPOTEST;
grant connect, resource, recovery_catalog_owner to rmanTDPOTEST;
rman target /
rman target / catalog catowner/catpass@catdb
The problem with this method is that anyone doing a 'ps -ef' will see the catalog password! Do it this way instead and the password will not be visible…
rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Sun Jun 8 10:49:57 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: EUDSTGP2 (DBID=1013086362)

RMAN> connect catalog catowner/catpass@catdb;
connected to recovery catalog database
list backup of database;
list backup of spfile;
list backup of controlfile;
LIST ARCHIVELOG ALL;
LIST BACKUP OF ARCHIVELOG ALL;
LIST BACKUP OF DATAFILE 1;
LIST BACKUP SUMMARY;
LIST INCARNATION;
LIST BACKUP BY FILE;
LIST BACKUP OF DATAFILE 11 SUMMARY;
LIST BACKUP OF ARCHIVELOG FROM SEQUENCE 1437;
LIST CONTROLFILECOPY "/tmp/cntrlfile.copy";
LIST BACKUPSET OF DATAFILE 1;
LIST FAILURE;
LIST FAILURE 641231 detail;
show all;
report need backup;
report obsolete;
Obsolete - No longer needed because the backups are older than the retention period. Backups may be kept beyond the window if no full backup exists within it, since RMAN keeps whatever is needed to satisfy the recovery window.
Expired - The backup is recorded in the controlfile / catalog but the backup piece no longer exists on disk or tape (detected by a crosscheck).
show retention policy;
configure retention policy to recovery window of 28 days;
crosscheck backup;
list backup summary;
delete noprompt expired archivelog all;
delete noprompt expired backup;
delete noprompt obsolete;
list backup summary;
crosscheck backup;
crosscheck copy;
crosscheck backup of database;
crosscheck backup of controlfile;
crosscheck archivelog all;
Scenario
Customer wants a complete database restored to a different server so that they can retrieve data that is no longer available on the source database.
It is not possible to restore from an export as the required dump files are no longer available.
Source: PROD1 on prod004
Destination: PRODX on test002
Method
Request Unix team to attach a temporary filesystem big enough to hold the complete database (and an extra bit for contingency) to the destination server
Either attach the storage directly to the destination server or nfs mount it (with hard mount options) from another server
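For the nfs mount, the hard mount options look something like this - server name, export path and option values are illustrative, so check with the Unix team:

```shell
# hard + bg so NFS hiccups block and retry rather than corrupt restore I/O
mount -o hard,bg,intr,rsize=32768,wsize=32768 nfshost:/export/restore /oracle/PRODX
```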
Request Unix team to restore (or copy from source) the Oracle software to the mounted filesystem
The whole Oracle software home can be copied from the source server to the destination server. The contents of the dbs directory can then be removed.
Recreate the /oracle/SID/admin directory (contains necessary tdpo files)
For this customer, we keep all the TSM related files in this directory. These include the tdpo.opt, dsm.opt and TSM.PWD files.
test002:/oracle/PROD1/admin# ll -tr
total 62
-rwxrwx--- 1 oracle dba 257 Jul 13 12:58 tdpo.opt
-rwxrwx--- 1 oracle dba 814 Jul 13 12:58 dsm.opt
-rwxrwx--- 1 oracle dba 268 Jul 13 12:58 inclexcl.lst
drwxrwx--- 2 oracle dba 96 Jul 13 13:18 tdpoerror_PROD1
-rwxrwx--- 1 oracle dba 184 Jul 13 13:18 TSM.PWD
-rwxrwx--- 1 oracle dba 0 Nov 12 11:59 dsmerror.log
drwxrwx--- 2 oracle dba 1024 Nov 12 16:50 change
Update dsm.sys with SID stanza from source server
The stanza from the source server relating to this database should be copied into the dsm.sys on the destination server so that tdpo can access the TSM server.
SErvername TSM_PROD1_TDPO
TCPPort 1518
TCPServeraddress tsm-bxt
TCPBuffsize 31
TCPWindowsize 1024
LARGECOMMBUFFERS YES
TXNBytelimit 10000
PASSWORDACCESS GENERATE
PASSWORDDIR /oracle/PROD1/admin
MAXCMDRETRIES 4
INCLEXCL /oracle/PROD1/admin/inclexcl.lst
QUERYSCHEDPERIOD 3
RETRYPERIOD 30
SCHEDLOGNAME /oracle/PROD1/admin/dsmsched.log
ERRORLOGNAME /oracle/PROD1/admin/dsmerror.log
SCHEDLOGRETENTION 15 D
SCHEDMODE POLLING
NODENAME SO_U_PROD1_ORA_X_BAX
DIRMC D01NOS_14
Test the tdpo access from destination server
Use tdpoconf to test access to the TSM server and to check the related files are all usable.
tdpoconf showenv -tdpo_opt=/oracle/PROD1/admin/tdpo.opt
Add destination SID to oratab
Add a line with the new SID
echo "PRODX:/oracle/PROD1/product/11204:Y" >> /etc/oratab
Create a minimal init.ora file
This is just a dummy file so that we can start an idle instance.
echo "DB_NAME=dummy" > /oracle/PROD1/admin/change/init_dummy.ora
Check that the desired data can be restored (validate)
Use an RMAN preview command to ensure that files can be restored as far back as the required date.
If you have no access to the catalog, you will need to use “set dbid=<DBID>;”
rman target / catalog catuser/catpass@catdb
run {
set until time "to_date('18-OCT-14 10:10:00','DD-MON-YY HH24:MI:SS')";
restore controlfile preview;
}
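With no catalog, the target's own controlfile has to be restored before a preview is possible; a sketch, assuming autobackups exist (the DBID shown is the one used in the duplicate command further down):

```
rman target /
startup nomount;
set dbid=2992702833;
restore controlfile from autobackup;
alter database mount;
restore database preview summary;
```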
Create an RMAN command file to perform the work
This syntax does not require a connection to the target database
connect catalog catuser/catpass@catdb;
connect auxiliary /
run {
allocate auxiliary channel d1 device type disk;
allocate auxiliary channel a1 device type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/PROD1/admin/tdpo.opt)';
allocate auxiliary channel a2 device type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/PROD1/admin/tdpo.opt)';
allocate auxiliary channel a3 device type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/PROD1/admin/tdpo.opt)';
allocate auxiliary channel a4 device type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/PROD1/admin/tdpo.opt)';
allocate auxiliary channel a5 device type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/PROD1/admin/tdpo.opt)';
duplicate database PROD1 dbid 2992702833 to PRODX
until time "to_date('18-OCT-14 10:10:00','DD-MON-YY HH24:MI:SS')"
spfile
set db_unique_name = 'PRODX'
nofilenamecheck
;
}
Create a shell to run the process
#!/usr/bin/ksh
HOSTNAME=`hostname`
export TODAY=`date +%Y%m%d_%H%M`
export ORACLE_SID=PRODX
ORAENV_ASK=NO
. oraenv
echo $ORACLE_SID
echo $ORACLE_HOME
sqlplus / as sysdba <<EOSQL
shutdown abort
EOSQL
sqlplus / as sysdba <<EOSQL
startup nomount pfile='/oracle/PROD1/admin/change/init_dummy.ora'
EOSQL
echo `date`
export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
rman cmdfile=/oracle/PROD1/admin/change/dupdb.cmd msglog=/oracle/PROD1/admin/change/dupdb_${HOSTNAME}_${ORACLE_SID}_${TODAY}.log
echo `date`
When media failure strikes, look in the alertlog for more info if you can still get to it.
If some/all datafiles in one tablespace are damaged, you can recover the tablespace.
rman target /
startup mount;
restore tablespace users;
recover tablespace users;
alter database open;
or while database is open…
rman target /
sql 'alter tablespace users offline immediate';
restore tablespace users;
recover tablespace users;
sql 'alter tablespace users online';
or until a particular moment in time (TSPITR)
RMAN> run {
allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.xxxxxx.be,NSR_CLIENT=hn511,NSR_DATA_VOLUME_POOL=DD1DAILY)';
recover tablespace "nh_PFT_0v_quartz_ts" until time "to_date('27-NOV-14 09:25:00','DD-MON-YY HH24:MI:SS')" auxiliary destination '/cln/exp/ora_data3/devaux';
}
NOTE 1: If you do not specify auxiliary destination, it will try to use the same location as the original datafiles - and will conflict!
NOTE 2: There is a bug which ignores the fact that a tape channel has been allocated.
The workaround is to use CONFIGURE to allocate a channel. Do not forget to reset it later.
rman target /
configure channel device type 'SBT_TAPE' PARMS 'ENV=(NSR_SERVER=hn6000.xxxxxx.be,NSR_CLIENT=hn511,NSR_DATA_VOLUME_POOL=DD1DAILY)';
run {
sql 'alter tablespace TS_BAMBOO offline immediate';
recover tablespace "TS_BAMBOO" until time "to_date('21-MAR-18 10:30:00','DD-MON-YY HH24:MI:SS')" auxiliary destination '/cln/exp/ora_data3/devaux';
sql 'alter tablespace TS_BAMBOO online';
}
configure channel device type 'SBT_TAPE' clear;
Output of the above script:
rman cmdfile=tspitr_restore_devtools.rman
Recovery Manager: Release 12.1.0.2.0 - Production on Wed Mar 21 15:59:20 2018
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
RMAN>
1> connect target *
2> connect catalog *
3> run {
4> recover tablespace TS_BAMBOO until time "to_date('21-MAR-18 10:30:00','DD-MON-YY HH24:MI:SS')" auxiliary destination '/cln/exp/ora_data3/devaux';
5> }
6>
connected to target database: DEVTOOLS (DBID=2968626886)
connected to recovery catalog database
Starting recover at 21-MAR-18
starting full resync of recovery catalog
full resync complete
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=485 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=509 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: NMDA Oracle v8.2.1
Creating automatic instance, with SID='aslo'
initialization parameters used for automatic instance:
db_name=DEVTOOLS
db_unique_name=aslo_pitr_DEVTOOLS
compatible=12.1.0.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/cln/exp/ora_bin1/app/oracle
_system_trig_enabled=FALSE
sga_target=2560M
processes=200
db_create_file_dest=/cln/exp/ora_data3/devaux
log_archive_dest_1='location=/cln/exp/ora_data3/devaux'
# No auxiliary parameter file used
starting up automatic instance DEVTOOLS
Oracle instance started
Total System Global Area 2684354560 bytes
Fixed Size 3788008 bytes
Variable Size 654312216 bytes
Database Buffers 1979711488 bytes
Redo Buffers 46542848 bytes
Automatic instance created
Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully
contents of Memory Script:
{
# set requested point in time
set until time "to_date('21-MAR-18 10:30:00','DD-MON-YY HH24:MI:SS')";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
# resync catalog
resync catalog;
}
executing Memory Script
executing command: SET until clause
Starting restore at 21-MAR-18
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=161 device type=DISK
allocated channel: ORA_AUX_SBT_TAPE_1
channel ORA_AUX_SBT_TAPE_1: SID=166 device type=SBT_TAPE
channel ORA_AUX_SBT_TAPE_1: NMDA Oracle v8.2.1
channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: restoring control file
channel ORA_AUX_SBT_TAPE_1: reading from backup piece c-2968626886-20180321-09
channel ORA_AUX_SBT_TAPE_1: piece handle=c-2968626886-20180321-09 tag=TAG20180321T085709
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:25
output file name=/cln/exp/ora_data3/devaux/DEVTOOLS/controlfile/o1_mf_fc4ww0f3_.ctl
Finished restore at 21-MAR-18
sql statement: alter database mount clone database
sql statement: alter system archive log current
sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
starting full resync of recovery catalog
full resync complete
contents of Memory Script:
{
# set requested point in time
set until time "to_date('21-MAR-18 10:30:00','DD-MON-YY HH24:MI:SS')";
plsql <<<--
declare
sqlstatement varchar2(512);
pdbname varchar2(30);
offline_not_needed exception;
pragma exception_init(offline_not_needed, -01539);
begin
pdbname := null; -- pdbname
sqlstatement := 'alter tablespace '|| 'TS_BAMBOO' ||' offline immediate';
krmicd.writeMsg(6162, sqlstatement);
krmicd.execSql(sqlstatement, 0, pdbname);
exception
when offline_not_needed then
null;
end; >>>;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 2 to new;
set newname for clone tempfile 1 to new;
set newname for datafile 6 to
"/cln/exp/ora_data3/devtools/bamboo_dev_01.dbf";
set newname for datafile 8 to
"/cln/exp/ora_data3/devtools/bamboo_dev_02.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 3, 2, 6, 8;
switch clone datafile all;
}
executing Memory Script
executing command: SET until clause
sql statement: alter tablespace TS_BAMBOO offline immediate
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_temp_%u_.tmp in control file
Starting restore at 21-MAR-18
using channel ORA_AUX_DISK_1
using channel ORA_AUX_SBT_TAPE_1
channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_SBT_TAPE_1: restoring datafile 00006 to /cln/exp/ora_data3/devtools/bamboo_dev_01.dbf
channel ORA_AUX_SBT_TAPE_1: reading from backup piece dbfull_DEVTOOLS_2968626886_971292740_68520_1
channel ORA_AUX_SBT_TAPE_1: piece handle=dbfull_DEVTOOLS_2968626886_971292740_68520_1 tag=TAG20180320T193217
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:06:45
channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_SBT_TAPE_1: restoring datafile 00002 to /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_SBT_TAPE_1: reading from backup piece dbfull_DEVTOOLS_2968626886_971293571_68522_1
channel ORA_AUX_SBT_TAPE_1: piece handle=dbfull_DEVTOOLS_2968626886_971293571_68522_1 tag=TAG20180320T193217
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:45
channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_SBT_TAPE_1: restoring datafile 00001 to /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_SBT_TAPE_1: reading from backup piece dbfull_DEVTOOLS_2968626886_971293622_68524_1
channel ORA_AUX_SBT_TAPE_1: piece handle=dbfull_DEVTOOLS_2968626886_971293622_68524_1 tag=TAG20180320T193217
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:45
channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_SBT_TAPE_1: restoring datafile 00003 to /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_SBT_TAPE_1: reading from backup piece dbfull_DEVTOOLS_2968626886_971293662_68526_1
channel ORA_AUX_SBT_TAPE_1: piece handle=dbfull_DEVTOOLS_2968626886_971293662_68526_1 tag=TAG20180320T193217
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:25
channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_SBT_TAPE_1: restoring datafile 00008 to /cln/exp/ora_data3/devtools/bamboo_dev_02.dbf
channel ORA_AUX_SBT_TAPE_1: reading from backup piece dbfull_DEVTOOLS_2968626886_971293677_68527_1
channel ORA_AUX_SBT_TAPE_1: piece handle=dbfull_DEVTOOLS_2968626886_971293677_68527_1 tag=TAG20180320T193217
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:25
Finished restore at 21-MAR-18
datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=971367011 file name=/cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_system_fc4xd5vb_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=971367011 file name=/cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_undotbs1_fc4xf9rs_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=971367011 file name=/cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_sysaux_fc4xbkwr_.dbf
contents of Memory Script:
{
# set requested point in time
set until time "to_date('21-MAR-18 10:30:00','DD-MON-YY HH24:MI:SS')";
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone "alter database datafile 3 online";
sql clone "alter database datafile 2 online";
sql clone "alter database datafile 6 online";
sql clone "alter database datafile 8 online";
# recover and open resetlogs
recover clone database tablespace "TS_BAMBOO", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script
executing command: SET until clause
sql statement: alter database datafile 1 online
sql statement: alter database datafile 3 online
sql statement: alter database datafile 2 online
sql statement: alter database datafile 6 online
sql statement: alter database datafile 8 online
Starting recover at 21-MAR-18
using channel ORA_AUX_DISK_1
using channel ORA_AUX_SBT_TAPE_1
starting media recovery
archived log for thread 1 with sequence 13773 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13773_884618118.arc
archived log for thread 1 with sequence 13774 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13774_884618118.arc
archived log for thread 1 with sequence 13775 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13775_884618118.arc
archived log for thread 1 with sequence 13776 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13776_884618118.arc
archived log for thread 1 with sequence 13777 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13777_884618118.arc
archived log for thread 1 with sequence 13778 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13778_884618118.arc
archived log for thread 1 with sequence 13779 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13779_884618118.arc
archived log for thread 1 with sequence 13780 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13780_884618118.arc
archived log for thread 1 with sequence 13781 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13781_884618118.arc
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13773_884618118.arc thread=1 sequence=13773
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13774_884618118.arc thread=1 sequence=13774
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13775_884618118.arc thread=1 sequence=13775
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13776_884618118.arc thread=1 sequence=13776
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13777_884618118.arc thread=1 sequence=13777
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13778_884618118.arc thread=1 sequence=13778
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13779_884618118.arc thread=1 sequence=13779
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13780_884618118.arc thread=1 sequence=13780
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13781_884618118.arc thread=1 sequence=13781
media recovery complete, elapsed time: 00:00:41
Finished recover at 21-MAR-18
database opened
contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'alter tablespace TS_BAMBOO read only';
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/cln/exp/ora_data3/devaux''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/cln/exp/ora_data3/devaux''";
}
executing Memory Script
sql statement: alter tablespace TS_BAMBOO read only
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/cln/exp/ora_data3/devaux''
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/cln/exp/ora_data3/devaux''
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_aslo_able":
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TSPITR_EXP_aslo_able" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_aslo_able is:
EXPDP> /cln/exp/ora_data3/devaux/tspitr_aslo_93713.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace TS_BAMBOO:
EXPDP> /cln/exp/ora_data3/devtools/bamboo_dev_01.dbf
EXPDP> /cln/exp/ora_data3/devtools/bamboo_dev_02.dbf
EXPDP> Job "SYS"."TSPITR_EXP_aslo_able" successfully completed at Wed Mar 21 16:12:30 2018 elapsed 0 00:01:02
Export completed
contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
# drop target tablespaces before importing them back
sql 'drop tablespace TS_BAMBOO including contents keep datafiles cascade constraints';
}
executing Memory Script
Oracle instance shut down
sql statement: drop tablespace TS_BAMBOO including contents keep datafiles cascade constraints
Performing import of metadata...
IMPDP> Master table "SYS"."TSPITR_IMP_aslo_adcd" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_aslo_adcd":
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job "SYS"."TSPITR_IMP_aslo_adcd" successfully completed at Wed Mar 21 16:13:36 2018 elapsed 0 00:00:52
Import completed
contents of Memory Script:
{
# make read write and offline the imported tablespaces
sql 'alter tablespace TS_BAMBOO read write';
sql 'alter tablespace TS_BAMBOO offline';
# enable autobackups after TSPITR is finished
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
# resync catalog
resync catalog;
}
executing Memory Script
sql statement: alter tablespace TS_BAMBOO read write
sql statement: alter tablespace TS_BAMBOO offline
sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;
starting full resync of recovery catalog
full resync complete
Removing automatic instance
Automatic instance removed
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_temp_fc4xjs4o_.tmp deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/onlinelog/o1_mf_3_fc4xjmgh_.log deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/onlinelog/o1_mf_2_fc4xjgl4_.log deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/onlinelog/o1_mf_1_fc4xj9s8_.log deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_sysaux_fc4xbkwr_.dbf deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_undotbs1_fc4xf9rs_.dbf deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_system_fc4xd5vb_.dbf deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/controlfile/o1_mf_fc4ww0f3_.ctl deleted
auxiliary instance file tspitr_aslo_93713.dmp deleted
Finished recover at 21-MAR-18
Recovery Manager complete.
/home/oracle $sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Wed Mar 21 16:16:52 2018
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
TS_STASH ONLINE
TS_BAMBOO OFFLINE
TS_SONAR ONLINE
8 rows selected.
SQL> alter tablespace TS_BAMBOO online;
Tablespace altered.
SQL> select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
TS_STASH ONLINE
TS_BAMBOO ONLINE
TS_SONAR ONLINE
8 rows selected.
SQL> exit
…or even better, if flashback is enabled (this takes seconds instead of hours) - but note this flashes back the whole database, not just a tablespace!!
SQL> flashback database to timestamp to_timestamp ('27-NOV-14 09:25:00','DD-MON-YY HH24:MI:SS');
Flashback complete.
SQL> alter database open resetlogs;
Database altered.
SQL>
Dropped the table with the purge option? Pisser - no flashback. However, from 12c it is possible (making RMAN work very hard) to restore a single table.
It's possible in any version by doing the steps manually (so long as RMAN backups exist and the database is in archivelog mode), but 12c automates it down to one line!
recover table adam.sales until time '2015-07-02 09:33:00' auxiliary destination '/tmp/auxi';
To get a list of datafiles and their numbers, use 'report schema' or 'select file#,name from v$datafile;' (this queries the control file)
rman target /
startup mount;
restore datafile 1;
recover datafile 1;
alter database open;
or while database is open…
rman target / sql 'alter datafile 3 offline'; restore datafile 3; recover datafile 3; sql 'alter tdatafile 3 online';
rman target /
startup mount;
run {
set newname for datafile 2 to '/oracle/SID/users.dbf';
restore tablespace users;
switch datafile all;
recover tablespace users;
alter database open;
}
If all else fails, restore the database to a time before the bad thing happened, then recover it by rolling forward with the archivelogs.
Brief list of actions involved. Not all steps will be needed. Depends on same/different server restore:
- create dsm.sys stanza to set up access to the TSM server
- create tdpo.opt to allow Oracle access to TSM
- create dsm.opt to link tdpo.opt to dsm.sys
- create other required logging directories/files (as mentioned in the dsm.sys stanza)
- start an idle Oracle instance
- restore a recent controlfile from TSM
- shutdown the instance and restart in mount mode
- restore the database from backup
- recover the database and roll forward to the desired point-in-time
Can do it by time, scn, logseq.
Can do it in one step or two.
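As a sketch of the one-step form (same channel, until time and TDPO options as the ksh scripts that follow), restore and recover simply share one run block:

```sql
rman target /
startup mount;
run {
allocate channel c1 device type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/oracle/SID/admin/tdpo.opt)';
set until time "to_date('20-MAR-14 10:00:00','DD-MON-YY HH24:MI:SS')";
restore database;
recover database;
release channel c1;
}
```

Being an incomplete (point-in-time) recovery, it finishes with alter database open resetlogs;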
export ORACLE_SID=<SID>
. oraenv
sqlplus / as sysdba
startup mount
exit
cat > start_restore.ksh <<EORMAN
rman target /
connect catalog catusr/catpwd@catdb;
run {
allocate channel c1 device type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
set until time "to_date('20-MAR-14 10:00:00','DD-MON-YY HH24:MI:SS')";
restore database;
release channel c1;
}
EORMAN
Include the connect catalog line (connect catalog catusr/catpwd@catdb;) only if you are using a recovery catalog.
cat > start_recover.ksh <<EORMAN
rman target /
run {
allocate channel c1 device type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
recover database until time "to_date('20-MAR-14 10:00:00','DD-MON-YY HH24:MI:SS')";
release channel c1;
}
EORMAN
or by scn…
rman target / catalog catusr/catpwd@catdb
run {
set until scn=10328571660050;
recover database;
}
or
export ORACLE_SID=<SID>
. oraenv
rman target / catalog catusr/catpwd@catdb
startup mount;
run {
set until logseq 101145;
restore database;
recover database;
}
Assuming database is started in NOMOUNT mode…
Restore controlfile(s) to original location(s)
RESTORE CONTROLFILE;
Restore the controlfile but only to a location other than those mentioned in the CONTROL_FILES parameter
RESTORE CONTROLFILE ... TO 'filename';
Restore controlfile(s) from specified file to original locations unless to 'filename' is specified (then only to this location).
RESTORE CONTROLFILE ... FROM 'piece handle'; or RESTORE CONTROLFILE ... FROM 'piece handle' to 'filename'; or RESTORE CONTROLFILE ... FROM TAG 'tag'; or RESTORE CONTROLFILE ... FROM TAG 'tag' to 'filename';
Note: If controlfile was backed up with autobackup format set to non-default location, format must be specified for restore with: SET CONTROLFILE AUTOBACKUP FORMAT '<format>';
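For example, if the autobackups went to a non-default disk location, a sketch of the restore (the '/backup/%F' path is illustrative):

```sql
run {
set controlfile autobackup format for device type disk to '/backup/%F';
restore controlfile from autobackup;
}
```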
If a catalog is not available, the following is not possible - without it, RMAN cannot know where to restore from:
RESTORE CONTROLFILE;
Restore controlfile(s) to original locations
RESTORE CONTROLFILE FROM AUTOBACKUP;
Restore the controlfile to specified location
RESTORE CONTROLFILE FROM AUTOBACKUP... TO 'filename';
Restore controlfile(s) from specified file to original locations unless to 'filename' is specified (then only to this location). Note: SET DBID must be used here.
SET DBID=<dbid>; RESTORE CONTROLFILE ... FROM 'piece handle'; or RESTORE CONTROLFILE ... FROM 'piece handle' to 'filename'; or
If a catalog is not available, the following is not possible - a catalog is required to restore from a tag:
RESTORE CONTROLFILE ... FROM TAG 'tag'; or RESTORE CONTROLFILE ... FROM TAG 'tag' to 'filename';
RMAN>
connected to target database: INSA (not mounted)
RMAN> connect catalog *
connected to recovery catalog database
RMAN> run {
2> set dbid=3273853992;
3> allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.xxxxxx.be,NSR_CLIENT=hn492,NSR_DATA_VOLUME_POOL=DD2DAILY)' debug 1;
4> restore controlfile;
5> }
executing command: SET DBID
database name is "INSA" and DBID is 3273853992
allocated channel: t1
channel t1: SID=155 device type=SBT_TAPE
channel t1: NMDA Oracle v8.2.1
Starting restore at 21-MAR-18
channel t1: starting datafile backup set restore
channel t1: restoring control file
channel t1: reading from backup piece c-3273853992-20180319-0b
channel t1: piece handle=c-3273853992-20180319-0b tag=TAG20180319T103116
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:25
output file name=/cln/acc/ora_data4/insa/control01.ctl
output file name=/cln/acc/ora_data4/insa/control02.ctl
output file name=/cln/acc/ora_data4/insa/control03.ctl
Finished restore at 21-MAR-18
released channel: t1
RMAN> **end-of-file**
RMAN> exit
rman
RMAN> connect target /
connected to target database: INSA (DBID=3273853992, not open)
RMAN> run {
2> allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.xxxxxx.be,NSR_CLIENT=hn492,NSR_DATA_VOLUME_POOL=DD2DAILY)' debug 1;
3> restore database;
4> }
using target database control file instead of recovery catalog
allocated channel: t1
channel t1: SID=155 device type=SBT_TAPE
channel t1: NMDA Oracle v8.2.1
Starting restore at 21-MAR-18
channel t1: starting datafile backup set restore
channel t1: specifying datafile(s) to restore from backup set
channel t1: restoring datafile 00006 to /cln/acc/ora_data4/insa/INS_DATA01.dbf
channel t1: reading from backup piece dbfull_INSA_3273853992_970341263_5178_1
channel t1: piece handle=dbfull_INSA_3273853992_970341263_5178_1 tag=TAG20180309T191421
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:11:25
...
...
channel t1: starting datafile backup set restore
channel t1: specifying datafile(s) to restore from backup set
channel t1: restoring datafile 00023 to /cln/acc/ora_data4/insa/INS_DATA13.dbf
channel t1: reading from backup piece dbfull_INSA_3273853992_970353194_5201_1
channel t1: piece handle=dbfull_INSA_3273853992_970353194_5201_1 tag=TAG20180309T191421
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:25
channel t1: starting datafile backup set restore
channel t1: specifying datafile(s) to restore from backup set
channel t1: restoring datafile 00003 to /cln/acc/ora_data4/insa/indx01.dbf
channel t1: reading from backup piece dbfull_INSA_3273853992_970353229_5202_1
channel t1: piece handle=dbfull_INSA_3273853992_970353229_5202_1 tag=TAG20180309T191421
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
channel t1: starting datafile backup set restore
channel t1: specifying datafile(s) to restore from backup set
channel t1: restoring datafile 00005 to /cln/acc/ora_data4/insa/users01.dbf
channel t1: reading from backup piece dbfull_INSA_3273853992_970353254_5203_1
channel t1: piece handle=dbfull_INSA_3273853992_970353254_5203_1 tag=TAG20180309T191421
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
channel t1: starting datafile backup set restore
channel t1: specifying datafile(s) to restore from backup set
channel t1: restoring datafile 00004 to /cln/acc/ora_data4/insa/tools01.dbf
channel t1: reading from backup piece dbfull_INSA_3273853992_970353261_5204_1
channel t1: piece handle=dbfull_INSA_3273853992_970353261_5204_1 tag=TAG20180309T191421
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
Finished restore at 21-MAR-18
released channel: t1
RMAN> exit
SQL>
SQL> select open_mode from v$database;
OPEN_MODE
--------------------
MOUNTED
SQL> select controlfile_type from v$database;
CONTROL
-------
BACKUP
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
/home/oracle $rman
Recovery Manager: Release 11.2.0.4.0 - Production on Wed Mar 21 13:33:19 2018
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
RMAN> exit
/home/oracle $rman
Recovery Manager: Release 11.2.0.4.0 - Production on Wed Mar 21 13:44:20 2018
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target /
connected to target database: INSA (DBID=3273853992, not open)
RMAN> connect catalog rman/rman@rman12d;
connected to recovery catalog database
RMAN> run {
2> allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.xxxxxx.be,NSR_CLIENT=hn492,NSR_DATA_VOLUME_POOL=DD2DAILY)' debug 1;
3> recover database ;
4> }
allocated channel: t1
channel t1: SID=155 device type=SBT_TAPE
channel t1: NMDA Oracle v8.2.1
Starting recover at 21-MAR-18
starting media recovery
channel t1: starting archived log restore to default destination
channel t1: restoring archived log
archived log thread=1 sequence=1242
channel t1: reading from backup piece arclog_INSA_3273853992_970346643_5184_1
channel t1: piece handle=arclog_INSA_3273853992_970346643_5184_1 tag=TAG20180309T204403
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
archived log file name=/cln/acc/ora_data2/archivelog/insaalog1_1242_960291562.arc thread=1 sequence=1242
...
...
channel t1: starting archived log restore to default destination
channel t1: restoring archived log
archived log thread=1 sequence=1338
channel t1: reading from backup piece arclog_INSA_3273853992_971166624_5584_1
channel t1: piece handle=arclog_INSA_3273853992_971166624_5584_1 tag=TAG20180319T083024
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
archived log file name=/cln/acc/ora_data2/archivelog/insaa/log1_1338_960291562.arc thread=1 sequence=1338
channel t1: starting archived log restore to default destination
channel t1: restoring archived log
archived log thread=1 sequence=1339
channel t1: reading from backup piece arclog_INSA_3273853992_971173822_5588_1
channel t1: piece handle=arclog_INSA_3273853992_971173822_5588_1 tag=TAG20180319T103022
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
archived log file name=/cln/acc/ora_data2/archivelog/insaa/log1_1339_960291562.arc thread=1 sequence=1339
unable to find archived log
archived log thread=1 sequence=1340
released channel: t1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/21/2018 14:21:52
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 1340 and starting SCN of 274665299833
RMAN-06054 here just means recovery has consumed every archived log available from the backups; sequence 1340 was never backed up, so this is as far forward as we can roll. Open resetlogs:
RMAN> alter database open resetlogs;
database opened
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
RMAN> exit
Scenario: an upgrade has failed part way.
- The catalog is still on the old ORACLE_HOME.
- The database is already on the new ORACLE_HOME.
shutdown immediate
vi /etc/oratab
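The /etc/oratab edit points the SID back at the intended ORACLE_HOME; entries have the form sid:oracle_home:autostart_flag. An illustrative sketch (SID, paths and versions are made up):

```text
# /etc/oratab - before, pointing at the old home
MYSID:/oracle/product/11.2.0.3:N
# after the edit, pointing at the new home
MYSID:/oracle/product/12.1.0.2:N
```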
rman target /
connect catalog user/pwd@cat
startup nomount;
list backup of controlfile;
restore controlfile from autobackup;
startup mount;
run {
allocate channel c1 device type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/oracle/SID/admin/tdpo.opt)';
set until time "to_date('11-DEC-2015 10:30:00','DD-MON-YYYY HH24:MI:SS')";
restore database;
release channel c1;
}
run {
allocate channel c1 device type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/oracle/SID/admin/tdpo.opt)';
set until time "to_date('11-DEC-2015 10:30:00','DD-MON-YYYY HH24:MI:SS')";
recover database;
release channel c1;
}
alter database open resetlogs;
One of the new Backup and Recovery features of Oracle 11g Release 2 (11gR2) is the ability to duplicate a database without connecting to the target database.
In RMAN terminology, the “target” database is the one you wish to duplicate.
In this case, the only requirement is that you provide a disk backup location where RMAN can find all the backups, data file copies, archived logs, and control file copies for database duplication.
This database duplication enhancement is helpful when it is not possible to connect to the target database and the recovery catalog.
This came in handy for me recently on a customer project. They wanted me to test a disaster recovery scenario in which the production database server was completely gone and we needed to restore it from tape onto another server.
This is a fairly typical DR situation, but in this case it was made more difficult because the directory structure on the test server didn’t match that of the production server.
Specifically, on the production server, everything was located on the E: drive while on the Test server, the only drive that exists is C:.
This means that all file locations need to be changed as part of the duplicate procedure.
This can all be done in one single RMAN command.
Note: One thing that makes this process easier is to enable controlfile autobackups on the production database.
This is a good practice in general and should be configured for all databases.
rman target / nocatalog
configure controlfile autobackup on;
Here are the steps that I followed:
Step 1) Copy the most recent backup files and archivelog files to the test server.
In this case, we restored from tape to the location c:\prod backup files. The backup files and archivelogs come from the following locations:
E:\app\flash_recovery_area\prod\ARCHIVELOG\2013_03_21
E:\app\flash_recovery_area\prod\BACKUPSET\2013_03_21
E:\app\flash_recovery_area\prod\AUTOBACKUP\2013_03_21
Step 2) Create a file called INITPROD2.ORA in %ORACLE_HOME%\database containing one line: db_name=PROD2
Step 3) At a command prompt:
set oracle_sid=prod2
oradim -new -sid prod2
sqlplus / as sysdba
startup nomount
exit
rman auxiliary / msglog=c:\restore.log
Step 4) At the RMAN prompt:
duplicate database to prod2
spfile
set control_files='C:\app\oracle\oradata\prod2\control01.ctl'
set db_file_name_convert='E:\app\oradata\prod','C:\app\oracle\oradata\prod2','E:\app\prod_tbs','C:\app\oracle\oradata\prod2'
set log_file_name_convert='E:\app\oradata\prod','C:\app\oracle\oradata\prod2','C:\oracle\oradata\prod','C:\app\oracle\oradata\prod2'
set db_recovery_file_dest='c:\app\administrator\flash_recovery_area'
set diagnostic_dest='c:\app\administrator\diag'
backup location "c:\prod backup files\";
That's all it takes! As you can see, I changed all file location parameters and converted datafile and logfile names with the _CONVERT parameters.
You can set any parameters for the new database in this way. For example you may want to disable all database jobs in the duplicate database by setting job_queue_processes=0.
Or, change the size of the SGA for a smaller server. RMAN will create the PROD2 database, perform as much recovery as possible given the available archive logs and open the database with the resetlogs option.
This is a really nice feature of RMAN.
If you back up the controlfile to trace with:
alter database backup controlfile to trace as '/oracle/${OSID}/admin/${NSID}_controlfile.sql';
the resulting file also records the RMAN configuration, as in this example found in a controlfile trace backup:
-- Configure snapshot controlfile filename
EXECUTE SYS.DBMS_BACKUP_RESTORE.CFILESETSNAPSHOTNAME('/oracle/product/11.2.0.3/dbs/snapshot_controlfile_WM820T.ctl');
-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 31 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','''SBT_TAPE''');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' FORMAT ''LOG_%d_t%T_s%s_u%U'' PARMS ''ENV=(TDPO_OPTFILE=/oracle/WM820T/admin/tdpo.opt)''');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 1 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE','''SBT_TAPE'' TO ''LOG_%d_controlfile_%F.rman''');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP','ON');
-- Configure RMAN configuration record 7
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('SNAPSHOT CONTROLFILE NAME','TO ''snapshot_controlfile_WM820T.ctl''');
This script can generate 'set newname' statements for use when duplicating a database for standby.
If the files are scattered all over the place, this will gather them together and spread the datafiles evenly over the available space.
-- =============================================================================
-- Filename : reorganise_datafiles.sql
-- Description : Shares the datafiles on one host over several
-- filesystems on another host
--
-- Notes : Fill the l_datafile_tab and optionally the l_archive_tab
-- with the names of the new filesystems
--
-- Modification History
-- ====================
-- Who When What
-- ================= ========= =================================================
-- Stuart Barkley 22-OCT-12 Created
-- Stuart Barkley 29-OCT-12 Added tempfiles to cursor c_row
-- Stuart Barkley 01-NOV-12 Added code to treat archive log files separately
-- =============================================================================
spool reorganise_datafiles
set serveroutput on size unlimited format wrapped
set termo on
set trims on
set lines 200
set pages 1000
set feedb off
set verif off
create or replace type l_filesystem_obj as object
(
name varchar2(100)
, bigness number
)
/
create or replace type l_filesystem_tabtype is table of l_filesystem_obj
/
declare
-- =========================================================
-- add/delete/replace the lines in this table with the names
-- of the new mount points for normal datafiles
-- =========================================================
l_datafile_tab l_filesystem_tabtype := l_filesystem_tabtype(
l_filesystem_obj( '/oracle/JDBEOP1/oradata1/' ,0)
, l_filesystem_obj( '/oracle/JDBEOP1/oradata2/' ,0)
, l_filesystem_obj( '/oracle/JDBEOP1/oradata3/' ,0)
, l_filesystem_obj( '/oracle/JDBEOP1/oradata4/' ,0)
);
-- =========================================================
-- add/delete/replace the lines in this table with the names
-- of the new mount points for archive log datafiles
-- =========================================================
l_archive_tab l_filesystem_tabtype := l_filesystem_tabtype(
l_filesystem_obj( '/oracle/JDBEOP1/arcdata/' ,0)
);
-- =====================================
-- gather info about data and temp files
-- =====================================
cursor c_datafiles is
select file_id file_id
, file_name full_file_name
, substr(file_name,instr(file_name,'/',-1)+1) file_name
, round(sum(bytes)/1024/1024/1024,2) gb
, 'data' file_type
from dba_data_files ddf
where 1=1
and upper(file_name) not like '%ARC%'
group by file_id
, file_name
union
select file_id file_id
, file_name full_file_name
, substr(file_name,instr(file_name,'/',-1)+1) file_name
, round(sum(bytes)/1024/1024/1024,2) gb
, 'temp' file_type
from dba_temp_files ddf
where 1=1
group by file_id
, file_name
order by 4 desc
;
-- ===================================
-- gather info about archive log files
-- ===================================
cursor c_archivefiles is
select file_id file_id
, file_name full_file_name
, substr(file_name,instr(file_name,'/',-1)+1) file_name
, round(sum(bytes)/1024/1024/1024,2) gb
, 'data' file_type
from dba_data_files ddf
where 1=1
and upper(file_name) like '%ARC%'
group by file_id
, file_name
order by 4 desc
;
l_running_total pls_integer := 0;
l_fs_num binary_integer := 1;
l_total_db_size number;
begin
dbms_output.enable(null);
for r_row in c_datafiles
loop
-- ===========================================
-- when we get to the last bucket, start again
-- ===========================================
if l_fs_num > nvl(l_datafile_tab.count,0) then
l_fs_num := 1;
end if;
-- ==============================
-- update the size of this bucket
-- ==============================
l_datafile_tab(l_fs_num).bigness := l_datafile_tab(l_fs_num).bigness + r_row.gb;
-- =====================
-- print out a good line
-- =====================
dbms_output.put_line ( 'set newname for '||r_row.file_type||'file '||r_row.file_id||' to '''||l_datafile_tab(l_fs_num).name||r_row.file_name||''';' );
l_fs_num := l_fs_num+1;
end loop; -- c_datafiles
-- =========================================
-- datafiles done, now the archive log files
-- =========================================
l_fs_num := 1;
for r_row in c_archivefiles
loop
-- ===========================================
-- when we get to the last bucket, start again
-- ===========================================
if l_fs_num > nvl(l_archive_tab.count,0) then
l_fs_num := 1;
end if;
-- ==============================
-- update the size of this bucket
-- ==============================
l_archive_tab(l_fs_num).bigness := l_archive_tab(l_fs_num).bigness + r_row.gb;
-- =====================
-- print out a good line
-- =====================
dbms_output.put_line ( 'set newname for '||r_row.file_type||'file '||r_row.file_id||' to '''||l_archive_tab(l_fs_num).name||r_row.file_name||''';' );
l_fs_num := l_fs_num+1;
end loop; -- c_archivefiles
-- =========
-- summarise
-- =========
dbms_output.put_line (null);
dbms_output.put_line ( 'Summary');
dbms_output.put_line ( '=======');
for l_fs_num in 1 .. nvl(l_datafile_tab.count,0)
loop
dbms_output.put_line ( 'Mount point '||l_datafile_tab(l_fs_num).name||' will have a total size of '||l_datafile_tab(l_fs_num).bigness||' Gb');
end loop;
for l_fs_num in 1 .. nvl(l_archive_tab.count,0)
loop
dbms_output.put_line ( 'Mount point '||l_archive_tab(l_fs_num).name||' will have a total size of '||l_archive_tab(l_fs_num).bigness||' Gb');
end loop;
dbms_output.put_line (null);
exception
when others then
dbms_output.put_line(sqlerrm);
raise_application_error(-20000,'Problem');
end;
/
spool off
drop type l_filesystem_tabtype
/
drop type l_filesystem_obj
/
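Run it as a privileged user (e.g. sqlplus / as sysdba @reorganise_datafiles.sql); it prints one 'set newname' line per data/temp file plus a summary, something like this (file numbers, names and sizes here are made up for illustration):

```text
set newname for datafile 6 to '/oracle/JDBEOP1/oradata1/INS_DATA01.dbf';
set newname for tempfile 1 to '/oracle/JDBEOP1/oradata2/temp01.dbf';
...
Summary
=======
Mount point /oracle/JDBEOP1/oradata1/ will have a total size of 120.5 Gb
```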
Reference: pythian.com
Reproduced here in case it disappears from the internets.
A lot of metadata about the RMAN backup jobs can be found in the V$RMAN_% views.
These views show past RMAN jobs as well as jobs currently running.
Once the jobs complete backup sets, metadata about the sets and pieces are also added to the control file and can be accessed through the V$BACKUP_% views.
For the queries in this post I need only four of those views: V$RMAN_BACKUP_JOB_DETAILS, V$BACKUP_SET_DETAILS, V$BACKUP_SET and GV$RMAN_OUTPUT.
Query 1 (Backup jobs' status and metadata)
set lines 220
set pages 1000
col cf for 9,999
col df for 9,999
col elapsed_seconds heading "ELAPSED|SECONDS"
col i0 for 9,999
col i1 for 9,999
col l for 9,999
col input_type for a13
col dow for a11
col start_time for a23
col end_time for a19
col output_mbytes for 9,999,999 heading "OUTPUT|MBYTES"
col session_recid for 999999 heading "SESSION|RECID"
col session_stamp for 99999999999 heading "SESSION|STAMP"
col status for a10 trunc
col time_taken_display for a10 heading "TIME|TAKEN"
col output_instance for 9999 heading "OUT|INST" noprint
select j.session_recid
, j.session_stamp
, to_char(j.start_time, 'yyyy-mm-dd hh24:mi:ss') start_time
, to_char(j.end_time, 'yyyy-mm-dd hh24:mi:ss') end_time
, (j.output_bytes/1024/1024) output_mbytes
, j.status
, j.input_type
, decode(to_char(j.start_time, 'd')
, 1, 'Sunday', 2, 'Monday'
, 3, 'Tuesday', 4, 'Wednesday'
, 5, 'Thursday', 6, 'Friday'
, 7, 'Saturday') dow
, j.elapsed_seconds
, j.time_taken_display
, x.cf, x.df, x.i0, x.i1, x.l
, ro.inst_id output_instance
from V$RMAN_BACKUP_JOB_DETAILS j
left outer join (select
d.session_recid, d.session_stamp,
sum(case when d.controlfile_included = 'YES' then d.pieces else 0 end) CF,
sum(case when d.controlfile_included = 'NO'
and d.backup_type||d.incremental_level = 'D' then d.pieces else 0 end) DF,
sum(case when d.backup_type||d.incremental_level = 'D0' then d.pieces else 0 end) I0,
sum(case when d.backup_type||d.incremental_level = 'I1' then d.pieces else 0 end) I1,
sum(case when d.backup_type = 'L' then d.pieces else 0 end) L
from
V$BACKUP_SET_DETAILS d
join V$BACKUP_SET s on s.set_stamp = d.set_stamp and s.set_count = d.set_count
where s.input_file_scan_only = 'NO'
group by d.session_recid, d.session_stamp) x
on x.session_recid = j.session_recid and x.session_stamp = j.session_stamp
left outer join (select o.session_recid, o.session_stamp, min(inst_id) inst_id
from GV$RMAN_OUTPUT o
group by o.session_recid, o.session_stamp)
ro on ro.session_recid = j.session_recid and ro.session_stamp = j.session_stamp
where j.start_time > trunc(sysdate)-&NUMBER_OF_DAYS
order by j.start_time;
Query 2 (Backup set details)
set lines 220
set pages 1000
col backup_type for a4 heading "TYPE"
col controlfile_included heading "CF?"
col incremental_level heading "INCR LVL"
col completion_time for a23
col pieces for 999 heading "PCS"
col elapsed_seconds heading "ELAPSED|SECONDS"
col device_type for a10 trunc heading "DEVICE|TYPE"
col compressed for a4 heading "ZIP?"
col output_mbytes for 9,999,999 heading "OUTPUT|MBYTES"
col input_file_scan_only for a4 heading "SCAN|ONLY"
select d.bs_key
, d.backup_type
, d.controlfile_included
, d.incremental_level
, d.pieces
, to_char(d.start_time, 'yyyy-mm-dd hh24:mi:ss') start_time
, to_char(d.completion_time, 'yyyy-mm-dd hh24:mi:ss') completion_time
, d.elapsed_seconds
, d.device_type
, d.compressed
, (d.output_bytes/1024/1024) output_mbytes
, s.input_file_scan_only
from V$BACKUP_SET_DETAILS d
join V$BACKUP_SET s on s.set_stamp = d.set_stamp and s.set_count = d.set_count
where session_recid = &SESSION_RECID
and session_stamp = &SESSION_STAMP
order by d.start_time;
Query 3 (Backup job output)
set lines 200 pages 1000 verif off headi off newpa none feedb off
select output
from gv$rman_output
where session_recid = &SESSION_RECID
and session_stamp = &SESSION_STAMP
order by recid;
1. Please log in with RMAN logging and tracing enabled:
rman target <un/pw@target_db> catalog <un/pw@catalog_db> debug trace='/tmp/rman_trace.trc' log='/tmp/rman_log.log'
2. Please rename the log and the tracefiles following your naming convention.
3. Select two datafiles as samples: the biggest and the smallest.
4. Take two sample backups, one onto u03 and one onto u01, naming the log and the tracefiles accordingly. You can follow this example:
run
{
allocate channel d1 device type disk format 'd:/backups/%U.bkp';
backup database include current controlfile;
release channel d1;
}
5. During the backup, we will run hanganalyze and take a system state dump. The performance team will analyze them.
Hang Analyzer and System state Dump:
===================================
$ sqlplus /nolog
connect / as sysdba
oradebug setmypid
oradebug unlimit
oradebug tracefile_name
oradebug hanganalyze 3
wait 90 seconds
oradebug hanganalyze 3
exit
$ sqlplus /nolog
connect / as sysdba
oradebug setmypid
oradebug unlimit
oradebug tracefile_name
oradebug dump systemstate 255
wait 90 seconds
oradebug dump systemstate 255
wait 90 seconds
oradebug dump systemstate 255
exit
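The 'oradebug tracefile_name' step above prints the name of the trace file being written; alternatively, from 11g onwards the same information is exposed through v$diag_info:

```sql
-- current session's trace file (11g and later)
select value from v$diag_info where name = 'Default Trace File';
-- directory holding all trace files for this instance
select value from v$diag_info where name = 'Diag Trace';
```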