Oracle_errors
ORA-19051: Cannot use fast path insert for this XMLType table
Think LD_LIBRARY_PATH on Linux or LIBPATH on AIX.
0@@
Problem
When the 11g DB was started, LD_LIBRARY_PATH and LIBPATH were still pointing to paths in the 10g Home.
Solution
There may have been some odd values set up in LD_LIBRARY_PATH before the call to oraenv (if oraenv was indeed called!)
1@@
They should now be correct.
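A quick sanity check before starting the instance might look like this (a sketch only; the 11g home path is a placeholder for your real one):

```shell
# Placeholder 11g home - substitute your real ORACLE_HOME:
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
# Put the new home's lib directory at the front of the library path:
LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH:-}
LIBPATH=$LD_LIBRARY_PATH    # AIX uses LIBPATH instead
export ORACLE_HOME LD_LIBRARY_PATH LIBPATH
# The first entry should now be the 11g lib directory:
echo "$LD_LIBRARY_PATH" | cut -d: -f1
```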
ORA-27092: size of file exceeds file size limit of the process
Think ulimit…
2@@
3@@
Check the fsize limit for this user in /etc/security/limits.
Compare this with another user, say oracle. It is probably -1 (unlimited).
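On Linux the equivalent check is the shell's file-size limit (a sketch; on AIX itself you would inspect the fsize attribute in /etc/security/limits):

```shell
# Current file-size limit for this shell; 'unlimited' is the usual
# (and desired) value for the oracle account:
ulimit -f
```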
SP2-1503: Unable to initialize Oracle call interface
When trying to log in to the database via SQL*Plus, these messages are displayed:
4@@
Seems to be a problem unique to AIX (in this case).
Check to see what the LIBPATH variable is. It may still contain a path to an old ORACLE_HOME…
5@@
Disco! Unset it and rerun.
6@@
UDE-31623: operation generated ORACLE error 31623
When using Data Pump, the following errors appear almost immediately:
7@@
Solution 1:
In one case, I simplified the parameter file that was used. Maybe related to Streams Pool size, who knows?!
I removed these 2 lines and the export started working:
8@@
Solution 2:
There is a note about this on Metalink, 1080775.1, which suggests that Streams Pool Size needs setting.
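A sketch of the kind of change the note suggests (the 256M figure is illustrative, not from the note):

```sql
-- Requires an spfile for scope=both; the size is an example only:
ALTER SYSTEM SET streams_pool_size = 256M SCOPE = BOTH;
SELECT name, value FROM v$parameter WHERE name = 'streams_pool_size';
```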
9@@
From Oracle Documentation
10@@
ORA-00020: maximum number of processes (150) exceeded
Check the current resource limits with:
11@@
Unfortunately this limit cannot be increased without a restart.
It may be necessary to increase the limit with
12@@
followed by a shutdown/startup.
Or it may be that some user is running away with the limit.
Check to see who is running what with: http://www.mailnest.com/dbawiki/index.php/Handy_scripts#See_what_statements_all_users_are_running
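The check and the fix can be sketched like this (the value 300 is illustrative):

```sql
-- How close are we to the limit?
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');

-- Raise the limit; only takes effect after a restart:
ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;
```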
ORA-01555: snapshot too old: rollback segment number nn with name "xxx" too small
Problem
A long running transaction has run out of space to store its read-consistent image
Solution 1
If it's a one-off, maybe leave it alone. It may have been a user leaving a session open overnight without committing or rolling back.
Solution 2
Make sure there's enough space to allow the transaction to finish.
Things to check
How much undo did the biggest transaction take? What is the undo retention parameter set to?
13@@
Set the retention to about 20% bigger than the maxquerylen.
14@@
and set the undo retention guarantee (being aware of the consequences)
15@@
16@@
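The steps above can be sketched like this (the undo tablespace name and the 2400-second value are placeholders):

```sql
-- Longest-running query so far, in seconds:
SELECT MAX(maxquerylen) FROM v$undostat;

-- Set retention ~20% above that figure:
ALTER SYSTEM SET undo_retention = 2400 SCOPE = BOTH;

-- Optionally guarantee it (beware: DML may fail if undo space runs out):
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
```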
Optimise Oracle UNDO Parameters
The ideal undo_retention would be enough to hold rollback for the longest transaction that has happened so far (we can't see into the future).
So current undo_retention = current value of undo / (db_block_size * undo blocks per sec)
- Undo Blocks per Second
17@@
Optimal Undo Retention
18@@
- Calculate Needed UNDO Size for given Database Activity
19@@
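A worked instance of the formula above, rearranged to give the undo size needed for a chosen retention (all three input values are illustrative):

```shell
# needed undo (bytes) = undo_retention (s) * db_block_size * undo blocks/s
undo_retention=900        # seconds we want to retain
db_block_size=8192        # bytes per block
undo_blocks_per_sec=100   # peak rate, taken from v$undostat
echo $(( undo_retention * db_block_size * undo_blocks_per_sec ))   # 737280000 bytes (~703 MB)
```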
It might be that the export still fails with ORA-01555 even though retention seems long enough.
Are you running an export with the flashback_time parameter? If so, the undo_retention should be longer than the total export running time!
Check LOB undo retention
The retention setting for LOBs is different from the rest of the database.
When you change the UNDO_RETENTION parameter, the LOB segment's retention value is NOT modified.
If you query the RETENTION column of the DBA_LOBS view, you will notice the old UNDO_RETENTION value still remains after you have modified the UNDO_RETENTION parameter.
20@@
21@@
22@@
To change the LOB segment’s RETENTION value to match the new UNDO_RETENTION value, do the following:
23@@
By momentarily changing the LOB to use PCTVERSION and then back to RETENTION, the new value of UNDO_RETENTION will be used.<br /> You need to do this for all LOB segments that you intend to modify.
24@@
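The flip can be sketched like this for a single LOB column (table, column and the PCTVERSION value are placeholders):

```sql
-- Switch to PCTVERSION, then back to RETENTION, so the LOB
-- picks up the current UNDO_RETENTION value:
ALTER TABLE my_table MODIFY LOB (my_lob_col) (PCTVERSION 10);
ALTER TABLE my_table MODIFY LOB (my_lob_col) (RETENTION);

-- Verify:
SELECT table_name, column_name, retention
FROM   dba_lobs
WHERE  table_name = 'MY_TABLE';
```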
There is a view, DBA_LOBS, that displays information about all the LOB columns in the database, but it doesn't actually store any LOB data, just metadata. The actual LOB segments in the database have system-generated names of the form SYS_LOB<identifier>$$. For example, if you identify a segment named SYS_LOB<identifier>$$ that is consuming space, you can find out which column of which table that LOB segment maps to using DBA_LOBS:
25@@
The Undo Advisor PL/SQL Interface
You can activate the Undo Advisor by creating an undo advisor task through the advisor framework.
The following example creates an undo advisor task to evaluate the undo tablespace. The name of the advisor is 'Undo Advisor'.
The analysis is based on Automatic Workload Repository snapshots, which you must specify by setting the parameters START_SNAPSHOT and END_SNAPSHOT.
In the following example, START_SNAPSHOT is 1 and END_SNAPSHOT is 2.
26@@
After you have created the advisor task, you can view the output and recommendations in the Automatic Database Diagnostic Monitor in Enterprise Manager.
This information is also available in the DBA_ADVISOR_* data dictionary views (DBA_ADVISOR_TASKS, DBA_ADVISOR_OBJECTS, DBA_ADVISOR_FINDINGS, DBA_ADVISOR_RECOMMENDATIONS, and so on).
References
Error 6 initializing SQL*Plus
Using sqlplus or rman under Cygwin, ORACLE_HOME still needs to be in Windows format!
27@@
so…
28@@
ORA-2002: error while writing to audit trail
Check the space left in the audit tablespace with:
29@@
If there is plenty of space, it can be due to Oracle clearing out the recyclebin at the same time.
If not, extend the datafile…
30@@
If that is not possible, clean out some audit data or add a datafile.
ORA-00449: background process 'CKPT' unexpectedly terminated with error 7446
31@@
Solution
32@@
ORA-00845: MEMORY_TARGET not supported on this system
- Reference: arjudba.blogspot.be
Problem Description
33@@
Cause
- Starting from Oracle 11g, the automatic memory management feature is defined with the parameters MEMORY_TARGET and MEMORY_MAX_TARGET.
- On Linux, the shared memory needs to be mounted on the /dev/shm directory.
- The size of /dev/shm needs to be greater than MEMORY_TARGET or MEMORY_MAX_TARGET.
- AMM (Automatic Memory Management) in 11g manages both SGA and PGA together via the MMAN process.
- The MEMORY_TARGET parameter in 11g replaces SGA_TARGET + PGA_AGGREGATE_TARGET from 10g.
- The MEMORY_MAX_TARGET parameter in 11g replaces the SGA_MAX_SIZE parameter from 10g.
- ORA-00845 can arise for two reasons on a Linux system:
1) The shared memory mapped to /dev/shm is smaller than MEMORY_TARGET or MEMORY_MAX_TARGET, or 2) the shared memory is not mapped to /dev/shm at all.
Solution
34@@
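A sketch of the usual Linux fix - make /dev/shm at least as big as MEMORY_MAX_TARGET (the 4g size is illustrative):

```
# /etc/fstab entry; size must exceed MEMORY_TARGET / MEMORY_MAX_TARGET:
tmpfs   /dev/shm   tmpfs   defaults,size=4g   0 0

# Apply without a reboot (as root):
#   mount -o remount /dev/shm
```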
ORA-24247: network access denied by access control list (ACL)
35@@
Your application will encounter an ORA-24247 error if it relies on one of the network packages and no proper ACL has been created. In 11g it is mandatory to have an ACL in place for the application user when using any of the following packages:
- UTL_TCP
- UTL_SMTP
- UTL_MAIL
- UTL_HTTP
- UTL_INADDR
Here is a very good explanation of how to set up the necessary ACLs: http://blog.whitehorses.nl/2010/03/17/oracle-11g-access-control-list-and-ora-24247/
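A minimal 11g sketch along those lines (the ACL file name, user and host are placeholders):

```sql
BEGIN
  -- Create the ACL with an initial 'connect' privilege for the app user:
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'my_app_acl.xml',
    description => 'Outbound network access for MY_APP_USER',
    principal   => 'MY_APP_USER',
    is_grant    => TRUE,
    privilege   => 'connect');

  -- 'resolve' is needed for UTL_INADDR name lookups:
  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(
    acl       => 'my_app_acl.xml',
    principal => 'MY_APP_USER',
    is_grant  => TRUE,
    privilege => 'resolve');

  -- Restrict the ACL to the host(s) actually needed:
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl  => 'my_app_acl.xml',
    host => 'smtp.example.com');
  COMMIT;
END;
/
```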
ORA-00600: internal error code, arguments: [qmx…]
Search “Master Note for Oracle XML Database (XDB) Install / Deinstall (Doc ID 1292089.1)” for more info.
Problem:
Sometimes one or more of the following errors can be encountered when installing or upgrading XDB, configuring APEX, running an export, or selecting from xdb.xdb$resource or sys.dba_network_acls:
- ORA-31159: XML DB is in an invalid state
- ORA-00600: internal error code, arguments: to load XDB library
- ORA-00600: internal error code, arguments: no ref
- ORA-00600: internal error code, arguments: qmtGetColumnInfo1
- ORA-00600: internal error code, arguments: qmtb_init_len
- ORA-00600: internal error code, arguments: qmtGetBaseType
- ORA-00600: internal error code, arguments: qmtInit1
- ORA-07445: exception encountered: core dump _memcpy()+224 [SIGSEGV] not mapped to object
- ORA-19051 Cannot Use Fast Path Insert For This XMLType Table
- ORA-31011: XML parsing failed
Errors of this sort generally occur when the init routines for the internal XDB functions are run in an invalid environment, causing memory corruption.

This can happen if the database was ever started with LD_LIBRARY_PATH (LIBPATH on AIX, SHLIB_PATH on HP-UX) pointing to the wrong $ORACLE_HOME/lib directory rather than to the correct location for the instance.
The LD_LIBRARY_PATH/LIBPATH/SHLIB_PATH environment variable is used to resolve the location of the shared library libxdb.so (libxdb.sl on HP-UX).
Solution:
- Stop Listener
- Stop database
- Ensure $ORACLE_HOME/lib is at the start of LD_LIBRARY_PATH / LIBPATH / SHLIB_PATH
- If AIX, run /usr/sbin/slibclean as root
- Start database
- Start listener
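The path check in step 3 might look like this on Linux (the home path is a placeholder; export LIBPATH on AIX, SHLIB_PATH on HP-UX instead):

```shell
# Placeholder home - use the instance's real one:
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH:-}
export ORACLE_HOME LD_LIBRARY_PATH
# libxdb.so should resolve from this home's lib directory:
ls -l "$ORACLE_HOME/lib/libxdb.so" 2>/dev/null || echo "libxdb.so not found - wrong home?"
```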
ORA-00201: control file version nn.nn.nn.nn incompatible with ORACLE version
Trying to restore a database after complete failure.
The first step is to start up the database in nomount mode and restore the controlfile - succeeded.
The second step is to start up the database in mount mode ready for the database restore…
36@@
The problem was that there was no compatible parameter in the simple init file I used to start up the instance.
Adding the compatible parameter to the init file, set to the same version (10.2.0.3), allowed the db to mount.
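The minimal init file might then look something like this (db_name and the controlfile path are placeholders; only the compatible line was the actual fix):

```
db_name=MYDB
control_files=/oracle/oradata/MYDB/control01.ctl
compatible=10.2.0.3
```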
TNS-00525: Insufficient privilege for operation
37@@
This would suggest the user attempting to start the listener has no access to either the Oracle binaries or the TNS listener/tnsnames files.
But… check out the /tmp/.oracle directory.
That hidden gem shows who started the listener last time. Delete the relevant entry and try again!
To start from a clean sheet, stop all listeners, delete this directory and restart the listeners.
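A sketch of the inspection and cleanup (only delete entries once the listeners are down):

```shell
# The hidden socket directory records who started each listener:
ls -l /tmp/.oracle 2>/dev/null || echo "no /tmp/.oracle on this host"
# Clean sheet (listeners stopped first!):
#   rm -rf /tmp/.oracle
```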
ORA-12003: materialized view or zonemap "KPK"."EVD01_NEW" does not exist
Here's one you shouldn't see very often.
Trying to drop a tablespace but prevented from doing so because a materialized view has referential constraints on a table in this tablespace.
38@@
Ok, nice of Oracle to tell us what we have to do. Simple enough. Let's drop the materialized view.
39@@
Aah. Not so obvious then.
40@@
It exists… but as a table.
Something has gone wrong in the database somewhere. As it turns out, after some deeper investigation, this database is a copy of production but “with modifications”.
After the cloning process, a schema needed renaming. Oracle has a procedure for this but it is very long-winded and time-consuming. It uses export and import with transportable tablespaces to remap the schema.
An unsupported workaround to rename a schema is to simply update the username in sys.user$ and restart the database.
This works for the most part but now we see a consequence of that action.
41@@
Aha, there's the materialized view, but the owner is different from the one described by the database error message above. This one is the owner as it was on production.
This means the owner of materialized views has been denormalised for some reason. Reading the (complex) view text of dba_mviews, we see the owner column is based on sys.snap$.sowner.
42@@
For future reference when updating the username in sys.user$, update the snapshot owner in this table as well!
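A sketch of that fix - as unsupported as the sys.user$ rename itself, so only on a throwaway clone and with a backup (the owner names are placeholders):

```sql
-- Keep the denormalised snapshot owner in step with the user$ rename:
UPDATE sys.snap$
SET    sowner = 'NEW_OWNER'
WHERE  sowner = 'OLD_OWNER';
COMMIT;
-- Restart the database afterwards, as with the user$ change.
```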
43@@
Try again
44@@
yep, and the tablespace?
45@@
ORA-12162: TNS:net service name is incorrectly specified
Probably nothing to do with TNS. Check your ORACLE_SID is set correctly!
ORA-00054: resource busy and acquire with NOWAIT specified
46@@
47@@
then
48@@
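Two sketches that often help with ORA-00054 (the 60-second timeout is illustrative; ddl_lock_timeout is 11g+ only):

```sql
-- Make DDL wait for the lock instead of failing immediately:
ALTER SESSION SET ddl_lock_timeout = 60;

-- Or find the session holding the lock:
SELECT s.sid, s.serial#, s.username, s.status
FROM   v$session s
JOIN   v$locked_object lo ON lo.session_id = s.sid;
```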
ORA-12518: TNS:listener could not hand off client connection
The database you are trying to connect to is probably on its knees. Check server memory is not exhausted.
ORA-12520: TNS:listener could not find available handler for requested type of server
Strangely, this could be related to the number of processes being exhausted.
49@@
ORA-12541: TNS: no listener
Either very basic - start the listener!
Or, as in our case, not so obvious.
We use virtual IP addresses so that the IP address stays the same after failover to standby.
Due to some confused configuration, we got the network adapter on two different servers to have the same IP address.
Stop the IP address on the bad server (assign another one) and all is fine again.
ORA-12514: TNS:listener does not currently know of service
50@@
Been altering tnsnames.ora or listener.ora?
- Check the local_listener parameter
Try resetting it to itself. Strange, but if you are not using port 1521, this can work wonders!
Assuming the listener is called LISTENER_WM9T:
51@@
- Try and connect with the service name or directly with
52@@
Excellent description of how Oracle TNS listeners work and a checklist on how to fix this error.
ORA-12547: TNS lost contact
If
53@@
fails with above error, try
54@@
If this works, it is most probably a permissions error.
Re-run the
55@@
script that was run as part of the original installation. This will reset the permissions on some important files.
ORA-02374: conversion error loading table "BLENGADM"."ZZ_EVTCTA"
This is typical of importing data with Datapump when the tables in the source database were created with “byte” semantics and the destination is “char” semantics.
56@@
Problem
The root cause is the nls_length_semantics of the table columns being BYTE where one character is assigned one byte. But after conversion in a multi-byte database, one character is larger than one byte (could be 2, 3 or 4) and no longer fits.
Solution
- import the metadata
Run the impdp command with content=metadata_only. This will create all the tables (still in their original form) but not the data.
- alter the BYTE columns in the affected tables to CHAR semantics
57@@
- Import the data
Rerun the impdp ensuring table_exists_action is not REPLACE (use truncate or append). This now imports the data into the modified tables.
This script does the donkey work of altering table column semantics from byte to char
The author of this script is unknown but he/she deserves a medal. It saves countless hours of work and is so well written.
58@@
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
This can also be <tt>ORA-54033: column to be modified is used in a virtual column expression</tt>.
Nothing especially to do with partitions - just any DDL operation that affects the table.
Hidden columns do not normally show up in a desc unless COLINVISIBLE is ON.
59@@
But these sneaky system-generated ones don't show up even with that setting on! You need to show them with this:
60@@
That last row means extended statistics have been generated for 2 columns in this table.
You can see this because the name begins with SYS_STS. If it begins with SYS_<something else> then it is not extended statistics.
It may be an unused column. Original here
61@@
Solution
Drop the extended stats, do what you need to do and recreate the extended stats.
62@@
The stats columns can also be seen in their own table
63@@
ORA-14404: partitioned table contains partitions in a different tablespace
As part of an RMAN duplicate with a skip tablespace, this error is produced
64@@
and when attempting to backup the database, we get errors
65@@
Problem
Tables in the tablespace have partitions in another tablespace so this one cannot be dropped.
Check which table partitions are in which tablespaces:
66@@
67@@
Solution
If the intention is to keep all the tables then run this query to generate statements to move the relevant tables out of this tablespace
68@@
If, however, the intention is to drop the tablespace anyway, the tables need to be dropped before the tablespace can be dropped.
But… tables with partitions in different tablespaces cannot be dropped. The partitions need to be dropped first!
Run the following query to generate statements to drop all the 'problem' tables. When these have gone, the tablespace can be dropped.
- drop table partitions
69@@
- drop tables
70@@
- drop tablespace
71@@
Find all table partitions in tablespaces with missing datafiles
72@@
Use exchange partition to alter metadata of the tablespaces without moving the actual data
- this is an explanation of what you are looking for!
This is a way of doing the exchange with a neat PL/SQL procedure. It might not do what you want though so understand it before running it!
73@@
Workaround
While working on the solution, a backup can be made using the “skip inaccessible” option. Put this in an rman cmdfile…
74@@
Another way of deleting the offending objects
Taken from Recover from a failed RMAN duplicate.
Need to check it before running it blindly!
75@@
ORA-28112: failed to execute policy function
When exporting or importing using Data Pump.
During expdp it is accompanied by: ORA-31693: Table data object "SYSMAN"."MGMT_IP_REPORT_DEF" failed to load/unload and is being skipped due to error:
Reason
This happens due to audit policies being defined on the table but the policy function is defined in another schema.
Solution
Either drop the policy after importing the table with:
76@@
or exclude the policies during export by adding this to the .par file:
77@@
or even better, give the necessary grants to the exporting user…
78@@
The sysdba role has this automatically.
ORA-17629: Cannot connect to the remote database server
Active duplication fails to connect even though the TNS entry exists in tnsnames.ora.

Problem:
79@@
Solution:

Update the tnsnames.ora of both the destination and source ORACLE_HOMEs!
ORA-01950: no privileges on tablespace 'SYSAUX'
80@@
Cause:
In this case, it seems SYSMAN has no rights to create objects in the SYSAUX tablespace.
Solution:
81@@
More information here
ORA-39080: failed to create queues "KUPC$C_1_20150616165037" and "KUPC$S_1_20150616165037" for Data Pump job
82@@
Cause:
Possibly some or all of the OLAP module has got itself deinstalled.
Solution:
Try:
- Recompiling invalid objects. @?/rdbms/admin/utlrp
- Reinstalling DBMS_CUBE packages by (re)running catproc.sql. @?/rdbms/admin/catproc followed by @?/rdbms/admin/utlrp
- Reinstalling OLAP. @?/olap/admin/olap.sql SYSAUX TEMP; But see Metalink note ID 296187.1 for full instructions.
- Check metalink for bugs. See ID 345198.1 and possibly ID 453796.1
ORA-14063: Unusable index exists on unique/primary constraint key
ORA-39083: Object type CONSTRAINT failed to create with error:
During a Data Pump import (impdp), the import failed because indexes were in error before the tables were exported. This causes the import of rows to be skipped.
If the export cannot be done again (having rebuilt the indexes), then the metadata can be imported and the indexes rebuilt before importing the data.
- Import the metadata
83@@
- Find the bad indexes and rebuild them
84@@
- Import the data
85@@
ORA-09925: Unable to create audit trail file
86@@
or
87@@
Problem
This will almost certainly be one of two issues. Either the directory is not writable by the id that started the instance, or the directory just does not exist.
Solution
Note there can be a difference between what you see when you type:
88@@
and when you type:
89@@
due to someone doing an “alter system” without the “scope=spfile” or “scope=both”
90@@
ORA-09925: Unable to create audit trail file
When starting SQL*Plus with no db processes running (expecting to see “Connected to an idle instance”), we get this:
91@@
- Filesystem permissions are fine
- 'strings spfile<SID>.ora | grep audit' shows the correct directory
- Running on AIX
AIX is the clue here… need to check for semaphores and locked shared memory objects
92@@
RMAN-06023: no backup or copy of datafile 1 found to restore
93@@
Seems pretty straightforward? The SCN was set too early for the retention period. But no, not this time.
94@@
The SCN was only 12 hours ago with a retention of 28 days! The backup piece dated 2018-05-15 01:06:22 is clearly available and on the device type sbt_tape.
The trick here was to look back through the logs and look more closely at what the file restore message was saying…
95@@
It was not the fact that it could not find the data on tape, it was that it could not save the file to the location indicated!! In the RMAN DUPLICATE script, I had all the file paths redirected (a mixture of convert parameters and spfile set) to a /restore filesystem.
But I forgot one!
A handy thing to do is to search out the file paths with this to see if something is still wrong:
96@@
This should show any paths still not getting caught by the DUPLICATE script.
RMAN-06136: ORACLE error from auxiliary database: ORA-06502: PL/SQL: numeric or value error
When cloning a database, the duplicate fails with this error.
Problem
This duplicate had no target connection but did have a catalog and auxiliary connection. In the catalog (the one where the target was registered), there was also an entry with the same name as the auxiliary - left over from the days when these two instances were Primary and Standby in a Data Guard configuration.
97@@
Solution
Connect to the catalog and unregister the name of the auxiliary database. It shouldn't be there anyway.
98@@
ORA-39181: Only partial table data may be exported due to fine grain access control on "owner"."<table_name>"
Reason
This happens when applications are designed with fine-grained access.
Solution
Grant the necessary privileges to the exporting user…
99@@
ORA-04030: out of process memory when trying to allocate 2520 bytes
100@@
ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select name,online$,contents...","KGLH0^e5705bfc","kglHeapInitialize:temp")
101@@
The database had already crashed so there was nothing to do but:
102@@
But for reference, the cause could well be that the applications running on the database are not using bind variables…
From asktom.oracle.com
103@@
and from linked question
104@@
Advice from Oracle concerning a concrete (12c) example of “unable to allocate 52824 bytes of shared memory ("shared pool", …)”
This was from a ticket raised for a Data Pump export that consistently failed with:
105@@
106@@
ORA-00392: log 1 of thread 1 is being cleared, operation not allowed
After performing an active duplication, this happened when trying to open the database.
107@@
108@@
Check the alert log! Always a good idea to get extra info. In here we find the answer
109@@
So the clue is that the file cannot be created at the operating system level.
Doing a
df -k /oracle/oradata/testdb/redo
shows us that the filesystem was created too small for the destination database.
In this case, we do not want the redolog directory to be as big as on the source db as there will be almost no movement on the destination.
I should have specified the LOGFILE parameter in the DUPLICATE clause to set up new, smaller redolog files.
As this duplicate takes over 12 hours, I didn't want to do it again, so I created symbolic links in the redolog directory pointing out to a larger filesystem with more space.
This allowed the creation of the redolog files. They can then be resized once the database has been opened.
110@@
RMAN-04004: error from recovery catalog database: ORA-01017: invalid username/password; logon denied
111@@
This is normally very straightforward and self-explanatory.
The puzzle here is that a connection from SQL*Plus works!
112@@
Eventually I worked out that the rman binary in the database's ORACLE_HOME on the source machine needed relinking!
113@@
ORA-00230: operation disallowed: snapshot control file enqueue unavailable
114@@
From the doc…
115@@
or more directly…
116@@
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
117@@
If this happens, check to see if the nsr daemon is running…
Good system:
118@@
Bad system:
119@@
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
If using Cygwin…
120@@
- Check the tdpo.opt file is where you think it is!
- Check the path to the opt file looks like this (needs to be Windows format):
121@@
- Check the content of the opt file is Windows format paths!
RMAN-20033: control file SEQUENCE# too low
122@@
From forums.oracle.com:
This error generally happens if an archivelog backup happens while the RMAN database backup is running and if controlfile autobackup is configured.
The database backup is usually successful and it is just the controlfile backup that fails.
ORA-27302: failure occurred at: slgpn
123@@
Problem
When doing a “<tt>backup as copy database</tt>”, got these errors after 145 of 147 datafiles had been backed up!
Investigation
I had read elsewhere that this could be something to do with the format of the autobackup of the control file, so I changed the format as suggested and reran the backup.
No luck. It ended exactly the same way after 145 files. I decided to compare the original list of datafiles with those that had been backed up in order to track down the 2 failures. Maybe I could see something odd with them.
Sure enough, these 2 files were different… there was a space at the end of the filenames.
124@@
Solution
Run the backup of the other 2 datafiles with a different format that did not rely on the filename.
125@@
The other 145 datafiles were backed up with a format of '/path/to/datafiles/%b', which preserves the filenames, so all that needs to be done is to rename the other 2 after the backup finishes.
If they are going to be used in a restore scenario, they should be renamed in the controlfile also.
126@@
RMAN errors after crosscheck backup and delete noprompt obsolete and delete noprompt expired backup
127@@
Use the FORCE option with the DELETE command.
The FORCE option tells RMAN to clean the information out of the catalog regardless of whether it can find it on the media or not.
128@@
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20003: target database incarnation not found in recovery catalog
Problem:
129@@
Solution:
Strangely, this means, in my particular case, that block change tracking is turned on in the target database and needs turning off in the auxiliary while the DUPLICATE is running.
It must be done while the auxiliary is in mount mode and restoring the datafiles (before recovery starts).
It is reported to be a bug fixed in 11.1 but somehow we're still (sometimes) getting it in 12.1.0.2.
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
130@@
This error can come from the fact that the database files (more likely archivelogs) are being deleted by the operating system, so RMAN has no clue of their whereabouts.
Possibly the result of a database duplication where the catalog has not updated properly?
Solution
131@@
And then maybe take a backup to be on the safe side.
RMAN-05537: DUPLICATE without TARGET connection when auxiliary instance is started with spfile cannot use SPFILE clause
Problem:
132@@
Usually happens after restarting a failed RMAN DUPLICATE database. The spfile has been regenerated on the auxiliary to a level where it has too much detail in it.
Solution:
133@@
Re-run the DUPLICATE. It should zoom past the already recovered datafiles.
PSDRPC returns significant error 1013
134@@
Action
Check the alert log on the auxiliary instance.
135@@
This is a known bug. It is fixed in 11.2.0.3 but… this customer does not apply patches!
Solution
While the DUPLICATE is running (and in MOUNT mode), log on to the auxiliary instance and disable block change tracking.
136@@
A documented workaround is to set the db_file_name_convert parameter in the DUPLICATE clause of the run block, but I tried this and it failed again.
The obvious solution is to patch the database but…
Database trigger to capture ORA errors
ORA-19809: limit exceeded for recovery files
139@@
140@@
The solution here is to increase the value of db_recovery_file_dest_size but it might also indicate a problem with tapes if the archivelogs are being backed up to tape via tdpo.
141@@
Looking into it further, I saw RMAN parameters were missing…
142@@
RMAN-00554: initialization of internal recovery manager package failed
Getting this error after migrating a database from one machine to another and attaching to a nice new version 12c catalog on the new server
143@@
We cannot connect to the catalog via rman but there is no problem connecting via SQL*Plus!
Solution: relink the rman executable.
Using AIX commands genld, genkld and slibclean to avoid library file locking errors (libjox) (Doc ID 739963.1)
When installing interim (one-off) patches, CPU patches or patchsets, you may encounter errors about files being locked or oui/opatch being unable to copy files,
even though the databases, listeners and all other Oracle processes associated with the ORACLE_HOME to be patched were stopped.
This could be the result of a process which requires termination or an additional file needing to be unloaded from the system cache.
Run the following commands to clean up…
144@@
If genld returns data then a currently executing process has something open in the ORACLE_HOME directory, therefore terminate the process as required/recommended.
145@@
If the genkld command returns a list of shared objects currently loaded onto the OS system cache then please remove the entries from the OS system cache by running the slibclean command as root user:
146@@
ORA-20011: Approximate NDV failed: ORA-29913: error in executing ODCIEXTTABLEOPEN callout
This seems to be related to orphaned Data Pump tables. It generally seems to be a problem when stats jobs (dbms_stats) are running during the night.
- Check no datapump jobs are running
147@@
- Find orphaned tables
148@@
Check that any rows seen are actually external tables
149@@
- Drop the tables if they are not currently being used (no datapump jobs running)
150@@
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
Database is restored and now we want to recover it but we see this…
151@@
This means that although we have restored the database, we are still missing information to complete the recovery. This information is contained in the archived redo logs, which must also be restored (from tape if they are no longer on disk).
We can see here that we need an archive log with sequence number 69763. But what is the file called and where should it be restored to?
For this we can go to SQL*Plus and get the database to tell us:
152@@
and cancel straight away.
153@@
Now we see that we need to find a file called arch_TRSCRP1_1_69763.arc and put it in the directory /oracle/TRSCRP1/archive.
This will continue up until the time specified for the recover, so restore a whole bunch of them.
Repeat the “recover database using backup controlfile until cancel;” command and type AUTO or Enter until you've got to where you need to be.
Type CANCEL when the point is reached and
154@@
