ORA-00704: bootstrap process failure

A bootstrap error indicates a version mismatch between the datafile headers and the Oracle binaries being used to start the database.

Here I am starting the database using Oracle 12.1.0.2 binaries, whereas the datafiles are from 11.2.0.4.

$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Thu Oct 27 07:18:50 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
SQL> startup
ORACLE instance started.
Total System Global Area 1.8388E+10 bytes
Fixed Size 5366360 bytes
Variable Size 4898950568 bytes
Database Buffers 1.3422E+10 bytes
Redo Buffers 61739008 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-00604: error occurred at recursive SQL level 2
ORA-00904: "I"."UNUSABLEBEGINNING#": invalid identifier
Process ID: 5177412
Session ID: 1704 Serial number: 59059

Now, how do we check whether there really is a mismatch? First, we create a trace file containing a dump of the datafile headers.

SQL> alter session set tracefile_identifier='datafile_hdr';
SQL> alter session set events 'immediate trace name file_hdrs level 10';
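
To locate the trace file that was just written, you can query v$diag_info (available from 11g onward); the tracefile_identifier set above will appear in the file name:

SQL> -- Full path of the current session's trace file
SQL> select value from v$diag_info where name = 'Default Trace File';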

From the trace file, find the section "V10 STYLE FILE HEADER" and check the value of "Compatibility Vsn".

V10 STYLE FILE HEADER:
Compatibility Vsn = 186647552=0xb200400

The current value is 0xb200400: reading the hex digits as version components gives 11.2.0.4.0.0 (b -> 11, 2 -> 2, 0 -> 0, 04 -> 4, 0 -> 0, 0 -> 0), i.e. the datafile headers are still at 11.2.0.4. The expected value for 12.1.0.2 binaries is 0xc100200 (c -> 12, 1 -> 1, 0 -> 0, 02 -> 2, 0 -> 0, 0 -> 0, i.e. 12.1.0.2.0.0).
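
The hex-to-version mapping can be scripted. Below is a minimal bash sketch; the field widths (one byte for the major version, then nibble, nibble, byte, nibble, nibble) are an assumption inferred from these two sample values:

# Decode a datafile "Compatibility Vsn" into a version string (assumed layout)
vsn=0x0b200400
printf '%d.%d.%d.%d.%d.%d\n' \
  $(( vsn >> 24 )) $(( (vsn >> 20) & 0xF )) $(( (vsn >> 16) & 0xF )) \
  $(( (vsn >> 8) & 0xFF )) $(( (vsn >> 4) & 0xF )) $(( vsn & 0xF ))
# prints 11.2.0.4.0.0; with vsn=0x0c100200 it prints 12.1.0.2.0.0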


PRVG-2027 : Owner of file "filename" is inconsistent across nodes.

CVU (Cluster Verification Utility) logged multiple PRVG-2027 errors in the CRS alert log. This environment had recently been patched with the latest PSU, which failed while applying changes to the GIMR (Grid Infrastructure Management Repository).

The $GRID_HOME/bin/crsctl binary should be owned by root:dba; on db01 it is owned by oracle instead:

db01:+ASM3:/u01/app/oracle>ls -al /u01/product/12.1.0.2/grid/bin/crsctl
-rwxr-xr-x 1 oracle dba 9545 Oct 27 07:05 /u01/product/12.1.0.2/grid/bin/crsctl
db02:+ASM4:/u01/app/oracle>ls -al /u01/product/12.1.0.2/grid/bin/crsctl
-rwxr-xr-x 1 root dba 9545 Oct 25 11:29 /u01/product/12.1.0.2/grid/bin/crsctl
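
Rather than spot-checking individual binaries, cluvfy can compare file ownership and permissions of the Grid home across all nodes. A sketch, run as the grid owner (check the options against your cluvfy version):

$GRID_HOME/bin/cluvfy comp software -n all -verbose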

The list of patches and the patch level are the same on both nodes:

db01:+ASM3:>kfod op=patches
—————
List of Patches
===============
19769480
20299023
20831110
21359755
21436941
21948354
22291127
23054246
23854735
24006101
24007012
db01:+ASM3:>kfod op=patchlvl
——————-
Current Patch level
===================
1505651481
db02:+ASM4:>kfod op=patches
—————
List of Patches
===============
19769480
20299023
20831110
21359755
21436941
21948354
22291127
23054246
23854735
24006101
24007012
db02:+ASM4:>kfod op=patchlvl
——————-
Current Patch level
===================
1505651481
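
To compare the nodes without eyeballing the lists, the kfod output can be diffed over ssh. A sketch, assuming passwordless ssh between the nodes:

# Empty output means both nodes report identical patch lists
diff <(ssh db01 /u01/product/12.1.0.2/grid/bin/kfod op=patches) \
     <(ssh db02 /u01/product/12.1.0.2/grid/bin/kfod op=patches)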

On node db01, I could see the entry related to ROOTCRS_PREPATCH, but no entry for ROOTCRS_POSTPATCH:

Invoking "/u01/product/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS"

On node db02, both entries are present:

Invoking "/u01/product/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS"
Invoking "/u01/product/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS"

So ROOTCRS_POSTPATCH never got executed on db01, which explains the difference in ownership.
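
The checkpoint state can also be queried directly with cluutil. A sketch, with flags as used in MOS notes (verify them against your version before relying on this):

# As the grid owner; should report SUCCESS once the post-patch step has run
$GRID_HOME/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status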

To fix it, on db01 stop CRS and re-run the patch step as root:

# crsctl stop crs
# $GRID_HOME/crs/install/rootcrs.sh -patch
db01:+ASM3:/u01/app/oracle>ls -al /u01/product/12.1.0.2/grid/bin/crsctl
-rwxr-xr-x 1 root dba 9545 Oct 27 07:05 /u01/product/12.1.0.2/grid/bin/crsctl

PRVG-2029 : Octal permissions of file "filename" are inconsistent across nodes

The following entries were logged in the CRS/GRID alert.log:

2016-10-23 16:24:49.048 [SRVM(4849748)]CRS-10051: CVU found following errors with Clusterware setup :
PRVG-2029 : Octal permissions of file "/u01/product/12.1.0.2/grid/lib/libccme_asym.so" are inconsistent across nodes. [Found = "{0644=[db01], 0755=[db02]}"]
PRVG-2029 : Octal permissions of file "/u01/product/12.1.0.2/grid/lib/libccme_base.so" are inconsistent across nodes. [Found = "{0644=[db01], 0755=[db02]}"]
PRVG-2029 : Octal permissions of file "/u01/product/12.1.0.2/grid/lib/libccme_base_non_fips.so" are inconsistent across nodes. [Found = "{0644=[db01], 0755=[db02]}"]
PRVG-2029 : Octal permissions of file "/u01/product/12.1.0.2/grid/lib/libccme_ecc.so" are inconsistent across nodes. [Found = "{0644=[db01], 0755=[db02]}"]
PRVG-2029 : Octal permissions of file "/u01/product/12.1.0.2/grid/lib/libccme_ecc_accel_fips.so" are inconsistent across nodes. [Found = "{0644=[db01], 0755=[db02]}"]
PRVG-2029 : Octal permissions of file "/u01/product/12.1.0.2/grid/lib/libccme_ecc_accel_non_fips.so" are inconsistent across nodes. [Found = "{0644=[db01], 0755=[db02]}"]
PRVG-2029 : Octal permissions of file "/u01/product/12.1.0.2/grid/lib/libccme_ecc_non_fips.so" are inconsistent across nodes. [Found = "{0644=[db01], 0755=[db02]}"]
PRVG-2029 : Octal permissions of file "/u01/product/12.1.0.2/grid/lib/libcryptocme.so" are inconsistent across nodes. [Found = "{0644=[db01], 0755=[db02]}"]
PRVG-2029 : Octal permissions of file "/u01/product/12.1.0.2/grid/lib/libldapjclnt12.so" are inconsistent across nodes. [Found = "{0644=[db01], 0755=[db02]}"]
PRVG-2029 : Octal permissions of file "/u01/product/12.1.0.2/grid/lib/libowm2.so" are inconsistent across nodes. [Found = "{0644=[db01], 0755=[db02]}"]

This typically happens during an opatch operation: one node has already been patched while the other is still in progress or not yet patched. Some file permissions change as part of PSU patching.

Check patch levels using opatch lsinventory. In the last section, the "Patching Level" column shows the current state of each node.
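
A sketch of the invocation; the -all_nodes option (which reports every cluster node from one invocation) is an assumption, and running plain lsinventory on each node works as well:

$GRID_HOME/OPatch/opatch lsinventory -all_nodes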

Patch level status of Cluster nodes :

Patching Level Nodes
————– —–
0 db01
1505651481 db02

This means db02 is patched and at a higher level. These messages can be ignored if they are logged while patching is still in progress.

But if they are still logged after patching completes, i.e. when the "Patching Level" values are the same on all nodes, contact Oracle Support.


datapatch failing with ORA-20006: Number of RAC active instances and opatch jobs configured are not same

datapatch fails with ORA-20006 when the number of active RAC instances does not match the opatch inventory-load jobs registered in the database.

db02:orcl01:/u01/product/12.1.0.2/database/OPatch>./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Tue Oct 25 13:01:10 2016
Copyright (c) 2016, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_6619568_2016_10_25_13_02_22/sqlpatch_invocation.log
SQL Patching arguments:
verbose: 1
force: 0
prereq: 0
upgrade_mode_only:
oh:
bundle_series:
ignorable_errors:
bootstrap:
skip_upgrade_check:
userid:
pdbs:
Connecting to database…OK
catcon: ALL catcon-related output will be written to /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_6619568_2016_10_25_13_02_22/sqlpatch_catcon__catcon_6619568.lst
catcon: See /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_6619568_2016_10_25_13_02_22/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_6619568_2016_10_25_13_02_22/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions…done
verify_queryable_inventory returned ORA-20006: Number of RAC active instances and opatch jobs configured are not same
Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
and/or check the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_6619568_2016_10_25_13_02_22/sqlpatch_invocation.log
for the complete error.
Prereq check failed, exiting without installing any patches.
Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_6619568_2016_10_25_13_02_22/sqlpatch_invocation.log
for information on how to resolve the above errors.
SQL Patching tool complete on Tue Oct 25 13:02:29 2016
SQL> select dbms_sqlpatch.verify_queryable_inventory from dual;
VERIFY_QUERYABLE_INVENTORY
———————————————————————————
ORA-20006: Number of RAC active instances and opatch jobs configured are not same
SQL> select dbms_qopatch.get_pending_activity from dual;
ERROR:
ORA-20006: Number of RAC active instances and opatch jobs configured are not same
ORA-06512: at "SYS.DBMS_QOPATCH", line 1222
no rows selected
SQL> select NODE_NAME, INST_ID, INST_JOB from opatch_inst_job;
NODE_NAME INST_ID INST_JOB
—————– ———- ————————-
db01 2 Load_opatch_inventory_2
db02 1 Load_opatch_inventory_1
SQL> select job_name, state from dba_scheduler_jobs where job_name like '%OPATCH%';
JOB_NAME STATE
———————— —————
LOAD_OPATCH_INVENTORY_2 DISABLED

opatch_inst_job has two rows, but only one scheduler job actually exists (and it is DISABLED); this mismatch is what ORA-20006 complains about. Delete all existing job entries:

SQL> exec DBMS_SCHEDULER.DROP_JOB('LOAD_OPATCH_INVENTORY_2');
PL/SQL procedure successfully completed.

Delete rows from opatch_inst_job

SQL> delete from opatch_inst_job;
2 rows deleted.
SQL> commit;
Commit complete.
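
Optionally, re-check the queryable inventory before re-running datapatch; once the stale entries are gone, the function should return OK instead of ORA-20006:

SQL> select dbms_sqlpatch.verify_queryable_inventory from dual;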

Re-run datapatch

db02:orcl02:/u01/product/12.1.0.2/database/OPatch>./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Tue Oct 25 13:06:59 2016
Copyright (c) 2016, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_5506020_2016_10_25_13_06_59/sqlpatch_invocation.log
Connecting to database…OK
Bootstrapping registry and package to current versions…done
Determining current state…done
Current state of SQL patches:
Bundle series PSU:
ID 161018 in the binary registry and not installed in the SQL registry
Adding patches to installation queue and performing prereq checks…
Installation queue:
Nothing to roll back
The following patches will be applied:
24006101 (Database Patch Set Update : 12.1.0.2.161018 (24006101))
Installing patches…
Patch installation complete. Total patches installed: 1
Validating logfiles…
Patch 24006101 apply: SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/24006101/20648640/24006101_apply_ORCL_2016Oct25_13_11_23.log (no errors)
SQL Patching tool complete on Tue Oct 25 13:18:34 2016

datapatch failing with ORA-27477: "SYS"."LOAD_OPATCH_INVENTORY_X" already exists

datapatch fails with ORA-27477 because a scheduler job with the same name already exists in dba_scheduler_jobs.

db02:orcl01:/u01/product/12.1.0.2/database/OPatch>./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Tue Oct 25 12:55:45 2016
Copyright (c) 2016, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_3997884_2016_10_25_12_55_45/sqlpatch_invocation.log
Connecting to database…OK
Bootstrapping registry and package to current versions…done
Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
and/or check the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_3997884_2016_10_25_12_55_45/sqlpatch_invocation.log
for the complete error.
Prereq check failed, exiting without installing any patches.
SQL> select dbms_sqlpatch.verify_queryable_inventory from dual;
VERIFY_QUERYABLE_INVENTORY
——————————————————————————–
ORA-27477: "SYS"."LOAD_OPATCH_INVENTORY_1" already exists
SQL> select job_name, state, start_date from dba_scheduler_jobs where job_name like 'LOAD_OPATCH%';
JOB_NAME STATE START_DATE
————————— ————— ————————————
LOAD_OPATCH_INVENTORY DISABLED 25-OCT-16 06.00.50.791058 AM +00:00
LOAD_OPATCH_INVENTORY_1 STOPPED 25-OCT-16 07.11.49.606570 AM +00:00
LOAD_OPATCH_INVENTORY_2 DISABLED 25-OCT-16 07.11.50.543851 AM +00:00

Clear all existing job entries

SQL> exec DBMS_SCHEDULER.DROP_JOB('LOAD_OPATCH_INVENTORY');
PL/SQL procedure successfully completed.
SQL> exec DBMS_SCHEDULER.DROP_JOB('LOAD_OPATCH_INVENTORY_1');
PL/SQL procedure successfully completed.
SQL> exec DBMS_SCHEDULER.DROP_JOB('LOAD_OPATCH_INVENTORY_2');
PL/SQL procedure successfully completed.
SQL> select job_name, state, start_date from dba_scheduler_jobs where job_name like 'LOAD_OPATCH%';
no rows selected
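
Alternatively, a small PL/SQL sketch that drops every leftover LOAD_OPATCH% job in one pass; force => true stops a still-running job before dropping it (this assumes all such jobs are safe to remove):

SQL> begin
  2    for j in (select job_name from dba_scheduler_jobs
  3              where job_name like 'LOAD_OPATCH%') loop
  4      dbms_scheduler.drop_job(j.job_name, force => true);
  5    end loop;
  6  end;
  7  /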

Re-run datapatch

db02:orcl01:/u01/product/12.1.0.2/database/OPatch>./datapatch -verbose

/tmp/CVU_1x.x.x.x.x_oracle/exectask: not found

After a few failed executions of opatchauto, subsequent runs started failing on the CVU command "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh -getver", whose output was "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh[22]: /tmp/CVU_12.1.0.2.0_oracle/exectask: not found".

It looks like the earlier failures did not clean up the CVU staging directory and files, so this opatchauto run was trying to execute exectask, which was missing.

root@db01[/root]> /u01/product/12.1.0.2/grid/OPatch/opatchauto apply /OracleInstallable/24412235 -oh /u01/product/12.1.0.2/grid
OPatchauto session is initiated at Tue Oct 25 03:37:59 2016
System initialization log file is /u01/product/12.1.0.2/grid/cfgtoollogs/opatchautodb/systemconfig2016-10-25_03-38-13AM.log.
Failed:
Verifying shared storage accessibility
Version of exectask could not be retrieved from node "db01"
ERROR:
An internal error occurred within cluster verification framework
The command executed was "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh -getver". The output from the command was "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh[22]: /tmp/CVU_12.1.0.2.0_oracle/exectask: not found"
Version of exectask could not be retrieved from node "db02"
ERROR:
An internal error occurred within cluster verification framework
The command executed was "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh -getver". The output from the command was "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh[22]: /tmp/CVU_12.1.0.2.0_oracle/exectask: not found"
ERROR:
Framework setup check failed on all the nodes
Verification cannot proceed
Verification of shared storage accessibility was unsuccessful on all the specified nodes.
NODE_STATUS::db01:EFAIL
The result of cluvfy command contain EFAIL NODE_STATUS::db01:EFAIL
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.
OPatchauto session completed at Tue Oct 25 03:38:45 2016
Time taken to complete the session 0 minute, 54 seconds
Topology creation failed.

Solution: remove the /tmp/CVU_12.1.0.2.0_<grid_owner> directory on all nodes (in the case of a RAC install) and execute the cluvfy command again.
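
A sketch of the cleanup, with the path taken from the error above (repeat on every node):

# Removes the stale CVU staging area left behind by the failed runs
rm -rf /tmp/CVU_12.1.0.2.0_oracle

If you want to re-validate before re-running opatchauto, the shared storage accessibility check can be repeated with cluvfy (options assumed):

$GRID_HOME/bin/cluvfy comp ssa -n db01,db02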


ORA-06598: insufficient INHERIT PRIVILEGES privilege

Recently, while upgrading an 11.2.0.4 database to 12.1.0.2, I got the following error.

XDB SGA reset to NULL.
Tue Oct 25 05:37:34 2016
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl01/trace/orcl01_ora_5898844.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-06598: insufficient INHERIT PRIVILEGES privilege
ORA-06512: at "XDB.DBMS_CSX_INT", line 1

This problem is caused by Bug 19315691: CHANGE XMLTYPE DATATYPE TO CLOB FOR BUNDLE_DATA COLUMN IN REGISTRY$PATCH.

Not many details are available on the bug, but per Oracle it is fixed in the 12.1.0.2.4 Proactive Bundle Patch. The INHERIT PRIVILEGES error is thrown because the XMLTYPE datatype is not available for use as a column datatype until AFTER XDB has been upgraded.

As we can see, the XML component was upgraded (Oct 25 06:13:17 2016) after the error was thrown (Oct 25 05:37:34 2016).

Tue Oct 25 06:13:17 2016
SERVER COMPONENT id=XML: status=VALID, version=12.1.0.2.0, timestamp=2016-10-25 06:13:17
Tue Oct 25 06:14:21 2016
XDB installed.
XDB initialized.

Workaround: (re)create the table and view after the upgrade is complete.

$ sqlplus / as sysdba
SQL> drop table REGISTRY$SQLPATCH;
Table dropped.
SQL> @/u01/product/12.1.0.2/database/rdbms/admin/catsqlreg.sql
Session altered.
Table created.
View created.
Synonym created.
Grant succeeded.
PL/SQL procedure successfully completed.
Grant succeeded.
Synonym created.
Session altered.

After recreating the table, validate using datapatch -verbose:

db01:orcl01:/u01/product/12.1.0.2/database/OPatch>./datapatch -verbose
SQL Patching tool version 12.2.0.0.0 on Mon Oct 24 08:48:19 2016
Copyright (c) 2014, Oracle. All rights reserved.
Connecting to database…OK
catcon: ALL catcon-related output will be written to /tmp/sqlpatch_catcon__catcon_7799494.lst
catcon: See /tmp/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /tmp/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions…done
Determining current state…done
Current state of SQL patches:
Adding patches to installation queue and performing prereq checks…
Installation queue:
Nothing to roll back
Nothing to apply
SQL Patching tool complete on Mon Oct 24 08:49:12 2016