/tmp/CVU_1x.x.x.x.x_oracle/exectask: not found

After a few failed executions of opatchauto, the current run started failing on "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh -getver", with the output "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh[22]: /tmp/CVU_12.1.0.2.0_oracle/exectask: not found".

It looks like the earlier failures did not clean up their directories/files under /tmp, so this opatchauto run was left trying to execute exectask, which was missing.

root@db01[/root]> /u01/product/12.1.0.2/grid/OPatch/opatchauto apply /OracleInstallable/24412235 -oh /u01/product/12.1.0.2/grid
OPatchauto session is initiated at Tue Oct 25 03:37:59 2016
System initialization log file is /u01/product/12.1.0.2/grid/cfgtoollogs/opatchautodb/systemconfig2016-10-25_03-38-13AM.log.
Failed:
Verifying shared storage accessibility
Version of exectask could not be retrieved from node "db01"
ERROR:
An internal error occurred within cluster verification framework
The command executed was "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh -getver". The output from the command was "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh[22]: /tmp/CVU_12.1.0.2.0_oracle/exectask: not found"
Version of exectask could not be retrieved from node "db02"
ERROR:
An internal error occurred within cluster verification framework
The command executed was "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh -getver". The output from the command was "/tmp/CVU_12.1.0.2.0_oracle/exectask.sh[22]: /tmp/CVU_12.1.0.2.0_oracle/exectask: not found"
ERROR:
Framework setup check failed on all the nodes
Verification cannot proceed
Verification of shared storage accessibility was unsuccessful on all the specified nodes.
NODE_STATUS::db01:EFAIL
The result of cluvfy command contain EFAIL NODE_STATUS::db01:EFAIL
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.
OPatchauto session completed at Tue Oct 25 03:38:45 2016
Time taken to complete the session 0 minute, 54 seconds
Topology creation failed.

Solution – Remove /tmp/CVU_12.1.0.2.0_<grid_owner> on all nodes (in case of a RAC install) and re-run the cluvfy check / opatchauto, for example as shown below.
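
A minimal cleanup sketch, assuming the grid owner is oracle and the node names db01/db02 from the log above (adjust the directory suffix and hosts for your environment):

# run on every node; the directory name is /tmp/CVU_<version>_<grid_owner>
root@db01[/root]> rm -rf /tmp/CVU_12.1.0.2.0_oracle
root@db02[/root]> rm -rf /tmp/CVU_12.1.0.2.0_oracle
# then re-run the failed command
root@db01[/root]> /u01/product/12.1.0.2/grid/OPatch/opatchauto apply /OracleInstallable/24412235 -oh /u01/product/12.1.0.2/grid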


ORA-06598: insufficient INHERIT PRIVILEGES privilege

Recently, while upgrading an 11204 database to 12102, I got the following error.

XDB SGA reset to NULL.
Tue Oct 25 05:37:34 2016
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl01/trace/orcl01_ora_5898844.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-06598: insufficient INHERIT PRIVILEGES privilege
ORA-06512: at "XDB.DBMS_CSX_INT", line 1

This problem is caused by BUG 19315691: CHANGE XMLTYPE DATATYPE TO CLOB FOR BUNDLE_DATA COLUMN IN REGISTRY$PATCH.

Not many details are available on the bug, but as per Oracle it is fixed in the 12.1.0.2.4 Proactive BP. The INHERIT PRIVILEGES error is thrown because the XMLTYPE datatype cannot be used as a column datatype until AFTER XDB itself has been upgraded.

As we can see, the XML component was upgraded (Oct 25 06:13:17 2016) after this error was thrown (Oct 25 05:37:34 2016).

Tue Oct 25 06:13:17 2016
SERVER COMPONENT id=XML: status=VALID, version=12.1.0.2.0, timestamp=2016-10-25 06:13:17
Tue Oct 25 06:14:21 2016
XDB installed.
XDB initialized.
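
A quick way to confirm that XDB/XML are indeed VALID at this point is the standard registry view (this check is my addition, not part of the original upgrade log):

SQL> select comp_id, version, status from dba_registry where comp_id in ('XDB','XML');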

Workaround – (Re)create the table and view after the upgrade is complete.

$ sqlplus / as sysdba
SQL> drop table REGISTRY$SQLPATCH;
Table dropped.
SQL> @/u01/product/12.1.0.2/database/rdbms/admin/catsqlreg.sql
Session altered.
Table created.
View created.
Synonym created.
Grant succeeded.
PL/SQL procedure successfully completed.
Grant succeeded.
Synonym created.
Session altered.

After the table recreation, validate using datapatch -verbose.

db01:orcl01:/u01/product/12.1.0.2/database/OPatch>./datapatch -verbose
SQL Patching tool version 12.2.0.0.0 on Mon Oct 24 08:48:19 2016
Copyright (c) 2014, Oracle. All rights reserved.
Connecting to database…OK
catcon: ALL catcon-related output will be written to /tmp/sqlpatch_catcon__catcon_7799494.lst
catcon: See /tmp/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /tmp/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions…done
Determining current state…done
Current state of SQL patches:
Adding patches to installation queue and performing prereq checks…
Installation queue:
Nothing to roll back
Nothing to apply
SQL Patching tool complete on Mon Oct 24 08:49:12 2016
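
As an extra sanity check (my own addition, not part of the original run), the rebuilt patch inventory can also be queried directly:

SQL> select patch_id, action, status, action_time from dba_registry_sqlpatch order by action_time;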

ORA-00600: internal error code, arguments: [kwqitnmphe:ltbagi]

Recently I started seeing this error on a few of the RAC environments.

Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl01/trace/orcl01_q002_5570998.trc:
ORA-00600: internal error code, arguments: [kwqitnmphe:ltbagi], [1], [0], [], [], [], [], [], [], [], [], []

As per Metalink, this is a known bug: Bug 20987661 (Doc ID 20987661.8).

Workaround –

DECLARE
   po dbms_aqadm.aq$_purge_options_t;
BEGIN
   po.block := FALSE;
   DBMS_AQADM.PURGE_QUEUE_TABLE(
      queue_table     => 'SYS.SYS$SERVICE_METRICS_TAB',
      purge_condition => NULL,
      purge_options   => po);
END;
/

This only works until the database is restarted, i.e. the purge has to be executed again after every restart.
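
If re-running it by hand after every restart becomes tedious, one option is to wrap the same purge in a startup trigger. This is my own sketch, not something the bug note prescribes, so test it before relying on it:

-- hypothetical helper: re-run the purge automatically after every instance startup
CREATE OR REPLACE TRIGGER purge_service_metrics_trg
AFTER STARTUP ON DATABASE
DECLARE
   po dbms_aqadm.aq$_purge_options_t;
BEGIN
   po.block := FALSE;
   DBMS_AQADM.PURGE_QUEUE_TABLE(
      queue_table     => 'SYS.SYS$SERVICE_METRICS_TAB',
      purge_condition => NULL,
      purge_options   => po);
END;
/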

I've seen this in both 11204 and 12102. As per Oracle, it is fixed in version 12.2.


OPatch 12c: emocmrsp missing from OPatch/ocm/bin

So far, while applying PSUs or one-off patches, we used to pass the -ocmrf parameter. I could never think of a scenario where we would expose database servers to the external world anyway.

Before applying the latest PSU, I downloaded the latest OPatch (p6880880_122010_AIX64-5L.zip) from Metalink. After unzipping it, I wanted to create ocm.rsp, but when I checked, OPatch/ocm/bin was empty: the emocmrsp binary was missing.

OPatch/ocm/bin>ls -al
total 0

As per Metalink Doc ID 2161861.1

This enhancement to OPatch exists in 12.2.0.1.5 release and later. The option -ocmrf is used to provide OPatch the OCM responses during a silent install. Since OCM is no longer packaged with OPatch, the -ocmrf is no longer needed on the command line.

As per Metalink Doc ID 1591616.1

Note: as latest opatch doesn't contain OCM anymore, the option "-ocmrf" is unnecessary if latest opatch is being used, refer to the following for details:
note 2161861.1 – OPatch: Behavior Changes starting in OPatch 12.2.0.1.5 and 11.2.0.3.14 releases

So this is expected behavior.

Simply use the following CLI to apply the patch:

# opatchauto apply <UNZIPPED_PATCH_LOCATION>/24412235
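
For comparison, the pre-12.2.0.1.5 flow looked roughly like this (reproduced from memory, so treat it as a sketch rather than the exact commands):

# older OPatch versions shipped emocmrsp under OPatch/ocm/bin
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp
# opatchauto apply <UNZIPPED_PATCH_LOCATION>/24412235 -ocmrf /tmp/ocm.rsp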


oracle.ops.mgmt.rawdevice.OCRException: PROC-32: Cluster Ready Services on the local node is not running Messaging error [gipcretConnectionRefused] [29]

This message is seen quite frequently while upgrading Grid Infrastructure to 12c. Do we have to take any action on it?

2016-10-21 05:34:27: Trying to get the value of key: SYSTEM.rootcrs.checkpoints.firstnode in OCR.
2016-10-21 05:34:27: setting ORAASM_UPGRADE to 1
2016-10-21 05:34:27: Check the existence of key pair with key name: SYSTEM.rootcrs.checkpoints.firstnode in OCR.
2016-10-21 05:34:27: setting ORAASM_UPGRADE to 1
2016-10-21 05:34:27: Resetting cluutil_trc_suff_pp to 0
2016-10-21 05:34:27: Invoking “/u01/product/12.1.0.2/grid/bin/cluutil -exec -keyexists -key checkpoints.firstnode”
2016-10-21 05:34:27: trace file=/u01/app/oracle/crsdata/{nodename}/crsconfig/cluutil0.log
2016-10-21 05:34:27: Running as user oracle: /u01/product/12.1.0.2/grid/bin/cluutil -exec -keyexists -key checkpoints.firstnode
2016-10-21 05:34:27: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/product/12.1.0.2/grid/bin/cluutil -exec -keyexists -key checkpoints.firstnode '
2016-10-21 05:34:28: Removing file /tmp/heaSavMbL
2016-10-21 05:34:28: Successfully removed file: /tmp/heaSavMbL
2016-10-21 05:34:28: pipe exit code: 256
2016-10-21 05:34:28: /bin/su exited with rc=1
2016-10-21 05:34:28: oracle.ops.mgmt.rawdevice.OCRException: PROC-32: Cluster Ready Services on the local node is not running Messaging error [gipcretConnectionRefused] [29]

As per Oracle, this error is harmless. If CRSD is down, the error is reported when a client tries to access OCR keys; when that happens, the root script uses a fallback mechanism to access the keys.

The fallback mechanism reads the required contents from a file created with OCRDUMP before the upgrade was started.

2016-10-21 05:34:28: Cannot get OCR key with CLUUTIL, try using OCRDUMP.
2016-10-21 05:34:28: Check OCR key using ocrdump
2016-10-21 05:34:52: ocrdump output: [SYSTEM.rootcrs.checkpoints.firstnode]
ORATEXT : START
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_ALL_ACCESS, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : system}
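
If you want to inspect the same key yourself, ocrdump can read it directly (a manual check only, nothing the upgrade requires; the path is the new grid home from this log):

# run as root
/u01/product/12.1.0.2/grid/bin/ocrdump -stdout -keyname SYSTEM.rootcrs.checkpoints.firstnode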

************ After some time, when CRSD is up *********************

2016-10-21 05:38:53: Install node: {nodename}
2016-10-21 05:38:53: Current first node: {nodename}
2016-10-21 05:38:53: Trying to get the value of key: SYSTEM.rootcrs.checkpoints.firstnode in OCR.
2016-10-21 05:38:53: setting ORAASM_UPGRADE to 1
2016-10-21 05:38:53: Check the existence of key pair with key name: SYSTEM.rootcrs.checkpoints.firstnode in OCR.
2016-10-21 05:38:53: setting ORAASM_UPGRADE to 1
2016-10-21 05:38:53: Invoking “/u01/product/12.1.0.2/grid/bin/cluutil -exec -keyexists -key checkpoints.firstnode”
2016-10-21 05:38:53: trace file=/u01/app/oracle/crsdata/{nodename}/crsconfig/cluutil3.log
2016-10-21 05:38:53: Running as user oracle: /u01/product/12.1.0.2/grid/bin/cluutil -exec -keyexists -key checkpoints.firstnode
2016-10-21 05:38:53: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u01/product/12.1.0.2/grid/bin/cluutil -exec -keyexists -key checkpoints.firstnode '
2016-10-21 05:38:55: Removing file /tmp/teaRyvMbX
2016-10-21 05:38:55: Successfully removed file: /tmp/teaRyvMbX
2016-10-21 05:38:55: pipe exit code: 0
2016-10-21 05:38:55: /bin/su successfully executed

As we can see, this time it exited with success code 0.

So we do not have to take any action for this particular error.


Tempfile file# in alert.log

While investigating "Resize operation completed" messages in the alert.log, I came across a file number (#1023) that was much higher than expected.

Resize operation completed for file# 1023, old size 4198400K, new size 4249600K
Resize operation completed for file# 1023, old size 4249600K, new size 4300800K

I checked v$datafile and v$tempfile, but they only contained file numbers 1 through 6 and 1 respectively:

SQL> select file# from v$datafile;
FILE#
----------
1
2
3
4
5
6
6 rows selected.
SQL> select file# from v$tempfile;
FILE#
----------
1
I explicitly resized a datafile and a tempfile using the following commands, so that the "Resize operation completed" message would be logged in the alert.log again:

SQL> alter database datafile 'datafile name' resize <newsize>;
SQL> alter database tempfile 'tempfile name' resize <newsize>;

The message for file# 1023 was logged when the TEMPFILE was resized. After checking a few things, I found that TEMPFILE numbering in these messages starts at the DB_FILES parameter value + 1, + 2 and so on.

SQL> show parameter db_files
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_files                             integer     1022

DB_FILES is currently set to 1022, so the tempfile was reported as DB_FILES + 1 = 1023.

To test this further, I added another tempfile:

SQL> alter tablespace temp add tempfile '+DATADG' size 10m;
Tablespace altered.
SQL> select file# from v$tempfile;
FILE#
----------
1
2
SQL> alter database tempfile '+DATADG/ORCL/TEMPFILE/temp.627.925659727' resize 20m;
Database altered.

The new message in the alert.log was "Resize operation completed for file# 1024, old size 10240K, new size 20480K", i.e. DB_FILES + 2 = 1024.
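
To map such an alert.log file# back to a tempfile without doing the arithmetic by hand, a helper query along these lines works (my own addition, relying on the DB_FILES offset described above):

SQL> select t.file# tempfile#, t.file# + p.db_files alert_file#, t.name
       from v$tempfile t,
            (select to_number(value) db_files from v$parameter where name = 'db_files') p;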


Oracle clone failing with INS-40426

Recently I’ve seen this error in one of the clone operations.

I copied the binary tar of the Grid Home and executed:

export ORACLE_HOME=/u01/app/oracle/product/12.1.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LIBPATH
export TNS_ADMIN=$ORACLE_HOME/network/admin

runInstaller -silent -clone ORACLE_HOME=/u01/app/oracle/product/12.1.0/grid ORACLE_BASE=/u01/app/oracle ORACLE_HOME_NAME=Ora12102_Grid

This invoked the wrapper runInstaller that sits directly under /u01/app/oracle/product/12.1.0/grid (not the one under oui/bin).

The Oracle documentation was not really helpful, and Metalink did not have any references to this error:

INS-40426: Grid installation option has not been specified.

Cause: n/a
Action: n/a

But when the same clone command was executed using the runInstaller under /u01/app/oracle/product/12.1.0/grid/oui/bin, it worked.
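
For reference, the invocation that worked (the same arguments as above, just run from oui/bin):

cd /u01/app/oracle/product/12.1.0/grid/oui/bin
./runInstaller -silent -clone ORACLE_HOME=/u01/app/oracle/product/12.1.0/grid ORACLE_BASE=/u01/app/oracle ORACLE_HOME_NAME=Ora12102_Grid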

This error is caused by the -formCluster parameter that $ORACLE_HOME/runInstaller passes along.

That wrapper script defaults BUNDLE to "crs", which makes it execute the OUI CLI with the -formCluster flag:

BUNDLE=crs
….
….
….
case "$BUNDLE" in

crs)
$CMDDIR/install/.oui $* -formCluster -J-Doracle.install.setup.workDir=$CWDDIR -J-D${CVU_OS_SETTINGS}
