Way to success...


Sunday, February 28, 2016

Oracle Cluster commands


To stop all cluster services, ASM, databases and instances:

Login as root 

export GRID_HOME=/oradata/oracle/TSTDB/bin/
cd $GRID_HOME

[root@rac01 ]# ./crsctl stop cluster

To start all cluster services, ASM, databases and instances:

Login as root 

export GRID_HOME=/oradata/oracle/TSTDB/bin/
cd $GRID_HOME

[root@rac01 ]# ./crsctl start cluster




To check, stop or start the cluster on all nodes at once:

Login as root

export GRID_HOME=/oradata/oracle/TSTDB/bin/
cd $GRID_HOME

[root@rac01 ]# ./crsctl check cluster -all    [verify cluster status on all nodes]
[root@rac01 ]# ./crsctl stop cluster -all     [stop the cluster on all nodes]
[root@rac01 ]# ./crsctl start cluster -all    [start the cluster on all nodes]
[root@rac01 ]# ./crsctl check cluster -n <nodename>    [verify cluster status on a particular remote node]
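The per-node check can also be driven from a small loop. A sketch, assuming the hypothetical node names rac01 and rac02 from this post; it only prints the commands it would run, so it can be read (and tested) without a live cluster:

```shell
#!/bin/sh
# Sketch: build the per-node "crsctl check cluster" commands in a loop.
# GRID_HOME and the node names are placeholders from this post; on a real
# node, run the generated commands as root instead of echoing them.
GRID_HOME=${GRID_HOME:-/oradata/oracle/TSTDB}

for node in rac01 rac02; do
    cmd="$GRID_HOME/bin/crsctl check cluster -n $node"
    echo "$cmd"
done
```

Echoing the command first (a dry run) is a cheap way to sanity-check the loop before letting it touch the cluster.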



To check CRS (Cluster Ready Services) status:

Login as root

export GRID_HOME=/oradata/oracle/TSTDB/bin/
cd $GRID_HOME

[root@rac01 ]# ./crsctl check crs
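A healthy node reports all four stack components as online. The sketch below counts the "is online" lines; the sample output is inlined here so the logic can be followed without a cluster (the exact CRS-nnnn message numbers may vary by version), and on a live node you would pipe the real `./crsctl check crs` output instead:

```shell
#!/bin/sh
# Sketch: verify that all four CRS stack components report "online".
# Sample output is typical of 11.2; on a live node replace the heredoc
# with: $GRID_HOME/bin/crsctl check crs
sample_output=$(cat <<'EOF'
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
EOF
)

online=$(printf '%s\n' "$sample_output" | grep -c "is online")
if [ "$online" -eq 4 ]; then
    echo "CRS stack healthy"
else
    echo "CRS stack degraded: only $online of 4 services online"
fi
```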





To check the status of node applications (nodeapps), where node is the name of the node where the applications are running:



Login as root

cd /oradata/oracle/db/tech_st/11.2.0/bin/

[root@rac01 ]# ./srvctl status nodeapps -n node


crs_stat: Displays CRS resource status (deprecated in 11gR2; crsctl stat res -t is the replacement):


Login as root

export GRID_HOME=/oradata/oracle/TSTDB/bin/
cd $GRID_HOME

[root@rac01 ]# ./crs_stat -t
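The tabular output lends itself to quick filtering, e.g. listing only resources that are not where Clusterware wants them. A sketch with illustrative sample rows inlined (the resource names are placeholders from this post); on a live node pipe the real `./crs_stat -t` output instead:

```shell
#!/bin/sh
# Sketch: flag resources whose State column shows OFFLINE in
# crs_stat -t style output (columns: Name, Type, Target, State, Host).
sample=$(cat <<'EOF'
Name           Type           Target    State     Host
------------------------------------------------------------
ora.TSTDB.db   application    ONLINE    ONLINE    rac01
ora.oc4j       application    ONLINE    OFFLINE   rac01
EOF
)

# Skip the two header lines, then print any row with State = OFFLINE.
offline=$(printf '%s\n' "$sample" | awk 'NR>2 && $4=="OFFLINE" {print $1 " is OFFLINE on " $5}')
echo "$offline"
```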



To start particular resource:

Login as root

export GRID_HOME=/oradata/oracle/TSTDB/bin/
cd $GRID_HOME

[root@rac01 ]# ./crsctl start resource ora.oc4j




RAC Database Startup, Stop and Status using srvctl


RAC database startup, stop and status in NORMAL mode:

Run the commands below from the primary node as the oracle software owner:

srvctl stop database -d DB_NAME

srvctl start database -d DB_NAME

srvctl status database -d DB_NAME
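The three calls above are often combined into a bounce helper. A sketch in which srvctl is stubbed with a shell function so the flow can be followed (and tested) without a cluster; DB_NAME here is the hypothetical TSTDB from this post, and on a real node you would remove the stub:

```shell
#!/bin/sh
# Stub for illustration only: echoes what would be run.
srvctl() { echo "srvctl $*"; }

# Sketch: stop, start, then verify a RAC database in one step.
bounce_db() {
    db=$1
    srvctl stop database -d "$db"
    srvctl start database -d "$db"
    srvctl status database -d "$db"
}

bounce_db TSTDB
```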


RAC Database Startup in MOUNT / NOMOUNT / RESTRICT mode:

Startup in mount mode:

srvctl start database -d DB_NAME -o mount

Startup in nomount mode:

srvctl start database -d DB_NAME -o nomount

Startup in restrict mode:

srvctl start database -d DB_NAME -o restrict


START / STOP Specific RAC Instance:

To Stop specific RAC instance:

srvctl stop instance -d DB_NAME -i INSTANCE_NAME


To Start specific RAC instance:

srvctl start instance -d DB_NAME -i INSTANCE_NAME
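Because instances can be stopped individually, a rolling bounce keeps the database available while each instance restarts in turn. A sketch with srvctl stubbed and the hypothetical instance names TSTDB1/TSTDB2; on a real node drop the stub and confirm each instance is back up before moving on:

```shell
#!/bin/sh
# Stub for illustration only: echoes what would be run.
srvctl() { echo "srvctl $*"; }

# Sketch: restart instances one at a time so at least one stays up.
for inst in TSTDB1 TSTDB2; do
    srvctl stop instance -d TSTDB -i "$inst"
    srvctl start instance -d TSTDB -i "$inst"
done
```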


PRVF-4664 : Found inconsistent name resolution entries for SCAN name


Issue:

INFO: PRVF-4657 : Name resolution setup check for "rac-scan" (IP address: 192.168.1.50) failed
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan" (IP address: 192.168.1.40) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan"
INFO: Verification of SCAN VIP and Listener setup failed




Cause:

In my case DNS was configured, but I was still getting this error because SCAN host entries had also been added to the /etc/hosts file.

[root@rac02 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

#Public IP
192.168.1.10   rac01.dba.com  rac01
192.168.1.20   rac02.dba.com  rac02

#Private IP
192.168.2.10  rac01-priv.dba.com        rac01-priv
192.168.2.20  rac02-priv.dba.com        rac02-priv

#Virtual IP
192.168.1.11 rac01-vip.dba.com          rac01-vip
192.168.1.21 rac02-vip.dba.com          rac02-vip

#SCAN IP
192.168.1.30   rac-scan.dba.com        rac-scan
192.168.1.40   rac-scan.dba.com        rac-scan
192.168.1.50    rac-scan.dba.com        rac-scan
[root@rac02 ~]#
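A quick way to spot this condition is to count the uncommented hosts-file lines that resolve the SCAN name; any such line alongside DNS entries is what triggers PRVF-4664. A sketch with the hosts content inlined (matching the example above) so it can be followed without a node; on a real system read /etc/hosts instead:

```shell
#!/bin/sh
# Sketch: count uncommented hosts-file lines carrying the SCAN name.
scan_name=rac-scan
hosts=$(cat <<'EOF'
192.168.1.10   rac01.dba.com  rac01
192.168.1.30   rac-scan.dba.com        rac-scan
192.168.1.40   rac-scan.dba.com        rac-scan
192.168.1.50   rac-scan.dba.com        rac-scan
EOF
)

# Filter out commented lines, then count whole-word matches of the name.
count=$(printf '%s\n' "$hosts" | grep -v '^#' | grep -cw "$scan_name")
echo "$scan_name appears on $count hosts-file line(s)"
```

When DNS serves the SCAN, this count should be zero.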


Solution:

The fix is to remove or comment out the SCAN host entries in /etc/hosts and then reconfigure the SCAN listener.

Comment the SCAN host entries as shown below: 

[root@rac02 ~]# cat /etc/hosts

#SCAN IP
#192.168.1.30   rac-scan.dba.com        rac-scan
#192.168.1.40   rac-scan.dba.com        rac-scan
#192.168.1.50    rac-scan.dba.com        rac-scan


Steps to reconfigure the scan listener:

[grid@rac01 ~]$ cd /u01/11.2.0/grid
[grid@rac01 grid]$ cd bin
[grid@rac01 bin]$ ./srvctl stop scan_listener
[grid@rac01 bin]$ ./srvctl stop scan
[grid@rac01 bin]$ ./srvctl config scan
SCAN name: rac-scan, Network: 1/192.168.1.0/255.255.255.0/eth1

SCAN VIP name: scan1, IP: /rac-scan/192.168.1.30


[grid@rac01 bin]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1561
[grid@rac01 bin]$
[grid@rac01 bin]$ exit
logout
[root@rac01 rpm]# cd ..
[root@rac01 grid]# cd /u01/11.2.0/grid/bin


[root@rac01 bin]# ./srvctl modify scan -h

Modifies the SCAN name.

Usage: srvctl modify scan -n <scan_name>
    -n <scan_name>           Domain name qualified SCAN name
    -h                       Print usage
[root@rac01 bin]# ./srvctl modify scan -n rac-scan


[root@rac01 bin]# ./srvctl config scan
SCAN name: rac-scan, Network: 1/192.168.1.0/255.255.255.0/eth1
SCAN VIP name: scan1, IP: /rac-scan/192.168.1.40
SCAN VIP name: scan2, IP: /rac-scan/192.168.1.50
SCAN VIP name: scan3, IP: /rac-scan/192.168.1.30


[root@rac01 bin]# ./srvctl modify scan_listener -u
[root@rac01 bin]# ./srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1561
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1561
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1561


[root@rac01 bin]# ./srvctl start scan_listener
[root@rac01 bin]# ./srvctl config scan
SCAN name: rac-scan, Network: 1/192.168.1.0/255.255.255.0/eth1
SCAN VIP name: scan1, IP: /rac-scan/192.168.1.40
SCAN VIP name: scan2, IP: /rac-scan/192.168.1.50
SCAN VIP name: scan3, IP: /rac-scan/192.168.1.30
[root@rac01 bin]#
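The whole reconfigure sequence above condenses into a short script. A sketch with srvctl stubbed so the order of operations can be read end to end; on a real node drop the stub, run from $GRID_HOME/bin, and note that in the transcript above the stop/start steps ran as grid while the modify steps ran as root:

```shell
#!/bin/sh
# Stub for illustration only: echoes what would be run.
srvctl() { echo "srvctl $*"; }

# Sketch: re-register the SCAN after fixing /etc/hosts.
srvctl stop scan_listener
srvctl stop scan
srvctl modify scan -n rac-scan        # re-resolves rac-scan, now via DNS only
srvctl modify scan_listener -u        # update listeners to match the VIP count
srvctl start scan_listener
srvctl config scan                    # confirm all three SCAN VIPs are registered
```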