Real Application Clusters (RAC) is a clustering solution in which multiple Oracle instances access a single database located on shared storage.
Datafiles and controlfiles are shared, but each instance has its own redo log thread and undo tablespace.
Use crsctl to manage the clusterware stack itself
Use srvctl to manage cluster resources (databases, instances, services, listeners, node applications)
Know nothing about RAC? The RAC Attack labs (available for both 11g and 12c) are an excellent hands-on introduction, with a step-by-step installation walkthrough.
A virtual IP (VIP) address is an IP address set up to “float” between the real IP addresses of the nodes in the cluster. If the node holding a VIP becomes unavailable, the VIP is reassigned to a surviving node, so clients fail over quickly instead of waiting on a TCP timeout and database availability is maintained.
A Voting Disk is a file on a shared filesystem. It maintains a list of available cluster nodes and can resolve split-brain scenarios.
The cluster services on every node write heartbeats to the voting disk to indicate that the node is still active.
The Oracle Cluster Registry (OCR) stores cluster-wide settings and status information such as node names, IP and VIP addresses, voting disk locations, node applications, database names, instance names, listener names, etc.
A RAC cluster consists of several daemons, the principal ones being OHASD (Oracle High Availability Services), CRSD (Cluster Ready Services), CSSD (Cluster Synchronization Services) and EVMD (Event Manager).
The olr.loc file points to the cluster's local node repository (the OLR) and, depending on the OS, is in one of these two locations:
cat /etc/oracle/olr.loc
cat /var/opt/oracle/olr.loc
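The two entries inside olr.loc can be pulled out individually with awk. A minimal sketch against a hypothetical copy in /tmp (the paths below are illustrative, not from a real cluster):

```shell
# Hypothetical olr.loc contents for illustration; real paths differ per cluster.
cat > /tmp/olr.loc <<'EOF'
olrconfig_loc=/u01/app/12.1.0/grid/cdata/node1.olr
crs_home=/u01/app/12.1.0/grid
EOF

# Extract each key=value entry by key.
awk -F= '$1 == "olrconfig_loc" {print $2}' /tmp/olr.loc
awk -F= '$1 == "crs_home" {print $2}' /tmp/olr.loc
```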
This file is managed by OHASD and points to the olrconfig_loc and the crs_home:
/etc/oracle/scls_scr/$HOSTNAME
crsctl status resource -t
crsctl status server
crsctl check cluster
crsctl check crs
crsctl query css votedisk
crsctl query crs activeversion
crsctl query crs releaseversion
crsctl query crs softwareversion
oifcfg iflist -p
cat /etc/oracle/scls_scr/$HOSTNAME/oracle/ohasdstr
crsctl check has
crsctl config has
crsctl start has
crsctl stop has
crsctl disable has
crsctl enable has
crsctl stat res ora.asm
srvctl status diskgroup -g DATA
crsctl stat res ora.DATA.dg
srvctl config database -d <database>
If not:
srvctl add database -d <database>
srvctl enable database -d <database>
srvctl modify database -d <database> -diskgroup "DATA,FRA"
To set the correct home, you can also search /etc/oratab for the relevant ASM instance and use . oraenv
export ORA_CRS_HOME=$(awk -F: '/+ASM/ {print $2}' /etc/oratab)
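The awk extraction above can be sanity-checked against a sample file without touching the real /etc/oratab. A sketch with hypothetical oratab entries (the format is sid:oracle_home:startflag):

```shell
# Hypothetical /etc/oratab entries for illustration.
cat > /tmp/oratab.sample <<'EOF'
+ASM1:/u01/app/12.1.0/grid:N
PROD1:/u01/app/oracle/product/12.1.0.2/dbhome_1:N
EOF

# Same extraction as above, pointed at the sample file
# (the + is escaped so the regex is portable across awk implementations).
ORA_CRS_HOME=$(awk -F: '/\+ASM/ {print $2}' /tmp/oratab.sample)
echo "$ORA_CRS_HOME"    # prints /u01/app/12.1.0/grid
```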
$ORA_CRS_HOME/bin/crsctl start crs
$ORA_CRS_HOME/bin/crsctl stop crs
$ORA_CRS_HOME/bin/crsctl disable crs
$ORA_CRS_HOME/bin/crsctl enable crs
$ORA_CRS_HOME/bin/crsctl status resource -t
or the short form
$ORA_CRS_HOME/bin/crsctl stat res -t
or
crsctl status res | grep -v "^$" | awk -F "=" 'BEGIN {print " "} {printf("%s", NR%4 ? $2"|" : $2"\n")}' | \
sed -e 's/ *, /,/g' -e 's/, /,/g' | \
awk -F "|" 'BEGIN {printf "%-40s%-35s%-20s%-50s\n","Resource Name","Resource Type","Target","State"}
{split($3,trg,","); split($4,st,",")}
{for (i in trg) printf "%-40s%-35s%-20s%-50s\n", $1, $2, trg[i], st[i]}'
crsctl stat res ora.proddb11.db -p | grep ORACLE_HOME
If all resources for a home need to go down (e.g. for patching), this should do it.
Save the state so that the start command knows what to start and what not to.
srvctl stop home -o /oracle/product/11.2.0.4 -s /oracle/state_file_11204
or
srvctl stop home -oraclehome /oracle/product/11.2.0.4 -statefile /oracle/state_file_11204
or
srvctl stop home -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 -statefile /home/oracle/home_state_20170111 -node sdtcsynoda02-rac -stopoption IMMEDIATE -force
Before running the startup, the state file can be edited (it is a simple text file) to add or remove instances.
srvctl start home -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -s /oracle/state_file_12102 -node sdtcsynoda02-rac
srvctl start|stop instance -d <db_unique_name> -i <inst_name_list> [-o start_options|stop_options] [-c connect_str | -q]
srvctl stop instance -d <db_name> -i <instance_name>
srvctl start instance -d <db_name> -n <node_name> e.g. srvctl start instance -d prodctl -n asmnode1
srvctl stop database -db <db_name> e.g. srvctl stop database -db ACCINGPX
srvctl start database -db <db_name> e.g. srvctl start database -db ACCINGPX
srvctl start database -db <db_name> -o <start option> e.g. srvctl start database -db ACCINGPX -o mount
srvctl stop asm -n <rac node> e.g. srvctl stop asm -n asmnode1 -o immediate
srvctl start nodeapps -n <rac node>
srvctl stop nodeapps -n <rac node>
srvctl status nodeapps -n <rac node>
srvctl start|stop listener -n node_name [-l listener_name_list]
e.g.
srvctl start listener -n node2
srvctl start listener -n node2 -l DATAGUARD_LISTENER
srvctl stop scan_listener
The srvctl stop database command implicitly does a srvctl stop service (because services are dependent on the database).
However, a subsequent srvctl start database requires an explicit srvctl start service.
srvctl start service -d <db_unique_name> [-s service_name_list [-i <inst_name>]] [-o <start_options>] [-c connect_str | -q]
srvctl stop service -d <db_unique_name> [-s service_name_list [-i inst_name]] [-c connect_str | -q] [-f]
srvctl status asm -n sdtcsynoda01-rac
srvctl status asm -n sdtcsynoda02-rac
srvctl status database -d TSTEV3 -v
Instance TSTEV31 is running on node sdtcsynoda01-rac Instance TSTEV32 is running on node sdtcsynoda02-rac
srvctl status instance -d TSTEV3 -i TSTEV31 -v
Instance TSTEV31 is running on node sdtcsynoda01-rac. Instance status: Open.
srvctl status instance -d TSTEV3 -i TSTEV32 -v
Instance TSTEV32 is running on node sdtcsynoda02-rac with online services DOTS. Instance status: Open.
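When scripting across many databases, the verbose status output above can be reduced to instance:node pairs. A sketch run against a captured copy of that output (no live cluster needed):

```shell
# Sample output captured from "srvctl status database -d TSTEV3 -v" above.
srvctl_out='Instance TSTEV31 is running on node sdtcsynoda01-rac
Instance TSTEV32 is running on node sdtcsynoda02-rac'

# Field 2 is the instance name; the last field is the node.
echo "$srvctl_out" | awk '/is running on node/ {print $2 ":" $NF}'
# prints:
# TSTEV31:sdtcsynoda01-rac
# TSTEV32:sdtcsynoda02-rac
```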
srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A 139.184.201.1/255.255.255.0/hme0
srvctl add database -d ORACLE -o $ORACLE_HOME
e.g.
srvctl add database -d MASTASPO -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -r PHYSICAL_STANDBY -s MOUNT
srvctl add instance -d ORACLE -i RAC01 -n myclust-1
srvctl add instance -d ORACLE -i RAC02 -n myclust-2
srvctl add instance -d ORACLE -i RAC03 -n myclust-3
e.g.
srvctl add instance -d MASTASPO -i MASTASPO -n sdtcoda01-rac
srvctl add service -d ORACLE -s STD_BATCH -r RAC01 -a RAC02
srvctl add service -d ORACLE -s STD_BATCH -r RAC01 -a RAC02 -P PRECONNECT
srvctl remove database -d ORACLE
srvctl remove instance -d ORACLE -i RAC03 srvctl remove instance -d ORACLE -i RAC04
srvctl remove service -d ORACLE -s STD_BATCH
srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
srvctl remove nodeapps -n myclust-4
srvctl modify instance -d ORACLE -n myclust-4
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC01 -t RAC02
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC02 –r
srvctl relocate service -d orac -s CRM -i RAC04 -t RAC01
srvctl relocate db -d orac -n server02
srvctl enable database -d ORACLE
srvctl enable instance -d ORACLE -i RAC01,RAC02
srvctl enable service -d ORACLE -s ERP,CRM
srvctl enable service -d ORACLE -s CRM -i RAC03
srvctl disable database -d ORACLE
srvctl disable instance -d ORACLE -i RAC01,RAC02
srvctl disable service -d ORACLE -s ERP,CRM
srvctl disable service -d ORACLE -s CRM -i RAC02
srvctl status service -d TSTEV3 -s DOTS -v
srvctl status nodeapps
The cluster_database parameter will be TRUE for a RAC database:
show parameter cluster
or, the GV$ views will return more than one row:
select * from gv$instance;
   INST_ID INSTANCE_NAME    HOST_NAME    VERSION     STARTUP_T STATUS THREAD# INSTANCE_ROLE     (output trimmed)
---------- ---------------- ------------ ----------- --------- ------ ------- -----------------
         2 DEV32            oda02-rac    12.1.0.2.0  20-JUL-16 OPEN         2 PRIMARY_INSTANCE
         1 DEV31            oda01-rac    12.1.0.2.0  13-OCT-16 OPEN         1 PRIMARY_INSTANCE

2 rows selected.
or
set serverout on
begin
if dbms_utility.is_cluster_database then
dbms_output.put_line('Running in RAC mode.');
else
dbms_output.put_line('Running in EXCLUSIVE mode.');
end if;
end;
/
or
select * from v$active_instances;
or
select * from v$thread;
select inst_id, count(*) sessions
from gv$session
where type = 'USER'
group by inst_id;