Oracle RAC Overview
Real Application Clusters (RAC) is a clustering solution in which multiple Oracle instances access one database located on shared storage.
Datafiles and controlfiles are shared, but each instance has its own redo log thread and undo tablespace.
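As a quick illustration of the per-instance redo threads, v$log can be queried on any instance; this is just a minimal sketch and the output will vary per cluster.
select thread#, group#, sequence#, status from v$log order by thread#, group#;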
Use crsctl to manage the clusterware
Use srvctl to manage the cluster resources (databases, instances, services, listeners, node applications)
New to RAC? Work through the RAC Attack labs for an excellent hands-on introduction, with step-by-step installations for 11g (RAC_Attack) and 12c (RACAttack).
How to Backup and Restore a RAC Database
RAC components
Virtual IP
A virtual IP (VIP) address is an IP address set up to “float” between the cluster nodes, alongside each node's real IP address. If the node hosting a VIP fails, the VIP is relocated to a surviving node so that clients get an immediate response instead of waiting for a TCP timeout, thus maintaining database availability for the users.
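To see how the VIPs are configured and where they are currently running (the node name below is just an example):
srvctl config vip -n node1
srvctl status vip -n node1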
Voting Disk
A Voting Disk is a file on shared storage. It maintains a list of available cluster nodes and is used to resolve split-brain scenarios.
Each cluster node writes a heartbeat to the voting disk to indicate that it is still active.
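The configured voting disks can be listed with:
crsctl query css votedisk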
Cluster Registry
The Oracle Cluster Registry (OCR) stores cluster-wide settings and status information such as node names, IP and VIP addresses, voting disk locations, node applications, database names, instance names, listener names, etc.
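To check where the OCR lives and verify its integrity (the ocr.loc path shown is the Linux location):
ocrcheck
cat /etc/oracle/ocr.loc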
Clusterware processes
A RAC cluster consists of the following daemons (a quick check that they are running is shown after the list):
- crsd – Cluster Ready Services Daemon
- cssd – Cluster Synchronisation Services Daemon
- evmd – Event Manager Daemon
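A simple OS-level check for the daemon processes:
ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep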
RAC commands
Location of Oracle Local Registry configuration file (olr.loc)
This file points to the cluster's local node repository (the Oracle Local Registry). Depending on the OS, it is in one of these two locations:
cat /etc/oracle/olr.loc
cat /var/opt/oracle/olr.loc
This file is managed by OHASD and points to the olrconfig_loc and the crs_home
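Typical contents look like the following (the paths here are purely illustrative):
olrconfig_loc=/u01/app/12.1.0/grid/cdata/node1.olr
crs_home=/u01/app/12.1.0/grid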
Location of cluster config files
/etc/oracle/scls_scr/$HOSTNAME
Some cluster control commands to check the status / health of the clusterware
crsctl status resource -t
crsctl status server
crsctl check cluster
crsctl check crs
crsctl query css votedisk
crsctl query crs activeversion
crsctl query crs releaseversion
crsctl query crs softwareversion
List network interfaces available to this cluster node
oifcfg iflist -p
Is Oracle Restart (High Availability Service) enabled?
cat /etc/oracle/scls_scr/$HOSTNAME/oracle/ohasdstr
crsctl check has
crsctl config has
crsctl start has
crsctl stop has
crsctl disable has
crsctl enable has
Is ASM autostart enabled?
crsctl stat res ora.asm
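The autostart setting itself is the AUTO_START attribute in the resource profile:
crsctl stat res ora.asm -p | grep AUTO_START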
Are diskgroups registered?
srvctl status diskgroup -g DATA
crsctl stat res ora.DATA.dg
Is database registered and autostart enabled?
srvctl config database -d <database>
if not…
srvctl add database -d <database> -o <oracle_home>
srvctl enable database -d <database>
Set ASM diskgroups to be used by database
srvctl modify database -d <database> -diskgroup "DATA,FRA"
Stop and start the clusterware services
To set the correct home, you can also search /etc/oratab for the relevant ASM instance and use . oraenv
export ORA_CRS_HOME=$(awk -F: '/^\+ASM/ {print $2}' /etc/oratab)
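Alternatively, oraenv can set the environment from /etc/oratab (this sketch assumes the local ASM SID is +ASM1; ORAENV_ASK=NO suppresses the prompt), after which ORA_CRS_HOME can be pointed at the home it sets:
export ORACLE_SID=+ASM1
export ORAENV_ASK=NO
. oraenv
export ORA_CRS_HOME=$ORACLE_HOME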
$ORA_CRS_HOME/bin/crsctl start crs
$ORA_CRS_HOME/bin/crsctl stop crs
Prevent cluster services from starting up on boot
$ORA_CRS_HOME/bin/crsctl disable crs
Re-enable clusterware services
$ORA_CRS_HOME/bin/crsctl enable crs
List the resources under cluster control
$ORA_CRS_HOME/bin/crsctl status resource -t
or
$ORA_CRS_HOME/bin/crsctl stat res -t
or
crsctl status res | grep -v "^$" | awk -F "=" 'BEGIN {print " "} {printf("%s", NR%4 ? $2"|" : $2"\n")}' | sed -e 's/ *, /,/g' -e 's/, /,/g' | \
awk -F "|" 'BEGIN { printf "%-40s%-35s%-20s%-50s\n","Resource Name","Resource Type","Target ","State" } { split($3,trg,","); split($4,st,",") } { for (i in trg) { printf "%-40s%-35s%-20s%-50s\n", $1, $2, trg[i], st[i] } }'
Find the ORACLE_HOME from a database cluster metadata file
crsctl stat res ora.proddb11.db -p | grep ORACLE_HOME
Stop all the cluster resources for a particular ORACLE_HOME
If all resources for a home need to go down (e.g. for patching), this should do it.
Save the state so that the start command knows what to start and what not to.
srvctl stop home -o /oracle/product/11.2.0.4 -s /oracle/state_file_11204
or
srvctl stop home -oraclehome /oracle/product/11.2.0.4 -statefile /oracle/state_file_11204
or
srvctl stop home -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 -statefile /home/oracle/home_state_20170111 -node sdtcsynoda02-rac -stopoption IMMEDIATE -force
Before running the startup, the state file can be edited to add or remove instances, as it is a simple text file
srvctl start home -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -s /oracle/state_file_12102 -node sdtcsynoda02-rac
General syntax for starting and stopping a RAC instance
srvctl start|stop instance -d <db_unique_name> -i <inst_name_list> [-o start_options | -o stop_options] [-c connect_str | -q]
Stop one instance of a RAC database
srvctl stop instance -d <db_name> -i <instance_name>
Start an ASM instance on a specific node
srvctl start asm -n <node_name> e.g. srvctl start asm -n asmnode1
Stop all instances of a RAC database
srvctl stop database -db <db_name> e.g. srvctl stop database -db ACCINGPX
Start all instances of a RAC database
srvctl start database -db <db_name> e.g. srvctl start database -db ACCINGPX
Start a database in mount mode
srvctl start database -db <db_name> -o <start option> e.g. srvctl start database -db ACCINGPX -o mount
Stop ASM on a RAC node
srvctl stop asm -n <rac node> e.g. srvctl stop asm -n asmnode1 -o immediate
Start / Stop all node applications on a RAC node
srvctl start nodeapps -n <rac node>
srvctl stop nodeapps -n <rac node>
srvctl status nodeapps -n <rac node>
Start / Stop a listener on a RAC node
srvctl start|stop listener -n node_name [-l listener_name_list]
e.g.
srvctl start listener -n node2
srvctl start listener -n node2 -l DATAGUARD_LISTENER
srvctl stop scan_listener -i 1
Start / Stop services on a RAC node
The srvctl stop database command implicitly does a srvctl stop service (because the services depend on the database).
However, a subsequent srvctl start database requires an explicit srvctl start service.
- the service name list is optional; if not provided, srvctl starts all of the database's services
- -c connect_str Connect string (default: / as sysdba)
- -q Query connect string from standard input
srvctl start service -d <db_unique_name> [-s service_name_list [-i <inst_name>]] [-o <start_options>] [-c connect_str | -q]
srvctl stop service -d <db_unique_name> [-s service_name_list [-i <inst_name>]] [-c connect_str | -q] [-f]
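For example, the first command below also stops the database's services, but after the restart the service must be started explicitly (database and service names are illustrative):
srvctl stop database -d ORACLE
srvctl start database -d ORACLE
srvctl start service -d ORACLE -s ERP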
Status of an ASM instance on a RAC node
srvctl status asm -n sdtcsynoda01-rac
srvctl status asm -n sdtcsynoda02-rac
Status of the database, all instances and all services
srvctl status database -d TSTEV3 -v
Instance TSTEV31 is running on node sdtcsynoda01-rac
Instance TSTEV32 is running on node sdtcsynoda02-rac
Status of named instances with their current services (if on local node)
srvctl status instance -d TSTEV3 -i TSTEV31 -v
Instance TSTEV31 is running on node sdtcsynoda01-rac. Instance status: Open.
srvctl status instance -d TSTEV3 -i TSTEV32 -v
Instance TSTEV32 is running on node sdtcsynoda02-rac with online services DOTS. Instance status: Open.
Add a new node
srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A 139.184.201.1/255.255.255.0/hme0
Add a new database
srvctl add database -d ORACLE -o $ORACLE_HOME
e.g.
srvctl add database -d MASTASPO -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -r PHYSICAL_STANDBY -s MOUNT
Add named instances to an existing database
srvctl add instance -d ORACLE -i RAC01 -n myclust-1
srvctl add instance -d ORACLE -i RAC02 -n myclust-2
srvctl add instance -d ORACLE -i RAC03 -n myclust-3
e.g.
srvctl add instance -d MASTASPO -i MASTASPO -n sdtcoda01-rac
Add a service to an existing database with preferred instances (-r) and available instances (-a). Use basic failover to the available instances
srvctl add service -d ORACLE -s STD_BATCH -r RAC01 -a RAC02
Add a service to an existing database with preferred instances (-r) and available instances (-a). Use preconnect at the available instances
srvctl add service -d ORACLE -s STD_BATCH -r RAC01 -a RAC02 -P PRECONNECT
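To verify the preferred and available instances after adding a service:
srvctl config service -d ORACLE -s STD_BATCH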
Remove the applications for a database
srvctl remove database -d ORACLE
Remove the applications for named instances of an existing database
srvctl remove instance -d ORACLE -i RAC03
srvctl remove instance -d ORACLE -i RAC04
Remove the service
srvctl remove service -d ORACLE -s STD_BATCH
Remove the service from the instances
srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
Remove all node applications from a node
srvctl remove nodeapps -n myclust-4
Modify an instance to execute on another node
srvctl modify instance -d ORACLE -i RAC04 -n myclust-4
Modify a service to execute on another node
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC01 -t RAC02
Modify an instance to be a preferred instance for a service
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC02 -r
Relocate a service from one instance to another
srvctl relocate service -d orac -s CRM -i RAC04 -t RAC01
Relocate a database from one node (server) to another (RAC One Node)
srvctl relocate database -d orac -n server02
Enable CRS resources (resources can be up or down when these commands are issued)
Enable the database
srvctl enable database -d ORACLE
Enable the named instances
srvctl enable instance -d ORACLE -i RAC01,RAC02
Enable the service
srvctl enable service -d ORACLE -s ERP,CRM
Enable the service at the named instance
srvctl enable service -d ORACLE -s CRM -i RAC03
Disable CRS resources (resources must be down when these commands are issued)
Disable the database globally
srvctl disable database -d ORACLE
Disable the named instances
srvctl disable instance -d ORACLE -i RAC01,RAC02
Disable the service(s) globally
srvctl disable service -d ORACLE -s ERP,CRM
Disable the service on the named instance
srvctl disable service -d ORACLE -s CRM -i RAC02
Status of a named service
srvctl status service -d TSTEV3 -s DOTS -v
Status of all nodes supporting database applications
srvctl status nodeapps
How to tell if database is a RAC cluster
The cluster_database parameter will be TRUE for a RAC database
show parameter cluster
or, the GV$ views will show more than 1 record
select * from gv$instance;
(output trimmed to key columns)

   INST_ID INSTANCE_NAME    HOST_NAME    VERSION           STATUS          THREAD#
---------- ---------------- ------------ ----------------- ------------ ----------
         2 DEV32            oda02-rac    12.1.0.2.0        OPEN                  2
         1 DEV31            oda01-rac    12.1.0.2.0        OPEN                  1

2 rows selected.
or
set serverout on
begin
if dbms_utility.is_cluster_database then
dbms_output.put_line('Running in RAC mode.');
else
dbms_output.put_line('Running in EXCLUSIVE mode.');
end if;
end;
/
or
select * from v_$active_instances;
or
select * from v_$thread;
Show session distribution across the RAC nodes
select inst_id, count(*) sessions
from   gv$session
where  type = 'USER'
group  by inst_id
/
