==== Oracle Snap Management Utility (SMU) ====
SMU can be accessed via the command-line interface (CLI) as well as the standard browser user interface (BUI).
cat /opt/oracle/smu/etc/smu.conf
Locate the line "sshd.port=" and connect on that port as the root user with its password:
ssh -p <port> root@<smu-host>
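The port lookup can be scripted; a minimal sketch, assuming ''smu.conf'' uses the plain key=value format shown above (''smu-host'' is a placeholder for your SMU hostname):

```shell
#!/bin/sh
# Pull the SMU ssh port out of smu.conf and print the ssh command to use.
# Assumes simple key=value lines; smu-host is a placeholder hostname.
CONF=${CONF:-/opt/oracle/smu/etc/smu.conf}
PORT=$(awk -F= '/^sshd\.port=/ {print $2}' "$CONF" 2>/dev/null)
echo "ssh -p $PORT root@smu-host"
```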
==== Create a clone database from an RMAN backup using SMU ====
Copied from [[https://frankdba.wordpress.com/|frankdba]] in case his website goes down. This is too useful to lose!\\
Source database PRD1S1\\
Target database ACC1\\
Two NFS mounts in /etc/fstab
10.60.30.10:/export/acc1-ctl /u33/backup/prd1s1/rman/ctl nfs rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0 0 0
10.66.30.10:/export/acc1-dbf /u33/backup/prd1s1/rman/dbf nfs rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0 0 0
Make sure these mount points exist
mkdir -p /u33/backup/prd1s1/rman/ctl
mkdir -p /u33/backup/prd1s1/rman/dbf
Mount them as user root
mount /u33/backup/prd1s1/rman/ctl
mount /u33/backup/prd1s1/rman/dbf
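Before running the backup it is worth confirming both NFS shares are actually mounted; a small sketch using mountpoint(1) from util-linux, with the paths from the fstab entries above:

```shell
#!/bin/sh
# Verify the NFS backup shares are mounted before the RMAN backup runs.
for d in /u33/backup/prd1s1/rman/ctl /u33/backup/prd1s1/rman/dbf; do
    if mountpoint -q "$d"; then
        echo "$d: mounted"
    else
        echo "$d: NOT mounted"
    fi
done
```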
Ensure the export directories on the ZFS appliance have the correct permissions; they can end up exported as user nobody instead of the oracle database owner.
zfs01:> shares
zfs01:> list #this will show project names
ACC01
ACC02
zfs01:> select ACC01
zfs01:> show
..
..
Filesystems:
NAME SIZE MOUNTPOINT
acc1-ctl 1.91G /export/acc1-ctl
acc1-dbf 8.57T /export/acc1-dbf
zfs01:shares ACC01> get default_user
default_user = 1001 #UID of user oracle on compute node
zfs01:shares ACC01> get default_group
default_group = 1002 #GID of dba group on compute node
To adjust, use:
zfs01:shares ACC01> set default_user = 1003
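The UID/GID match can also be checked from the compute node side; a sketch, assuming the example values above (oracle = 1001, dba = 1002) and the standard ''id''/''getent'' tools:

```shell
#!/bin/sh
# Compare the compute node's oracle UID and dba GID with the share's
# default_user/default_group (1001/1002 in this example).
EXPECT_UID=1001
EXPECT_GID=1002
uid=$(id -u oracle 2>/dev/null)
gid=$(getent group dba | awk -F: '{print $3}')
if [ "$uid" = "$EXPECT_UID" ] && [ "$gid" = "$EXPECT_GID" ]; then
    echo "share ownership matches compute node"
else
    echo "mismatch: fix with 'set default_user=...' / 'set default_group=...' on the share"
fi
```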
Back up the database with RMAN:
run {
set nocfau; # NO Control File AUto backup (we take a specific backup of control file)
backup as copy database format '/u33/backup/prd1s1/rman/dbf/%U' tag='FRANK';
backup as copy archivelog all format '/u33/backup/prd1s1/rman/ctl/%U';
backup as copy current controlfile format '/u33/backup/prd1s1/rman/ctl/%U';
}
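The run block above can be driven non-interactively; a sketch, assuming OS authentication (''/ as sysdba'') and ''rman'' in the PATH:

```shell
#!/bin/sh
# Write the RMAN run block to a command file; the actual rman invocation
# is left commented out since it needs a live instance.
cat > /tmp/backup_as_copy.rman <<'EOF'
run {
  set nocfau;
  backup as copy database format '/u33/backup/prd1s1/rman/dbf/%U' tag='FRANK';
  backup as copy archivelog all format '/u33/backup/prd1s1/rman/ctl/%U';
  backup as copy current controlfile format '/u33/backup/prd1s1/rman/ctl/%U';
}
EOF
# rman target / cmdfile=/tmp/backup_as_copy.rman log=/tmp/backup_as_copy.log
```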
Note: for this procedure to work, the database has to be backed up using 'as copy'.
Ensure we have all the redo necessary for the recovery by shipping some more archivelogs over and backing them up.\\
On the primary database...
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> alter system checkpoint;
run {
set nocfau;
backup as copy archivelog all format '/u33/backup/prd1s1/rman/ctl/%U';
}
From SMU, deprovision (drop) any existing database of the same name, just in case:
smu> tasks add -F deprovision acc1
Create an application account for the database creation:
smu> accounts add -t APPLICATION -p ORACLE_DATABASE -o cluster_database=false -o host=db01 -o oracle_sid=ACC1 -o password=***** -o port=1521 -o storage=zfs01 ACC1
Create the database
smu> tasks add -F import -o oracle_home=/u01/app/oracle/product/11.2.0.4/dbhome_1 /export/acc1-dbf,/export/acc1-ctl ACC1
Watch the progress using the task id from the import above
smu> tasks tail -f 455
Getting storage appliance node name
Getting network interface address(s)
Verifying target appliance instance does not already exist
Creating snapshots of backup shares
Finding share(s)
Taking snapshots of share(s)
Cloning snapshot(s)
Finding share(s)
Cloning share snapshot(s)
Mounting the clone share(s)
Mounting the backup
Scanning files in the backup
Getting the backup block size
Starting temporary instance in nomount stage
Shutting down instance
Creating revised temporary pfile
Starting temporary instance in mount stage
Calculating the max SCN and FRA size
Getting backup configuration
Dismounting the backup
Shutting down temporary instance
Configuring and starting clone instance
Adding entry to oratab
Creating the required database directories
Creating the password file
Creating the pfile
Starting the database in nomount stage
Creating the control file
Recovering the database
Opening the database and resetting the logs
Adding the temp file(s)
Dropping the cluster views
Dropping extra undo tablespaces
--- TASK SUCCEEDED ---