Oracle RAC = Real Application Clusters. Introduced with 9i, replacing OPS (Oracle Parallel Server).
[It looks independent of Oracle DataGuard, which looks like a replication product.]

ASM = Automatic Storage Management. Disk groups with automation/hiding of file-names and mount-points. 10g.

All cluster members must be binary-compatible (but may have different speeds, sizes, etc.).
One ASM instance per node (host?).
Nodes linked by a private network. Must also be inter-accessible from the public network.

crs_stat
CVU     Cluster Verification Utility
srvctl  SRVCTL  Server Control. Manages the resources registered in the OCR (Oracle Cluster Registry): both regular components and nodeapps.
        With 10g and later, requires Clusterware's Infrastructure component, or another certified cluster product.
CRSCTL  Cluster Ready Services Control. Manages the Clusterware daemons.

nodeapps = {
    ONS: Oracle Notification Service
    GSD: Global Services Daemon
    VIP: Virtual IP
}
clusterware daemons = {
    CSS: Cluster Synchronization Services (needed for plain ASM)
    CRS: Cluster-Ready Services
    EVM: Event Manager
}

Voting Disks: need an odd number for node arbitration. 280MB ea. They are actually just files. May use just one with redundancy outside of Oracle.
OCR = Oracle Cluster Registry. 280MB ea. Just need to be redundant.
Must set up shared, redundant Voting disks + OCR before installing RAC.

Need to use the same n/w devices on all hosts, i.e. public eth0, priv. eth1, etc.
Suggest the name <hostname>-priv, + need another public addr ipalias like <hostname>-vip.
Both public addrs must be publicly defined, but don't configure the vip addr; RAC does that for you.
Clients use the *-vip address for the service.

Make ssh rsa and dsa key sets with a meaningful passphrase as the owner on each node. Set up passwordless cross-access as the owner (a sketch follows below).
Time needs to be synced on the nodes.

Wed Jun 11 11:45:59 EDT 2014
Running tutorial http://www.lab128.com/rac12_installation_using_vb/article_text.html
except following tutorial http://dbaora.com/install-oracle-12c-12-1-0-1-on-centos-6-udev-disks-nfs-disks-kmod-oracleasm-disks/
which is where I got the raised sysctl and limits.conf values and the separate grid user and groups from. Also the idea to use separate ORACLE_BASEs for Oracle DB and Grid.

TODO: Run grid installer as user "grid". Attempting:
    groupadd -g 54321 oinstall
    groupadd -g 54322 dba
    groupadd -g 54323 oper
    groupadd -g 54327 asmdba
    groupadd -g 54328 asmoper
    groupadd -g 54329 asmadmin
    # add group asmdba to user oracle
    useradd -u 54322 -g oinstall -G asmdba,asmoper,asmadmin,dba grid

Obtain RPMs [Forget adding the Oracle repository. Causes hairy dependency problems.]:
    Download kmod-oracleasm from rhn.redhat.com
    Download oracleasmlib from OTN (link in a knowledgebase article)
    Download oracleasm-support from OTN (link in a knowledgebase article), updated to the latest version (must be logged in to get it):
        https://oss.oracle.com/projects/oracleasm-support/dist/files/RPMS/rhel5/x86/2.1.8/oracleasm-support-2.1.8-1.el5.i386.rpm

Requires a 7.6 GiB partition for grid's ORACLE_HOME (net 7064 MiB or similar). Only uses about 6 M in its ORACLE_BASE.

# Define names in DNS: n hostnames, racX-scan (resolving to 3 addrs), n hostname-vip

Modify VM: + 7 GiB disk + a second E1000 nic.
[If going to clone the VM, don't create the shared disk until after the clone below.]

ONLY FOR FIRST VM: Create the shared disk: an independent/persistent 4 GiB eager-zeroed thick disk on green3tb. From the ESXi server:
    # vmkfstools -c 4G -d eagerzeroedthick rac1asm1.vmdk
Even though created "in green3tb" instead of in the VM, it places the vmdk files inside of the vm directory anyway.
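Re the passwordless ssh cross-access noted above: a minimal sketch, assuming two nodes named rac1 and rac2 (hypothetical names), run as the s/w owner on each node:

    # Generate both key types; pick a meaningful passphrase when prompted:
    ssh-keygen -t rsa -f ~/.ssh/id_rsa
    ssh-keygen -t dsa -f ~/.ssh/id_dsa
    # Append this node's public keys to authorized_keys on every node:
    for node in rac1 rac2; do
        cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub | ssh $node \
            'mkdir -p ~/.ssh; chmod 700 ~/.ssh; cat >> ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys'
    done
    # Since the keys have passphrases, load them into an agent so cross-access is prompt-free:
    eval $(ssh-agent); ssh-add
    ssh rac2 date    # should run without prompting

(The installer's "SSH connectivity" Setup button, below, can also establish this; the above is just the manual equivalent.)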
# FOR BOTH VMS: Add the shared disk.
# Edit both VMs' *.vmx files, adding:
    scsi0:4.sharing = "multi-writer"

# Image setup according to imagesetup.html, except:
#   Inside system-config-network, add 2nd n/w device "eth1".
#   2nd addr: 192.168.123.X (private network).
#   Addr 192.168.123.1, mask 255.255.255.0, leave g/w + DNS unset.
# yum -y update
# shutdown -r now
# yum -y install /tmp/*oracleasm*.rpm
# oracleasm configure -i    # as root; responses: grid, asmadmin, y, y

parted /dev/sde 'mklabel gpt'
parted /dev/sde 'mkpart grid1 ext4 0c -0c'
# Edit fstab to mount label oraclegrid as /local/oraclegrid
mkfs.ext4 -L oraclegrid /dev/sde1
mount -a
mkdir /local/oraclegrid/121grid
[Unfortunately, the Grid home dir can't be under the Oracle Base.]
chown --reference=/local/oraclesys /local/oraclegrid/121grid
chmod --reference=/local/oraclesys /local/oraclegrid/121grid
? CVUQDISK_GRP=oinstall yum -y install /tmp/cvuqdisk*

[Instructions say to clone the disk here.]

# ONLY FOR FIRST VM:
# parted /dev/sde 'mklabel gpt'
# parted /dev/sde 'mkpart asm1 ext4 0c -0c'
# oracleasm init
# oracleasm createdisk KMOD_DISK1 /dev/sde1
# oracleasm scandisks
# oracleasm listdisks    # to check for KMOD_DISK1

# ONLY FOR NON-FIRST VMS:
# oracleasm init
# oracleasm scandisks
# oracleasm listdisks    # to check for KMOD_DISK1

Extract the grid installers into /local/oraclesys/oradata:
# yum -y install nfs-utils
# ONLY ON 1st HOST: mkdir /mnt/vmres
# ONLY ON 1st HOST: mount -o ro suse:/export/share01/vmres /mnt/vmres
rm -rf /tmp/*.zip
mv -v g* grid-installer-12.1

# Log in to rac11 host AS grid
# /path/to/g*/run*r
# All defaults other than...
#   Installation Type: Advanced Installation
#   [default cluster name is "rac-cluster"]
#   Grid Plug and Play: SCAN Name: racX-scan.admc.com. Configure GNS: DISABLE
#   Cluster Node Information:
#     Add button: Add the second node in the same fashion as the first (pre-provided) node.
#     SSH connectivity button: Enter grid's "OS Password". Click the Setup button.
#     (On repeat runs, use the Test button instead of Setup.)
#   Network Interface Usage: Change the priv+ASM interface to just Private.
#   Grid Infrastructure Manage...: No (and confirm)
#   Create ASM Disk Group: Redundancy: External. Select the only candidate disk.
#     I set Change Discovery Path... to "/dev/oracleasm/disks,ORCL:*",
#     but the /dev/... disk looks to be exactly the same as the ORCL: one.
#     I selected the /dev/... disk.
#   ASM Password: Use same... Using asm0racle (OS Groups: ignore the ASM group warning)
#   Installation Location: Oracle base of /local/gridsys; s/w location of /local/gridsys/121home
#   Create Inventory: /local/ora-inv
#   Root script execution: Must select "Automatically run..." even with a response file.
#     (This performs the software installation on all nodes.)
# As root: chown oracle /local/ora-inv
# Update grid's .bash_profile according to the instructions in it. Log in again (as ##grid##).
. oraenv    # Set ORACLE_SID to +ASM1
# crsctl status resource -t
#   Lots of stuff on all nodes should report ONLINE and STABLE (only oc4j should be OFFLINE).

Run the database installer (for s/w) as oracle, according to the tutorial EXCEPT:
    Database Edition: Standard Edition
    Installation Location: Oracle base and s/w loc according to convs.
    (Operating System Groups defaults are different, but accept them.)
# Looks good to use oracleHomeRac121.rsp. Should only need to update the hostnames.
# (A silent-run sketch follows below.)
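Re driving that s/w-only database install with the response file: a minimal sketch, assuming the installer was unzipped under /local/oraclesys/oradata and the edited oracleHomeRac121.rsp sits next to it (both paths are assumptions):

    # As oracle, from the extracted database installer directory:
    ./runInstaller -silent -waitforcompletion \
        -responseFile /local/oraclesys/oradata/oracleHomeRac121.rsp
    # Silent mode doesn't run the root scripts for you; run the ones it
    # names, as root, on each node when it finishes.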
In the .rsp file, search for =.*\ (<…> = main cluster SID; C_GLOBAL_NAME = main cluster's Global Name).

Connecting with sqlplus:
    ORACLE_SID=<…> sqlplus system/password as sysdba
    ORACLE_SID=<…>_x sqlplus user    # where <…>_x is the instance running on the local host
    sqlplus user@$(hostname -s)/<…>
    sqlplus user@<…>/<…>
    sqlplus user@<…>/<…>
and from sqltool with a URL like:
    url jdbc:oracle:thin:@<…>/<…>
    url jdbc:oracle:thin:@<…>/<…>
Individual instances like:
    url jdbc:oracle:thin:@<…>:1521:<…>_x

ASM database only through sqlplus:
    export ORACLE_SID, ORACLE_HOME
    sqlplus system/password as sysdba

JDBC connections only work to hosts where an instance for the cluster is running (not to just any node in the cluster).

ASM partitions:
Files/directories can be referenced case-insensitively.
As Grid owner, set ORACLE_BASE=/local/oraclesys, ORACLE_HOME=/local/oraclegrid/121grid, ORACLE_SID=+ASM1.
asmcmd
    EVERYTHING is under /DATA ("DATA" is the Disk group name).
    Password and Grid management files are elsewhere, but DB files are under /DATA/<…>/
    "asmcmd <…>" seems to hang.
    "du" is like "du -s": one (optional) dir as param (defaults to $PWD).
    "lsdg" is very useful. Reports Total_MB and Usable_file_MB.
Check: srvctl config database -d <…>
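Re the ASM-only access above: a minimal sketch as the Grid owner, using OS authentication (the note above uses system/password as sysdba instead; the asmcmd subcommands are just the ones noted):

    export ORACLE_BASE=/local/oraclesys
    export ORACLE_HOME=/local/oraclegrid/121grid
    export ORACLE_SID=+ASM1
    export PATH=$ORACLE_HOME/bin:$PATH
    sqlplus / as sysasm    # OS auth; works because grid is in the asmadmin group
    # ASM command line:
    asmcmd lsdg            # per-disk-group totals, incl. Total_MB and Usable_file_MB
    asmcmd du DATA         # like du -s; the dir param is optional, defaults to $PWD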