I needed a RAC environment to test some tools like the new Oracle Cloud Control 12c, Toad for Oracle and Spotlight from Quest Software with their RAC extensions, and a monitoring tool from Herrmann & Lenz Services (HLMM). Therefore I neither needed a “certified” environment nor do I plan to turn this into a production environment; “simple and easy” were the main goals for this RAC environment.
- VMware ESXi 5.0
- Oracle Enterprise Linux 6 Update 1 (OEL 6U1)
- Oracle Database and Grid Infrastructure 11.2.0.3
First I created a virtual machine named asterix with 4 GB main memory, two network interfaces and one 40 GB virtual disk. After installing OEL 6U1 I cloned the machine to obelix.
After asterix and obelix were both working properly with networking enabled (one interface as the public network and the other one as a private admin network), I disabled the firewalls and SELinux on both systems. Now I was able to configure the shared storage.
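On OEL 6 the last step can be done like this (a sketch, assuming the stock iptables service and the default /etc/selinux/config; run as root on both nodes):

```shell
# stop the firewall now and keep it off after reboots
service iptables stop
chkconfig iptables off
service ip6tables stop
chkconfig ip6tables off

# switch SELinux to permissive for the running system ...
setenforce 0
# ... and disable it permanently for the next boot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```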
In larger environments you might have SAN storage to use for the shared devices, but I wanted an environment within only one box. In the past I used openfiler as a virtual SAN, and there are many other virtual appliances that can act as SAN storage. But VMware allows sharing storage within one box without any additional virtual machines.
On asterix I created two virtual disks with 20 GB size each. To use them as shared storage later on the following parameters have to be checked while creating the disks:
- All disks have to be created as “Thick-Provision Eager-Zeroed”. If you miss this option the SCSI controller will not be able to share the disks.
- The new disks need a separate SCSI controller (so they are connected as SCSI (1:0) and SCSI (1:1))
- Snapshots need to be disabled for the disks
- The new SCSI controller (which shows up automatically after the first disk is linked to SCSI (1:0)) has to be set to “Virtual”.
- Now the new disks can be created (which takes a while as the whole space has to be allocated)
On obelix the same procedure takes place, except that the disks are linked to another SCSI controller: SCSI (2:0) and SCSI (2:1).
After this, four independent disks exist with two additional SCSI controllers. Now each pair of disks is attached to the “other” virtual machine as well: the two disks created on asterix are added to obelix with “use existing disk”, and vice versa. Keep in mind that obelix should use SCSI controller 1 for the disks from asterix, while asterix uses SCSI controller 2 for the disks from obelix (both use SCSI controller 0 for their internal disks).
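For reference, the controller and disk choices described above roughly correspond to the following entries in a node's .vmx file (a sketch only; the file names are hypothetical and exact keys may differ on your ESXi version, so make the settings through the vSphere Client as described):

```
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"               # bus sharing set to "Virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "asterix_asm1.vmdk"    # hypothetical file name
scsi1:0.mode = "independent-persistent"   # snapshots disabled
scsi1:1.present = "TRUE"
scsi1:1.fileName = "asterix_asm2.vmdk"
scsi1:1.mode = "independent-persistent"
```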
After restarting both virtual machines the disks should show up as sdb, sdc, sdd, and sde on both machines.
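A quick way to confirm this on both nodes (the device names are the ones from my setup and may differ in yours):

```shell
# list the shared disks - all four should be present after the reboot
ls -l /dev/sd[b-e]
```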
ASM installation and configuration
Before we can continue with the ASM installation, two users (oracle and oragrid) and three groups (oinstall, dba, griddba) have to be created on both machines. In general it’s possible to install all Oracle-related software with one account, but dedicated users are preferable for updates etc.
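A minimal sketch of that user and group setup (the numeric IDs are arbitrary examples, not from the original setup; run as root on both machines):

```shell
# groups shared by both software owners
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 griddba

# oracle owns the database software, oragrid the grid infrastructure
useradd -u 501 -g oinstall -G dba     oracle
useradd -u 502 -g oinstall -G griddba oragrid

passwd oracle
passwd oragrid
```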
In general there are two ways to configure ASM: using asmlib or with Linux udev rules. In the past I had very good experiences with asmlib, so I used it again. On OTN you can find the oracleasmlib package for various environments. The installation was done with rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm. In addition, the ASM support package that comes on the OEL6U1 DVD (oracleasm-support-2.1.5-1.el6.x86_64.rpm) has to be installed.
With fdisk we need to create one partition per disk (only on one machine!).
# fdisk -l
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        5222    41430016   8e  Linux LVM

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x54a3d8c5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2610    20964793+  83  Linux
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        2610    20964793+  83  Linux
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        2610    20964793+  83  Linux
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        2610    20964793+  83  Linux
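Creating the partitions can also be scripted instead of answering fdisk interactively. This is a sketch assuming the four empty disks from above; the two empty lines accept the default first and last cylinder:

```shell
# create one partition spanning each shared disk (on ONE node only!)
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  fdisk "$disk" <<EOF
n
p
1


w
EOF
done
```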
Now it’s time to configure ASM on both systems:
# service oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets (''). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface : oragrid
Default group to own the driver interface : griddba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [n]: y
...
Now we can create the ASM disks (on one machine e.g. asterix)
# service oracleasm createdisk ASM1 /dev/sdb1
# service oracleasm createdisk ASM2 /dev/sdc1
# service oracleasm createdisk ASM3 /dev/sdd1
# service oracleasm createdisk ASM4 /dev/se1
The other machine has to read the ASM-Configuration with:
# service oracleasm scandisks
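To verify that both nodes see the same disks, the labels can be listed; with the disks created above the output should contain ASM1 through ASM4:

```shell
# show all disks known to the ASM library driver
service oracleasm listdisks
```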
The next part is the installation of the grid infrastructure.
Grid Infrastructure installation
Since Oracle 11g Release 2, all Oracle patch set installations are “out-of-place”, so it’s no longer necessary to install the base release (e.g. 11.2.0.1) before installing the patch set. Instead you can directly use the installer that comes with the patch set. In case of the grid infrastructure it’s part 3 of the packages (p10404530_112030_Linux-x86-64_3of7.zip).
Before we can invoke the universal installer we need to configure the networking. Oracle recommends using a DNS name server, as this is the only way to specify more than one SCAN (Single Client Access Name) address for the entire cluster. As said earlier, my intention was to create a very simple RAC environment, so I used the hosts files instead. For my cluster I needed the following entries:
- one public ip address per node (asterix and obelix)
- one private ip address per node (asterix-priv and obelix-priv)
- one virtual ip address per node (asterix-vip and obelix-vip)
- one SCAN ip address (associated with the cluster named gallier)
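Put together, the hosts file on both nodes contains entries along these lines (all IP addresses and the SCAN host name are made-up examples; adjust them to your public and private networks):

```
192.168.56.101   asterix
192.168.56.102   obelix
10.0.0.101       asterix-priv
10.0.0.102       obelix-priv
192.168.56.111   asterix-vip
192.168.56.112   obelix-vip
192.168.56.120   gallier-scan
```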
Next, both nodes need to communicate via ssh as equivalent users without passwords. So ssh has to be set up accordingly for both oragrid and oracle:
Asterix:
$ ssh-keygen -t rsa

Obelix:
$ ssh-keygen -t rsa
$ cd .ssh
$ cp id_rsa.pub authorized_keys
$ scp authorized_keys asterix:.ssh

Asterix:
$ cd .ssh
$ cat id_rsa.pub >> authorized_keys
$ scp authorized_keys obelix:.ssh
$ ssh asterix date
$ ssh obelix date

Obelix:
$ ssh asterix date
$ ssh obelix date
Now all connections between both systems work without passwords. This is essential for the grid infrastructure installation.
The installation of the software itself is very simple. Just make sure you assign the right network roles (private or public) to the interfaces and specify the right cluster name (in my case gallier). During the installation you are asked where to put the CRS files. You should specify ASM, since with Oracle 11g Release 2 it’s possible to store all of the shared files, including the CRS files, in ASM. The corresponding candidate disks will show up automatically after you specify ASM as the storage for the cluster registry.
Finally, the root.sh scripts need to be executed one node after the other, so there is time for a coffee.
The installation of the database software is similar to a single instance installation except that you will see an additional screen named “Grid Installation Options”. If this screen does not show up something went wrong during the grid infrastructure installation.
The installation of RAC in this environment worked very smoothly, and it’s sufficient for testing. Some might wonder why I did not mention any kernel parameters or limits that have to be adjusted. The reason is pretty simple: since Oracle 11g Release 2, fix-up scripts are created automatically during installation. If a parameter is not adequate, like the kernel settings in sysctl.conf, a script named runfixup.sh is created in the directory /tmp/CVU_<version>_<owner>. Execute the script and the parameters are set automatically. Very nice.
If some additional packages are necessary they all come with the OEL6U1 DVD (except asmlib as mentioned earlier).
I’m looking forward to feedback and perhaps some further discussion about this or other environments.