11gR2 – RAC Shared Storage Preparation (ASM) – Part 1
Posted by Srikrishna Murthy Annam on March 18, 2010
RAC storage Options
- ASM Storage
- OCFS (Release 1 or 2)
- NFS (NAS or SAN)
- Raw Devices
- Third-party cluster filesystems such as GPFS or Veritas
Preparing ASM storage for Clusterware
- Partition the Shared Disks
- Installing and Configuring ASMLib
- Using ASMLib to Mark the Shared Disks as Candidate Disks
For this installation we will use ASM for Clusterware and database storage on top of SAN technology. The following table shows the storage layout for this implementation:
I. Partition the Shared Disks
As the root user on Node1, run the following command:
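The original command output is not shown here; a typical partitioning session, assuming the first shared LUN is visible as /dev/sdb (the device name is illustrative and will differ on your system), creates a single primary partition spanning the whole disk with fdisk:

```shell
# As root on Node1: create one primary partition using the entire disk.
# /dev/sdb is an assumed device name -- substitute your shared LUN.
fdisk /dev/sdb
# At the fdisk prompts:
#   n       -> new partition
#   p       -> primary
#   1       -> partition number 1
#   <Enter> -> accept the default first cylinder
#   <Enter> -> accept the default last cylinder (use the whole disk)
#   w       -> write the partition table and exit
```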
Now load the updated block-device partition tables by running the following on ALL servers participating in the cluster:
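On Enterprise Linux this is done with partprobe; a sketch of the step (run on every node) might look like:

```shell
# Run as root on EVERY cluster node so the kernel re-reads the partition tables.
/sbin/partprobe

# Verify that the new partition (e.g. /dev/sdb1) is now visible:
cat /proc/partitions
```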
NOTE : Repeat above two steps for all the disks and create partitions as per the table mentioned above.
II. Installing and Configuring ASMLib
ASMLib is highly recommended for systems that will use ASM for shared storage within the cluster, due to the performance and manageability benefits it provides. Perform the following steps to install and configure ASMLib on the cluster nodes:
NOTE: ASMLib automatically provides LUN persistence, so when using ASMLib there is no need to manually configure LUN persistence for the ASM devices on the system.
1. Download the following packages from the ASMLib OTN page. If you are an Enterprise Linux customer, you can obtain the software through the Unbreakable Linux Network.
NOTE: The ASMLib kernel driver MUST match the kernel revision number of your system, which can be identified by running the “uname -r” command. Also, be sure to download the set of RPMs that pertain to your platform architecture; in our case this is x86_64.
2. Install the RPMs by running the following as the root user:
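The exact package filenames depend on your kernel; the versions below are examples only (shown for a 2.6.18 x86_64 kernel) and must match your own `uname -r` output:

```shell
# As root, install the three ASMLib RPMs (filenames are illustrative):
rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
         oracleasmlib-2.0.4-1.el5.x86_64.rpm \
         oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm

# Confirm the packages installed:
rpm -qa | grep oracleasm
```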
3. Configure ASMLib by running the following as the root user:
NOTE: If using user and group separation for the installation (as shown in this guide), the ASMLib driver interface owner is grid and the group that owns the driver interface is asmdba (oracle and grid are both members of this group). These groups were created in section 2.1. If a simpler installation using only the oracle user is performed, the owner will be oracle and the group owner will be dba.
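A sketch of the configuration step, with answers matching the role-separated setup described in the note above (use oracle/dba instead if you installed with only the oracle user):

```shell
# As root, configure the ASMLib driver interactively:
/usr/sbin/oracleasm configure -i
# Example answers for this guide's role-separated install:
#   Default user to own the driver interface []: grid
#   Default group to own the driver interface []: asmdba
#   Start Oracle ASM library driver on boot (y/n) [n]: y
#   Scan for Oracle ASM disks on boot (y/n) [y]: y

# Load the oracleasm kernel module and mount the ASMLib filesystem:
/usr/sbin/oracleasm init
```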
4. Repeat steps 2 – 4 on ALL cluster nodes.
III. Using ASMLib to Mark the Shared Disks as Candidate Disks
To create ASM disks using ASMLib:
1. As the root user, use oracleasm to create ASM disks using the following syntax:
In this command, disk_name is the name you choose for the ASM disk. The name must contain only uppercase ASCII letters, numbers, or underscores, and must start with a letter, for example DISK1, VOL1, or RAC_FILE1. The device_partition_name is the name of the disk partition to mark as an ASM disk. For example:
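The syntax and a worked example (DISK1 and /dev/sdb1 are illustrative names; substitute your own disk name and shared partition):

```shell
# Syntax (run as root):
#   /usr/sbin/oracleasm createdisk disk_name device_partition_name
# Example -- mark the partition created earlier as an ASM candidate disk:
/usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
```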
If you need to unmark a disk that was used in a createdisk command, you can use the following syntax as the root user:
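For example, to unmark the disk created above (the disk name is illustrative):

```shell
# As root, remove the ASMLib label from a previously created disk:
/usr/sbin/oracleasm deletedisk DISK1
```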
2. Repeat step 1 for each disk that will be used by Oracle ASM.
3. After you have created all the ASM disks for your cluster, use the listdisks command to verify their availability:
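A sketch of the verification step; the output should list every disk created with createdisk:

```shell
# As root (or a user with access to the ASMLib interface):
/usr/sbin/oracleasm listdisks
# The output should show the disk names you created, e.g.
#   DISK1
#   DISK2
```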
4. On all the other nodes in the cluster, use the scandisks command as the root user to pick up the newly created ASM disks. You do not need to create the ASM disks on each node, only on one node in the cluster.
5. After scanning for ASM disks, display the available ASM disks on each node to verify their availability:
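Steps 4 and 5 together might look like the following on each of the remaining nodes:

```shell
# As root on each of the OTHER cluster nodes:
/usr/sbin/oracleasm scandisks

# Then confirm that the same candidate disks are visible on this node:
/usr/sbin/oracleasm listdisks
```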