Note: This note was created for 9i RAC. The 10g Oracle documentation provides installation instructions for 10g RAC. These instructions can be found on OTN:
Oracle® Real Application Clusters Installation and Configuration Guide
10g Release 1 (10.1) for AIX-Based Systems, hp HP-UX PA-RISC (64-bit), hp Tru64 UNIX, Linux, Solaris Operating System (SPARC 64-bit)
--------------------------------------------------------------------------------
Purpose
This document will provide the reader with step-by-step instructions on how to install a cluster, install Oracle Real Application Clusters (RAC) (Version 9.2.0.5), and start a cluster database on Linux. For additional explanation or information on any of these steps, please see the references listed at the end of this document.
Disclaimer: If there are any errors or issues prior to step 2, please contact your Linux distributor.
The information contained here is as accurate as possible at the time of writing.
1. Configuring the Cluster Hardware
1.1 Minimal Hardware list / System Requirements
1.1.1 Hardware
1.1.2 Software
1.2 Installing the Shared Disk Subsystem
1.3 Configuring the Cluster Interconnect and Public Network Hardware
2. Creating a cluster
2.1 UNIX Pre-installation tasks
2.2 Configuring the Shared Disks
2.3 Run the Oracle Universal Installer to install the 9.2.0.4 ORACM (Oracle Cluster Manager)
2.4 Configure the hangcheck-timer
2.5 Install Version 10.1.0.2 of the Oracle Universal Installer
2.6 Run the 10.1.0.2 Oracle Universal Installer to patch the Oracle Cluster Manager (ORACM) to 9.2.0.5
2.7 Modify the ORACM configuration files to utilize the hangcheck-timer
2.8 Start the ORACM (Oracle Cluster Manager)
3. Installing RAC
3.1 Install 9.2.0.4 RAC
3.2 Patch the RAC Installation to 9.2.0.5
3.3 Start the GSD (Global Service Daemon)
3.4 Create a RAC Database using the Oracle Database Configuration Assistant
4. Administering Real Application Clusters Instances
5. References
--------------------------------------------------------------------------------
1. Configuring the Cluster Hardware
1.1 Minimal Hardware list / System Requirements
Please check the RAC/Linux certification matrix for information on currently supported hardware/software.
1.1.1 Hardware
Requirements:
Refer to the RAC/Linux certification matrix for information on supported configurations. Ensure that the system has at least the following resources:
- 400 MB in /tmp
- 512 MB of Physical Memory (RAM)
- Three times the amount of Physical Memory for swap space (if the system has more than 1 GB of Physical Memory, two times the amount of Physical Memory is sufficient)
An example system disk layout is as follows:

Slice  Contents  Allocation (in MB)
0      /         2000 or more
1      /boot     64
2      /tmp      1000
3      /usr      3000-7000, depending on operating system and packages installed
4      /var      512 (can be more if required)
5      swap      Three times the amount of Physical Memory (two times is sufficient if the system has more than 1 GB of Physical Memory)
6      /home     2000 (can be more if required)
1.1.2 Software
For RAC on Linux support, consult the operating system vendor and see the RAC/Linux certification matrix.
Make sure the make and rsh-server packages are installed; check with:
$ rpm -q rsh-server make
rsh-server-0.17-5
make-3.79.1-8
If these are not installed, use your favorite package manager to install them.
1.1.3 Patches
Consult your operating system vendor to ensure you are on the latest supported kernel patch level.
1.2 Installing the Shared Disk Subsystem
This is highly dependent on the subsystem you have chosen. Please refer to your hardware documentation for installation and configuration instructions on Linux. Additional drivers and patches might be required. In this article we assume that the shared disk subsystem is correctly installed and that the shared disks are visible to all nodes in the cluster.
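A quick sanity check (a minimal sketch; the actual device names depend on your storage subsystem and driver) is to compare the partition tables reported on each node and confirm the shared devices appear with the same sizes everywhere:
# /sbin/fdisk -l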
1.3 Configuring the Cluster Interconnect and Public Network Hardware
If not already installed, install host adapters in your cluster nodes. For the procedure on installing host adapters, see the documentation that shipped with your host adapters and node hardware.
Each system needs at least one IP address for the public network and one for the private cluster interconnect. For the public network, get the addresses from your network administrator. For the private interconnect use, for example, 1.1.1.1 and 1.1.1.2 for the first and second node. Make sure to add all addresses in /etc/hosts. Example:
[oracle@opcbrh1 oracle]$ more /etc/hosts
9.25.120.143 rac1 #Oracle 9i RAC node 1 - public network
9.25.120.144 rac2 #Oracle 9i RAC node 2 - public network
1.1.1.1 int-rac1 #Oracle 9i RAC node 1 - interconnect
1.1.1.2 int-rac2 #Oracle 9i RAC node 2 - interconnect
Use your favorite tool to configure these adapters. Make sure your public network is the primary (eth0).
Interprocess communication is an important issue for RAC since cache fusion transfers buffers between instances using this mechanism. Thus, networking parameters are important for RAC databases. The values in the following table are the recommended values. These are NOT the default on most distributions.
Parameter                          Meaning                                                  Value
/proc/sys/net/core/rmem_default    The default setting in bytes of the socket receive buffer   262144
/proc/sys/net/core/rmem_max        The maximum socket receive buffer size in bytes              262144
/proc/sys/net/core/wmem_default    The default setting in bytes of the socket send buffer       262144
/proc/sys/net/core/wmem_max        The maximum socket send buffer size in bytes                 262144
You can see these settings with:
$ cat /proc/sys/net/core/rmem_default
Change them with:
$ echo 262144 > /proc/sys/net/core/rmem_default
This needs to be done each time the system boots. Some distributions already provide a mechanism for this during boot; on Red Hat, it can be configured in /etc/sysctl.conf (for example: net.core.rmem_default = 262144).
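For example, a minimal /etc/sysctl.conf fragment implementing the recommended values (applied at boot, or immediately by running /sbin/sysctl -p) might look like:
# socket buffer sizes recommended for the RAC interconnect (see table above)
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144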
--------------------------------------------------------------------------------
2. Creating a Cluster
On Linux, the cluster software required to run Real Application Clusters is included in the Oracle distribution.
The Oracle Cluster Manager (ORACM) installation process includes eight major tasks:
1. UNIX pre-installation tasks
2. Configuring the shared disks
3. Running the Oracle Universal Installer to install the 9.2.0.4 ORACM (Oracle Cluster Manager)
4. Configuring the hangcheck-timer
5. Installing version 10.1.0.2 of the Oracle Universal Installer
6. Running the 10.1.0.2 Oracle Universal Installer to patch the Oracle Cluster Manager (ORACM) to 9.2.0.5
7. Modifying the ORACM configuration files to utilize the hangcheck-timer
8. Starting the ORACM (Oracle Cluster Manager)
2.1 UNIX Pre-installation tasks
These steps need to be performed on ALL nodes.
First, on each node, create the Oracle group. Example:
# groupadd dba -g 501
Next, make the Oracle user's home directory. Example:
# mkdir -p /u01/home/oracle
On each node, create the Oracle user. Make sure that the Oracle user is part of the dba group. Example:
# useradd -c "Oracle Software Owner" -G dba -u 101 -m -d /u01/home/oracle -s /bin/csh oracle
On each node, create a mount point for the Oracle software installation (at least 2.5 GB, typically /u01). The oracle user should own this mount point and all of the directories below the mount point. Example:
# mkdir /u01
# chown -R oracle.dba /u01
# chmod -R ug=rwx,o=rx /u01
Once this is done, test the permissions on each node to ensure that the oracle user can write to the new mount points. Example:
# su - oracle
$ touch /u01/test
$ ls -l /u01/test
-rw-rw-r-- 1 oracle dba 0 Aug 15 09:36 /u01/test
Depending on your Linux distribution, make sure inetd or xinetd is started on all nodes and that the ftp, telnet, shell and login (or rsh) services are enabled (see /etc/inetd.conf or /etc/xinetd.conf and /etc/xinetd.d). Example:
# more /etc/xinetd.d/telnet
# default: on
# description: The telnet server serves telnet sessions; it uses \
# unencrypted username/password pairs for authentication.
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no
}
In this example, disable should be set to 'no'.
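After enabling or changing any service under /etc/xinetd.d, restart xinetd so the change takes effect (shown here for Red Hat; other distributions may differ):
# service xinetd restart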
On the node from which you will run the Oracle Universal Installer, set up user equivalence by adding entries for all nodes in the cluster, including the local node, to the .rhosts file of the oracle account, or the /etc/hosts.equiv file.
Sample entries in /etc/hosts.equiv file:
rac1
rac2
int-rac1
int-rac2
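If you use the oracle user's .rhosts file instead of /etc/hosts.equiv, the equivalent entries (reusing the example node names above) in /u01/home/oracle/.rhosts would be:
rac1 oracle
rac2 oracle
int-rac1 oracle
int-rac2 oracle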
As oracle user, check for user equivalence for the oracle account by performing a remote copy (rcp) to each node (public and private) in the cluster. Example:
RAC1:
$ touch /u01/test
$ rcp /u01/test rac2:/u01/test1
$ rcp /u01/test int-rac2:/u01/test2
RAC2:
$ touch /u01/test
$ rcp /u01/test rac1:/u01/test1
$ rcp /u01/test int-rac1:/u01/test2
$ ls /u01/test*
/u01/test /u01/test1 /u01/test2
RAC1:
$ ls /u01/test*
/u01/test /u01/test1 /u01/test2
Note: If you are prompted for a password, you have not given the oracle account the same attributes on all nodes. You must correct this because the Oracle Universal Installer cannot use the rcp command to copy Oracle products to the remote node's directories without user equivalence.
System Kernel Parameters
Verify operating system kernel parameters are set to appropriate levels:
Kernel Parameter  Setting      Purpose
SHMMAX            2147483648   Maximum allowable size of one shared memory segment.
SHMMIN            1            Minimum allowable size of a single shared memory segment.
SHMMNI            100          Maximum number of shared memory segments in the entire system.
SHMSEG            10           Maximum number of shared memory segments one process can attach.
SEMMNI            100          Maximum number of semaphore sets in the entire system.
SEMMSL            250          Minimum recommended value; SEMMSL should be 10 plus the largest PROCESSES parameter of any Oracle database on the system.
SEMMNS            1000         Maximum number of semaphores on the system. This is a minimum recommended value; SEMMNS should be set to the sum of the PROCESSES parameter for each Oracle database, plus the largest PROCESSES value counted a second time, plus 10 for each database.
SEMOPM            100          Maximum number of operations per semop call.
You will have to set the correct parameters during system startup, so include them in your startup script (e.g. startoracle_root.sh):
$ export SEMMSL=250
$ export SEMMNS=1000
$ export SEMOPM=100
$ export SEMMNI=100
$ echo $SEMMSL $SEMMNS $SEMOPM $SEMMNI > /proc/sys/kernel/sem
$ export SHMMAX=2147483648
$ echo $SHMMAX > /proc/sys/kernel/shmmax
Check these with:
$ cat /proc/sys/kernel/sem
$ cat /proc/sys/kernel/shmmax
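If you prefer to make these settings persistent via /etc/sysctl.conf instead of a startup script, a sketch using the values above would be (the four kernel.sem fields are SEMMSL, SEMMNS, SEMOPM and SEMMNI, in that order):
# semaphore and shared memory settings for Oracle
kernel.sem = 250 1000 100 100
kernel.shmmax = 2147483648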
You might also want to increase the maximum number of file handles; include this in your startup script or use /etc/sysctl.conf:
$ echo 65536 > /proc/sys/fs/file-max
To allow your oracle processes to use these file handles, add the following to your oracle account login script (e.g. .profile):
$ ulimit -n 65536
Note: This only allows you to raise the soft limit up to the hard limit. You might have to increase the hard limit at the system level by adding ulimit -Hn 65536 to /etc/initscript. You will have to reboot the system for this to take effect. Sample /etc/initscript:
ulimit -Hn 65536
eval exec "$4"
Establish the following Oracle environment variables:

Environment Variable  Suggested Value
ORACLE_HOME           e.g. /u01/app/oracle/product/9.2.0
ORACLE_TERM           xterm
PATH                  /u01/app/oracle/product/9.2.0/bin:/usr/ccs/bin:/usr/bin/X11:/usr/local/bin, plus any other items you require in your PATH
DISPLAY               (review Note 153960.1 for detailed information)
TMPDIR                Set a temporary directory path with at least 100 MB of free space to which the OUI has write permission
ORACLE_SID            Set this to what you will call your database instance; it must be UNIQUE on each node
It is best to save these in a .login or .profile file so that you do not have to set the environment every time you log in.
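A minimal Bourne-shell style .profile fragment, assuming the suggested values above (the SID shown is only a placeholder and must be unique per node; if the oracle account uses csh as in the earlier useradd example, use setenv entries in .login instead):
ORACLE_HOME=/u01/app/oracle/product/9.2.0; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_SID=mydb1; export ORACLE_SID
PATH=$ORACLE_HOME/bin:/usr/ccs/bin:/usr/bin/X11:/usr/local/bin:$PATH; export PATH
TMPDIR=/tmp; export TMPDIR
ulimit -n 65536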
Create the directory /var/opt/oracle and set ownership to the oracle user. Example:
$ mkdir /var/opt/oracle
$ chown oracle.dba /var/opt/oracle
Set the oracle user's umask to "022" in your ".profile" or ".login" file. Example:
$ umask 022
Note: A verification script, InstallPrep.sh, is available and may be downloaded and run prior to the installation of Oracle Real Application Clusters. This script verifies that the system is configured correctly according to the Installation Guide, and its output reports any further tasks that need to be performed before successfully installing Oracle 9.x DataServer (RDBMS). The script performs the following verifications:
ORACLE_HOME Directory Verification
UNIX User/umask Verification
UNIX Group Verification
Memory/Swap Verification
TMP Space Verification
Real Application Cluster Option Verification
Unix Kernel Verification
. ./InstallPrep.sh
You are currently logged on as oracle
Is oracle the unix user that will be installing Oracle Software? y or n
y
Enter the unix group that will be used during the installation
Default: dba
Enter the version of Oracle RDBMS you will be installing
Enter either : 901 OR 920 - Default: 920
920
The rdbms version being installed is 920
Enter Location where you will be installing Oracle
Default: /u01/app/oracle/product/oracle9i
/u01/app/oracle/product/9.2.0
Your Operating System is Linux
Gathering information... Please wait
JDK check is ignored for Linux since it is provided by Oracle
Checking unix user ...
Checking unix umask ...
umask test passed
Checking unix group ...
Unix Group test passed
Checking Memory & Swap...
Memory test passed
/tmp test passed
Checking for a cluster...
Linux Cluster test section has not been implemented yet
No cluster warnings detected
Processing kernel parameters... Please wait
Running Kernel Parameter Report...
Check the report for Kernel parameter verification
Completed.
/tmp/Oracle_InstallPrep_Report has been generated
Please review this report and resolve all issues before attempting to
install the Oracle Database Software
Note: If you get an error like this:
InstallPrep.sh: line 45: syntax error near unexpected token `fi'
or
./InstallPrep.sh: Command not found.
Then you need to copy the script into a text file (it will not run if the file is in binary format).
2.2 Configuring the Shared Disks
For 9.2 Real Application Clusters on Linux, you can use OCFS (Oracle Cluster Filesystem), RAW devices, or NFS (Red Hat and Network Appliance only) for storage of Oracle database files.
For more information on setting up OCFS for RAC on Linux, see the following MetaLink Note:
Note 220178.1 - Installing and setting up ocfs on Linux - Basic Guide
For more information on setting up RAW for RAC on Linux, see the following MetaLink Note:
Note 246205.1 - Configuring Raw Devices for Real Application Clusters on Linux
For more information on setting up NFS for RAC on Linux, see the following MetaLink Note (Steps 1-6):
Note 210889.1 - RAC Installation with a NetApp Filer in Red Hat Linux Environment
2.3 Run the Oracle Universal Installer to install the 9.2.0.4 ORACM (Oracle Cluster Manager)
These steps only need to be performed on the node that you are installing from (typically Node 1).
If you are using OCFS or NFS for your shared storage, pre-create the quorum file and srvm file. Example:
# dd if=/dev/zero of=/ocfs/quorum.dbf bs=1M count=20
# dd if=/dev/zero of=/ocfs/srvm.dbf bs=1M count=100
# chown root:dba /ocfs/quorum.dbf
# chmod 664 /ocfs/quorum.dbf
# chown oracle:dba /ocfs/srvm.dbf
# chmod 664 /ocfs/srvm.dbf
Verify the Environment - Log off and log on as the oracle user to ensure all environment variables are set correctly. Use the following command to view them:
% env | more
Note: If you are on Red Hat Advanced Server 3.0, you will need to temporarily use an older gcc for the install:
# mv /usr/bin/gcc /usr/bin/gcc3.2.3
# mv /usr/bin/g++ /usr/bin/g++3.2.3
# ln -s /usr/bin/gcc296 /usr/bin/gcc
# ln -s /usr/bin/g++296 /usr/bin/g++
You will also need to apply patch 3006854 if on RHAS 3.0.
Before attempting to run the Oracle Universal Installer, verify that you can successfully run the following command:
% /usr/bin/X11/xclock
If this does not display a clock on your display screen, please review the following article:
Note 153960.1 FAQ: X Server testing and troubleshooting
Start the Oracle Universal Installer and install the RDBMS software - Follow these procedures to use the Oracle Universal Installer to install the Oracle Cluster Manager software. Oracle9i is supplied on multiple CD-ROM disks. During the installation process it is necessary to switch between the CD-ROMS. OUI will manage the switching between CDs.
Use the following commands to start the installer:
% cd /tmp
% /cdrom/runInstaller
Or cd to /stage/Disk1 and run ./runInstaller
Respond to the installer prompts as shown below:
At the "Welcome Screen", click Next.
If this is your first install on this machine:
If the "Inventory Location" screen appears, enter the inventory location then click OK.
If the "Unix Group Name" screen appears, enter the unix group name created in step 2.1 then click Next.
At this point you may be prompted to run /tmp/orainstRoot.sh. Run this and click Continue.
At the "File Locations Screen", verify the destination listed is your ORACLE_HOME directory. Also enter a NAME to identify this ORACLE_HOME. The NAME can be anything.
At the "Available Products Screen", Check "Oracle Cluster Manager". Click Next.
At the public node information screen, enter the public node names and click Next.
At the private node information screen, enter the interconnect node names. Click Next.
Enter the full name of the file or raw device you have created for the ORACM Quorum disk information. Click Next.
Press Install at the summary screen.
You will now briefly get a progress window followed by the end of installation screen. Click Exit and confirm by clicking Yes.
Note: Create the directory $ORACLE_HOME/oracm/log (as oracle) on the other nodes if it doesn't exist.
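For example, as the oracle user on each of the other nodes:
$ mkdir -p $ORACLE_HOME/oracm/log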
2.4 Configure the hangcheck-timer
These steps need to be performed on ALL nodes.
Some kernel versions include the hangcheck-timer with the kernel. You can check to see if your kernel contains the hangcheck-timer by running:
# /sbin/lsmod
If the module is present, hangcheck-timer will appear in the output. Also verify that hangcheck-timer is started from your /etc/rc.local file (on Red Hat) or /etc/init.d/boot.local (on United Linux). If hangcheck-timer appears both in the lsmod output and in rc.local or boot.local, you can skip to section 2.5.
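For example, a quick way to check both conditions on Red Hat is:
# /sbin/lsmod | grep hangcheck
# grep hangcheck /etc/rc.local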
If hangcheck-timer is not listed here and you are not using Redhat Advanced Server, see the following note for information on obtaining the hangcheck-timer:
Note 232355.1 - Hangcheck Timer FAQ
If you are on Redhat Advanced Server, you can either apply the latest errata version (> 12) or go to MetaLink - Patches:
Enter 2594820 in the Patch Number field.
Click Go.
Click Download.
Save the file p2594820_20_LINUX.zip to the local disk, such as /tmp.
Unzip the file. The output should be similar to the following:
inflating: hangcheck-timer-2.4.9-e.3-0.4.0-1.i686.rpm
inflating: hangcheck-timer-2.4.9-e.3-enterprise-0.4.0-1.i686.rpm
inflating: hangcheck-timer-2.4.9-e.3-smp-0.4.0-1.i686.rpm
inflating: README.TXT
Run the uname -a command to identify the RPM that corresponds to the kernel in use. This will show if the kernel is single CPU, smp, or enterprise.
The p2594820_20_LINUX.zip file contains four files. The following describes the files:
hangcheck-timer-2.4.9-e.3-0.4.0-1.i686.rpm is for single CPU machines
hangcheck-timer-2.4.9-e.3-enterprise-0.4.0-1.i686.rpm is for multi-processor machines with more than 4 GB of RAM
hangcheck-timer-2.4.9-e.3-smp-0.4.0-1.i686.rpm is for multi-processor machines with 4 GB of RAM or less
The three RPMs will work for the e3 kernels in Red Hat Advanced Server 2.1 gold and the e8 kernels in the latest Red Hat Advanced Server 2.1 errata release. These RPMs are for Red Hat Advanced Server 2.1 kernels only.
Transfer the relevant hangcheck-timer RPM to the /tmp directory of the Oracle Real Application Clusters node.
Log in to the node as the root user.
Change to the /tmp directory.
Run the following command to install the module:
# rpm -ivh <hangcheck-timer RPM name>
If you have previously installed RAC on this cluster, remove or disable the mechanism that loads the softdog module at system startup, if that module is not used by other software on the node. This is necessary for subsequent steps in the installation process and may require logging in as the root user. One method for setting up previous versions of Oracle Real Application Clusters involved loading the softdog module from the /etc/rc.local file (on Red Hat) or /etc/init.d/boot.local (on United Linux). If this method was used, remove or comment out the following line in that file:
/sbin/insmod softdog nowayout=0 soft_noboot=1 soft_margin=60
Append the following line to the /etc/rc.local file (on Redhat) or /etc/init.d/boot.local (on United Linux):
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Load the hangcheck-timer kernel module using the following command as root user:
# /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Repeat the above steps on all Oracle Real Application Clusters nodes where the kernel module needs to be installed.
Run dmesg after the module is loaded and note the build number in the output. The relevant line looks like the following:
build 334adfa62c1a153a41bd68a787fbe0e9
The build number is required when making support calls.
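For example, to pick the hangcheck-timer messages (including the build number) out of the kernel ring buffer:
# dmesg | grep -i hangcheck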
2.5 Install Version 10.1.0.2 of the Oracle Universal Installer
These steps need to be performed on ALL nodes.
Download the 9.2.0.5 patchset from MetaLink - Patches:
Enter 3501955 in the Patch Number field.
Click Go.
Click Download.
Place the file in a patchset directory on the node you are installing from. Example:
$ mkdir $ORACLE_HOME/9205
$ cp p3501955_9205_LINUX.zip $ORACLE_HOME/9205
Unzip the file:
$ cd $ORACLE_HOME/9205
$ unzip p3501955_9205_LINUX.zip
Archive: p3501955_9205_LINUX.zip
inflating: 9205_lnx32_release.cpio
inflating: README.html
inflating: ReleaseNote9205.pdf
Run CPIO against the file:
$ cpio -idmv < 9205_lnx32_release.cpio
Run the installer from the 9.2.0.5 staging location:
$ cd $ORACLE_HOME/9205/Disk1
$ ./runInstaller
Respond to the installer prompts as shown below:
At the "Welcome Screen", click Next.
At the "File Locations Screen", Change the $ORACLE_HOME name from the dropdown list to the 9.2 $ORACLE_HOME name. Click Next.
On the "Available Products Screen", Check "Oracle Universal Installer 10.1.0.2. Click Next.
Press Install at the summary screen.
You will now briefly get a progress window followed by the end of installation screen. Click Exit and confirm by clicking Yes.
Remember to install the 10.1.0.2 Installer on ALL cluster nodes. Note that you may need to ADD the 9.2 $ORACLE_HOME name on the "File Locations Screen" for other nodes. It will ask if you want to specify a non-empty directory, say "Yes".
2.6 Run the 10.1.0.2 Oracle Universal Installer to patch the Oracle Cluster Manager (ORACM) to 9.2.0.5
These steps only need to be performed on the node that you are installing from (typically Node 1).
The 10.1.0.2 OUI will use SSH (Secure Shell) if it is configured. If it is not configured it will use RSH (Remote Shell). If you have SSH configured on your cluster, test and make sure that you can SSH and SCP to all nodes of the cluster without being prompted. If you do not have SSH configured, skip this step and run the installer from $ORACLE_BASE/oui/bin as noted below.
SSH Test:
As oracle user, check for user equivalence for the oracle account by performing a secure copy (scp) to each node (public and private) in the cluster. Example:
RAC1:
$ touch /u01/sshtest
$ scp /u01/sshtest rac2:/u01/sshtest1
$ scp /u01/sshtest int-rac2:/u01/sshtest2
RAC2:
$ touch /u01/sshtest
$ scp /u01/sshtest rac1:/u01/sshtest1
$ scp /u01/sshtest int-rac1:/u01/sshtest2
$ ls /u01/sshtest*
/u01/sshtest /u01/sshtest1 /u01/sshtest2
RAC1:
$ ls /u01/sshtest*
/u01/sshtest /u01/sshtest1 /u01/sshtest2
Run the installer from the 9.2.0.5 oracm staging location:
$ cd $ORACLE_HOME/9205/Disk1/oracm
$ ./runInstaller
Respond to the installer prompts as shown below:
At the "Welcome Screen", click Next.
At the "File Locations Screen", make sure the source location is to the products.xml file in the 9.2.0.5 patchset location under Disk1/stage. Also verify the destination listed is your ORACLE_HOME directory. Change the $ORACLE_HOME name from the dropdown list to the 9.2 $ORACLE_HOME name. Click Next.
At the "Available Products Screen", Check "Oracle9iR2 Cluster Manager 9.2.0.5.0". Click Next.
At the public node information screen, enter the public node names and click Next.
At the private node information screen, enter the interconnect node names. Click Next.
Click Install at the summary screen.
You will now briefly get a progress window followed by the end of installation screen. Click Exit and confirm by clicking Yes.
2.7 Modify the ORACM configuration files to utilize the hangcheck-timer.
These steps need to be performed on ALL nodes.
Modify the $ORACLE_HOME/oracm/admin/cmcfg.ora file:
Add the following line:
KernelModuleName=hangcheck-timer
Adjust the value of the MissCount line based on the sum of the hangcheck_tick and hangcheck_margin values (at least 210 with the values used above):
MissCount=210
Make sure that you can ping each of the names listed in the private and public node name sections from each node. Example:
$ ping rac2
PING opcbrh2.us.oracle.com (138.1.137.46) from 138.1.137.45 : 56(84) bytes of data.
64 bytes from opcbrh2.us.oracle.com (138.1.137.46): icmp_seq=0 ttl=255 time=1.295 msec
64 bytes from opcbrh2.us.oracle.com (138.1.137.46): icmp_seq=1 ttl=255 time=154 usec
Verify that a valid CmDiskFile line exists in the following format:
CmDiskFile=file or raw device name
In the preceding command, the file or raw device must be valid. If a file is used but does not exist, then the file will be created if the base directory exists. If a raw device is used, then the raw device must exist and have the correct ownership and permissions. Sample cmcfg.ora file:
ClusterName=Oracle Cluster Manager, version 9i
MissCount=210
PrivateNodeNames=int-rac1 int-rac2
PublicNodeNames=rac1 rac2
ServicePort=9998
CmDiskFile=/u04/quorum.dbf
KernelModuleName=hangcheck-timer
HostName=int-rac1
Note: The cmcfg.ora file should be the same on both nodes with the exception of the HostName parameter which should be set to the local (internal) hostname.
Make sure all of these changes have been made to all RAC nodes. More information on ORACM parameters can be found in the following note:
Note 222746.1 - RAC Linux 9.2: Configuration of cmcfg.ora and ocmargs.ora
Note: At this point it would be a good idea to patch to the latest ORACM, especially if you have more than 2 nodes. For more information see:
Note 278156.1 - ORA-29740 or ORA-29702 After Applying 9.2.0.5 Patchset on RAC / Linux
2.8 Start the ORACM (Oracle Cluster Manager)
These steps need to be performed on ALL nodes.
Change to the $ORACLE_HOME/oracm/bin directory, switch to the root user, and start the ORACM:
$ cd $ORACLE_HOME/oracm/bin
$ su root
# ./ocmstart.sh
oracm &1 >/u01/app/oracle/product/9.2.0/oracm/log/cm.out &
Verify that ORACM is running with the following:
# ps -ef | grep oracm
On RHEL 3.0, add the -m option:
# ps -efm | grep oracm
You should see several oracm threads running. Also verify that the ORACM version is the same on each node:
# cd $ORACLE_HOME/oracm/log
# head -1 cm.log
oracm, version[ 9.2.0.2.0.49 ] started {Fri May 14 09:22:28 2004 }
--------------------------------------------------------------------------------
3.0 Installing RAC
The Real Application Clusters installation process includes four major tasks:
1. Install 9.2.0.4 RAC
2. Patch the RAC installation to 9.2.0.5
3. Start the GSD
4. Create and configure your database
3.1 Install 9.2.0.4 RAC
These steps only need to be performed on the node that you are installing from (typically Node 1).
Note: Due to bug 3547724, before running the RAC install, temporarily create (as root) a symbolic link /oradata pointing to an oradata directory that has space available:
# mkdir /u04/oradata
# chmod 777 /u04/oradata
# ln -s /u04/oradata /oradata
Install 9.2.0.4 RAC into your $ORACLE_HOME by running the installer from the 9.2.0.4 cd or your original stage location for the 9.2.0.4 install.
Use the following commands to start the installer:
% cd /tmp
% /cdrom/runInstaller
Or cd to /stage/Disk1 and run ./runInstaller
Respond to the installer prompts as shown below:
At the "Welcome Screen", click Next.
At the "Cluster Node Selection Screen", make sure that all RAC nodes are selected.
At the "File Locations Screen", verify the destination listed is your ORACLE_HOME directory and that the source directory is pointing to the products.jar from the 9.2.0.4 cd or staging location.
At the "Available Products Screen", check "Oracle 9i Database 9.2.0.4". Click Next.
At the "Installation Types Screen", check "Enterprise Edition" (or whichever option your prefer), click Next.
At the "Database Configuration Screen", check "Software Only". Click Next.
At the "Shared Configuration File Name Screen", enter the path of the CFS or NFS srvm file created at the beginning of step 2.3 or the raw device created for the shared configuration file. Click Next.
Click Install at the summary screen. Note that some of the items installed will say "9.2.0.1" for the version; this is normal because only some items needed to be patched up to 9.2.0.4.
You will now get a progress window, run root.sh when prompted.
You will then see the end of installation screen. Click Exit and confirm by clicking Yes.
Note: You can now remove the /oradata symbolic link:
# rm /oradata
3.2 Patch the RAC Installation to 9.2.0.5
These steps only need to be performed on the node that you are installing from.
Run the installer from the 9.2.0.5 staging location:
$ cd $ORACLE_HOME/9205/Disk1
$ ./runInstaller
Respond to the installer prompts as shown below:
At the "Welcome Screen", click Next.
View the "Cluster Node Selection Screen", click Next.
At the "File Locations Screen", make sure the source location is to the products.xml file in the 9.2.0.5 patchset location under Disk1/stage. Also verify the destination listed is your ORACLE_HOME directory. Change the $ORACLE_HOME name from the dropdown list to the 9.2 $ORACLE_HOME name. Click Next.
At the "Available Products Screen", Check "Oracle9iR2 PatchSets 9.2.0.5.0". Click Next.
Click Install at the summary screen.
You will now get a progress window, run root.sh when prompted.
You will then see the end of installation screen. Click Exit and confirm by clicking Yes.
3.3 Start the GSD (Global Service Daemon)
These steps need to be performed on ALL nodes.
Start the GSD on each node with:
% gsdctl start
Successfully started GSD on local node
Then check the status with:
% gsdctl stat
GSD is running on the local node
If the GSD does not stay up, try running 'srvconfig -init -f' from the OS prompt. If you get a raw device exception error or PRKR-1064 error then see the following note to troubleshoot:
Note 212631.1 - Resolving PRKR-1064 in a RAC Environment
Note: After confirming that the GSD starts, if you are on Red Hat Advanced Server 3.0, restore the original gcc and g++:
# rm /usr/bin/gcc
# mv /usr/bin/gcc3.2.3 /usr/bin/gcc
# rm /usr/bin/g++
# mv /usr/bin/g++3.2.3 /usr/bin/g++
3.4 Create a RAC Database using the Oracle Database Configuration Assistant
These steps only need to be performed on the node that you are installing from (typically Node 1).
The Oracle Database Configuration Assistant (DBCA) will create a database for you. The DBCA creates your database using the Optimal Flexible Architecture (OFA), meaning that it creates your database files, including the default server parameter file, using standard file naming and file placement practices. The primary phases of DBCA processing are:
Verify that you correctly configured the shared disks for each tablespace (for non-cluster file system platforms)
Create the database
Configure the Oracle network services
Start the database instances and listeners
Oracle Corporation recommends that you use the DBCA to create your database. This is because the DBCA preconfigured databases optimize your environment to take advantage of Oracle9i features such as the server parameter file and automatic undo management. The DBCA also enables you to define arbitrary tablespaces as part of the database creation process. So even if you have datafile requirements that differ from those offered in one of the DBCA templates, use the DBCA. You can also execute user-specified scripts as part of the database creation process.
Note: Prior to running the DBCA it may be necessary to run the NETCA tool or to manually set up your network files. To run the NETCA tool execute the command netca from the $ORACLE_HOME/bin directory. This will configure the necessary listener names and protocol addresses, client naming methods, Net service names and Directory server usage.
If you are using OCFS or NFS, launch DBCA with the -datafileDestination option and point to the shared location where Oracle datafiles will be stored. Example:
% cd $ORACLE_HOME/bin
% dbca -datafileDestination /ocfs/oradata
If you are using RAW, launch DBCA without the -datafileDestination option. Example:
% cd $ORACLE_HOME/bin
% dbca
Respond to the DBCA prompts as shown below:
Choose Oracle Cluster Database option and select Next.
The Operations page is displayed. Choose the option Create a Database and click Next.
The Node Selection page appears. Select the nodes that you want to configure as part of the RAC database and click Next.
The Database Templates page is displayed. The templates other than New Database include datafiles. Choose New Database and then click Next. Note: The Show Details button provides information on the database template selected.
DBCA now displays the Database Identification page. Enter the Global Database Name and Oracle System Identifier (SID). The Global Database Name is typically of the form name.domain, for example mydb.us.oracle.com while the SID is used to uniquely identify an instance (DBCA should insert a suggested SID, equivalent to name1 where name was entered in the Database Name field). In the RAC case the SID specified will be used as a prefix for the instance number. For example, MYDB, would become MYDB1, MYDB2 for instance 1 and 2 respectively.
The Database Options page is displayed. Select the options you wish to configure and then choose Next. Note: If you did not choose New Database from the Database Template page, you will not see this screen.
Select the connection options desired from the Database Connection Options page. Click Next.
DBCA now displays the Initialization Parameters page. This page comprises a number of Tab fields. Modify the Memory settings if desired and then select the File Locations tab to update information on the Initialization Parameters filename and location. The option Create persistent initialization parameter file is selected by default. If you have a cluster file system, then enter a file system name, otherwise a raw device name for the location of the server parameter file (spfile) must be entered. The button File Location Variables… displays variable information. The button All Initialization Parameters… displays the Initialization Parameters dialog box. This box presents values for all initialization parameters and indicates whether they are to be included in the spfile to be created through the check box, included (Y/N). Instance specific parameters have an instance value in the instance column. Complete entries in the All Initialization Parameters page and select Close. Note: There are a few exceptions to what can be altered via this screen. Ensure all entries in the Initialization Parameters page are complete and select Next.
DBCA now displays the Database Storage Window. This page allows you to enter file names for each tablespace in your database.
The Database Creation Options page is displayed. Ensure that the option Create Database is checked and click Finish.
The DBCA Summary window is displayed. Review this information and then click OK. Once you click the OK button and the summary screen is closed, it may take a few moments for the DBCA progress bar to start. DBCA then begins to create the database according to the values specified.
During the database creation process, you may see the following error:
ORA-29807: specified operator does not exist
This is a known issue (bug 2925665). You can click on the "Ignore" button to continue. Once DBCA has completed database creation, remember to run the 'prvtxml.plb' script from $ORACLE_HOME/rdbms/admin independently, as the user SYS. It is also advised to run the 'utlrp.sql' script to ensure that there are no invalid objects in the database at this time.
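One way to run these scripts, assuming the standard script locations under $ORACLE_HOME/rdbms/admin, is:
$ sqlplus "/ as sysdba"
SQL> @?/rdbms/admin/prvtxml.plb
SQL> @?/rdbms/admin/utlrp.sql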
A new database now exists. It can be accessed via Oracle SQL*PLUS or other applications designed to work with an Oracle RAC database.
Additional database configuration best practices can be found in the following note:
Note 240575.1 - RAC on Linux Best Practices
--------------------------------------------------------------------------------
4.0 Administering Real Application Clusters Instances
Oracle Corporation recommends that you use SRVCTL to administer your Real Application Clusters database environment. SRVCTL manages configuration information that is used by several Oracle tools. For example, Oracle Enterprise Manager and the Intelligent Agent use the configuration information that SRVCTL generates to discover and monitor nodes in your cluster. Before using SRVCTL, ensure that your Global Services Daemon (GSD) is running after you configure your database. To use SRVCTL, you must have already created the configuration information for the database that you want to administer. You must have done this either by using the Oracle Database Configuration Assistant (DBCA), or by using the srvctl add command as described below.
To display the configuration details for, for example, databases racdb1/2 on nodes racnode1/2 with instances racinst1/2, run:
$ srvctl config
racdb1
racdb2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1 /u01/app/oracle/product/9.2.0
$ srvctl status database -d racdb1
Instance racinst1 is running on node racnode1
Instance racinst2 is running on node racnode2
Examples of starting and stopping RAC follow:
$ srvctl start database -d racdb2
$ srvctl stop database -d racdb2
$ srvctl stop instance -d racdb1 -i racinst2
$ srvctl start instance -d racdb1 -i racinst2
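If a database was not registered by the DBCA, it can be added to the SRVCTL configuration with syntax similar to the following sketch, which reuses the example names above:
$ srvctl add database -d racdb1 -o /u01/app/oracle/product/9.2.0
$ srvctl add instance -d racdb1 -i racinst1 -n racnode1
$ srvctl add instance -d racdb1 -i racinst2 -n racnode2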
For further information on srvctl and gsdctl see the Oracle9i Real Application Clusters Administration manual.
--------------------------------------------------------------------------------
5.0 References
9.2.0.5 Patch Set Notes
Tips for Installing and Configuring Oracle9i Real Application Clusters on Red Hat Linux Advanced Server
Note 201370.1 - LINUX Quick Start Guide - 9.2.0 RDBMS Installation
Note 252217.1 - Requirements for Installing Oracle 9iR2 on RHEL3
Note 240575.1 - RAC on Linux Best Practices
Note 222746.1 - RAC Linux 9.2: Configuration of cmcfg.ora and ocmargs.ora
Note 212631.1 - Resolving PRKR-1064 in a RAC Environment
Note 220178.1 - Installing and setting up ocfs on Linux - Basic Guide
Note 246205.1 - Configuring Raw Devices for Real Application Clusters on Linux
Note 210889.1 - RAC Installation with a NetApp Filer in Red Hat Linux Environment
Note 153960.1 - FAQ: X Server testing and troubleshooting
Note 232355.1 - Hangcheck Timer FAQ
RAC/Linux certification matrix
Oracle9i Real Application Clusters Administration
Oracle9i Real Application Clusters Concepts
Oracle9i Real Application Clusters Deployment and Performance
Oracle9i Real Application Clusters Setup and Configuration
Oracle9i Installation Guide Release 2 for UNIX Systems: AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, Linux Intel, and Sun Solaris
--------------------------------------------------------------------------------
Copyright © 2005, Oracle. All rights reserved. Legal Notices and Terms of Use.
Doc ID: Note:241114.1
Subject: Step-By-Step Installation of RAC on Linux - Single Node (Oracle9i 9.2.0 with OCFS)
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 11-JUN-2003
Last Revision Date: 12-FEB-2004
PURPOSE
To provide details of how to configure Oracle9i Oracle Real Application Clusters (RAC) 9.2.0.3.0 on Linux on a single node.
SCOPE & APPLICATION
This article is intended for seasoned Database Administrators (DBAs), System Administrators (SAs) and Application Developers (ADs) considering a migration from a single-instance to a multiple-instance database. Whether for reasons of high availability, to verify application suitability for use with RAC, or to evaluate RAC conceptually and practically, this article provides instructions on how to configure and test a fully functional RAC database on a single node, without having to purchase cluster hardware.
This article focuses on the technical implementation of RAC, rather than explanation of RAC concepts or fundamentals. Users wishing to gain a further understanding of RAC concepts, administration and manageability should review the relevant documentation offered at http://otn.oracle.com.
Detailed instructions on configuring RAC on a single node are provided; however, this configuration is not certified or supported by Oracle Support Services - it is provided for evaluation/educational purposes only.
This article has been compiled from several sources, and the configurations contained within were taken from a working system. Given the range of discrete technologies involved, refer to the References section for related and referenced material. The article is intended to be used in conjunction with, but not as a replacement for, the Red Hat and Oracle Installation guides, Release Notes et al. For the sake of brevity, the article contains some, but not all, necessary information found in the other references.
This article was based on:
Dell GX-260s P4 2.4Ghz, 1Gb physical memory, 40Gb IDE hard disk, CDROM
Red Hat Linux Advanced Server 2.1 with Errata 16 (2.4.9-e.16)
Oracle9i Oracle Real Application Clusters version 9.2.0.1.0
Oracle Server Enterprise Edition 9.2.0.1.0 for Linux (Part A91845-01)
Oracle Cluster File System (OCFS) version 2.4.9-e-1.0.8-4
Step-By-Step Installation of RAC on Linux - Single Node (Oracle9i 9.2.0 with OCFS)
[Part 1: Operating System]
1. Install Red Hat Linux Advanced Server 2.1 (Pensacola)
Disk space permitting, select an 'Advanced Server' installation type (as opposed to Custom). This ensures most Oracle-required packages are installed.
Disk Druid was used to partition the single 40Gb IDE disk into a root (/) partition of 10Gb and swap partition of 2Gb. The remaining free disk space was left for future partitioning.
[root@arachnid root]# df -kl
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/hda1 10080488 1217936 8350484 13% /
none 509876 0 509876 0% /dev/shm
Lilo was selected as the boot loader, to be installed to the master boot record (/dev/hda).
During Firewall Configuration, the 'No firewall' option was selected.
Select your preferred Window Manager (Gnome, KDE or both) as well as the Software Development option during Package Group Selection. Selecting the Gnome and Software Development options (without individual package selection) results in an install size of approximately 1.2Gb.
Upon completion and reboot, uni-processor servers should select the appropriate kernel from which to boot. The default kernel, called linux, is the Symmetric Multi-Processing (SMP) kernel and will hang on reboot if using uni-processor hardware, so select linux-up (uni-processor) instead. Modify the default kernel in /etc/lilo.conf or /etc/grub.conf later.
2. Install other required packages
Ensure the binutils package [binutils-2.11.90.0.8-12.i386.rpm] is installed - this is required for /usr/bin/ar, as, ld, nm, size, etc. utilities used by Oracle later e.g.:
[root@arachnid /]# rpm -qa | grep -i binutils
binutils-2.11.90.0.8-12
[root@arachnid /]#
Install any other required packages such as pdksh, wu-ftp, Netscape, xpdf, zip, unzip, etc.
Those attempting to install Red Hat Linux Advanced Server 2.1 on similar hardware (e.g. Dell OptiPlex GX260, Compaq Evo D510, or any machine comprising an Intel Extreme graphics card with the i810 chipset: i810, i810-dc100, i810e, i815, i830M, 845G, i810_audio device, and/or an Intel Pro/1000 MT network interface card) should be aware that Errata 2.4.9-e.9 (or higher) and/or an upgrade of XFree86 from 4.1.0 to 4.3.0 is required to correctly discover the network card and overcome X/Motif display issues. A BIOS upgrade and/or modification of Onboard Video Buffer settings may also be required to realise optimal graphics performance.
Note: a fully working X-Windows environment is required to install Oracle, whether in silent or interactive modes.
XFree86 version 4.3.0 is available from http://www.xfree86.org or mirror site (65Mb).
Red Hat Advanced Server 2.1 Errata are available from Red Hat (http://www.redhat.com), including Red Hat Network (http://rhn.redhat.com).
Red Hat is unlikely to release an XFree86 upgrade for Red Hat Linux Advanced Server 2.1, although one should be available in Red Hat Linux Advanced Server 3.0 (once in Production).
After initial installation (2.4.9-e.3), attempting to configure networking using neat [redhat-config-network-0.9.10-2.1.noarch.rpm] may core dump. An upgrade to version redhat-config-network-1.0.3-1 should prove stable.
[root@arachnid rpms]# rpm -Uvh redhat-config-network-1.0.3-1.i386.rpm
Preparing... ################################### [100%]
1:redhat-config-network ################################### [100%]
For ease of (re)installation, the contents of the Advanced Server distribution's RPMS directories (cdroms 1-3) may be copied to local disk (e.g. /rpms).
3. Apply latest Errata (Operating System/Kernel patches)
Here, the default kernel version (2.4.9-e.3) was upgraded to Errata e.16 (2.4.9-e.16) immediately after initial installation. Note that an installation of the new kernel [# rpm -ivh kernel-...] is different from an upgrade [# rpm -Uvh kernel-...]: an installation creates a new kernel of the higher version but retains the original kernel for fallback, whereas an upgrade replaces the original kernel.
If you upgrade to a higher kernel and use the LILO boot manager, modify /etc/lilo.conf to reflect the upgraded /boot kernel file names, remembering to run lilo afterwards. Grub automatically modifies its configuration file (/etc/grub.conf).
A complete list of Oracle/Red Hat supported kernel versions is available; see How To Check the Supportability of RedHat AS. Applying the latest supported kernel is strongly recommended.
Reboot the server and boot to the new kernel. Configure any devices newly discovered by kudzu.
The uname output should look something like:
[root@arachnid /]# uname -a
Linux arachnid 2.4.9-e.16 #1 Fri Mar 21 05:55:06 PST 2003 i686 unknown
4. Make the server network accessible
Configure networking on the server. Where possible, obtain and use fixed IP addressing. Although it is possible to use DHCP addressing, any change in IP address after installing and configuring OCFS and the Cluster Manager (OCM) may result in issues later. To prevent such issues, specify a non-fully qualified domain name when selecting the hostname for the server, e.g.:
[root@arachnid /]# hostname
arachnid
At least for the duration of initial installation/configuration, it may be helpful to gain remote access to the server. Enable xinetd services (telnet, wu-ftp, rsh, etc.) as required by modifying their corresponding files in /etc/xinetd.d, then restart xinetd e.g.:
[/etc/xinetd.d/telnet]
# default: on
# description: The telnet server serves telnet sessions; it uses \
# unencrypted username/password pairs for authentication.
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no # modify from yes to no
}
[root@arachnid /]# service xinetd restart
Stopping xinetd: [ OK ]
Starting xinetd: [ OK ]
[root@arachnid /]#
Note: Remote shell (rsh) must be enabled before installing Oracle Cluster Manager.
5. Configure kernel and user limits
Having met the minimum Oracle requirements (see Installation guide), configure the kernel and user limits appropriately according to available resources.
The following are the contents of core initialisation/configuration files used to tune the kernel and increase user limits. Consult with your SA or Red Hat before implementing any changes that you are unfamiliar with.
[/etc/sysctl.conf]
# Disables packet forwarding
net.ipv4.ip_forward = 0
# Enables source route verification
net.ipv4.conf.default.rp_filter = 1
# Disables the magic-sysrq key
kernel.sysrq = 0
net.core.rmem_default = 65535
net.core.rmem_max = 65535
net.core.wmem_default = 65535
net.core.wmem_max = 65535
fs.file-max = 65535
fs.aio-max-size = 65535
kernel.sem = 250 35000 100 128
kernel.shmmin = 1
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.shmmax = 522106880
#vm.freepages = 1242 2484 3726 # only use with pre-e.12 kernel (ie. workaround for kswapd issue)
[/etc/security/limits.conf]
...
oracle soft nofile 60000
oracle hard nofile 65535
oracle soft nproc 60000
oracle hard nproc 65535
Changes to the above files (except limits.conf) take effect upon server reboot. Linux provides dynamic kernel tuning via the /proc filesystem; most kernel parameters can be changed immediately (dynamically) by echoing new values to the desired parameter, e.g.:
[root@arachnid /proc/sys/kernel]# cat shmmax
522106880
[root@arachnid /proc/sys/kernel]# echo 4294967295 > shmmax
[root@arachnid /proc/sys/kernel]# cat shmmax
4294967295
[root@arachnid /proc/sys/kernel]#
Note: dynamic changes made via /proc are volatile i.e. lost on reboot.
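If the same values have also been added to /etc/sysctl.conf, they can be re-applied without a reboot by re-reading that file, e.g.:
[root@arachnid /]# /sbin/sysctl -p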
A complete description of the above parameters and recommended values are available from Red Hat, and are discussed in detail throughout the material cited in the Reference section.
6. Configure I/O Fencing
In RAC configurations of two or more nodes, an I/O fencing model is required to detect when one or more nodes die or become unresponsive; this helps to prevent data corruption, i.e. a node in an unknown state continuing to write to the shared disk. Two I/O fencing models are discussed: watchdog and hangcheck-timer.
Note: Neither watchdog nor hangcheck-timer configuration is required for a single node configuration. However, for the purpose of emulating a two or more node configuration, either watchdog (for 9.2.0.1.0) or hangcheck-timer (for 9.2.0.2.0+) can be implemented in a single node configuration.
Watchdog:
In 9.2.0.1.0 (the 9.2.0 base release), Oracle originally recommended using the softdog module (also known as watchdog) as the I/O fencing model. However, due to performance and stability issues when using watchdog with the /dev/watchdog device, Oracle has since recommended using /dev/null as the watchdog device file.
To use /dev/watchdog device, perform the following steps:
Check whether the watchdog device exists i.e.:
[root@arachnid /]# ls -l /dev/watchdog
crw------- 1 oracle root 10, 130 Sep 24 2001 /dev/watchdog
If it does not exist, issue the following commands as root:
[root@arachnid /]# mknod /dev/watchdog c 10 130
[root@arachnid /]# chmod 600 /dev/watchdog
[root@arachnid /]# chown oracle /dev/watchdog
To use /dev/null with watchdog, modify the $ORACLE_HOME/oracm/admin/ocmargs.ora file as follows:
watchdogd -g dba -d /dev/null
oracm
norestart 1800
To implement watchdog, modify the /etc/rc.local file to install the softdog module at boot time i.e.:
[/etc/rc.local]
#!/bin/sh
touch /var/lock/subsys/local
/sbin/insmod softdog soft_margin=60
Hangcheck Timer:
From 9.2.0.2.0 (9.2.0 Patch 1) onward, Oracle recommends using a new I/O fencing model, the hangcheck-timer module, in lieu of watchdog. Oracle Cluster Manager configuration changes are required if you have already implemented RAC using 9.2.0.1.0 and then upgrade to 9.2.0.2.0 or higher. The reason for the I/O fencing model change and the hangcheck-timer configuration requirements are discussed in the Oracle Server 9.2.0.2.0 (and onwards) patchset readme.
To configure the hangcheck-timer (recommended), refer to the 9.2.0.2.0 or higher patchset readme for specific instructions.
To use the hangcheck timer, modify the /etc/rc.local file to install the hangcheck-timer module at boot time i.e.:
[/etc/rc.local]
#!/bin/sh
touch /var/lock/subsys/local
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
7. Create partitions and filesystem for Oracle software ($ORACLE_HOME)
If not already performed during step 1, use /sbin/fdisk to create a partition to install Oracle software and binaries. In our example, an extended partition (/dev/hda3) of 26Gb is created, in which a logical partition (/dev/hda5) of 10Gb is created e.g.:
[root@arachnid kernel]# fdisk /dev/hda
The number of cylinders for this disk is set to 4865.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (1531-4865, default 1531):
Using default value 1531
Last cylinder or +size or +sizeM or +sizeK (1531-4865, default 4865): +10000m
Command (m for help): p
Disk /dev/hda: 255 heads, 63 sectors, 4865 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 1275 10241406 83 Linux
/dev/hda2 1276 1530 2048287+ 82 Linux swap
/dev/hda3 1531 4865 26788387+ 5 Extended
/dev/hda5 1531 2805 10241406 83 Linux
Command (m for help): w
After writing all changes, reboot the server to ensure the new partition table entries are read.
Create a filesystem on top of the partition(s) e.g.:
[root@arachnid /]# mkfs.ext2 -j /dev/hda5
Note: specifying the -j (journalling) flag when running mkfs.ext2 will create a journalled filesystem of type ext3.
Create a mount point upon which to attach the filesystem e.g.:
[root@arachnid /]# mkdir /u01;chmod 777 /u01
Optionally change ownership of the mount to the oracle user e.g.:
[root@arachnid /]# chown oracle:dba /u01
Mount the filesystem as root e.g.:
[root@arachnid /]# mount -t ext3 /dev/hda5 /u01
To automount the file system upon reboot, update /etc/fstab e.g.:
[/etc/fstab]
LABEL=/ / ext3 defaults 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
/dev/hda2 swap swap defaults 0 0
/dev/cdrom /mnt/cdrom iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0 0
/dev/hda5 /u01 ext3 defaults 1 1
8. Create the Unix dba group
If it does not already exist, create the Unix dba group e.g.:
[root@arachnid /]# groupadd dba -g 501
[root@arachnid /]# grep dba /etc/group
dba:x:501:
9. Create the Oracle user
If it does not already exist, create the Oracle software owner/user e.g.:
[root@arachnid /]# useradd oracle -u 501 -g dba -d /home/oracle \
-s /bin/bash
[root@arachnid /]# grep oracle /etc/passwd
oracle:x:501:501::/home/oracle:/bin/bash
[root@arachnid /]# passwd oracle
Changing password for user oracle
New password:
Retype new password:
passwd: all authentication tokens updated successfully
10. Configure Oracle user environments
For a single instance, single database configuration, the Oracle environment can be appended to the oracle user's existing login script e.g. [/home/oracle/.bash_profile], so that the Oracle environment is defined and database accessible immediately upon oracle user login.
In this single node RAC configuration, the intention is to emulate two nodes, therefore two separate environment definition files are created - one defining the environment for instance A, the other for instance B.
Copy and modify the following files (V920A, V920B) to suit your environment.
The relevant file is sourced (.) by the oracle user as required, depending on which instance database access is required from, e.g.:
[root@arachnid /]# su - oracle
[oracle@arachnid oracle]$ . V920A
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0]$ echo $ORACLE_SID
V920A
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0]$
[root@arachnid /]# su - oracle
[oracle@arachnid oracle]$ . V920B
[oracle@V920B@arachnid /u01/app/oracle/product/9.2.0]$ echo $ORACLE_SID
V920B
[oracle@V920B@arachnid /u01/app/oracle/product/9.2.0]$
[oracle@V920A@arachnid /home/oracle]$ ls -l
total 8
-rwxr-xr-x 1 oracle dba 932 Jun 10 18:09 V920A
-rwxr-xr-x 1 oracle dba 933 Jun 10 18:10 V920B
--- Start sample Oracle environment script [/home/oracle/V920A] ---
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
unset USERNAME
#oracle
ORACLE_SID=V920A;export ORACLE_SID
ORACLE_BASE=/u01/app/oracle;export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/9.2.0;export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;export LD_LIBRARY_PATH
TNS_ADMIN=$ORACLE_HOME/network/admin;export TNS_ADMIN
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib:$ORACLE_HOME/assistants/dbca/jlib:$ORACLE_HOME/assistants/dbma/jlib:$ORACLE_HOME/owm/jlib:$ORACLE_HOME/jdbc/lib/classes12.zip;export CLASSPATH
PS1='[\u@$ORACLE_SID@\h $PWD]$ ';export PS1
PATH=$ORACLE_HOME/bin:$PATH;export PATH
alias ll='ls -l --color'
alias cdo='cd $ORACLE_HOME'
alias sql='sqlplus "/ as sysdba"'
alias scott='sqlplus scott/tiger'
umask 022
cd $ORACLE_HOME
--- End sample Oracle environment script [/home/oracle/V920A] ---
--- Start sample Oracle environment script [/home/oracle/V920B] ---
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
unset USERNAME
#oracle
ORACLE_SID=V920B;export ORACLE_SID
ORACLE_BASE=/u01/app/oracle;export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/9.2.0;export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;export LD_LIBRARY_PATH
TNS_ADMIN=$ORACLE_HOME/network/admin;export TNS_ADMIN
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib:$ORACLE_HOME/assistants/dbca/jlib:$ORACLE_HOME/assistants/dbma/jlib:$ORACLE_HOME/owm/jlib:$ORACLE_HOME/jdbc/lib/classes12.zip;export CLASSPATH
PS1='[\u@$ORACLE_SID@\h $PWD]$ ';export PS1
PATH=$ORACLE_HOME/bin:$PATH;export PATH
alias ll='ls -l --color'
alias cdo='cd $ORACLE_HOME'
alias sql='sqlplus "/ as sysdba"'
alias scott='sqlplus scott/tiger'
umask 022
cd $ORACLE_HOME
--- End sample Oracle environment script [/home/oracle/V920B] ---
[Part 2: Downloads]
11. Download Oracle Cluster File System (OCFS)
Oracle Cluster File System (OCFS) presents a consistent file system image across the servers in a cluster. OCFS allows administrators to take advantage of a filesystem for Oracle database files (data files, control files, and archive logs) and configuration files. This eases administration of Oracle9i Real Application Clusters (RAC).
When installing RAC on a cluster of two or more nodes, OCFS provides an alternative to using raw devices. In a single node RAC setup such as this one, the filesystem cache consistency issues that would otherwise affect a multi-node standard filesystem configuration do not apply, so standard filesystems such as ext2 or ext3 may be used to store Oracle datafiles. To more fully emulate a multi-node RAC configuration, the steps involved in configuring OCFS volumes are nevertheless provided.
Download the version of OCFS appropriate for your system. OCFS is readily available for download from http://oss.oracle.com, Oracle's Linux Open Source Projects development website. OCFS is offered under the GNU General Public License (GPL).
At time of writing, the latest available version was 2.4.9-e-1.0.8-4 suitable for kernel versions 2.4.9-e.12 and higher.
In this case, the minimum required files are:
ocfs-2.4.9-e-1.0.8-4.i686.rpm
ocfs-support-1.0.8-4.i686.rpm
ocfs-tools-1.0.8-4.i686.rpm
Since publication of this article, OCFS 1.0.9 has been made available from MetaLink (Patch 3034004); the latest revisions will also be available from http://oss.oracle.com in the near future.
12. Download latest Oracle Server 9.2.0 Patchset
Download the latest Oracle Server 9.2.0 Patchset. At the time of writing, the latest available patchset is 9.2.0.3.0. The patchset (226Mb) not only contains core Oracle Server patches, but also Oracle Cluster Manager patches. The patch is readily available for download from MetaLink > Patches as Patch Number 2761332.
Read the readme, then re-read the readme.
Note: although this article exists solely for evaluative purposes, significant changes to Oracle Cluster Manager configuration have been made from 9.2.0.2.0 onward. Applying the latest OCM/Oracle Server patchset is recommended. Using the base release of Oracle Cluster Manager and Oracle Server (9.2.0.1.0) has been tested and works; however, expect regular instance failure accompanied by the following error in /var/log/messages, as Oracle is only capable of O_DIRECT writes from the 9.2.0.2.0 patchset onward.
Jul 15 13:02:13 arachnid kernel: (2914) TRACE: ocfs_file_write(1271) non O_DIRECT write, fileopencount=1
Unzip and untar the latest patchset to a temporary directory e.g.:
[root@arachnid /]# mkdir -p /u01/app/oracle/patches/92030
[root@arachnid /]# mv p2761332_92030_LINUX32.zip /u01/app/oracle/patches/92030
[root@arachnid /]# cd /u01/app/oracle/patches/92030
[root@arachnid /]# unzip p2761332_92030_LINUX32.zip
[Part 3: Oracle Cluster File System]
13. Create additional Oracle Cluster File System (OCFS) partitions
In preparation for installing OCFS to store the database files, create at least two partitions using /sbin/fdisk.
The Oracle Cluster Manager quorum disk should reside on a dedicated partition. The quorum file itself need only be at least 1Mb in size; however, be aware that OCFS volumes require space for volume structures, so the minimum partition size should be 50Mb. The number of files to reside in an OCFS partition, and the number of nodes accessing it, dictate the minimum size of the OCFS partition.
The size of the OCFS partition used to store database files should exceed the total size of the database files, allowing ample room for growth. In our case, 5Gb was allocated for a single database only.
Following is sample fdisk output:
[root@arachnid /]# fdisk /dev/hda
The number of cylinders for this disk is set to 4865.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (2806-4865, default 2806):
Using default value 2806
Last cylinder or +size or +sizeM or +sizeK (2806-4865, default 4865): +10m
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (2808-4865, default 2808):
Using default value 2808
Last cylinder or +size or +sizeM or +sizeK (2808-4865, default 4865): +5000m
Command (m for help): p
Disk /dev/hda: 255 heads, 63 sectors, 4865 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 1275 10241406 83 Linux
/dev/hda2 1276 1530 2048287+ 82 Linux swap
/dev/hda3 1531 4865 26788387+ 5 Extended
/dev/hda5 1531 2805 10241406 83 Linux
/dev/hda6 2806 2807 16033+ 83 Linux
/dev/hda7 2808 3445 5124703+ 83 Linux
Command (m for help):
Create mount points upon which to attach the filesystems e.g.:
[root@arachnid /]# mkdir /quorum /cfs01;chmod 777 /quorum /cfs01
Logical partition /dev/hda6 (of 10Mb) will become /quorum (ocfs). Logical partition /dev/hda7 (of 5Gb) will become /cfs01 (ocfs).
After writing all changes, reboot the server to ensure the new partition table entries are read. Verify the partitions are visible using /sbin/fdisk -l or cat /proc/partitions.
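For example (illustrative; device names will vary on your system):
[root@arachnid /]# cat /proc/partitions
[root@arachnid /]# /sbin/fdisk -l /dev/hda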
14. Install the Oracle Cluster File System (OCFS) software
Install the appropriate OCFS packages for your kernel as root e.g.:
[root@arachnid /rpms]# rpm -ivh ocfs-2.4.9-e-1.0.8-4.i686.rpm ocfs-support-1.0.8-4.i686.rpm ocfs-tools-1.0.8-4.i686.rpm
A complete list of files installed as part of OCFS can be seen by querying the rpm database or packages e.g.:
[root@arachnid /rpms]# rpm -qa | grep -i ocfs
ocfs-support-1.0.8-4
ocfs-2.4.9-e-1.0.8-4
ocfs-tools-1.0.8-4
[root@arachnid /rpms]# rpm -ql ocfs-support-1.0.8-4
/etc/init.d/ocfs
/sbin/load_ocfs
/sbin/mkfs.ocfs
/sbin/ocfs_uid_gen
[root@arachnid /rpms]# rpm -ql ocfs-2.4.9-e-1.0.8-4
/lib/modules/2.4.9-e-ABI/ocfs
/lib/modules/2.4.9-e-ABI/ocfs/ocfs.o
[root@arachnid /rpms]# rpm -ql ocfs-tools-1.0.8-4
/usr/bin
/usr/bin/cdslctl
/usr/bin/debugocfs
/usr/bin/ocfstool
/usr/share
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/cdslctl.1.gz
/usr/share/man/man1/ocfstool.1.gz
Note: the OCFS installation automatically creates the necessary rc (init) scripts to start OCFS on server reboot i.e.:
[root@arachnid /]# find . -name '*ocfs*' -print
...
./etc/rc.d/init.d/ocfs
./etc/rc.d/rc3.d/S24ocfs
./etc/rc.d/rc4.d/S24ocfs
./etc/rc.d/rc5.d/S24ocfs
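Optionally, confirm the ocfs init script is registered for the default runlevels, e.g. (illustrative output; runlevels may differ between distributions):
[root@arachnid /]# chkconfig --list ocfs
ocfs 0:off 1:off 2:off 3:on 4:on 5:on 6:off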
15. Configuring Oracle Cluster File System (OCFS)
OCFS must first be configured before you create any OCFS volumes. Guidelines, limitations, and instructions for how to configure OCFS are described in the following documents available from http://oss.oracle.com:
Oracle Cluster File System Installation Notes Release 1.0 for Red Hat Linux Advanced Server 2.1 Part B10499-01
RHAS Best Practices (http://oss.oracle.com/projects/ocfs/dist/documentation/RHAS_best_practices.txt)
United Linux Best Practices (http://oss.oracle.com/projects/ocfs/dist/documentation/UL_best_practices.txt)
OCFS Installation Notes (Part B10499-01) covers the following topics:
1. Installing OCFS rpm files (performed in step 14.)
2. Using ocfstool to generate the /etc/ocfs.conf file.
3. Creating a /var/opt/oracle/soft_start.sh script to load the ocfs module and start Oracle Cluster Manager (a minimal sketch appears at the end of this step).
4. Creating partitions using fdisk (performed in Step 13.)
5. Creating mount points for OCFS partitions (performed in step 13.).
6. Formatting OCFS partitions e.g.:
[root@arachnid /]# mkfs.ocfs -b 128 -C -g 501 -u 501 -L cfs01 -m /cfs01 -p 0775 /dev/hda7
Cleared volume header sectors
Cleared node config sectors
Cleared publish sectors
Cleared vote sectors
Cleared bitmap sectors
Cleared data block
Wrote volume header
7. Adding OCFS mounts to /etc/fstab e.g.:
LABEL=/ / ext3 defaults 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
/dev/hda2 swap swap defaults 0 0
/dev/hda5 /u01 ext3 defaults 0 0
/dev/hda6 /quorum ocfs _netdev 0 0
/dev/hda7 /cfs01 ocfs _netdev 0 0
/dev/cdrom /mnt/cdrom iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0 0
8. Tuning Red Hat Advanced Server for OCFS (performed in step 5.)
9. Swap partition configuration.
10. Network Adapter configuration.
11. OCFS limitations.
Note: OCFS Installation Notes (Part B10499-01) assumes Oracle Cluster Manager and Oracle Server patchset 9.2.0.2.0 (at least) have already been applied.
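The following is a minimal sketch of the /var/opt/oracle/soft_start.sh script mentioned in item 3 above. It simply strings together commands shown elsewhere in this article (load_ocfs from the ocfs-support package, the ocfs fstab entries from item 7, and the hangcheck-timer/ocmstart.sh commands from steps 16 and 17); adapt the paths and ordering to your own environment:
#!/bin/sh
# /var/opt/oracle/soft_start.sh - minimal sketch only; adjust for your environment
# Load the ocfs kernel module (load_ocfs is installed by the ocfs-support package)
/sbin/load_ocfs
# Mount all OCFS partitions listed in /etc/fstab
mount -a -t ocfs
# Load the hangcheck-timer module and start Oracle Cluster Manager (see steps 16 and 17)
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
export ORACLE_HOME=/u01/app/oracle/product/9.2.0
$ORACLE_HOME/oracm/bin/ocmstart.sh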
[Part 4: Oracle Cluster Manager]
16. Install Oracle Cluster Manager (OCM) software
a. Once OCFS installation is complete, load, start and mount all OCFS partitions e.g.:
[root@arachnid /]# mount -a -t ocfs
[root@arachnid /]# cat /proc/mounts
b. In our case, the quorum device is /quorum and the quorum file will be /quorum/quorum. Initialise the quorum file in /quorum as follows before attempting to start OCM:
[root@arachnid /]# touch /quorum/quorum
c. Mount the Oracle Server cdrom e.g.:
[root@arachnid /]# mount -t iso9660 /dev/cdrom /mnt/cdrom
mount: block device /dev/cdrom is write-protected, mounting read-only
d. Run the Oracle Universal Installer (OUI) as the oracle user e.g.:
[oracle@V920A@arachnid /home/oracle]$ /mnt/cdrom/runInstaller&
e. Select the option to install the Oracle Cluster Manager software only - accept default values for watchdog timings. Exit the Installer once complete.
f. Perform the steps g., h., i., j. only if you wish to pre-patch Oracle Cluster Manager (OCM) beyond the base 9.2.0.1.0 version.
g. Once installed, re-run the Installer, pointing it to the 9.2.0.3.0 products.jar file. Apply the 9.2.0.3.0 Oracle Cluster Manager patch, making sure to follow the readme.
h. If using kernel 2.4.9-e.16 or higher, the hangcheck-timer module will already exist as /lib/modules/2.4.9-e.16/kernel/drivers/char/hangcheck-timer.o. If using a kernel version of 2.4.9-e.3, e.8, e.9 or e.10, download
and install the hangcheck-timer from MetaLink > Patches - patch (2594820).
i. Remove all references or calls to watchdog (softdog) daemon from startup scripts, such as /etc/rc.local.
j. Implement the hangcheck timer by adding the following line to /etc/rc.local or /etc/rc.sysinit files:
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
17. Start the Oracle Cluster Manager (OCM)
Install (load) the hangcheck-timer module by running the following command as root:
[root@arachnid /]# /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
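To verify the module loaded, e.g.:
[root@arachnid /]# /sbin/lsmod | grep hangcheck
The hangcheck-timer module should appear in the output; if not, check /var/log/messages for insmod errors.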
Define the ORACLE_HOME environment variable for root i.e.:
[root@arachnid /]# export ORACLE_HOME=/u01/app/oracle/product/9.2.0
Start the Oracle Cluster Manager e.g.:
[root@arachnid /]# $ORACLE_HOME/oracm/bin/ocmstart.sh
Ensure the OCM processes start correctly i.e:
[root@arachnid /]# ps -ef | grep -i oracm
root 2875 1 0 17:49 pts/4 00:00:00 oracm
root 2877 2875 0 17:49 pts/4 00:00:00 oracm
root 2878 2877 0 17:49 pts/4 00:00:00 oracm
root 2879 2877 0 17:49 pts/4 00:00:00 oracm
root 2880 2877 0 17:49 pts/4 00:00:00 oracm
root 2882 2877 0 17:49 pts/4 00:00:00 oracm
root 2883 2877 0 17:49 pts/4 00:00:00 oracm
root 2884 2877 0 17:49 pts/4 00:00:00 oracm
root 2885 2877 0 17:49 pts/4 00:00:00 oracm
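If the oracm processes do not start, review the Cluster Manager log (typically $ORACLE_HOME/oracm/log/cm.log) for the reason, e.g.:
[root@arachnid /]# tail $ORACLE_HOME/oracm/log/cm.log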
[Part 5: Oracle Database Server]
18. Install the Oracle Server software
First, pre-define the intended Oracle environment (ORACLE_HOME, ORACLE_SID, etc.) so that OUI and DBCA field locations are auto-populated throughout the installation.
Start the Oracle Universal Installer as the oracle user e.g.
[oracle@V920A@arachnid /home/oracle]$ /mnt/cdrom/runInstaller&
To prevent cdrom eject issues later, invoke the installer from a directory other than the mount point (/mnt/cdrom) or any part of the mounted volume.
From the Welcome screen click Next.
The next screen should be the Cluster Node Selection screen - this screen will only appear if the Oracle Universal Installer detects the Cluster Manager is running (refer step 17). If the Cluster Manager is not running, correct this before performing this step, otherwise the Real Applications Clusters product will not appear in the list of installable products.
At the Cluster Node Selection screen, the non-fully qualified hostname (arachnid in this example) should already be listed. Because this is a single node installation only, click Next.
At the File Locations screen, confirm or enter the Source and Destination paths for the Oracle software, then click Next.
At the Available Products screen, select Oracle9i Database 9.2.0.1.0 Product, then click Next.
At the Installation Types screen, select the Enterprise Edition or Custom Installation Type. The Enterprise Edition Installation Type installs a pre-configured set of products, whereas the Custom Installation offers the ability to individually select which products to install. Click Next after making your selection. In this case, a Custom installation was performed.
Only the following products were selected from the Available Product Components screen:
Oracle9i Database 9.2.0.1.0
Enterprise Edition Options 9.2.0.1.0
Oracle Advanced Security 9.2.0.1.0
Oracle9i Real Application Clusters 9.2.0.1.0
Oracle Partitioning 9.2.0.1.0
Oracle Net Services 9.2.0.1.0
Oracle Net Listener 9.2.0.1.0
Oracle9i Development Kit 9.2.0.1.0
Oracle C++ Call Interface 9.2.0.1.0
Oracle Call Interface (OCI) 9.2.0.1.0
Oracle Programmer 9.2.0.1.0
Oracle XML Developer’s Kit 9.2.0.1.0
Oracle9i for UNIX Documentation 9.2.0.1.0
Oracle JDBC/ODBC Interfaces 9.2.0.1.0
After product selection click Next.
At the Component Locations screen, enter the destination path for Components (OUI, JRE) that are not bound to a particular Oracle Home. In this case, ORACLE_BASE (/u01/app/oracle) was used.
At the Shared Configuration File Name screen, enter the OCFS or raw device name for the shared configuration file. This configuration file is used by the srvctl utility - the configuration/administration utility to manage Real Application Clusters instances and Listeners.
At the Privileged Operating System Groups screen, confirm or enter the Unix group(s) you defined in step 8 (dba in our case), then click Next. Users who are made members of this group are implicitly granted direct access and management of the Oracle database and software.
At the Create Database screen, select No (i.e. do not create a database at this time), then click Next. If you downloaded the latest Oracle Server patchset, apply it first (outlined in later steps) before creating a database. Doing so will save time and eliminate the need to perform a database upgrade later.
At the Summary screen, review your product selections then click Install.
Perform the following actions when prompted to run $ORACLE_HOME/root.sh as root:
[root@arachnid /]# mkdir -p /var/opt/oracle
[root@arachnid /]# touch /var/opt/oracle/srvConfig.loc
[root@arachnid /]# /u01/app/oracle/product/9.2.0/root.sh
Note: the above actions prevent issues running root.sh script when no shared configuration file/destination is specified.
From the Oracle Net Configuration Assistant Welcome screen, select Perform typical configuration, then click Next.
Once complete, exit the Installer.
19. Apply the Oracle Server 9.2.0.3.0 patchset
This step is optional and is only required if you wish to pre-patch the Oracle Server software beyond the base 9.2.0.1.0 version.
Re-run the Installer ($ORACLE_HOME/bin/runInstaller&) pointing it to the 9.2.0.3.0 patchset products.jar file in the directory created in step 12.
At the Welcome screen, click Next.
At the Cluster Node Selection screen, ensure the local hostname is specified, then click Next.
At the File Locations screen, enter or browse to the 9.2.0.3.0 patchset products.jar in the source path window, then click Next.
At the Available Products screen, select Oracle9iR2 Patch Set 2 - 9.2.0.3.0 (Oracle9iR2 Patch Set 2), then click Next.
At the Summary screen, review your product selections then click Install.
Exit the Installer once complete.
20. Create and initialise the Server Configuration File
In two or more node configurations, the Server Configuration file should reside on OCFS or raw partitions. Standard file system is used in this example.
Check whether a /var/opt/oracle/srvConfig.loc, /etc/srvConfig.loc or $ORACLE_HOME/srvm/config/srvConfig.loc file already exists. If not, create it as 'root' as follows, and allow at least 100Mb for the Server Configuration file it references.
[root@arachnid /]# mkdir -p /var/opt/oracle
[root@arachnid /]# touch /var/opt/oracle/srvConfig.loc
[root@arachnid /]# chown oracle:dba /var/opt/oracle/srvConfig.loc
[root@arachnid /]# chmod 755 /var/opt/oracle/srvConfig.loc
Add the srvconfig_loc parameter to the srvConfig.loc e.g.:
srvconfig_loc=/u01/app/oracle/product/9.2.0/dbs/srvConfig.dbf
If it does not already exist, create the Server Configuration file referenced by srvconfig_loc in the /var/opt/oracle/srvConfig.loc file e.g.:
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0/dbs]$ touch srvConfig.dbf
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0/dbs]$ ls -l
total 92
lrwxrwxrwx 1 oracle dba 30 Jun 25 17:47 initV920A.ora -> initV920.ora
lrwxrwxrwx 1 oracle dba 12 Jun 25 17:48 initV920B.ora -> initV920.ora
-rw-r--r-- 1 oracle dba 3372 Jun 26 17:23 initV920.ora
-rwxr-xr-x 1 oracle dba 0 Jun 26 17:58 srvConfig.dbf
Initialise the Server Configuration File once from either node e.g.:
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0/dbs]$ srvconfig -f -init
21. Start the Global Services Daemon
Start the Global Services Daemon (GSD) as the oracle user using the gsdctl utility e.g.:
[oracle@V920A@arachnid /home/oracle]$ $ORACLE_HOME/bin/gsdctl start
Ensure the gsd services are running e.g.:
[oracle@V920A@arachnid /home/oracle]$ ps -fu oracle
UID PID PPID C STIME TTY TIME CMD
...
oracle 14851 1881 0 14:33 pts/1 00:00:00 /bin/sh ./gsdctl start
oracle 14853 14851 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 14863 14853 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 14864 14863 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 14865 14863 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 14866 14863 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 14872 14863 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 15079 15025 0 14:39 pts/3 00:00:00 ps -fu oracle
...
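Alternatively, query GSD status with the gsdctl utility, e.g.:
[oracle@V920A@arachnid /home/oracle]$ $ORACLE_HOME/bin/gsdctl stat
It should report that the GSD is running on the local node.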
22. Create a Standalone Database
Create a database manually or use the Database Configuration Assistant (?/bin/dbca). If using DBCA to create a database, due to known DBCA issues, select 'Oracle Single Instance Database' and not 'Oracle Clustered Database'. For greater control and future reuse, use DBCA to generate the database creation scripts. Doing so allows you to modify the scripts to increase the default values for MAXINSTANCES, MAXLOGFILES and MAXDATAFILES.
Note: If the Oracle Cluster Manager and Global Services Daemon are stopped before running DBCA, DBCA will still start, however only the option to create a standalone database will be presented.
Run dbca as the oracle user e.g.:
[oracle@V920A@arachnid /home/oracle]$ $ORACLE_HOME/bin/dbca&
At the Welcome screen, select Oracle single instance database, then click Next.
At Step 1 of 8: Operations, select Create a Database, then click Next.
At Step 2 of 8: Database Templates, select a database type then click Next. The Includes Datafiles column denotes whether a pre-configured seed database will be used, or whether to create a new database/files afresh. In this example, New Database was selected.
At Step 3 of 8: Database Identification, enter the Global Database Name (V920 in this example) and SID (V920A), then click Next.
At Step 4 of 8: Database Features, select required features from the Database Features tab, then click Next. In this case, Oracle UltraSearch and Example Schemas were not selected.
At Step 5 of 8: Database Connection Options, select either Dedicated or Shared Server (formerly MTS) Mode, then click Next. In this case, Dedicated Server was selected.
At Step 6 of 8: Initialization Parameter, select or modify the various instance parameters from the Memory, Character Sets, DB Sizing, File Locations and Archive tabs, then click Next. In this case the following configuration was specified:
Memory:
Custom
Shared Pool: 100 Mb
Buffer cache: 26 Mb
Java Pool: 100 Mb
Large Pool: 10 Mb
PGA: 24 Mb
Character Sets:
Database Character Set: Use the default (WE8ISO8859P1)
National Character Set: AL16UTF16
DB Sizing:
Block Size: 8 Kb
Sort Area Size: 524288 bytes
File Locations:
Initialization Parameter Filename: /u01/app/oracle/product/9.2.0/dbs/initV920A.ora
Create server parameter: Not selected
Trace File Directories:
For User Processes: /u01/admin/{DB_NAME}/udump
For Background Process: /u01/admin/{DB_NAME}/bdump
For Core Dumps: /u01/admin/{DB_NAME}/cdump
Archive:
Archive Log Mode: Disabled
At Step 7 of 8: Database Storage, expand each of the database object types, specify their desired locations, then click Next. Ensure that all database files reside on the OCFS partition(s) formatted earlier in step 15 (/cfs01 in this case).
Storage:
Controlfile:
control01.ctl: /cfs01/oradata/{DB_NAME}/
control02.ctl: /cfs01/oradata/{DB_NAME}/
control03.ctl: /cfs01/oradata/{DB_NAME}/
Tablespaces:
Default values selected
Datafiles:
/cfs01/oradata/{DB_NAME}/drsys01.dbf: Size 10 Mb
/cfs01/oradata/{DB_NAME}/indx01.dbf: Size 10 Mb
/cfs01/oradata/{DB_NAME}/system01.dbf: Size 250 Mb
/cfs01/oradata/{DB_NAME}/temp01.dbf: Size 20 Mb
/cfs01/oradata/{DB_NAME}/tools01.dbf: Size 10 Mb
/cfs01/oradata/{DB_NAME}/undotbs1_01.dbf: Size 200 Mb
/cfs01/oradata/{DB_NAME}/users01.dbf: Size 10 Mb
/cfs01/oradata/{DB_NAME}/xdb01.dbf: Size 10 Mb
Redo Log Groups:
Group 1: /cfs01/oradata/{DB_NAME}/redo01.log Size 1024 Kb
Group 2: /cfs01/oradata/{DB_NAME}/redo02.log Size 1024 Kb
Group 3: /cfs01/oradata/{DB_NAME}/redo03.log Size 1024 Kb
At step 8 of 8: Creation Operations, select either Create Database and/or Generate Database Creation Scripts to review and create a database at a later time, then click Finish. In this example, both Create Database and Generate Database Creation Scripts options were selected.
23. Convert the Standalone Database to a Clustered Database
The following steps are based on
a. Make a full database backup before you change anything.
b. Copy the existing $ORACLE_HOME/dbs/init<SID>.ora to a common parameter file e.g.:
[oracle@V920A@arachnid /]$ cp $ORACLE_HOME/dbs/initV920A.ora $ORACLE_HOME/dbs/initV920.ora
c. Add the following cluster database parameters to the common $ORACLE_HOME/dbs/init<DB_NAME>.ora file e.g.:
[/u01/app/oracle/product/9.2.0/dbs/initV920.ora]
*.cluster_database = TRUE
*.cluster_database_instances = 4
V920A.instance_name = V920A
V920B.instance_name = V920B
V920A.instance_number = 1
V920B.instance_number = 2
*.service_names = "V920"
V920A.thread = 1
V920B.thread = 2
V920A.local_listener="(address=(protocol=tcp)(host=arachnid)(port=1521))"
V920A.remote_listener="(address=(protocol=tcp)(host=arachnid)(port=1522))"
V920B.local_listener="(address=(protocol=tcp)(host=arachnid)(port=1522))"
V920B.remote_listener="(address=(protocol=tcp)(host=arachnid)(port=1521))"
V920A.undo_tablespace=UNDOTBS1
V920B.undo_tablespace=UNDOTBS2
...
Note: Parameters prefixed with V920A. apply only to instance V920A. Those prefixed with V920B. apply only to instance V920B. Those prefixed with *. apply to all instances (V920A and V920B in this case).
d. Modify the original $ORACLE_HOME/dbs/init<SID>.ora so that it simply points to the common parameter file via an ifile entry e.g.:
[/u01/app/oracle/product/9.2.0/dbs/initV920A.ora]
ifile=/u01/app/oracle/product/9.2.0/dbs/initV920.ora
In preparation to create the second database instance, create a second $ORACLE_HOME/dbs/init<SID>.ora for instance V920B e.g.:
[/u01/app/oracle/product/9.2.0/dbs/initV920B.ora]
ifile=/u01/app/oracle/product/9.2.0/dbs/initV920.ora
Alternatively, only create the common $ORACLE_HOME/dbs/init<DB_NAME>.ora file and create init<SID>.ora symbolic links that point to it e.g.:
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0/dbs]$ ll
total 36
...
lrwxrwxrwx 1 oracle dba 30 Jun 25 17:47 initV920A.ora -> initV920.ora
lrwxrwxrwx 1 oracle dba 12 Jun 25 17:48 initV920B.ora -> initV920.ora
-rw-r--r-- 1 oracle dba 3368 Jun 25 15:11 initV920.ora
Restart Oracle Cluster Manager and Global Services Daemon if you stopped them previously.
Restart the first instance (V920A) for the new cluster parameters to take effect.
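For example (a sketch; ensure no other sessions are connected before shutting down):
[oracle@V920A@arachnid /home/oracle]$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> startup
SQL> exit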
e. Open the database then run $ORACLE_HOME/rdbms/admin/catclust.sql (formerly catparr.sql) as sys to create cluster specific views e.g.:
SQL> @?/rdbms/admin/catclust
Note: As this creates the necessary database views, the script need only be run from 1 instance.
f. If you created a single instance database using DBCA/scripts without modifying MAXINSTANCES, MAXLOGFILES, etc., recreate the controlfile and modify these parameters accordingly. This step is discussed in
g. From the first instance (V920A), mount the database then create additional redologs for the second instance thread e.g.:
SQL> alter database add logfile thread 2
2 group 4 ('/cfs01/oradata/V920/redo04.log') size 10240K,
3 group 5 ('/cfs01/oradata/V920/redo05.log') size 10240K,
4 group 6 ('/cfs01/oradata/V920/redo06.log') size 10240k;
Database altered.
SQL> alter database enable public thread 2;
Database altered.
h. Create a second Undo Tablespace from the first instance (V920A) e.g.:
SQL> create undo tablespace undotbs2 datafile
2 '/cfs01/oradata/V920/undotbs2_01.dbf' size 200m;
Tablespace created.
i. From a new telnet session, source (.) the second instance's environment script to set the ORACLE_SID to that of the second instance e.g.:
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0]$ cd
[oracle@V920A@arachnid /home/oracle]$ . V920B
[oracle@V920B@arachnid /u01/app/oracle/product/9.2.0]$
j. Start the second instance e.g.:
[oracle@V920B@arachnid /]$ sqlplus "/ as sysdba"
SQL> startup
k. From either instance, check that both redo threads are active i.e.:
SQL> select THREAD#,STATUS,ENABLED from gv$thread;
THREAD# STATUS ENABLED
---------- ------ --------
1 OPEN PUBLIC
2 OPEN PUBLIC
1 OPEN PUBLIC
2 OPEN PUBLIC
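Both instances should also report OPEN in gv$instance, e.g. (illustrative output):
SQL> select inst_id, instance_name, status from gv$instance;
INST_ID INSTANCE_NAME STATUS
---------- ---------------- -------
1 V920A OPEN
2 V920B OPEN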
24. Create and start two Oracle Net Listeners
The Oracle Net Configuration Assistant runs by default at the end of an Oracle Server installation (refer step 18). If you did not create an Oracle Net Listener then, use it or the Oracle Net Manager (netmgr) to create two Listeners (one for each instance), ensuring they use the TCP/IP ports specified by the LOCAL_LISTENER and REMOTE_LISTENER parameters in Step 23c. e.g.:
[/u01/app/oracle/product/9.2.0/network/admin/listener.ora]
1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = arachnid)(PORT = 1521))
)
)
)
2 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = arachnid)(PORT = 1522))
)
)
)
Start both Listeners i.e.:
[oracle@V920A@arachnid /]$ lsnrctl start 1
LSNRCTL for Linux: Version 9.2.0.3.0 - Production on 27-JUN-2003 13:42:40
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Starting /u01/app/oracle/product/9.2.0/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 9.2.0.3.0 - Production
System parameter file is /u01/app/oracle/product/9.2.0/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/9.2.0/network/log/1.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arachnid.au.oracle.com)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arachnid)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias 1
Version TNSLSNR for Linux: Version 9.2.0.3.0 - Production
Start Date 27-JUN-2003 13:42:40
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security OFF
SNMP OFF
Listener Parameter File /u01/app/oracle/product/9.2.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/9.2.0/network/log/1.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arachnid.au.oracle.com)(PORT=1521)))
The listener supports no services
The command completed successfully
[oracle@V920A@arachnid /]$
[oracle@V920A@arachnid /]$ lsnrctl start 2
LSNRCTL for Linux: Version 9.2.0.3.0 - Production on 27-JUN-2003 13:42:59
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Starting /u01/app/oracle/product/9.2.0/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 9.2.0.3.0 - Production
System parameter file is /u01/app/oracle/product/9.2.0/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/9.2.0/network/log/2.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arachnid.au.oracle.com)(PORT=1522)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arachnid)(PORT=1522)))
STATUS of the LISTENER
------------------------
Alias 2
Version TNSLSNR for Linux: Version 9.2.0.3.0 - Production
Start Date 27-JUN-2003 13:42:59
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security OFF
SNMP OFF
Listener Parameter File /u01/app/oracle/product/9.2.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/9.2.0/network/log/2.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arachnid.au.oracle.com)(PORT=1522)))
The listener supports no services
The command completed successfully
[oracle@V920A@arachnid /]$
Because Automatic Service Registration is configured, each instance (even if started before the Listeners) will automatically register with both the local and remote Listeners within approximately one minute, thereby implementing server-side Listener load balancing i.e.:
[oracle@V920A@arachnid /]$ lsnrctl serv 1
LSNRCTL for Linux: Version 9.2.0.3.0 - Production on 27-JUN-2003 13:45:46
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arachnid)(PORT=1521)))
Services Summary...
Service "V920.au.oracle.com" has 2 instance(s).
Instance "V920A", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Instance "V920B", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(address=(protocol=tcp)(host=arachnid)(port=1522))
The command completed successfully
[oracle@V920A@arachnid /]$
[oracle@V920A@arachnid /]$ lsnrctl serv 2
LSNRCTL for Linux: Version 9.2.0.3.0 - Production on 27-JUN-2003 13:46:05
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arachnid)(PORT=1522)))
Services Summary...
Service "V920.au.oracle.com" has 2 instance(s).
Instance "V920A", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(address=(protocol=tcp)(host=arachnid)(port=1521))
Instance "V920B", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[oracle@V920A@arachnid /]$
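If an instance was started after its Listeners and has not yet registered, registration can be forced immediately from that instance rather than waiting for the next PMON registration cycle, e.g.:
SQL> alter system register;
System altered.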
You should now have a 2 instance RAC database running on a single node using OCFS.
RELATED DOCUMENTS
Oracle Cluster File System Release 1.0 for Red Hat Advanced Server 2.1 Installation Notes November 2002 (Part B10499-01)
KEYWORDS
ADVANCED SERVER; ARACHNID; ARACHNOPHOBIA; RED HAT; INSTALLATION; LINUX; SINGLE NODE; OCFS; RAC;