Amazing unattended install of Oracle RAC 12c Grid Infrastructure with Vagrant!

As part of some other work I am doing, I wanted to look around an already running Oracle 12c RAC environment and work with Grid Infrastructure: stop/start tests, failing over resources, simulating node failure, and so on. There are days when things just work out unexpectedly and you learn a lot; for me, Monday, Feb 3, 2014 was one of those days. As I said, I had some questions about Oracle 12c Grid Infrastructure, and while there are some great blogs on building out Oracle RAC on VMs (Yuri Velikanov does an outstanding job with his Oracle 12c RAC On your laptop Step by Step Implementation Guide 1.0), I hadn’t got around to building an environment of my own just yet. Sure, as Oracle DBAs we need to know how to do it, and I will take a day to do that sometime soon, but not everyone wants to go through that the first time. Sound familiar?

I follow a lot of Oracle folks on my Twitter feed (@DBAStorage), and by complete coincidence I got this tweet:

[Screenshot of the tweet]

This is the real deal. I recommend it to every Oracle DBA who needs to see Oracle 12c Grid Infrastructure in action, and that’s all of us, right?

Here’s the link Leighton provided (freelists.org/post/racattack…); detailed documentation is still to be prepared, so this is a write-up of how it worked for me.

As a quick summary: once you have downloaded the necessary software and patch files from Oracle, it takes only a few minutes of preparation to download and install Vagrant (this product looks awesome too), download and prepare the Oracle RAC Vagrant files, make a few edits, and move the Oracle files into place. Then, with a simple “vagrant up” command, a completely automated build of a two-node Oracle 12c RAC cluster on Oracle VirtualBox is yours to explore!

It’s really that easy, but let’s walk through the steps as I built this first on a Mac Mini. If you want it to work the first time as I did, it helps to have at least a dual-core machine with 16 GB of memory and 100 GB of free disk space (the default VMs are built with 2 cores and 5 GB of memory).
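
As a minimal sketch, here is a quick sanity check of the host before you start; these are standard macOS commands (I built on a Mac Mini), run from whatever directory you plan to install into:

sysctl -n hw.ncpu       # number of CPU cores
sysctl -n hw.memsize    # installed memory, in bytes
df -h .                 # free space on the volume holding your install directory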

Download and install both Oracle’s open source virtualization product VirtualBox and the Vagrant virtual machine configuration automation tool.
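
Once both are installed, a quick check from a terminal confirms they are on your PATH (both version commands are standard to the respective products):

VBoxManage --version
vagrant --version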

 Decide on your install directory and download Alvaro Miranda’s Oracle RAC Vagrantfile and stagefiles.zip from GitHub.

Unpack stagefiles.zip and check stagefiles/README.md for information on the files you need to download and the minor edits to make to the config files; that’s only two additional steps!
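
For example, from your install directory (I used a directory named Vagrant, as you’ll see in the listing later; the exact path here is just an assumption):

cd ~/Vagrant                # your chosen install directory
unzip stagefiles.zip        # unpacks the stagefiles directory
cat stagefiles/README.md    # lists the required downloads and config edits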

Add the following Oracle software and patch downloads to the stagefiles/db/zip directory (see the sketch after the list for one way to move them into place):

linuxamd64_12c_database_1of2.zip
linuxamd64_12c_database_2of2.zip
linuxamd64_12c_grid_1of2.zip
linuxamd64_12c_grid_2of2.zip
p6880880_121010_Linux-x86-64.zip
p17735306_121010_Linux-x86-64.zip
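
A minimal sketch, assuming your browser saved the Oracle downloads to ~/Downloads:

mv ~/Downloads/linuxamd64_12c_database_*of2.zip  stagefiles/db/zip/
mv ~/Downloads/linuxamd64_12c_grid_*of2.zip      stagefiles/db/zip/
mv ~/Downloads/p6880880_121010_Linux-x86-64.zip  stagefiles/db/zip/
mv ~/Downloads/p17735306_121010_Linux-x86-64.zip stagefiles/db/zip/
ls -l stagefiles/db/zip/                         # confirm all six files are present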

Note: I haven’t looked into why yet, but something went wrong with the required_files.txt file: it had unwelcome \r characters at the end of each line, probably a Windows-to-Linux issue introduced when I first downloaded and unzipped the files on my Windows laptop!
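
If you hit the same problem, stripping the carriage returns is quick. A minimal sketch, assuming required_files.txt sits under stagefiles/db (adjust the path to wherever it lives in your unpacked stagefiles; dos2unix works just as well if you have it installed):

tr -d '\r' < stagefiles/db/required_files.txt > required_files.tmp
mv required_files.tmp stagefiles/db/required_files.txt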

Review the Vagrantfile; Vagrant does look remarkably simple as a way to automate VirtualBox configuration. I already have Mitchell Hashimoto’s book to read (Vagrant: Up and Running), and there’s another book on Vagrant that looks interesting, by Michael Peacock (Creating Development Environments with Vagrant).

To get a fully automated install of Oracle 12c RAC, you need to uncomment the two provisioning lines below:

#remove the comments if you want a full automated install of rac
if File.directory?("stagefiles")
  #node1.vm.provision :shell, :inline => "sh /media/stagefiles/db/unzip.sh"
  #node1.vm.provision :shell, :inline => "sh /media/stagefiles/db/install_crs_db.sh rac"
end
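
After removing the leading # from those two node1 provisioning lines, a quick check that they are now active (assuming the Vagrantfile is in your current directory):

grep -nE "unzip.sh|install_crs_db.sh" Vagrantfile    # both lines should now appear without a leading #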

If you have done everything above, you are ready to go: just type “vagrant up” from your Vagrant install directory. Here’s the first part of the screen output:

minimac:Vagrant PHG$ vagrant up
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node1' up with 'virtualbox' provider...
[node2] Importing base box 'oracle65-2disk'...
[node2] Matching MAC address for NAT networking...
[node2] Setting the name of the VM...
[node2] Clearing any previously set forwarded ports...
[node2] Clearing any previously set network interfaces...
[node2] Preparing network interfaces based on configuration...
[node2] Forwarding ports...
[node2] -- 22 => 2222 (adapter 1)
[node2] Running 'pre-boot' VM customizations...
[node2] Booting VM...
[node2] Waiting for machine to boot. This may take a few minutes...
[node2] Machine booted and ready!
[node2] Setting hostname...
[node2] Configuring and enabling network interfaces...
[node2] Mounting shared folders...
[node2] -- /vagrant
[node2] -- /media/stagefiles

Come back in a couple of hours; it’s done when you see the following:

============
============
all task finished
well done, oracle grid and database installed
system ready to create a database

Passwords all default to the usernames: root, grid, oracle. At the end of this blog you will find some output from basic commands I ran to check out the environment; here’s how to get logged in:

minimac:Vagrant PHG$ ls -ltr
total 62922816
-rw-r--r--  1 PHG  staff        22652 4 Feb 23:01 stagefiles.zip
-rw-r--r--  1 PHG  staff        6016 5 Feb 00:55 Vagrantfile
drwxr-xr-x  11 PHG  staff          374 5 Feb 00:57 stagefiles
-rw-------  1 PHG  staff  16108224512 5 Feb 14:09 shared2.vdi
-rw-------  1 PHG  staff  16108224512 5 Feb 14:09 shared1.vdi
minimac:Vagrant PHG$ grep 192.168 Vagrantfile
192.168.56.10 scan
192.168.56.11 node1
192.168.56.12 node2
minimac:Vagrant PHG$ ssh grid@192.168.56.11
grid@192.168.56.11's password:
[grid@node1 ~]$

If something goes wrong or you want to start again, run “vagrant destroy”. Then a “vagrant up” will recreate your default Oracle 12c RAC environment.
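
A sketch of the rebuild cycle (the -f flag simply skips the confirmation prompt):

vagrant destroy -f    # tear down and delete both node VMs
vagrant up            # rebuild the two-node cluster from scratch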

All in all this has been outstanding, and I wanted to share my experience: it demonstrates automation at its best for something that every Oracle DBA can do today. Many thanks to Leighton for bringing this to our attention and to Alvaro for creating the downloads. I will certainly be following this work with interest and will be looking into Vagrant as a tool for automating the creation and management of my virtualized Oracle test environments.

See below for some basic Grid Infrastructure commands I ran to check the environment. I was really pleased to see that the new Grid Infrastructure Management Repository database (-MGMTDB) is available in the build; it can be used for testing instance failure scenarios with Grid Infrastructure.

[grid@node1 ~]$ olsnodes
-bash: olsnodes: command not found
[grid@node1 ~]$ cat /etc/oratab
.
.
+ASM1:/u01/app/12.1.0.1/grid:N:      # line added by Agent
-MGMTDB:/u01/app/12.1.0.1/grid:N:    # line added by Agent
[grid@node1 ~]$ ps -ef | grep pmon
grid 1631    1  0 Feb05 ?        00:00:04 asm_pmon_+ASM1
grid 2048    1  0 Feb05 ?        00:00:03 mdb_pmon_-MGMTDB
grid 5221  5189  0 05:17 pts/0    00:00:00 grep pmon
[grid@node1 ~]$ . oraenv
ORACLE_SID = [grid] ? -MGMTDB
The Oracle base has been set to /u01/app/grid
[grid@node1 ~]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node node1
[grid@node1 ~]$ olsnodes -s
node1    Active
node2    Active
[grid@node1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Thu Feb 6 05:20:22 2014
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management and Advanced Analytics options
SQL>
[grid@node1 ~]$ df -k
Filesystem 1K-blocks      Used Available Use% Mounted on
/dev/sda3 30206976  1930600  26392728 7% /
tmpfs 2479136    644088  1835048 26% /dev/shm
/dev/sda1 487652    49828    412224 11% /boot
/vagrant 487546976 399357752  88189224  82% /vagrant
/media/stagefiles 487546976 399357752  88189224 82% /media/stagefiles
/dev/sdb 32768000  23481696  7372824 77% /u01
[grid@node1 ~]$ crsctl stat res -t
------------------------------------------------------------------------------
Name Target  State        Server                  State details
------------------------------------------------------------------------------
Local Resources
------------------------------------------------------------------------------
ora.DATA.dg
ONLINE  ONLINE      node1                    STABLE
ONLINE  ONLINE      node2                    STABLE
ora.FRA.dg
ONLINE  ONLINE      node1                    STABLE
ONLINE  ONLINE      node2                    STABLE
ora.LISTENER.lsnr
ONLINE  ONLINE      node1                    STABLE
ONLINE  ONLINE      node2                    STABLE
ora.asm
ONLINE  ONLINE      node1                    Started,STABLE
ONLINE  ONLINE      node2                    Started,STABLE
ora.net1.network
ONLINE  ONLINE      node1                    STABLE
ONLINE  ONLINE      node2                    STABLE
ora.ons
ONLINE  ONLINE      node1                    STABLE
ONLINE  ONLINE      node2                    STABLE
------------------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE ONLINE      node1                    STABLE
ora.MGMTLSNR
      1        ONLINE ONLINE      node1                    169.254.71.12
192.168.66.11
192.168.76.11
                                                            ,STABLE
ora.cvu
      1        ONLINE ONLINE      node1                    STABLE
ora.mgmtdb
      1        ONLINE ONLINE      node1                    Open,STABLE
ora.node1.vip
      1        ONLINE ONLINE      node1                    STABLE
ora.node2.vip
      1        ONLINE ONLINE      node2                    STABLE
ora.oc4j
      1        ONLINE ONLINE      node1                    STABLE
ora.scan1.vip
      1        ONLINE ONLINE      node1                    STABLE
------------------------------------------------------------------------------
[grid@node1 ~]$ crsctl stop crs
CRS-4563: Insufficient user privileges.
CRS-4000: Command Stop failed, or completed with errors.
[grid@node1 ~]$
[grid@node1 ~]$ su
Password:
[root@node1 grid]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
.
.
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node1 grid]# exit
[grid@node1 ~]$ crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
[grid@node1 ~]$ su
Password:
[root@node1 grid]# olsnodes
PRCO-19: Failure retrieving list of nodes in the cluster
PRCO-2: Unable to communicate with the clusterware
[root@node1 grid]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
