Monthly Archives: May 2014

Best Practices for Virtualizing Your Oracle Database – Datastores

EMC IT Proven

By Darryl Smith — Chief Database Architect, EMC IT

First off, my apologies for delaying the last part of this four-part blog for so long. I have been building a fully automated application platform-as-a-service product for EMC IT to allow us to deploy entire infrastructure stacks in minutes, all fully wired, protected and monitored, but that topic is for another blog.

In my last post, Best Practices For Virtualizing Your Oracle Database With VMware, the best practices were all about the virtual machine itself. This post will focus on VMware’s virtual storage layer, called a datastore. A datastore is storage mapped to the physical ESX servers onto which a VM’s LUNs, or disks, are provisioned. This is a critical component of any virtual database deployment, as it is where the database files reside. It is also a silent killer of performance because there are…

View original post 909 more words


Oracle Big Data Lite VM

Oracle Big Data Lite Virtual Machine provides an integrated environment to help you get started with the Oracle Big Data platform. Many Oracle Big Data platform components have been installed and configured – allowing you to begin using the system right away.

Oracle has provided a single VM that contains everything you need to get familiar with its Big Data platform. Now that it has been updated to Oracle 12c components, it’s exactly what I have been waiting for, including videos and tutorials on integration with Hadoop and NoSQL. If you have a dual-core laptop with 8 GB RAM and 50 GB of free disk space, all you then need is Oracle VM VirtualBox to run the VM and 7-zip to extract the contents of the first BigDataLite-3.0.001 file. This creates a BigDataLite-3.0.ova VirtualBox appliance file; double-click it and up will come VirtualBox. Once you start the VM, log in as oracle/welcome1.
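For the command-line inclined, the steps sketch out roughly as below. The `import_ova` helper is hypothetical (just a guard around VirtualBox’s real `VBoxManage import` command), and the filenames are the ones from the download:

```shell
# Sketch of the extract-and-import steps (filenames as given above).
# import_ova is a hypothetical wrapper: it only runs VBoxManage when
# both the tool and the .ova appliance file are actually present.
import_ova() {
  ova="$1"
  if ! command -v VBoxManage >/dev/null 2>&1; then
    echo "VBoxManage not found - install VirtualBox first"
  elif [ ! -f "$ova" ]; then
    echo "$ova not found - extract the download first"
  else
    VBoxManage import "$ova"   # command-line alternative to double-clicking
  fi
}

# 7-zip joins the split archive back into the appliance file:
#   7z x BigDataLite-3.0.001   ->   BigDataLite-3.0.ova
import_ova BigDataLite-3.0.ova
# then start the VM in VirtualBox and log in as oracle/welcome1
```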

Oracle has provided a Deployment Guide with these and further details.

I am fascinated to see how Oracle Corporation is responding to the challenge of the new Big Data and NoSQL vendors; that journey, for me, started here.

Automate a ScaleIO lab setup with Vagrant and VirtualBox


If you’ve read the other blog posts on ScaleIO you might be interested in running it yourself. However, you might not have your own hardware lab to run it on, but you do have a laptop or desktop, right? Awesome! That’s all you need, and we’ll go through how to get it up and running by using some really smart tools.

If you just want to see how it runs without installing anything, here’s the entire automated setup captured in asciinema, one of my new favourite tools:

First tool we’ll use is VirtualBox, a freely available and open source virtualization solution (yes, no money needed to get it but please contribute to the development!) for Windows, OS X, Linux and Solaris. Download it, install it, and that’s it. No configuration needed unless you want to change any of the defaults we’ll be working with. It is a really…

View original post 1,127 more words

Amazing unattended install of Oracle RAC 12c Grid Infrastructure with Vagrant!

As part of some other work I am doing, I wanted to look around an already running Oracle 12c RAC environment and work with Grid Infrastructure: stop/start tests, failing over resources, simulating node failure, and so on. There are days when things just work out unexpectedly and you learn a lot; for me, Monday, Feb 3, 2014 was one of those days. Like I said, I had some questions about Oracle 12c Grid Infrastructure, and while there are some great blogs on building out Oracle RAC on VMs (Yuri Velikanov does an outstanding job with his Oracle 12c RAC On your laptop Step by Step Implementation Guide 1.0), I hadn’t got around to building an environment of my own just yet. Sure, as Oracle DBAs we need to know how to do it, and I will take a day to do that sometime soon, but not everyone wants to go through that the first time. Sound familiar?

I follow a lot of Oracle folks on my Twitter feed (@DBAStorage) and by complete coincidence I get this tweet:


This is the real deal. I recommend it to every Oracle DBA who needs to see Oracle 12c Grid Infrastructure in action; that’s all of us, right?

Here’s the link Leighton provided (…). Detailed documentation is still to be prepared, so this is a write-up of how it worked for me.

As a quick summary: once you have downloaded the necessary software and patch files from Oracle, spend a few minutes downloading and installing Vagrant (this product looks awesome too), download and prepare the Oracle RAC Vagrant files, make a few edits and move the Oracle files into place; then, with a simple “vagrant up” command, a completely automated build of a two-node Oracle 12c RAC cluster on Oracle VirtualBox is yours to explore!

It’s really that easy, but let’s talk through the steps, as I first built this on a Mac Mini. If you want it to work the first time, as I did, it helps to have at least a dual-core machine with 16 GB of memory and 100 GB of free disk space (each VM defaults to 2 cores and 5 GB of memory).

Download and install both Oracle’s open source virtualization product, VirtualBox, and the Vagrant virtual machine automated configuration tool.

Decide on your install directory and download Alvaro Miranda’s Oracle RAC Vagrant files from GitHub.

Unpack the download, then check stagefiles/ for information on the files you need to download and the minor edits to make to the config files; that’s only two additional steps!

Add the following Oracle software and patch downloads to the stagefiles/db/zip directory:

Note: I haven’t looked into why yet, but something went wrong with the required_files.txt file: it had unwelcome \r characters at the end of each line, probably a Windows-to-Linux issue introduced when I first downloaded and unzipped the files on my Windows laptop!
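If you hit the same thing, stripping the carriage returns is a quick fix with standard tools (`dos2unix`, where installed, does the same job). The sample file contents here are made up for illustration; in practice you would skip the first step and point `tr` at the real file:

```shell
# Create a sample required_files.txt with Windows (CRLF) line endings;
# in practice, skip this step and use the real file.
printf 'example1.zip\r\nexample2.zip\r\n' > required_files.txt

# Strip every carriage return (CRLF -> LF), then replace the original
tr -d '\r' < required_files.txt > required_files.txt.tmp
mv required_files.txt.tmp required_files.txt
```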

Review the Vagrantfile; Vagrant does look remarkably simple for automating VirtualBox configuration. I already have Mitchell Hashimoto’s book to read (Vagrant: Up and Running), and there’s another book on Vagrant that looks interesting, by Michael Peacock (Creating Development Environments with Vagrant).
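For a flavour of what the Vagrantfile is doing, here is a minimal two-node sketch in the same style. This is not Alvaro’s actual file, just the multi-machine pattern it follows, with the box name taken from the vagrant up output and the sizing from the defaults mentioned earlier:

```ruby
# Hypothetical sketch only -- not the real Oracle RAC Vagrantfile.
Vagrant.configure("2") do |config|
  config.vm.box = "oracle65-2disk"

  # node2 is defined first, matching the order the real build boots them
  ["node2", "node1"].each do |name|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.synced_folder "stagefiles", "/media/stagefiles"
      node.vm.provider :virtualbox do |vb|
        vb.cpus   = 2      # the defaults: 2 cores...
        vb.memory = 5120   # ...and 5 GB of memory per VM
      end
    end
  end
end
```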

To get a fully automated install of Oracle 12c RAC, you need to uncomment the two provisioning lines below:

#remove the comments if you want a full automated install of rac
#node1.vm.provision :shell, :inline => "sh /media/stagefiles/db/"
#node1.vm.provision :shell, :inline => "sh /media/stagefiles/db/ rac"

If you have done everything above, you are ready to go: just type “vagrant up” from your Vagrant install directory. Here’s the first part of the screen output:

minimac:Vagrant PHG$ vagrant up
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node1' up with 'virtualbox' provider...
[node2] Importing base box 'oracle65-2disk'...
[node2] Matching MAC address for NAT networking...
[node2] Setting the name of the VM...
[node2] Clearing any previously set forwarded ports...
[node2] Clearing any previously set network interfaces...
[node2] Preparing network interfaces based on configuration...
[node2] Forwarding ports...
[node2] -- 22 => 2222 (adapter 1)
[node2] Running 'pre-boot' VM customizations...
[node2] Booting VM...
[node2] Waiting for machine to boot. This may take a few minutes...
[node2] Machine booted and ready!
[node2] Setting hostname...
[node2] Configuring and enabling network interfaces...
[node2] Mounting shared folders...
[node2] -- /vagrant
[node2] -- /media/stagefiles
Come back in a couple of hours; it’s done when you see the following:

all task finished
well done, oracle grid and database installed
system ready to create a database

Passwords all default to the usernames: root, grid, oracle. At the end of this blog you will find some output from basic commands I ran to check out the environment. Here’s how to get logged in:

minimac:Vagrant PHG$ ls -ltr
total 62922816
-rw-r--r--  1 PHG  staff        22652 4 Feb 23:01
-rw-r--r--  1 PHG  staff        6016 5 Feb 00:55 Vagrantfile
drwxr-xr-x  11 PHG  staff          374 5 Feb 00:57 stagefiles
-rw-------  1 PHG  staff  16108224512 5 Feb 14:09 shared2.vdi
-rw-------  1 PHG  staff  16108224512 5 Feb 14:09 shared1.vdi
minimac:Vagrant PHG$ grep 192.168 Vagrantfile scan node1 node2
minimac:Vagrant PHG$ ssh grid@
grid@'s password:
[grid@node1 ~]$

If something goes wrong, or you want to start again, run “vagrant destroy”; then a “vagrant up” will recreate your default Oracle 12c RAC environment.

All in all, this has been outstanding, and I wanted to share my experience: automation at its best, for something every Oracle DBA can do today. Many thanks to Leighton for bringing this to our attention and to Alvaro for creating the downloads. I will certainly be following this work with interest and looking into Vagrant as a tool for automating the creation and management of my virtualized Oracle test environments.

See below for some basic Grid Infrastructure commands I ran to check the environment. I was really pleased to see that the new Grid Infrastructure Management Repository (-MGMTDB) database is available in the build; it can be used for testing instance failure scenarios with Grid Infrastructure.

[grid@node1 ~]$ olsnodes
-bash: olsnodes: command not found
[grid@node1 ~]$ cat /etc/oratab
+ASM1:/u01/app/      # line added by Agent
-MGMTDB:/u01/app/    # line added by Agent
[grid@node1 ~]$ ps -ef | grep pmon
grid 1631    1  0 Feb05 ?        00:00:04 asm_pmon_+ASM1
grid 2048    1  0 Feb05 ?        00:00:03 mdb_pmon_-MGMTDB
grid 5221  5189  0 05:17 pts/0    00:00:00 grep pmon
[grid@node1 ~]$ . oraenv
The Oracle base has been set to /u01/app/grid
[grid@node1 ~]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node node1
[grid@node1 ~]$ olsnodes -s
node1    Active
node2    Active
[grid@node1 ~]$ sqlplus / as sysdba
SQL*Plus: Release Production on Thu Feb 6 05:20:22 2014
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release
- 64bit Production
With the Partitioning, Automatic Storage Management
and Advanced Analytics options
[grid@node1 ~]$ df -k
Filesystem 1K-blocks      Used Available Use% Mounted on
/dev/sda3 30206976  1930600  26392728 7% /
tmpfs 2479136    644088  1835048 26% /dev/shm
/dev/sda1 487652    49828    412224 11% /boot
/vagrant 487546976 399357752  88189224  82% /vagrant
/media/stagefiles 487546976 399357752  88189224 82% /media/stagefiles
/dev/sdb 32768000  23481696  7372824 77% /u01
[grid@node1 ~]$ crsctl stat res -t
Name Target  State        Server                  State details
Local Resources
ONLINE  ONLINE      node1                    STABLE
ONLINE  ONLINE      node2                    STABLE
ONLINE  ONLINE      node1                    STABLE
ONLINE  ONLINE      node2                    STABLE
ONLINE  ONLINE      node1                    STABLE
ONLINE  ONLINE      node2                    STABLE
ONLINE  ONLINE      node1                    Started,STABLE
ONLINE  ONLINE      node2                    Started,STABLE
ONLINE  ONLINE      node1                    STABLE
ONLINE  ONLINE      node2                    STABLE
ONLINE  ONLINE      node1                    STABLE
ONLINE  ONLINE      node2                    STABLE
Cluster Resources
      1        ONLINE ONLINE      node1                    STABLE
      1        ONLINE ONLINE      node1          
      1        ONLINE ONLINE      node1                    STABLE
      1        ONLINE ONLINE      node1                    Open,STABLE
      1        ONLINE ONLINE      node1                    STABLE
      1        ONLINE ONLINE      node2                    STABLE
      1        ONLINE ONLINE      node1                    STABLE
      1        ONLINE ONLINE      node1                    STABLE
[grid@node1 ~]$ crsctl stop crs
CRS-4563: Insufficient user privileges.
CRS-4000: Command Stop failed, or completed with errors.
[grid@node1 ~]$
[grid@node1 ~]$ su
[root@node1 grid]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed
resources on 'node1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources
on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node1 grid]# exit
[grid@node1 ~]$ crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
[grid@node1 ~]$ su
[root@node1 grid]# olsnodes
PRCO-19: Failure retrieving list of nodes in the cluster
PRCO-2: Unable to communicate with the clusterware
[root@node1 grid]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.