Author Archives: dbastorage

Can you do “Hello World” on OS X using Kubernetes for Container Orchestration?

Recently, I was inspired to update my LinkedIn professional headline to truly reflect how I see myself. After a couple of iterations, considering where my strengths are, what I want to do, and having just moved to Berlin, Germany, I came up with “curious problem solver with a liking for automation seeks new IT implementation challenges in Berlin”.

On that very same day, I caught this tweet from Mirantis IT – according to their website, Mirantis delivers all the software, services, training, and support needed for running OpenStack.


I had first heard about Kubernetes at a very engaging Orchestrating Cloud Native Apps presentation by VMware’s Kit Colbert at VMworld 2014 in Barcelona. That was about spinning up app containers; only two years later, we’re talking about orchestrating complete virtual infrastructure.


Mirantis Collaborates with Intel and Google to Enable OpenStack on Kubernetes


Learn more:

No usable default provider could be found for your system.

For a while now I have been intending to read something on CoreOS.

CoreOS Essentials, by Rimantas Mocevicius, looked like a good place to start as it says “To use the CoreOS virtual machine, you need to have VirtualBox, Vagrant, and git installed on your computer.”

Being a huge fan of both VirtualBox and Vagrant, this worked for me… but when I tried to run the simplest of commands, “vagrant up”, it didn’t actually work for me!

For completeness, let’s walk through the first steps to get a CoreOS VM up & running with Vagrant and Virtualbox.

$ git clone

$ cd coreos-vagrant
$ mv user-data.sample user-data

$ vagrant up

What could be simpler? Only that it didn’t work!  😦

iMac-PHG:coreos-vagrant PHG$ vagrant up
==>  Provider 'virtualbox' not found. We'll automatically install it now...
     The installation process will start below. Human interaction may be
     required at some points. If you're uncomfortable with automatically
     installing this provider, you can safely Ctrl-C this process and install
     it manually.
==>  Downloading VirtualBox 5.0.10...
     This may not be the latest version of VirtualBox, but it is a version
     that is known to work well. Over time, we'll update the version that
     is installed.
==>  Installing VirtualBox. This will take a few minutes...
     You may be asked for your administrator password during this time.
     If you're uncomfortable entering your password here, please install
     VirtualBox manually.
==>  VirtualBox has successfully been installed!
No usable default provider could be found for your system.

That first message immediately had me concerned, as I knew I already had a working VirtualBox install. Then, despite positive messages about VirtualBox working well, the last line became the title of this post. Something was up with Vagrant itself, or with compatibility between particular versions of Vagrant and VirtualBox.
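In hindsight, my first sanity check for a “no usable default provider” error is now a quick version inventory. A minimal sketch, assuming both tools install their standard command-line entry points on your PATH:

```shell
# Print the version of each tool, or note that it is missing.
# Vagrant/VirtualBox pairings matter: a brand-new Vagrant may not
# yet recognise a newer (or much older) VirtualBox release.
for tool in vagrant VBoxManage; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version 2>/dev/null | head -n 1)"
  else
    echo "$tool: not installed"
  fi
done
```

Knowing both versions up front makes it much easier to search for known incompatibilities before digging deeper.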

I’m not going to take up any more of your time than is necessary: after a few iterations of trying to understand the problem, I headed over to the Vagrant – Getting Started page.

Again, simple commands get you up and running;

$ vagrant init hashicorp/precise64
$ vagrant up

The first command worked, but “vagrant up” failed in a most unexpected way;

iMac-PHG:test PHG$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'hashicorp/precise64' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
The box 'hashicorp/precise64' could not be found or
could not be accessed in the remote catalog. If this is a private
box on HashiCorp's Atlas, please verify you're logged in via
`vagrant login`. Also, please double-check the name. The expanded
URL and error message are shown below:
URL: [""]

A few Google searches and it became clear the problem was with the “curl” command; I then found “Vagrant box not found, in a fresh install, in a mac”, which had an answer;


It is very common for 3rd-party software to bundle packages that they have tested to work. This case is very curious to me, as it looks like the MacOS (10.12.1) version of curl (/usr/bin/curl) works just fine but the bundled version provided by the latest version of Vagrant (1.8.7) does not.
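Before deleting anything, you can confirm the two binaries really differ on your own machine. A quick check, with the embedded-curl path taken from the answer above (a default Vagrant install):

```shell
# Show which curl exists where, and what version each one reports.
for c in /usr/bin/curl /opt/vagrant/embedded/bin/curl; do
  if [ -x "$c" ]; then
    echo "$c -> $("$c" --version | head -n 1)"
  else
    echo "$c -> not present"
  fi
done
```

Removing the embedded copy simply makes Vagrant fall back to the system curl, which appears to be why the one-line fix works.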

Running the command from that answer does indeed fix the problem:

iMac-PHG:test PHG$ sudo rm /opt/vagrant/embedded/bin/curl
iMac-PHG:test PHG$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'hashicorp/precise64' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Loading metadata for box 'hashicorp/precise64'
    default: URL:
==> default: Adding box 'hashicorp/precise64' (v1.1.0) for provider: virtualbox
    default: Downloading:
==> default: Successfully added box 'hashicorp/precise64' (v1.1.0) for 'virtualbox'!

This write-up is intended to record my findings on this issue and to help anyone going down the same path get to the solution quicker than I did. I will certainly look out for an update to Vagrant, to see if they address the problem. Any helpful feedback is, as always, most welcome!  🙂


Is Networking a Cognitive-Gap in Hyper-Convergence?

My background over nearly 20 years has been Oracle Database, more recently focusing on the supporting infrastructure of large-scale databases. While many of the IT silos we know have come apart, even with Software-Defined Everything (SDx), knowledge of IT Networking has remained an apparent dark-art, at least to me. So, on the basis that if I feel this way, at least one other person does, here’s a trip around what I am calling “Networking, the Cognitive-Gap in Hyper-Converged Infrastructure”.

Let’s start the conversation by asking: why do so many articles promoting awesome new 2U+ rack-oriented technology, which can scale to 10-100 nodes, make little mention of networking? I want to know more about what a ToR Switch and a 10GbE Fabric are, and if you’re like me, you want to be able to find a sufficiently technical level of information on your own; having to approach vendors directly and ask for it means feeling burdened by being sold to.

Given my need to know, I approached a former EMCer, Dennis Smith (now at Brocade), with my problem of understanding, and I got this great reply;

Most hyperconverged appliance solutions today are delivered in a single rack.  However, if you need multi-rack the most common architecture is leaf and spine with fixed config 40GbE spine switches.  For us it would be the VDX 6740 at the leaf, VDX 6940 at the spine.  We generally recommend using a flat L2 fabric for hyperconvergence (VCS fabric) since it’s the simplest to deploy, but an IP fabric using BGP or OSPF can also be deployed if desired.  And if it’s just a couple of racks, a dense 10GbE VDX 6940 could be used as a “top-of-racks” switch (“racks” in plural).
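To make the leaf-and-spine idea a little more concrete, here’s a back-of-envelope uplink calculation. The port counts are illustrative assumptions on my part, not figures from the VDX data sheets:

```shell
# A leaf with 48 x 10GbE host-facing ports and 4 x 40GbE spine uplinks:
# the ratio of host bandwidth to uplink bandwidth is its oversubscription.
host_bw=$((48 * 10))     # 480 Gbit/s towards the servers
uplink_bw=$((4 * 40))    # 160 Gbit/s towards the spine
echo "oversubscription = ${host_bw}:${uplink_bw} = $((host_bw / uplink_bw)):1"
```

A 3:1 ratio like this is a common design point; the more East-West traffic your hyperconverged nodes generate, the lower you will want that ratio to be.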

Dennis also provided a couple of links to papers they prepared in collaboration with VMware and Nutanix, respectively.

Brocade VDX 6740 Deployment Guide for VMware® EVO:RAIL

Nutanix Hyperconverged Appliance with the Brocade VDX ToR Switch

These are great papers that I encourage everyone to read; they are very clear and discuss the networking at a level I can understand.

If you need more background on network topology, old and new, I found this post from West Monroe Partners helpful;

A Beginner’s Guide to Understanding the Leaf-Spine Network Topology

There’s also a really fun podcast that helps a lot to deepen understanding of ToR, Spanning Tree, Leaf-Spine, East-West and more;

Datanauts 011: Understanding Leaf-Spine Networks

In summary, this post is a journey to close a gap in my understanding: the network is perhaps the most important component in hyperconverged infrastructure, yet the least well understood.

I want to assure everyone that the omission of similar material from Cisco, Simplivity, and anyone else is unintentional; I intend to become familiar with their offerings, and more. I am certain that with the recent announcements of 2nd-generation offerings from EMC, HPE and now Cisco, Hyperconverged Infrastructure is about to get even more attention.

Researching this post has helped me better understand networking at the “leaf” level; in a follow-up post I plan to explore the “spine” of Hyperconvergence. Hopefully with some great feedback from you: comments and links to more supporting material that I did not manage to find are most welcome.

Yours truly, @DBAStorage


Attending #UKOUG_LME16? It’s next week!





Next week, I will be attending the inaugural UKOUG Licence Management Event 2016 (#UKOUG_LME16) on the 15th March in London at the Kensington Close Hotel.

This feels a lot like how I remember my first CloudExpo in London, now 4 years ago: it was a little “rough around the edges”, but at the same time it felt like we were all still learning what cloud computing could become, with much less of the marketing.

In this age of automation, it’s not enough just to count the CPU sockets/cores that I need to make sure I am fully licensed for; I also want to know when my DBA has inadvertently enabled a feature we don’t pay for, and therefore are not licensed to use.

I am optimistic that bringing together a lot of folks interested in understanding and improving what we know about Oracle Licence Management is an outstanding idea. In the longer term, with our support, I believe it can even help elevate the conversation to a new level by making Software Asset Management more accessible to everyone.

For sure there will be animated discussions around Oracle Audit challenges. Let’s not forget, while Oracle is in the business of selling software licenses, it is our responsibility to make sure we are correctly licensed for the software we use.

At #UKOUG_LME16, I will be engaging with as many exhibitors, presenters and attendees as possible to help move the conversation forward on the following:

  • What simple methods should be used when virtualizing Oracle servers?
  • What solutions are available to help monitor Oracle license usage?
  • How can I prevent or at least detect usage of unlicensed features?

Hope to see you there! I have been looking forward to this event since it was first announced, and will do my best to share experiences on Twitter; feel free to follow me, the hashtag #UKOUG_LME16, or both. You can find the Agenda here.

Yours truly, Peter Herdman-Grant



Getting Hyper about Converged Infrastructure, Part 2

Did you like the title? Probably not as much as I do, because I love it! So much, I’ve decided to make this a series of posts.

Last time out, I opened with a discussion around converged infrastructure, and the key considerations were:

  • Single vendor solutions from Oracle, HP, etc.
  • Choose your storage, with Cisco UCS-based solutions
  • Introducing an alternative with Hyper-Convergence

In this post (and the next) I want to focus on the last point, namely the Hyper-Converged solutions provided by Nutanix and SimpliVity, among others.

I make no secret that my experience to date has certainly been with Converged Infrastructure, namely VCE Vblock: a high-end, pre-configured, tightly-coupled, potentially multi-million dollar solution with branded components from industry big-hitters, Cisco, EMC, HP, IBM, etc.

What I have discovered is that in recent years Converged Infrastructure advocates have looked down on the upstart Hyper-Converged vendors, so let’s explore three of the arguments put forward.

  • Enterprise Solutions need Big-SAN Storage
  • “Spaghetti Wiring” – how do I connect all those little boxes together?
  • Still need an expensive 10GbE infrastructure to make it truly scalable?

Enterprise Solutions need Big-SAN Storage

This is simply no longer true. I have been a big fan of the inline data services that Flash technology has enabled vendors to implement in their high-end Flash-based storage offerings, such as EMC with XtremIO. That said, only last week VMware added deduplication and compression to their VSAN product, and others, such as DataCore SANsymphony-V, have had these features and more for some years.

“Spaghetti Wiring” – how do I connect all those little boxes together?

I recently Tweeted a jokey comment about “Spaghetti Wiring” and how the 10s, 100s or even 1000s of these new hyper-converged boxes are connected together! Very pleased to share with you this photo, provided by Nutanix’ Webscale Webster (@vcdxnz001), showing that the wiring is no more complicated than old-style integrated infrastructure. You have to admire the Nutanix color-coordination in that cabling!



Still need an expensive 10GbE infrastructure to make it truly scalable?

This last point I am leaving until my next post to explore in more detail; I need to get a little deeper into the network infrastructure required to make Hyper-Converged Infrastructure as easy to implement as both Nutanix and SimpliVity claim. I did see this in a recent Principled Technologies Reference Architecture paper, commissioned by Dell;

“The Dell Networking S4048-ON 10/40 GbE top-of-rack Open Networking switch, Dell’s latest datacenter networking solution, is built to optimize performance, efficiency, flexibility, and availability in the modern datacenter. According to Dell, it offers a range of benefits, including: Low latency, High density, and Flexibility in the datacenter.”

As I’ve been looking more closely at Nutanix and SimpliVity, I found two papers that provide the deep-dive analysis and testing that I have come to expect from both The Enterprise Strategy Group and Principled Technologies. Both are great resources and I recommend reading them;

Dell XC630-10 Nutanix on VMware ESXi, A Principled Technologies Reference Architecture

SimpliVity Hyperconverged Infrastructure, An ESG Lab Validation Report

To finish up this time, anyone with any interest in Hyper-Converged Infrastructure will want to tune in on Tuesday and see the latest on what EMC has to say about all this; The X Force in Hyper-Converged.

Before then, you can read two articles by The Register for background;



As always, constructive comments/feedback welcome!

Yours, DBAStorage


Getting Hyper about Converged Infrastructure

Having long been interested in the Integrated Infrastructure space and the discussion around Converged and Hyper-Converged infrastructure, here’s a first blog on the subject. Reading the Oracle Private Cloud Appliance data sheet, it describes the formerly named Virtual Compute Appliance as an integrated, “wire once,” software-defined converged infrastructure.

This got me thinking about the areas of choice available in Converged and Hyper-Converged infrastructure, so I decided to take a high-level look at the many virtualization-based solutions out there. The table below details the Vendor, Server Compute, Hypervisor and Data Fabric in the different solutions considered.

You will see the Converged Infrastructure space includes vendor-specific solutions from among others HDS, HP Enterprise and Oracle, and then multiple solutions based on Cisco UCS Integrated Infrastructure combined with FC Storage Arrays from EMC, IBM, NetApp and Pure.

I have only included Nutanix and Simplivity in the Hyper-Converged Infrastructure space, as they are the vendors I am so far familiar with that provide both hardware and software solutions.

When considering Converged or Hyper-Converged Infrastructure, this summary overview highlights key considerations:

  • Choice of Hypervisor (VMware, Microsoft, KVM)
  • Choice of Data Fabric (Storage Array or SDS)
  • Choice of Management Software (Vendor or Multiplatform)

| Vendor | Management | Compute | Hypervisor | Data Fabric |
| --- | --- | --- | --- | --- |
| Hitachi Unified Compute Platform | VMware vSphere, Microsoft Private Cloud | Hitachi Compute Rack | VMware ESXi, Hyper-V | HDS / Cisco or Brocade |
| HPE Helion OpenStack | OpenStack CloudSystem | HP Proliant DL Gen9 Servers | OpenStack KVM | HP / Cisco |
| Oracle Private Cloud Appliance | Oracle Enterprise Manager 12c | Oracle Server X5-2 | Oracle VM | Oracle ZFS / Infiniband |
| FlashStack, by Pure & Cisco | VMware vSphere | Cisco UCS | VMware ESXi | Pure / Cisco |
| FlexPod, by NetApp & Cisco | VMware vSphere, Microsoft Private Cloud | Cisco UCS | VMware ESXi, Hyper-V | NetApp / Cisco |
| VCE Vblock, by EMC & Cisco | VCE Vision / VMware vSphere | Cisco UCS | VMware ESXi | EMC / Cisco |
| VersaStack, by IBM & Cisco | IBM Spectrum / VMware vSphere | Cisco UCS | VMware ESXi | IBM / Cisco |
| Nutanix | VMware vSphere, Microsoft Private Cloud, Acropolis | Nutanix XCP, Dell XC | VMware ESXi, Hyper-V, KVM | NDFS / 10GbE |
| Simplivity OmniStack | VMware vSphere | OmniCube, Cisco UCS | VMware ESXi | SDS / 10GbE |

Decision making remains challenging: when choosing the Oracle PCA, you are choosing vendor “lock-in”, with Oracle VM as your hypervisor and Oracle Enterprise Manager 12c for management and orchestration; similarly, choosing an HP Enterprise Helion OpenStack solution leaves no chance to make use of Microsoft Private Cloud or the more widely available VMware vSphere.

You can also see why even with the upcoming Dell/EMC merger, VCE is protective of its Vblock remaining a Cisco-based product, otherwise EMC would be leaving the Cisco UCS-based Converged Infrastructure space to IBM, NetApp and Pure Storage.

Between the Hyper-Converged Infrastructure solutions shown (while both provide a unique Software-Defined Storage solution), I understand that in choosing Simplivity over Nutanix you are currently limited to using VMware vSphere, whereas with Nutanix your choice also includes Microsoft Private Cloud and Nutanix’s own Acropolis KVM.

Interesting times, with EMC recognizing the growth opportunity in Hyper-Converged Infrastructure solutions, we shall soon see what they will add to customer choice.

I welcome your comments/corrections, as this is my first foray into this fast moving and exciting area that I believe will dominate Infrastructure discussions in the coming few years.

EMC Redefines Big Data with 2 Business Data Lake Solutions

This is my first blog since being selected to join the EMC Elect 2015 program, recognizing 100 of the most socially-engaged specialists in the EMC Community, including employees, partners and customers. One of the many benefits of being a member of the EMC Elect team is being introduced to details on technology that you perhaps would not have expected, and this is definitely one for me. The new Big Data ecosystem is very important to Oracle DBAs; we must come to understand Hadoop and NoSQL, realizing that the database world is now #NoOracle. Oracle folks will know Larry Ellison likes to remind everyone that NoSQL means “Not only SQL”, so I decided NoOracle means “Not only Oracle”! 🙂

On March 23, 2015, EMC announced it is redefining Big Data by delivering the industry’s first fully-engineered, enterprise-grade business data lake solutions.

According to EMC “Businesses are recognizing that combining Big Data technologies with new data sources and analytic approaches generates new opportunities, but many are struggling to identify the best opportunities and convert them into measurable business value.”

Two solutions were announced: Federation Business Data Lake and EMC Business Data Lake.

Fundamentally the two solutions have many common components, the difference being that the EMC Business Data Lake provides additional choices for the ecosystem that include non-Federation, competitive products such as Hortonworks and Cloudera (which, interestingly, compete with Pivotal HD).

The Federation Business Data Lake is a fully-engineered, enterprise-grade data lake that makes enabling Big Data initiatives radically simple. Preconfigured building blocks, core Federation technologies and an open ecosystem let you focus on building new capabilities instead of a new infrastructure. A flexible, self-service, virtualized environment means you can prove the value quickly, scale out rapidly, and keep things like security and governance under control. Removing the headaches of integration means that you can start to realize the value of analytics for your business much more quickly.

Federation Business Data Lake 1.0


For customers looking to leverage other Hadoop distributions in addition to Pivotal HD, EMC announced the EMC Business Data Lake, which provides customers additional choice and flexibility.

EMC Business Data Lake 1.0


EMC believes that by implementing the Federation Business Data Lake and EMC Business Data Lake solutions, customers can realize value from Big Data analytics in as little as one week (instead of months) with a fully-engineered, enterprise-grade data lake solution – I think you would have to have completed a lot of preparation to achieve anything substantial in one week, the point likely being that a fully-engineered, enterprise-grade data lake solution can be up and running relatively quickly, ready to ingest data and commence Big Data analytics.

As businesses look to leverage new sources of customer, product, and operational insights to develop new applications, products, and business models, volumes of data are hugely increasing, while the cost of data storage is decreasing and emerging technologies enable execution of new capabilities in real-time.

This creates a game changing opportunity for IT to help lead this transformation by delivering an information infrastructure matched to how the business really wants to operate. The Big Data opportunity is also an opportunity for IT to prove its strategic value to the business by removing integration barriers and clearing a path to more timely and informed business decisions.

The Federation Business Data Lake (FBDL) and EMC Business Data Lake (BDL) are both fully-engineered, enterprise-grade data lakes. As you can see in the above diagrams, both are comprised of core EMC Federation technologies (including EMC II Storage, VMware vCloud Suite and the Pivotal Big Data Suite) enabling self-service, end-to-end integration, management, and provisioning of the entire Big Data environment.

This launch also included some key Big Data Services that are designed to help ensure success for customers at various levels of proficiency with Big Data.

For undecided customers taking the first steps in defining their Big Data strategy, the EMC Big Data Vision Workshop can help align IT and business stakeholders to identify and prioritize feasible target use cases that will deliver meaningful business outcomes.

For motivated customers that need help proving out the value of data science and machine learning, The EMC Big Data Proof of Value Service demonstrates the ROI of the target Big Data use case in a working analytics environment using real customer data. These customers typically know the use case they are going after but are struggling to put things into practice.

For customers ready to deploy a fully-integrated, enterprise-grade data lake quickly, the Federation and EMC Business Data Lakes deliver fully-engineered solutions built on core Federation technologies and EMC engineering, providing automatic instantiation of the Big Data environment, including storage, cluster and analytics.

The following diagrams provide some detail on the applications and products in available configurations:

Products in Federation Business Data Lake 1.0


Products in EMC Business Data Lake 1.0


As an EMC Oracle specialist, I see this inaugural launch of both Federation and EMC Business Data Lake platforms as an exciting time. While it illustrates the power of the EMC Federation to bring a complete, fully-engineered, enterprise-grade, end-to-end set of solutions to support customers on their Big Data journey regardless of their stage of maturity, it also issues an enterprise-grade challenge to Oracle Corporation, showing them that they will not have everything their own way with Enterprise Business Data Lakes as they have with Enterprise RDBMS. Understanding this is the first step to Oracle Practitioners realizing the Oracle Database is now only a part of a much larger emerging Business Data Lake ecosystem.

Using Ansible for executing Oracle DBA tasks.

Frits Hoogland Weblog

This post looks like I am jumping on the bandwagon of IT orchestration like a lot of people are doing. Maybe I should say ‘except for (die hard) Oracle DBA’s’. Or maybe not, it’s up to you to decide.

Most people who are interested in IT in general will have noticed IT orchestration has gotten attention, especially in the form of Puppet and/or Chef. I _think_ IT orchestration has gotten important with the rise of “web scale” (scaling up and down applications by adding virtual machines to horizontal scale resource intensive tasks), in order to provision/configure the newly added machines without manual intervention, and people start picking it up now to use it for more tasks than provisioning of virtual machines for web applications.

I am surprised by that. I am not surprised that people want boring tasks like making settings in configuration files and restarting daemons, installing software…

View original post 2,564 more words

Experience Oracle’s Sun ZFS Storage Appliance


First Published by DBAStorage in Everything Oracle at EMC on Jun 24, 2014 11:40:00 PM

I know I should not be surprised by the recent unbelievable 5x / 10x claims comparing Oracle’s Sun ZFS Storage Appliance with EMC’s and other vendors’ “legacy” products, but they really were not comparing similar-purpose products to their own.

I firmly believe EMC prides itself on providing informed customer choice and a breadth of products that most often exceed customer requirements. With that in mind, the right way for technical people like us to respond to this form of old-style marketing is either to present our strengths and leave the customer to decide, or, preferably, to become sufficiently familiar with Oracle’s competing products to compare them fairly with our own, ensuring the right questions are being asked of both our competitors and our own specialists, to the benefit of our customers.

Having recently come across references on EMC Support and ECN describing VNX, Isilon and XtremIO Simulators available for download, I began to think how cool it would be if Oracle had a simulator for their ZFS appliance that everyone could use for testing, and after a quick search on Google, below is an introduction to what I found.

Oracle’s Sun ZFS Storage Appliance Simulator

You must be registered with an account on Oracle’s Technology Network (OTN) and then perform the following steps to download, install and configure the Sun ZFS Storage Appliance Simulator.

Hardware requirements for the Oracle VM are a healthy 2GB RAM, 1 CPU and 125GB HD space; this is a storage simulator with a 50GB system disk and 15 x 5GB disks for testing purposes. In the GUI provided, you actually get a view of the storage array and can click on individual disks for information.

Follow these steps to get the Oracle ZFS Storage Simulator:

  1. Install and start VirtualBox 3.1.4 or later.
  2. Download the simulator and uncompress the archive file.
  3. Select “File – Import Appliance” in VirtualBox or simply double-click the file Oracle_ZFS_Storage.ovf
  4. Select the VM labeled “Oracle_ZFS_Storage” and check Network Adapter 1; I had to change this to “Bridged Adapter”.
  5. Start the VM.
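If you prefer the command line to the GUI, steps 3 to 5 can also be scripted with VirtualBox’s VBoxManage tool. A sketch, assuming VBoxManage is on your PATH and that “en0” is the Mac’s active network interface (adjust to taste):

```shell
vm="Oracle_ZFS_Storage"
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage import "${vm}.ovf"                                  # step 3: import the appliance
  VBoxManage modifyvm "$vm" --nic1 bridged --bridgeadapter1 en0  # step 4: bridged networking
  VBoxManage startvm "$vm"                                       # step 5: start the VM
else
  echo "VBoxManage not found - install VirtualBox first"
fi
```

Doing it this way makes the setup repeatable if you need to rebuild the simulator later.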

When the simulator initially boots, you will be prompted for some basic network settings; this apparently is exactly the same as if you were using an actual Oracle ZFS Storage Appliance;

  • Host Name: oraclezfs1
  • DNS Domain: “localdomain”
  • IP Address:
  • IP Netmask:
  • Default Router:
  • DNS Server:
  • Password: @@@@@@@@
  • Password: @@@@@@@@

After you enter this information, press ESC-1, then wait and you will see two URLs to use for subsequent configuration and administration: one with an IP address and one with the hostname you specified.

Use the URL with the IP address (for example to complete appliance configuration.

The login name is ‘root’ and the password is what you entered as @@@@@@@@ above.

It is best now to download the Oracle ZFS Storage Appliance Simulator Quick Start Guide and starting on page 5, follow the steps to confirm the settings screens, check the values entered and click COMMIT to continue.

Page 14 gives an Example Filesystem Setup, with further information available in the Product Documentation;

Sun ZFS Storage Appliance Product Documentation
Oracle Snap Management Utility for Oracle Database User Guide
Oracle Enterprise Manager Plug-in for Oracle ZFS Storage Appliance User Guide

With this simulator, you can get a feel for the GUI and storage features available in a Sun ZFS Storage Appliance, and have a better-informed conversation with our customers about the features and functionality available.


Disabling Oracle’s new Database In-Memory option, or perhaps not!

You may already be aware that there are now a number of online articles and blog posts concerning the potential for accidentally enabling the new Oracle Database In-Memory option; with pricing understood to be the same as for Oracle’s Real Application Clusters, and Oracle’s legendary tough stance on license audit findings, it is not difficult to see why this has got the attention it has.

The initial InformationWeek Report: Oracle Patch Turns On $23,000 Upgrade, was followed on 28th May by CBR Online reporting Oracle Denies Its £14,000 In-Memory Option Activates By Default and then Oracle hits back at ex-employee’s claims over in-memory database option.

My years in the IT industry mean it is certainly no surprise to me that my colleague at EMC, Kevin Closson, has raised awareness of the issues on his now 4-part personal blog; nevertheless, I am disappointed that he has been subjected to criticism and occasional ridicule from so-called IT industry experts and, apparently, from some of his former colleagues at Oracle Corporation.

In her blog post Getting started with Oracle Database In-Memory Part I, Maria Colgan, Oracle’s Product Manager for Oracle Database In-Memory, has attempted to address the issues raised without mentioning them directly, by starting with the intention to answer the question “how and when is Database In-Memory installed and enabled”.

My initial response was enthusiastic, and I still believe Maria has made a positive contribution to the discussion; I commented as much on her blog post, as I believed she had provided clarity on how to avoid accidentally enabling the new Oracle Database In-Memory option and, as a result, avoid exposure to potential license audit problems. However, it has become clear after my own testing that Maria and her colleagues need to look at this again.

This is due to what I have described as “no rows selected” == “DOUBT”: the SQL SELECT statement used to query feature usage in the Oracle Database makes use of the dba_feature_usage_statistics view, which joins 3 WRI$ tables, and in my tests one of these tables (WRI$_DBU_FEATURE_USAGE) is empty!
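For reference, this is the kind of check I was running, sketched here as a shell snippet. It assumes sqlplus is available on the database server with a SYSDBA connection, and that the LIKE pattern matches the feature name (reported, as I understand it, as “In-Memory Column Store”):

```shell
pattern='In-Memory%'
if command -v sqlplus >/dev/null 2>&1; then
  # Query the documented feature-usage view; the heredoc is unquoted
  # so ${pattern} expands before sqlplus sees the SQL.
  sqlplus -s / as sysdba <<SQL
set linesize 120
column name format a40
select name, detected_usages, currently_used
  from dba_feature_usage_statistics
 where name like '${pattern}';
SQL
else
  echo "sqlplus not found - run this on the database server itself"
fi
```

When the underlying WRI$_DBU_FEATURE_USAGE table is unpopulated, this is exactly where “no rows selected” comes back.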

Screen Shot 2014-07-30 at 02.37.30

After much consternation and multiple install and test runs, I concluded that in my initial tests the WRI$_DBU_FEATURE_USAGE table is unpopulated when the tests are performed in the CDB$ROOT database, which certainly results in the “no rows selected” output (Ed. happy to be corrected here).

However, when the tests are performed using the orclpdb PDB, installed as part of the Typical Installation of the Oracle Database 12c software, I see output similar to that provided by Kevin Closson. I say similar because, as you can see below, I am yet to reproduce the output that has resulted in such controversy!

Screen Shot 2014-07-30 at 02.41.47

Clearly, there is something different about our tests, perhaps some sqlplus flags or other settings. What we need to do is share the exact configuration and commands and get to the bottom of this in a mutually agreeable and respectful manner, let’s see what we can do.