Oracle Database Support on the Cloud – when no news is bad news.

Hello there, hope you are well today.

No major changes in this field since the launch of the “Cloud Licensing Support” policy, available here. AWS and Azure are supported, without the core factor for Intel platforms, but no news about GCP. If you are checking out Google Cloud – which is amazing, by the way – you will have no luck running Oracle, unfortunately.

GCP gives you the power to customize everything (cores, RAM, disk), so you only pay for what you use. This could be awesome from an Oracle licensing perspective, ’cause I can create a VM with 3 cores, for example, if my database plus OS consumes only 3 cores.

Please, Google – get together with Oracle; I want to use GCP on my next projects 🙂

As always, comments are welcome.

See you around!


*UPDATE* – May 22

So, I finally got it. After being invited to visit Google and talk with the GCP team, I understood their game. There are no technical limitations to running Oracle on GCP at all. Some customers are running Oracle on GCP; however, they have limitations in licensing and support. The support issue can be solved using third-party services, like Rimini Street. The license issue is the big mountain to climb. Some customers do receive a formal “go-ahead” to run on GCP, but only after a hard negotiation with Oracle.

GCP’s game is all about their PaaS solutions. I saw BigQuery running, and I can tell you, it’s something quite amazing – it has the potential to be faster than Exadata. Yes, you read it right. FASTER THAN EXADATA. Sometimes. 🙂

So, GCP is keeping up with AWS and Azure… and Google has the power (brains and money) to change the market.

See you around!




Posted in cloud

Wow – time goes by

Wow – 4 months and no news from us?

Yeah, we know. Sometimes life takes you the hard way, and days, weeks and months just pass in the blink of an eye.

But we are back in business! New knowledge, new understanding, new topics, new certifications…

See you soon!


Posted in mindset, off-topic

Wish you the best in 2018!

Sharedpool and I wish you a great New Year’s Eve, and an amazing 2018!

[Image: Star Wars “Happy New Year” graphic]
copyright Lucasfilm and Disney – always =]


Posted in off-topic

Backup in the Cloud era – what is changing?

Hello there!

Hope this post finds you well. Recently one of my customers asked me to advise on backup/restore procedures and solutions for a new environment running on AWS.

Production databases will run on EC2 instances, with Oracle under BYOL (bring your own license). The customer is considering solutions based on EBS (Elastic Block Store) snapshots or S3 (Simple Storage Service).

Snapshots in the Oracle world are usually used in conjunction with begin/end backup operations. If you take only one snapshot per day and you are not willing to lose 24 hours of committed data, you need a second backup strategy. There is a product offered by N2WS which is quite amazing at orchestrating, scheduling and controlling snapshots – you can configure it to take a picture every 5 minutes, which is usually below a business RPO. But what about a logical corruption, a wrong delete done last weekend? How do you restore that? You must retain snapshots for a week, maybe a month, maybe a year, restore a snapshot to another EC2 instance and manually recover the data. Seems costly, right?
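To make the trade-off concrete, here is a minimal sketch (in Python, with made-up numbers – this is not AWS pricing, just arithmetic) of how the snapshot interval drives your worst-case data loss and how many snapshots you end up retaining:

```python
# Hypothetical helper: worst-case RPO equals the snapshot interval, and the
# retention window dictates how many snapshots you must keep (and pay for).
def snapshot_plan(interval_min: int, retention_days: int) -> dict:
    per_day = 24 * 60 // interval_min
    return {
        "worst_case_rpo_min": interval_min,          # data you can lose, worst case
        "snapshots_retained": per_day * retention_days,
    }

daily = snapshot_plan(interval_min=24 * 60, retention_days=30)  # 1 snap per day
frequent = snapshot_plan(interval_min=5, retention_days=30)     # every 5 minutes
print(daily, frequent)
```

The point: dropping from a daily snapshot to a 5-minute one shrinks the worst-case loss from 24 hours to 5 minutes, but multiplies the retained snapshot count – which is exactly the cost problem described above.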

The ideal-world way – from the documentation – is to deploy OSBCS (Oracle Secure Backup Cloud Service). This (paid) option gives you the ability to use S3 as tape, so only minimal adjustments are needed to migrate your backup strategy to the cloud. You set up and install OSBCS on each EC2 instance, adjust your RMAN channels to use it, and you are ready to go! RMAN retention, catalog – everything works smoothly if you choose to pay for this option. How is OSBCS charged? Per channel.

So, if you have 100 databases and buy, for example, 10 channels, you can run 1 backup at a time with 10 channels, 10 backups at a time with 1 channel each, 2 backups at a time with 5 channels, 5 backups at a time with 2 channels, or – you get the idea.
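The channel math above is just divisor arithmetic; here is a tiny sketch (a hypothetical helper, not an Oracle tool) that enumerates the valid splits for a licensed channel count:

```python
# Hypothetical helper: with N licensed channels, any split into concurrent
# backups works as long as the channels in use never exceed N.
def channel_splits(licensed_channels: int) -> list[tuple[int, int]]:
    """Return (concurrent_backups, channels_per_backup) pairs that use
    exactly the licensed channel count."""
    return [
        (backups, licensed_channels // backups)
        for backups in range(1, licensed_channels + 1)
        if licensed_channels % backups == 0
    ]

print(channel_splits(10))  # [(1, 10), (2, 5), (5, 2), (10, 1)]
```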

Comments are welcome =]

See you around,


Posted in cloud, database

Why you should, and should not, worry about the new era of intelligent databases

These days I feel like I’m in a Terminator movie, with Cyberdyne working really hard to make Skynet a sentient being – to become intelligent.

We all know that some activities are a best fit for automation, which is why we spend hours learning Python or Perl. If you earn your daily bread by executing scripts, loading data into the database, creating users, increasing tablespace sizes, or running basic health checks – YES – you need to step up your Oracle DBA game. The machines are getting smarter on a daily basis; you should grow smarter as well.

So, in my opinion, what activities will not be feasible for the machines in the short, perhaps medium, run? Performance analysis, deep troubleshooting, installation scenarios that escape the next-next-finish rule, database architecture, data migration and consolidation… things that usually require people to think and design more (the 80/20 plan/execute rule) will not be played by the machines in the short run.

If your performance analysis is done by dbms_sqltune – you need to worry too 🙂

As always, feedback is welcome.

See you around,



Posted in database, mindset

The infamous JDBC closed connection

Sometimes you, as a DBA, are blamed for everything. The database is slow, unavailable, unpatched – the list goes on and on.

Sometimes, in rare situations, you can prove them wrong 😀

Last week we were called to analyze an intermittent application issue. The app team blamed the database, pointing at the “java.sql.SQLException: Closed Connection” error in the app logs. Everything at the database level was checked and rechecked before they called us. After long debugging hours with no results, we tried a different approach: what about sniffing the network card on the app server, checking the communication flow between the application and the database?

And so we did it:

tcpdump -i eth0 tcp port 1521 -A -s1500 | awk '$1 ~ "ORA-" {i=1;split($1,t,"ORA-");while (i <= NF) {if (i == 1) {printf("%s","ORA-"t[2])}else {printf("%s ",$i)};i++}printf("\n")}'

This gave us a nice output:

bla@app_blaserver:~ # tcpdump -i eth0 tcp port 1521 -A -s1500 | awk '$1 ~ "ORA-" {i=1;split($1,t,"ORA-");while (i <= NF) {if (i == 1) {printf("%s","ORA-"t[2])}else {printf("%s ",$i)};i++}printf("\n")}'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 1500 bytes
ORA-01403:no data found
ORA-00913:too many values
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01438:value larger than specified precision allowed for this column
ORA-06512:at line 2
ORA-00937:not a single-group group function
ORA-01403:no data found
ORA-01403:no data found
ORA-00937:not a single-group group function
ORA-01403:no data found
ORA-00937:not a single-group group function
ORA-01403:no data found
ORA-00937:not a single-group group function
ORA-01403:no data found

The ORA-01403 is expected after the last fetch of each cursor being processed – no bad news there.

Hmmm… and when the ORA-01438/ORA-06512/ORA-00937 are raised, what happens to the connection? You got it, right?

After fixing what was causing those errors, the intermittent issue stopped and everyone was happy – including the DBA team 😀

(You need to adapt the command to match the listener port and network card in your box, okay?)
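If awk is not your thing, the same extraction can be sketched in Python – feed it the ASCII text from `tcpdump -A` and count the ORA- codes. The sample payload below is illustrative, not a real capture:

```python
# Hedged alternative to the awk one-liner: pull ORA-NNNNN codes out of a
# tcpdump -A text dump with a regex and tally them.
import re
from collections import Counter

def ora_errors(capture_text: str) -> Counter:
    """Count ORA-NNNNN codes appearing in the captured text."""
    return Counter(re.findall(r"ORA-\d{5}", capture_text))

sample = "..M4ORA-01403: no data found..ORA-00937: not a single-group group function..ORA-01403"
print(ora_errors(sample))  # Counter({'ORA-01403': 2, 'ORA-00937': 1})
```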

As always, feedback is very welcome.

See you around,


Posted in database, Troubleshooting

12cR1 + SUSE 12 SP2 = take a look at this post

Today I faced a weird issue during a HAS (Oracle Restart) installation on SUSE 12 SP2 for a customer.

Everything went smoothly during the runInstaller phase, but it failed with the following error:

SERVER_BLA:/u01/grid/ # /u01/grid/ -I/u01/grid/ -I/u01/grid/ /u01/grid/
Using configuration parameter file: /u01/grid/
Creating OCR keys for user 'ora12c', privgrp 'dba'..
Operation successful.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node SERVER_BLA successfully pinned.
2017/06/10 00:46:42 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
PRCR-1006 : Failed to add resource ora.ons for ons
PRCR-1115 : Failed to find entities of type resource type that match filters (TYPE_NAME ends .type) and contain attributes
CRS-0184 : Cannot communicate with the CRS daemon.
2017/06/10 00:47:04 CLSRSC-180: An error occurred while executing the command 'srvctl add ons' (error code 0)
2017/06/10 00:47:55 CLSRSC-115: Start of resource 'ora.evmd' failed
2017/06/10 00:47:55 CLSRSC-202: Failed to start EVM daemon

This PRCR-1006 drove me nuts. It turns out, after long hours digging through the web + Metalink + caffeine, that you need to:

vi /etc/ and add /lib64/noelision at the top of it.

The file should look like this afterwards:

SERVER_BLA:/u01/grid/ cat /etc/
include /etc/*.conf

And then, add the symlink on GI_HOME:

SERVER_BLA:/u01/grid/ ln -s /lib64/noelision/

And only after this, run the root script. If you are reading this post, you probably already executed it ( =[ sadly). So you need to deconfigure first, execute the steps above, and then run it again.

After this, happiness is back =]


When patching the GI home, remove the symlink you created, apply the patch, and then recreate the symlink.

Feedback is always welcome!
See you around


Posted in database, Installation, Uncategorized

db_file_multiblock_read_count – to be or not to be?

Sometimes you want to use things with default options… and sometimes the default is not good enough.

We all know that db_file_multiblock_read_count is bumped up by default on startup; usually you get 128 blocks per read. Why is that?

According to the Oracle documentation, the default value is high, but the CBO will not favor full table scans just because of it:

“Even though the default value may be a large value, the optimizer will not favor large plans if you do not set this parameter. It would do so only if you explicitly set this parameter to a large value.”

Okay. But why? From a 10053 trace within a session, you can see that the CBO always computes db_file_multiblock_read_count as “8”… even if you see 128 in the spfile.

The CBO always uses 8 if you don’t set db_file_multiblock_read_count explicitly. How do you know whether you have set it? Use this:

SELECT nvl(value,'*NULL*') AS value
FROM v$spparameter
WHERE name = 'db_file_multiblock_read_count';

If this returns *NULL*, you have not set db_file_multiblock_read_count: Oracle shows you 128, but the CBO uses 8. Sad, no?

You have all read the documentation and know the effects of setting db_file_multiblock_read_count too high: full table scans will happen more often and, as a result, on an OLTP system you will usually get a call from your boss (hehe).

But there is a way to set db_file_multiblock_read_count correctly… by testing. For example, take a look at the block below. It’s silly, I know, but it can give you a magic value for this parameter.

set serveroutput on
DECLARE
  l_count         PLS_INTEGER;
  l_starting_time PLS_INTEGER;
  l_ending_time   PLS_INTEGER;
  l_time          NUMBER;
BEGIN
  dbms_output.put_line('dbfmbrc seconds');
  FOR l_dbfmbrc IN 1..64 LOOP
    EXECUTE IMMEDIATE 'ALTER SESSION SET db_file_multiblock_read_count='||l_dbfmbrc;
    EXECUTE IMMEDIATE 'ALTER SYSTEM FLUSH BUFFER_CACHE';
    EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';
    l_starting_time := dbms_utility.get_time();
    SELECT /*+ full(t) */ count(*) INTO l_count FROM big_table t;
    l_ending_time := dbms_utility.get_time();
    l_time := round((l_ending_time - l_starting_time)/100);
    dbms_output.put_line(l_dbfmbrc||' '||l_time);
  END LOOP;
END;
/

(Thanks to Chris Antognini, you’re great)

With this block (be careful, it will take 60–90 minutes depending on the big table’s size; as advice, use a ~5 GB table) you will have the values; plot a graph and look at the behavior. Don’t choose a high value for OLTP.
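Once you have the (dbfmbrc, seconds) pairs, picking the value does not have to be eyeballed from the graph. Here is a small sketch (the sample timings are made up, and the 5% tolerance is my own assumption) that chooses the smallest setting within 5% of the fastest run:

```python
# Hypothetical picker: smallest db_file_multiblock_read_count whose elapsed
# time is within `tolerance` of the fastest run - for OLTP, bigger is
# rarely better, so prefer the smallest "good enough" value.
def pick_dbfmbrc(timings: dict[int, float], tolerance: float = 0.05) -> int:
    best = min(timings.values())
    return min(k for k, v in timings.items() if v <= best * (1 + tolerance))

sample = {8: 120.0, 16: 80.0, 32: 66.0, 64: 60.0, 128: 59.5}
print(pick_dbfmbrc(sample))  # 64
```

Here 128 is only marginally faster than 64, so the helper settles on 64 – the same judgment call you would make looking at the elbow of the plotted curve.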

See you around!

Posted in Performance, Troubleshooting

12c Upgrade Exam – 1z0-060

Today we will describe some impressions of the 1z0-060 exam.

So, to succeed in the exam, you need to:

  • Study the 12c architecture (CDB/PDB) hard: how to start, stop, create, drop, secure, etc.
  • Avoid trusting exam-dump answers.

We recommend taking the new-features course, or getting some hands-on experience with 12c first – some questions relate directly to daily DBA tasks.

Posted in database, exam, New Features 12c, oracle

First impression on 12cR2 Oracle Clusterware install

Today we are going to discuss first impressions of a fresh install of Oracle Clusterware 12cR2 (for the database, you can find some considerations here).

I installed it on a fresh Oracle Linux 6.9 on VirtualBox 5.1.18 and observed some points:

  • Like the database install, the grid software is a single file to download (take care when uncompressing the zip file: you need to uncompress it in the location where the cluster software will live).
  • Oracle has added 2 new diskgroup types (not present in 11gR2): Flex and Extended (more information can be found here).

[Screenshot: 2017-03-30 14:38]

  • Pay attention to the prerequisites and space: almost 40 GB was necessary for the grid diskgroup if your cluster has fewer than 4 nodes.
  • Gavin Soorma has upgraded a 12cR1 cluster. If you are planning to upgrade your cluster, consider reading his post.
  • I installed the old way, with Oracle ASMLIB, for test purposes – and it works! 🙂
  • If you install in graphical mode, be aware that you must unzip the cluster software where the grid ORACLE_HOME will reside, because you cannot change the install home location, as you can see below:

[Screenshot: 2017-04-12 17:44]

So, I’ll perform more tests on this test cluster. If you are planning to build your own test cluster, I recommend:


Posted in database, Installation, New Features 12c, oracle