18c install on premises – first impressions

Hey all,

Some things have changed in 18c. Today I will share my first impressions of this pretty new product on premises, with some comparisons to older versions. (PS: I grouped each topic by category before diving in… I hope this helps you on your next upgrade.)

  • Download: If you download from Oracle OTN, you will notice there is now only one zip file for the Oracle Database install and one zip file for the Grid install.
  • Install: now you unzip all the content directly into the ORACLE_HOME. Oracle calls this new approach the "Simplified Image-Based Oracle Database Installation".
  • Install: for Grid Infrastructure, runInstaller was replaced by gridSetup.sh; for the database you now run runInstaller from inside the new, unzipped home.
  • Install: you will need a machine with at least 8 GB of RAM to install Grid Infrastructure / the database.
  • Install: the database installer again offers both editions (as the 10g installer did) – you can choose between Enterprise Edition and Standard Edition 2.
  • Install: during the database install you can now choose to have TFA (Trace File Analyzer) configured during the execution of root.sh.
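The whole image-based flow, as a minimal sketch (the home path and zip file name below are assumptions from my lab notes; adjust them to your environment):

```shell
# Image-based install: create the new home and unzip the image into it.
# ORACLE_HOME path and zip name are assumptions -- use your own.
export ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
mkdir -p "$ORACLE_HOME"
cd "$ORACLE_HOME"
unzip -q /stage/LINUX.X64_180000_db_home.zip

# Then launch the installer from inside the new home.
# (For Grid Infrastructure, the equivalent is running gridSetup.sh
# from the unzipped Grid home.)
./runInstaller
```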


See you. 🙂

Posted in 18c, database, Installation, new features, On Premises, oracle

Why you should consider cloud – today.

Okay. You opened this post, I have your attention for perhaps one minute tops, and I hope you are not scrolling down doing some fast-paced reading.

I'll not tell you that by migrating to the cloud you will save budget compared with on-premises architectures, because sometimes this is not true. I'll not tell you that 4 cents per gig is a great price, because when you multiply it by a million gigabytes (a petabyte) or more, you will have a problem on your hands.

What I will tell you is why, seriously, you need to think about cloud today. Don't leave it for tomorrow. Spend 3 minutes here with me, today. Let's go?

Wanna know what is truly great and awesome about cloud computing? Okay. In five years (or maybe next week), you, a senior IT professional, will be talking with your child, who works in the IT field too, remembering the "golden age", the "good old days", the "old school". Here is what you two will talk about:

— Hey son, when I was your age, one day my manager came into my cubicle yelling at me (because someone yelled at him/her): "We need to make these servers available today!! We already have the code, some beautiful and colorful app that will bring us 5 times more customers, and we need to deploy the infrastructure today." What did I think? If everything was already bought, this meant a lost night/dawn/morning, 2 maybe 5 grams of caffeine, high levels of cortisol, and my job would be:

  • unbox everything and mess with the datacenter
  • physically install servers, firewalls, network load balancers and cabling, and feed them AC power for the very first time
  • if nothing explodes, all lights are green and everything is 5 by 5, it's time to install and configure everything: OS, firewall rules, storage, networking, the latest patches, and testing, testing, testing.
  • if everything goes well, I should now have 2 maybe 3 servers available to receive the code. It's 11 AM, my boss is yelling at me again (not his/her fault, it's a critical moment), but, after 12 hours, I am a hero and our new and colorful application is running flawlessly.

— What??? 12 hours, Dad? And you weren't fired?

— No son, I was a hero. How would you do all of this today?

— 5 clicks, 15 minutes of work, being pessimistic.

In 2006, when AWS launched EC2, the game started to change, my friends. IT and business started to make amends, walking hand in hand with each other. IT is no longer "that bad place, with really mean people who say NO to me every day"... now IT are the good guys, the "do" guys. Why? Because you don't need to spend 12 hours anymore. With 10 clicks, in minutes you'll have the job done. It's amazing, right?

Okay. Some of you are now thinking: "But I have a virtualized environment, I run a hypervisor. I have templates and automation to do this work in minutes too."

Well, what about your farm's capacity? Do you have enough cores, RAM and disk to deploy something that becomes a requirement today? Do you have to make annual capacity plans, buying all this capacity and keeping it available for someday, perhaps, consuming it?

If you are not able to expand your farm today, but your competitor can, how does that affect your business goals? In the world of today, now, this second, can you really wait hours/days/weeks to get your infrastructure ready?

Virtualization is great. Virtualization changed the game in the past. But on-premises virtualization doesn't have one *very* important thing: hyperscale. You need a thousand cores today? Okay. You need 2 TB of memory today? Okay. You need 800 TB of disk today because someone thought that deploying a giant Hadoop cluster is a good idea? Okay.

Cloud providers are buying brand new servers, disks and CPUs at a scale of millions, and researching and developing new PaaS and SaaS offerings. Everything to make your IT department the business's best friend.

Cloud is not just about savings. Cloud is about overcoming the competition. Cloud is not a place where everyone is happy and you should be happy there too. Cloud is a tool. And you should consider using this tool. Today.

See you!

Posted in cloud, mindset

Patching an Engineered System

Everyone in the Oracle DBA world knows the key value of using an Engineered System, such as Exadata or SuperCluster.

In mission-critical on-premises environments, as DBAs we often need to consolidate many databases, on platforms with different versions, into an Engineered System.

In the Engineered Systems world we have a different type of patch deployment, consisting of a group of many other patches for different versions of databases / applications / system / services. They are called Oracle Engineered Systems Quarterly Patch Deployments (QPD).

The key advantage of using a QPD is staying current with the best-practice patches recommended by Oracle. QPDs are available for download every quarter, and if you have an ACS / Platinum contract, you can let Oracle apply them for you. For more information, see https://www.oracle.com/assets/as-quarterly-patch-deployment-3042102.pdf.

Personal experiences with Engineered Systems (Exadata, SuperCluster) patch application
I have some experience applying patches in many environments, including Exadata and SuperCluster systems.

First, I always take an exachk report. It's extremely important to be able to compare how healthy your environment was before the apply. Always download the latest version available on MOS: "Oracle Exadata Database Machine exachk or HealthCheck (Doc ID 1070954.1)".
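On Exadata, that pre-patching baseline is just a couple of commands. A sketch (the install path below is the usual one, but flags and report names vary by exachk release, so check Doc ID 1070954.1 for the version you downloaded; the zip names here are invented):

```shell
# Run exachk as root from its install directory and keep the
# generated report as the pre-patching baseline.
cd /opt/oracle.SupportTools/exachk
./exachk -a

# After patching, run it again and compare the two collections
# (the -diff option is available in recent exachk releases):
./exachk -diff exachk_baseline.zip exachk_postpatch.zip
```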

Second, I recommend reading these notes:

Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Oracle SuperCluster Supported Software Versions – All Hardware Types (Doc ID 1567979.1)

PS: these notes also contain references to other Engineered Systems. If you have one in your environment, please take a look on MOS.

So before downloading, as with every patch, I strongly recommend reading the README of each patch before starting (it seems obvious, but I have known many DBAs who don't follow this secret rule) and running the prechecks Oracle recommends, such as checking free space, checking for conflicts with other patches, etc.

Some tips based on my personal experience
– In most cases during patching, some errors related to datapatch (during the database apply) occur. Don't be afraid, and take a look at:

Queryable Patch Inventory – Issues/Solutions for ORA-20001: Latest xml inventory is not loaded into table (Doc ID 1602089.1)

12.1 : Datapatch Fails with ERROR "KUP-04004,KUP-04017,KUP-04118,KUP-04095,ORA-29913", "fatal: libjli.so or libpicl.so.1: open failed" (Doc ID 2085653.1)

– If you have a segregated environment, always apply patches on a DEV or UAT environment first.

Examples of database patch application


Don't be afraid if you get an error. It happens to everyone:

./datapatch -verbose
SQL Patching tool version Production on Thu Aug 16 14:39:41 2018
Copyright (c) 2012, 2017, Oracle. All rights reserved.

Log file for this invocation: /u01/app/grid/cfgtoollogs/sqlpatch/sqlpatch_2278_2018_08_16_14_39_41/sqlpatch_invocation.log

Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done

Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
and/or check the invocation log
for the complete error.
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
for information on how to resolve the above errors.

SQL Patching tool complete on Thu Aug 16 14:39:59 2018
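When datapatch bails out like this, a good first step is exactly what the output suggests: run the verification function yourself and read the full error text. A sketch (assuming OS authentication on the database node):

```shell
# Ask dbms_sqlpatch why the queryable inventory is unhappy; the
# exception text usually points at the real cause (JVM, permissions,
# opatch), as covered in Doc ID 1602089.1.
sqlplus -s / as sysdba <<'EOF'
select dbms_sqlpatch.verify_queryable_inventory from dual;
EOF
```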

Posted in database, Exadata, Installation, My Oracle Support, New Features 12c, ODA, Security, Supercluster, Troubleshooting

Dear boss, it’s time to patch..

I know. It's hard (even today) to get a planned downtime window for your database. But when you tell your boss what this vulnerability can do, trust me, you'll get your window.

A few days ago, I received the CVE-2018-3110 details. First of all, do you know what a CVE is?

CVE stands for Common Vulnerabilities and Exposures, so it's not an "Oracle only" thing. Every product (from middleware, processors and Linux to switches and routers) has a dictionary of vulnerabilities.

So, what does this CVE state?


So, what?? 9.9 out of 10? Affected platforms are Windows, Linux and Unix (so all platforms, right?)

I can't write a how-to about exploiting this... but check the image below and run some tests (in your lab, please).


It's time to patch, my friends.

See you around.

Posted in Security

Oracle Database Support on the Cloud – when no news is bad news.

Hello there, hope you are well today.

No major changes in this field since the launch of the "Cloud Licensing Support" policy, available here. AWS and Azure are supported, without the core factor for Intel platforms, but no news about GCP. If you are checking out Google Cloud – which is amazing, by the way – you will have no luck running Oracle there – unfortunately.

GCP gives you the power to customize everything (cores, RAM, disk), giving you the ability to pay only for what you use. This could be awesome from an Oracle licensing perspective, 'cause I could create a VM with 3 cores, for example, if my database + OS consumes only 3 cores.

Please, Google – get together with Oracle; I want to use GCP on my next projects 🙂

As always, comments are welcome.

See you around!


*UPDATE* – May 22

So, I finally get it. After being invited to visit Google and talk with the GCP team, I understood their game. There are no technical limitations to running Oracle on GCP at all. Some customers are running Oracle on GCP; however, they face limitations in licensing and support. The support issue can be solved using third-party consultancy services, like Rimini. The licensing issue is the big mountain to climb. Some customers, however, have received a formal "go-ahead" to run on GCP, but only after a hard negotiation with Oracle.

GCP's game is all about their PaaS solutions. I saw BigQuery running, and I can tell you, it's quite amazing – it has the potential to be faster than Exadata. Yes, you read that right. FASTER THAN EXADATA. Sometimes. 🙂

So, GCP is keeping up with AWS and Azure... and Google has the power (brains and money) to change the market.

See you around!




Posted in cloud

Wow – time goes by

Wow – 4 months and no news from us?

Yeah, we know. Sometimes life takes you the hard way, and days, weeks, months just pass in the blink of an eye.

But we are back in business! New knowledge, new understanding, new topics, new certifications..

See you soon!


Posted in mindset, off-topic

Wish you the best in 2018!

Sharedpool and I wish you a great New Year's Eve and an amazing 2018!

copyright to Lucasfilm and Disney – always =]


Posted in off-topic

Backup in the Cloud era – what is changing?

Hello there!

Hope this post finds you well. A few days ago, one of my customers asked me to advise him on backup/restore procedures and solutions for a new environment running on AWS.

Production databases will run on EC2 instances, on Oracle with BYOL (bring your own license). The customer is considering EBS (Elastic Block Store) snapshots or S3 (Simple Storage Service) as backup destinations.

Snapshots in the Oracle world are usually used in conjunction with begin/end backup operations. If you take only one snapshot per day, and you are not willing to lose 24 hours of committed data, you need a second backup strategy. There is a product offered by N2WS which is quite amazing at orchestrating, scheduling and controlling snapshots – you can configure it to take a picture every 5 minutes – which is usually lower than a business RPO. But what about a logical corruption, a wrong delete done last weekend? How do you restore that? You must retain snaps for a week, maybe a month, maybe a year, restore the snap into another EC2 instance and manually restore the data. Seems costly, right?
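For reference, a raw EBS snapshot is a one-liner with the AWS CLI (the volume ID and description below are placeholders); tools like the N2WS one mentioned above essentially schedule and catalog calls like this for you:

```shell
# Put the database (or tablespaces) in backup mode first, then
# snapshot the EBS volume holding the datafiles:
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "nightly-oracle-datafiles"
# ...and take the database out of backup mode afterwards.
```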

The ideal way – per the documentation – is to deploy OSBCS, the Oracle Secure Backup Cloud Service. This (paid) option gives you the ability to use S3 as tape, so only minimal adjustments are needed to migrate your backup strategy to the cloud. You set up and install the OSBCS module on each EC2 instance, adjust your RMAN channels to use it, and you are ready to go! RMAN retention, catalog, everything goes smoothly if you choose to pay for this option. How is OSBCS charged? Per channel.

So, if you have 100 databases, you can buy, for example, 10 channels and run 1 backup at a time with 10 channels, 10 backups at a time with 1 channel each, 2 backups at a time with 5 channels, 5 backups at a time with 2 channels, or – you get the idea.
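A sketch of what the channel setup looks like once the module is installed. The library path and pfile name here are assumptions; the module installer generates the real ones for your environment:

```shell
# Two SBT channels backed by the OSB cloud module (S3 as tape).
# With, say, 10 licensed channels you could run this two-channel
# backup on up to 5 databases at the same time.
rman target / <<'EOF'
run {
  allocate channel c1 device type sbt parms 'SBT_LIBRARY=/u01/app/oracle/lib/libosbws.so, SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/dbs/osbwsORCL.ora)';
  allocate channel c2 device type sbt parms 'SBT_LIBRARY=/u01/app/oracle/lib/libosbws.so, SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/dbs/osbwsORCL.ora)';
  backup database plus archivelog;
}
EOF
```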

Comments are welcome =]

See you around,


Posted in cloud, database

Why you should, and should not, worry about the new era of intelligent databases

These days I feel like I'm in a Terminator movie, with Cyberdyne working really hard to make Skynet a sentient being – to become intelligent.

We all know that some activities are a best fit for automation, which is why we spend hours learning Python or Perl. If you earn your daily bread by executing scripts, loading data into the database, creating users, increasing tablespace sizes, making basic health checks – YES – you need to step up and advance your Oracle DBA game. The machines are getting smarter on a daily basis; you should grow smarter as well.

So, in my opinion, what activities will not be feasible for the machines in the short, perhaps medium, run? Performance analysis, deep troubleshooting, installation scenarios that escape the next-next-finish rule, database architecture, data migration and consolidation... the things that usually require people to think and design more (using the 80/20 plan/execute rule) will not be played by the machines in the short run.

If your performance analysis is done by dbms_sqltune – you need to worry too 🙂

As always, feedback is welcome.

See you around,



Posted in database, mindset

The infamous jdbc closed connection

Sometimes you, as a DBA, are blamed for everything. The database is slow, unavailable, unpatched, and the list goes on and on.

Sometimes, in rare situations, you can prove them wrong 😀

Last week we were called in to analyze an intermittent application issue. The app team blamed the database, showing the "java.sql.SQLException: Closed Connection" error in the app logs. Everything at the database level had been checked, and rechecked again, before they called us. After long debugging hours and still no results, we tried a different approach: what about sniffing the ethernet card at the app server, checking the communication flow between app and database?

And so we did:

tcpdump -i eth0 tcp port 1521 -A -s1500 | awk '$1 ~ "ORA-" {i=1;split($1,t,"ORA-");while (i <= NF) {if (i == 1) {printf("%s","ORA-"t[2])}else {printf("%s ",$i)};i++}printf("\n")}'

This gave us this nice output:

bla@app_blaserver:~ # tcpdump -i eth0 tcp port 1521 -A -s1500 | awk '$1 ~ "ORA-" {i=1;split($1,t,"ORA-");while (i <= NF) {if (i == 1) {printf("%s","ORA-"t[2])}else {printf("%s ",$i)};i++}printf("\n")}'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 1500 bytes
ORA-01403:no data found
ORA-00913:too many values
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01438:value larger than specified precision allowed for this column
ORA-06512:at line 2
ORA-00937:not a single-group group function
ORA-01403:no data found
ORA-01403:no data found
ORA-00937:not a single-group group function
ORA-01403:no data found
ORA-00937:not a single-group group function
ORA-01403:no data found
ORA-00937:not a single-group group function
ORA-01403:no data found

The ORA-01403 is expected after the final fetch of each cursor being processed – no bad news there.

Hmmm... and when ORA-01438/ORA-06512/ORA-00937 are raised, what happens to the connection? You got it, right?

After fixing what was causing those errors, the intermittent issue stopped and everyone was happy – including the DBA team 😀

(You need to adapt the command to fit the listener port and ethernet card on your box, okay?)
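If you want to see what the filter does before pointing it at a live capture, you can feed it a canned sample. The noise prefixes below are invented (real tcpdump -A output is much messier), but the awk program is the same one used above:

```shell
# Fake capture lines: binary noise fused onto the ORA- code in the
# first whitespace-separated field, as tcpdump -A tends to print it.
sample='..x..ORA-01403: no data found
..y..ORA-00937: not a single-group group function'

# Keep only lines whose first field contains "ORA-", strip the noise
# before the error code, and re-print the remaining fields.
printf '%s\n' "$sample" | awk '$1 ~ "ORA-" {
  i = 1
  split($1, t, "ORA-")
  while (i <= NF) {
    if (i == 1) { printf("%s", "ORA-" t[2]) }
    else        { printf("%s ", $i) }
    i++
  }
  printf("\n")
}'
# prints (note: no space survives after the colon, matching the
# output shown above):
#   ORA-01403:no data found
#   ORA-00937:not a single-group group function
```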

As always, feedback is very welcome.

See you around,


Posted in database, Troubleshooting