FinnishingThoughts Podcast first episode is out!

For quite some time I’ve thought it would be great to discuss interesting topics with some of the Oracle experts out there, and now I’m happy to say the first episode of the FinnishingThoughts Podcast is out!

Finnishing Thoughts Episode 1



In the first episode my guest is Julian Dontcheff, an Oracle ACE Director who works as Accenture’s Head of Database Management.

Julian shares his views on Oracle’s latest announcements and how he sees Oracle’s cloud positioning at the moment, with all the new products released to the cloud. He also talks about Accenture’s study on Oracle Cloud performance, and we discuss the future role of the DBA and how GDPR is impacting DBAs.

And if you didn’t already know Julian’s favourite database feature, you will hear that as well!

You can read Julian’s blog at

Oracle Cloud performance study:

If you liked the episode, please subscribe to it on iTunes!




Attended Oracle Cloud Infrastructure seminar – part 2

In this post I will go through the rest of the services we discussed in the seminar. You can read part 1 here.

5. Compute Services
6. Storage – Block volume & Object storage
7. Load Balancer
8. Database Services
9. A Lab

5. Compute Services

This is the service where you create your own compute machines; it is comparable to Amazon EC2. Oracle has two main options here: Bare Metal (BM) or Virtual Machine (VM). A list of machine shapes and their pricing is here.

If you want a machine that is not shared with anyone and has no virtualization overhead, you have the possibility to get a physical machine dedicated to you. If you are fine with virtualization, you can just create a virtual machine.

Most of the machines do not come with local storage; instead you use block storage, which you need to create and attach. However, for dense I/O workloads there are a few machine types where you can select NVMe SSD devices. With those you take care of backups, RAID configuration etc. yourself.

After selecting the machine type you need to decide which image you use to create the machine. This is similar to Amazon AMIs. Images can be Oracle-provided, custom or BYOI (bring your own image).

Right now the Oracle-provided OCI images are Oracle Linux 6 & 7, CentOS 6 & 7, Ubuntu 16.04 LTS and Windows Server 2012 R2 (BM & VM). You can create custom images from these after configuring them further with your own configs. There is a 50 GB size limit for custom images though.

When you launch an instance you need a key pair. Compared to AWS, in Oracle you had to supply your own keys and could not create them on the fly. A small thing, but in AWS it’s quite handy to get the key immediately when you want to play around with instances. When you launch the instance you also define which compartment it belongs to, so only people who require access to this compute server have access to it.

When creating the instance you also define the Virtual Cloud Network and Availability Domain it belongs to, and after that the specific subnet you want the instance to run on. If the instance is on a public subnet you can get a public IP so you can connect to the instance using SSH.

My OEL 7 instance which has private and public IP address defined in a public subnet

One major thing not yet available in OCI was auto scaling, so you can’t configure a group of servers to scale out and in based on demand. Once again, this was on the roadmap.

You can find more information on the compute service from here.

6. Storage – Block volume & Object storage

When it comes to storage there are two different types of storage similar to AWS. In AWS you have EBS for block storage and S3 for object storage. In OCI you have block volumes and object storage. Let’s start with block volumes.

Block Volume Service

These are the volumes you normally mount on your compute instance. You can dynamically attach them to different instances, clone them and take snapshots. After attaching one to an instance you can mount and use it like a regular volume.

These come in sizes from 50 GB to 2 TB, and data is encrypted at rest in both volumes and backups. They are also replicated automatically to prevent data loss. They didn’t go through how the replication works, but I would guess to a different AD at least.

One nice thing was that when you attach the created block volume to your compute instance, the OCI console displays the iSCSI commands to attach the volume on your server. It would be nice if this could be automated somehow, though.

iSCSI commands to attach storage to your instance

And if you take a backup of your block volume, it is stored encrypted on the Object Storage Service.

More info on block volumes from here.

Object Storage Service

Similar to Amazon S3, this is where you store your unstructured data. The seminar mentioned the following use cases for Object Storage: Big Data, Archive & Storage, and Content Repository.

Guess where your data is stored? In buckets of course, like in AWS! However, compared to AWS S3, which has read-after-write consistency for new objects and eventual consistency for deletes and updates, OCI Object Storage gives you strong consistency: you always get the most recent copy on disk. And where AWS S3 bucket names are global across customers, in OCI the bucket name only needs to be distinct within the namespace of your tenancy.

The maximum size of an object is 10 TB, objects over 50 GB need to use multi-part upload, and you have the option to store your data in either the Standard tier or the Archive tier (four hours for the first byte to be retrieved). At the moment you cannot apply lifecycle policies like in AWS S3, but they said once again that it is on the roadmap.

Object storage FAQ can be found from here.

7. Load Balancer

The OCI Load Balancer service supports load balancing inside a VCN and is a regional service. It supports SSL offloading, and you can create either a public or a private load balancer.

When you create the load balancer you also define the bandwidth you want associated with it. The options were 100 Mbps, 400 Mbps and 8 Gbps.

When you configure the load balancer you need a listener, a backend set (a logical entity of backend servers), backend servers (the ones which serve the content) and health checks. The health checks support HTTP and TCP traffic.

You also have to configure the load balancing policy; the options were round-robin, IP hash and least connections.

If I compare this to AWS ELB there are obviously a few features missing, and the setup was a bit more complicated than in AWS. Still, it was quite easy to set everything up if you had some prior knowledge of load balancing.

Load balancer FAQ.

8. Database Services

We actually ran out of time, so we didn’t go through this service in detail. However, this service is of course the bread and butter of Oracle’s offering.

Oracle offers three types of database system in OCI:

-Bare metal DB
-Exadata DB
-VM Based DB (New feature!)

What is great in the DB services is that you can dynamically scale the number of cores in your DB up or down as required. Oracle manages the underlying infrastructure, but the customer takes care of the database & OS. I haven’t used this service before so I can’t say much about it, but some features recently added to the service are:

-Support for Data Guard
-Integrated backup & restore
-BYOL (Bring your own license model)

Also on the Exadata DB side there are several interesting features: integration with IAM, database backups to the object store, and VCN use cases.

More details on the OCI database service here.

9. A Lab

The hands-on part consisted of creating a VCN, a subnet with an Internet Gateway, and a compute instance where you mounted a block volume. After that you modified the security list with a port 80 rule to allow HTTP traffic and verified that the instance was accessible.

In the security list setup you can mark a rule as stateless if you want

A very basic setup, but the aim was to give people baseline knowledge of the services.

I also had time to do a load balancer configuration without instructions and to create one database on a VM. The load balancer was a good example of how, if you know the Amazon services, using OCI is simple.


Overall the services are much like Amazon Web Services, as I’ve stated a few times already. You can see a lot of areas they are working on to get to the same level as AWS, so if you have a large Oracle footprint, and considering the cloud licensing model Oracle offers, then Oracle is something you should consider!

If there is one thing to complain about, it is the user interface. Sure, everything is there and it’s simple to navigate, but I felt the content on the pages was so damn BIG. Making the content a bit smaller would make it easier to see everything you have provisioned.

The actual seminar had a good instructor who clearly was skilled with OCI and they let us ask questions all the time which was refreshing. 🙂

I wish Oracle would move more towards this kind of open-minded seminar/cloud trial approach so people could easily get used to Oracle’s cloud. The 1-month/300 USD test account is a good start, but compare that to Amazon’s free tier, where you can play around for a whole year; that gives people a more solid grounding in their services.

Let’s hope they offer a similar tier too!

Attended Oracle Cloud Infrastructure seminar – part 1

I had the opportunity to attend a half-day seminar about the Oracle Cloud Infrastructure (OCI) offering. Here are some notes about it, with some comparisons to Amazon Web Services, which I have been using a lot lately as well.


The seminar was divided into a few different topics:

  1. Introduction to Infrastructure services
  2. Identity and Access Management (IAM)
  3. Virtual Cloud Network (VCN)
  4. Compute Services
  5. Storage – Block volume & Object Storage
  6. Load Balancer
  7. Database Services (DBCS)
  8. A Lab

I’ll describe what I learned from each area, and in this first post I’ll go through everything up to VCN.

1. Introduction to Infrastructure services

This was just a general walkthrough of the services and how they are built up. As with AWS, Oracle has divided OCI into different regions, and each region has multiple Availability Domains (AD), the same as AWS Availability Zones.

The services available can be seen at a high level in the picture below. These are only the infrastructure services; Oracle’s other cloud services were not discussed in this seminar.


2. Identity and Access Management (IAM)

Similar to AWS, when you sign up to OCI your account is the root account. After that you are free to create new user accounts following a least-privilege policy, so by default they don’t have access to anything.

OCI has IAM groups; you can assign a user to one or many groups, and the groups then dictate what access you have. You could for example have a group for network admins who can modify network configurations.

OCI has a resource called a tenancy, which contains all of your OCI resources. Under the tenancy there are compartments, logical containers to isolate and organize your cloud resources. For example, you can have a specific compartment for your Finance department. You can still share resources across compartments if needed.

Policies to access resources are written in a SQL-like format. This seemed like a nice way to get people to understand how to write them. The only thing I was wondering was whether it would have been easier to go with an already existing language.

An example policy:

Allow group HR to read all-resources in tenancy Subcompany; (or compartment level)
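Policies can also be scoped to a compartment instead of the whole tenancy. As a sketch in the same format (the group and compartment names here are made up), the network admins group mentioned earlier could be given control of networking resources with:

```
Allow group NetworkAdmins to manage virtual-network-family in compartment Networks
```

The verb (inspect/read/use/manage) and the resource-type family determine how much the group can do within that scope.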

Unfortunately, for cost management there is no fully mature consolidated billing available yet, but that is on the roadmap.

More info on IAM and tenancies and compartments from here.

3. Virtual Cloud Network (VCN)

Again, a concept which was easy to absorb after working with AWS. After you have selected your region and want to start building your infrastructure, you need to create your network. In OCI you have the VCN; in AWS the same concept is the VPC.

A VCN can cross multiple Availability Domains in a region. Usually when you create a VCN you reserve a specific private CIDR block for your use, and under it you create subnets.

For example, you create a VCN with a CIDR block and then two subnets under it.

Subnets are specific to an Availability Domain and are either public or private. In OCI subnets Oracle reserves the first two IP addresses and the last one for their use, whereas in AWS they reserve 4+1.

Access to your subnet is controlled by security lists; you define what ports are allowed in & out. In OCI you can set each security list rule as stateful or stateless, whereas in AWS you either use security groups (stateful) or network ACLs (stateless). A nice simplification!

To learn about stateless vs stateful rules, check here.

If you want to access the internet from your subnet, you need to create an Internet Gateway and add it to your subnet’s route table.

This shows a VCN with three public subnets. As you can see, all subnets have different CIDR blocks.


You need a Dynamic Routing Gateway (DRG), comparable to the AWS Virtual Private Gateway, if you have a requirement to access your on-premise datacenter over VPN. Again, if you know AWS these concepts are really easy to pick up!

When you have higher bandwidth requirements between your on-premise datacenter and OCI, you can use Oracle FastConnect to achieve higher throughput. This matches AWS Direct Connect at a high level.

If you need internet access but don’t want to make your server visible to the public, you can use a private IP as the target in your route table. It then acts as a NAT gateway for the servers.

You can connect multiple VCNs with VCN peering; however, at this point it is limited to the same tenancy and region. Improving this was also on the roadmap.

More info on VCN from here.


The starting concepts were almost 1:1 with Amazon Web Services. Some things I mentioned are still behind what AWS offers today, but it was good to hear they had so many things on the roadmap, which should make things easier for customers in the future.

In part 2 I will go through the rest of the services from the seminar and also review the lab we did.

Loading customizations to e-Business Suite with Ansible

Happy new year! I’ll continue the Ansible topic with one more post, as it has helped our small operations team quite a lot lately.

We still run Oracle e-Business Suite 12.1.3 and the amount of customizations we have is really high (no surprise there!). For historical reasons we were initially loading custom objects into eBS via shell scripts. As time passed this was changed so that we used concurrent requests to install the custom objects.

Basically, the concurrent program looks for a specifically named zip file in the installation directory and, if it exists, executes the shell script. This method is by no means perfect, but it has worked for us and nobody really had time to improve the script.

In addition, we always had to go to the version control system, check out the zips for a specific tag/release and then upload them to the installation directory. A lot of manual and quite static tasks!

So, Ansible to the rescue. Using Ansible we found a really simple way to reduce the manual tasks without (yet) touching the actual installation method. For now we still use Subversion, but we are already in the process of switching to GitLab and making some changes to how we deploy the code.

If I break down what Ansible does at a high level:

  1. Install subversion & rsync on the server (these are needed)
  2. Delete the old tag folder & export the Subversion tag folder defined in the playbook
  3. Register all zips in the tag to a variable
  4. Copy the files to the installation directory
  5. Grep and register the user home directory to a variable (apparently there is no easy way to get the become_user home directory)
  6. Run CONCSUB on the application server to submit a concurrent request based on the zip file name
  7. Delete the old tag folder

Once again, as there are a lot of passwords involved, we use ansible-vault for the variables as well.

1. Install subversion & rsync

Here we first include a task file to define the start date in case we want to schedule the requests; the next step installs subversion & rsync via yum. Really basic, but I think it is best to have these here because otherwise you would always need to check whether they exist.

- include: define_startdate.yml
  when: sch_date is defined

- name: Install subversion and rsync to server
  yum:
    name: "{{ item }}"
    state: present
  with_items:
    - subversion
    - rsync
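The define_startdate.yml include itself isn’t shown in this post; a minimal sketch of what it could contain (assuming sch_date comes in as a date string and run_date is the variable the CONCSUB tasks consume later) would be:

```yaml
# Hypothetical define_startdate.yml -- a sketch, not the actual file
- name: Define start date for scheduling the concurrent requests
  set_fact:
    run_date: "{{ sch_date }}"
```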

2. Delete old tag folder & Export the subversion tag

Again really basic: remove the directory and export the tag defined in the playbook.

- name: Delete tag folder {{tag}} if exists
  file:
    path: "/tmp/{{tag}}"
    state: absent

- name: Export Subversion tag {{tag}}
  subversion:
    repo: svn://{{testsvn}}/{{tag}}
    dest: /tmp/{{tag}}
    export: yes

3. Register all zips in tag to a variable

Here I look into the tag directory created earlier, find all the zip files and register them into a variable.

- name: Register zips to a variable
  find:
    paths: "/tmp/{{tag}}/"
    patterns: "*.zip"
  register: install_zip

4. Copy files to installation directory

As I only want to copy the zip files from the Subversion export directory to the installation directory, I found rsync (via the synchronize module) good for that purpose. With synchronize you need to use the delegate_to parameter so the copy is done on the destination server.

- name: Copy files from tag to installation folder
  synchronize:
    mode: push
    src: "/tmp/{{tag}}/"
    dest: "{{ricef_dir}}"
    rsync_opts:
      - "--include=*.zip"
      - "--exclude=*"   # needed so rsync copies only the zips
  delegate_to: "{{ inventory_hostname }}"

5. Grep and register user home directory

As I don’t want to log in with the apps user and instead use become_user, this was one way to get the user’s home directory registered. It is needed so that the concurrent request execution has the environment variables loaded.

- name: grep and register
  shell: >
    egrep "^{{ ap_user }}:" /etc/passwd | awk -F: '{ print $6 }'
  changed_when: false
  register: user_home

6. Run CONCSUB to submit concurrent requests

I have changed the three different customization types to TYPE1, TYPE2 and TYPE3; we have three different customization areas, and the zip files always contain their respective type in the name.

Each type loops (with_items) through the files we registered earlier in the install_zip variable. When an item matches the specific type (the when part), the task submits the CONCSUB request.

We also submit this as the application owner user (ap_user) and source the shell profile when submitting the CONCSUB. All other variables are defined in the group_vars file.

- name: Install TYPE1 zips
  shell: >
    source "{{user_home.stdout}}"/.bash_profile &&
    CONCSUB apps/"{{apps_pass}}" SYSADMIN "System Administrator" "{{apps_user}}"
    WAIT=N CONCURRENT XBOL XBOCOD "{{run_date|default ('')}}" '$TYPE1_TOP/install'
    "{{item.path | basename}}" "{{type1_pass}}" '"{{db_host}}"' '"{{db_port}}"'
  with_items: "{{install_zip.files}}"
  when: item.path | search("type1")
  become: true
  become_user: "{{ap_user}}"

- name: Install TYPE2 zips
  shell: >
    source "{{user_home.stdout}}"/.bash_profile &&
    CONCSUB apps/"{{apps_pass}}" SYSADMIN "System Administrator" "{{apps_user}}"
    WAIT=N CONCURRENT XBOL XBOCOD "{{run_date|default ('')}}" '$TYPE2_TOP/install'
    "{{item.path | basename}}" "{{type2_pass}}" '"{{db_host}}"' '"{{db_port}}"'
  with_items: "{{install_zip.files}}"
  when: item.path | search("type2")
  become: true
  become_user: "{{ap_user}}"

- name: Install TYPE3 zips
  shell: >
    source "{{user_home.stdout}}"/.bash_profile &&
    CONCSUB apps/"{{apps_pass}}" SYSADMIN "System Administrator" "{{apps_user}}"
    WAIT=N CONCURRENT XBOL XBOCOD "{{run_date|default ('')}}" '$TYPE3_TOP/install'
    "{{item.path | basename}}" "{{type3_pass}}" '"{{db_host}}"' '"{{db_port}}"'
  with_items: "{{install_zip.files}}"
  when: item.path | search("type3")
  become: true
  become_user: "{{ap_user}}"

7. Remove tag folder after execution

This is just for cleanup.

- name: Removing temporary files
  file:
    path: "/tmp/{{tag}}"
    state: absent


We have another playbook for installing a single customization, but it follows the same logic. This one is used to load all customizations for a specific release and is called with ansible-playbook:

ansible-playbook tag2UAT.yml --ask-vault-pass --extra-vars "tag=REL_2017_12_1 apps_user=MY_APPS_USER"

So the only variables I pass are the tag and my own username. All passwords and other environment variables are defined in the ansible-vault file group_vars/UAT and so on. If I wanted to schedule the concurrents, I would pass one more variable for the date.
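As a sketch only (the variable names come from the tasks above, but every value here is a made-up placeholder), the encrypted group_vars/UAT file would contain something like:

```yaml
# Hypothetical group_vars/UAT -- all values are placeholders
testsvn: svn.example.com/repo
ricef_dir: /u01/install/custom
ap_user: applmgr
apps_pass: "changeme"
type1_pass: "changeme1"
type2_pass: "changeme2"
type3_pass: "changeme3"
db_host: dbhost1.example.com
db_port: "1521"
```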

Even though this is just basic-level scripting, it has helped us a lot in reducing manual work and automating tasks which are error-prone.

Two ways to change Oracle passwords with Ansible

We do our share of database cloning for various purposes and usually the cloning process is fairly scripted.

However, lately we have been looking more and more into Ansible, which, per their site, is a simple automation/orchestration tool. (Not so sure about simple!) Recently there have also been a lot of presentations on automating your Oracle infrastructure with Ansible.

Some good Oracle related blog posts on Ansible are:

Maris Elsins:

Frits Hoogland:

The post from Maris actually inspired me to do a simple version of a password change to the static passwords we use for test environments.

One task we always have for test environments is to change the passwords on the instances, and in this post I’ll look at two different ways to change them via Ansible.

Initial configuration in this example:

We have a host which is dedicated to Ansible and holds all the Ansible scripts we have.

We have defined the target database host in /etc/ansible/hosts.

We have a playbook file which calls just one role which will be used to change the database passwords.

All the variables are defined in the group_vars/myhost file.
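As a sketch (the hostname is made up), the /etc/ansible/hosts entry for the first method would be something like:

```
# Hypothetical /etc/ansible/hosts entry for the target database host
[dbhost1]
dbhost1.example.com
```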

First method:

My idea was to define all the required variables in the group_vars file. As the file contains passwords, it is created with ansible-vault create group_vars/myhost so the file is encrypted, and it is edited with ansible-vault edit using the password you define when you create the file. In my file I have the following variables defined:

db_os_user: oracle
db_users:
  appsuser1:
    pass: "mypass1"
  appsuser2:
    pass: "mypass2"

So nothing more than the OS oracle user on the destination host and the database users with their passwords. I also have a SQL script called change_pass.sql which contains:

set ver off pages 0
alter user &1 identified by "&2";

And then the playbook file dbpass.yml is defined as below:


---
- name: password check
  hosts: dbhost1
  roles:
    - dbpassword

So here I define that the target host is dbhost1, defined in /etc/ansible/hosts, and the role it will execute is dbpassword. The only other file I use is located under ../roles/dbpassword/tasks/main.yml

In the file I make sure there is a directory called /usr/local/ansible, into which I copy the script that changes the password. When you use become_user to change the executing user, there is to my knowledge no good way to get that user’s home directory, which is needed in the final task to source the environment file. That’s why I grep it from /etc/passwd.

After that I execute the SQL script as the oracle user, connecting as sys.

File looks like this:

- file:
    path: /usr/local/ansible
    state: directory
    mode: 0755

- copy:
    src: files/
    dest: /usr/local/ansible/
    mode: 0777
    owner: "{{db_os_user}}"

- name: grep and register
  shell: >
    egrep "^{{ db_os_user }}:" /etc/passwd | awk -F: '{ print $6 }'
  changed_when: false
  register: user_home

- name: change password
  shell: |
    source "{{ user_home.stdout }}"/"{{ SID }}".env && \
    sqlplus -S / as sysdba \
    "@/usr/local/ansible/change_pass.sql" {{ item.key }} {{ item.value.pass }}
  become_user: "{{db_os_user}}"
  become: true
  with_dict: "{{ db_users }}"

In the last task I use with_dict to loop through the group_vars dictionary db_users. The key is then available as item.key and each value as item.value.yourvaluename. That wasn’t stated very clearly anywhere, so I had to search for it for some time.
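To illustrate the shape (a throwaway debug task, not part of the role), with_dict turns each entry of db_users into an item exposing .key and .value:

```yaml
# For the db_users above this runs once per user,
# e.g. item.key = appsuser1 and item.value.pass = mypass1
- debug:
    msg: "changing password for {{ item.key }}"
  with_dict: "{{ db_users }}"
```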

That’s it! Now I just run the playbook as below and the passwords are changed:

[ansible@ans_host1]# ansible-playbook --ask-vault-pass dbpass.yml

Vault password:

PLAY [password check] ******************************************************************************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************************************************************
ok: [dbhost1]

TASK [dbpassword : file] ***************************************************************************************************************************************************************************
ok: [dbhost1]

TASK [dbpassword : copy] ***************************************************************************************************************************************************************************
ok: [dbhost1]

TASK [dbpassword : grep and register] **************************************************************************************************************************************************************
ok: [dbhost1]

TASK [dbpassword : verify user existence] **********************************************************************************************************************************************************
changed: [dbhost1] => (item={'key': u'appsuser1', 'value': {u'pass': u'mypass1'}})
changed: [dbhost1] => (item={'key': u'appsuser2', 'value': {u'pass': u'mypass2'}})

PLAY RECAP *****************************************************************************************************************************************************************************************
dbhost1 : ok=5 changed=1 unreachable=0 failed=0

Second method:

The beautiful thing with Ansible is that there are a million ways to do a task. I was searching further and found that there is a set of Oracle modules for Ansible by “oravirt”, Mikael Sandström. The modules can be found at:

The only thing you need before using these modules is cx_Oracle for Python. My test dbhost1 was Oracle Enterprise Linux 5 and I just couldn’t get the cx_Oracle installation sorted.

That’s why in the second method I make the connection from my Ansible host (OEL7), which has the Oracle 12.1 instant client installed. For that I had to edit /etc/ansible/hosts so that [dbhost1] uses localhost:

localhost ansible_connection=local ansible_user=oracle

Playbook file:


---
- name: password check
  hosts: pprddb
  roles:
    - dbpass_with_module

This time in group_vars I have a few more variables:

db_users:
  appsuser1:
    pass: "mypass1"
  appsuser2:
    pass: "mypass2"
db_hostname: myhost1
db_servicename: TEST1
db_user: system
db_password: mypass1
oracle_env:
  LD_LIBRARY_PATH: /u01/app/oracle/product/12.1.0/client_2/
  ORACLE_HOME: /u01/app/oracle/product/12.1.0/client_2

If you look at the oravirt GitHub instructions, you can see there is a test-modules.yml with good examples of how to use the modules. So I need the above variables defined; I think it is quite easy to figure out why they are there. Note that you also need the two environment variables, ORACLE_HOME and LD_LIBRARY_PATH, defined so the modules run.

In the task file under ../roles/dbpass_with_module/tasks/main.yml I now have only one task, which uses oracle_user to modify the existing user. With the same module you can also create new users if needed, and perform lots of other tasks as well!


- name: change Oracle password on the host you connect
  oracle_user:
    hostname: "{{db_hostname}}"
    service_name: "{{db_servicename}}"
    user: "{{db_user}}"
    password: "{{db_password}}"
    schema: "{{item.key}}"
    schema_password: "{{item.value.pass}}"
  environment: "{{oracle_env}}"
  with_dict: "{{ db_users }}"

So I define the variables and again loop through all the database users in a similar way as earlier. The environment also needs to be defined, as you can see. The output?

[ansible@ans_host1]# ansible-playbook --ask-vault-pass dbpass_with_module.yml
Vault password:

PLAY [password check] ******************************************************************************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************************************************************
ok: [localhost]

TASK [dbpass_with_module : change Oracle password on the host you connect] *************************************************************************************************************************
ok: [localhost] => (item={'key': u'appsuser1', 'value': {u'pass': u'mypass1'}})
ok: [localhost] => (item={'key': u'appsuser2', 'value': {u'pass': u'mypass2'}})

PLAY RECAP *****************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0

As you can see the play is now run on localhost instead of dbhost1 but it connects to the defined database.


Different ways to achieve the same thing, but the second one is a lot easier to execute. Still, it’s always good to see how you can achieve the same result in different ways. This is just one step in our clone automation, but something which is easy to take into use.

And huge thanks to the people working on the Oracle Ansible modules!


How is my EBS instance using Exadata write-back flash cache?

When we were planning the migration from commodity hardware to Exadata X5-2, one open topic we had was whether it makes sense to enable the write-back flash cache in Exadata.

The default mode is write-through, where read operations utilize the flash cache but write operations do not. Write-back then uses the flash cache for both reads AND writes where appropriate (read more for example here, or from Exadata Write-Back Flash Cache – FAQ, Doc ID 1500257.1).

Before the migration I had read the Expert Oracle Exadata book, which gives you a really solid basis for Exadata. Thanks to the writers!

In the book they mention that in some cases it is worth considering changing the default write-through to write-back, and this is exactly what we did. We figured that since the number of production databases we were going to put on the Exadata was low, we probably would not utilize the flash cache fully otherwise, so we could enable write-back flash cache.

Also one of the driving factors was that we expected e-Business Suite instances to be write-heavy so we could get some additional benefits due to this.
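For reference, the flash cache mode is changed per storage cell with CellCLI. This is only a sketch; the full, version-specific procedure (which on older storage server releases requires dropping and recreating the flash cache first) is in Doc ID 1500257.1:

```
# Check the current mode on a cell, then switch it -- see Doc ID 1500257.1 first
cellcli -e "LIST CELL ATTRIBUTES flashCacheMode"
cellcli -e "ALTER CELL flashCacheMode = WriteBack"
```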

Problem was our EBS databases were still on! You can only see statistics for flash cache write usage from and forward so there was no good way of seeing instance specific statistics.

Last August we finally upgraded our EBS databases. We have a distributed configuration for EBS, which means there are two separate database instances: the source (the actual ERP instance) and the destination (the planning instance, VCP).

To see flash cache statistics per instance I used Tanel Poder’s sys script, followed his blog post, and ran the script on both instances.


SQL> @sys cell%flash
old 1: select name, value sys_value from v$sysstat where lower(name) like lower('%&1%')
new 1: select name, value sys_value from v$sysstat where lower(name) like lower('%cell%flash%')

NAME                                                             VALUE
---------------------------------------------------------------- --------------------------
cell writes to flash cache 100498759
cell overwrites in flash cache 83292766
cell partial writes in flash cache 2966
cell flash cache read hits 4569897795

next day:

NAME                                                             VALUE
---------------------------------------------------------------- --------------------------
cell writes to flash cache 112267869
cell overwrites in flash cache 93351887
cell partial writes in flash cache 3360
cell flash cache read hits 5056009206

So, initially looking at the ERP side, where most of the transactions happen, the write-to-read ratio in the flash cache is around 1:45. I’m looking at “cell writes to flash cache” and “cell flash cache read hits” here.

I was quite surprised by this, because I thought the amount of cached writes would be a lot higher on the ERP instance.
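Instead of eyeballing the two counters, the ratio can be computed in a single query against the same v$sysstat statistics (a sketch using the statistic names shown in the output above):

```sql
-- Flash cache writes vs. read hits and the reads-per-write ratio
select max(decode(name, 'cell writes to flash cache', value)) as fc_writes,
       max(decode(name, 'cell flash cache read hits', value)) as fc_read_hits,
       round(max(decode(name, 'cell flash cache read hits', value)) /
             max(decode(name, 'cell writes to flash cache', value)), 1) as reads_per_write
from   v$sysstat
where  name in ('cell writes to flash cache', 'cell flash cache read hits');
```

On the ERP numbers above this returns roughly 45 reads per write, matching the 1:45 ratio.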


The VCP instance is where most of the batch jobs happen overnight, and it has a Demantra instance included in it as well. So there are a lot of read-heavy operations. But!

SQL> @sys cell%flash
old 1: select name, value sys_value from v$sysstat where lower(name) like lower('%&1%')
new 1: select name, value sys_value from v$sysstat where lower(name) like lower('%cell%flash%')

NAME VALUE
---------------------------------------------------------------- --------------------------
cell writes to flash cache 377827994
cell overwrites in flash cache 270095454
cell partial writes in flash cache 8810
cell flash cache read hits 248136982

next day:

NAME VALUE
---------------------------------------------------------------- --------------------------
cell writes to flash cache 397415193
cell overwrites in flash cache 283811642
cell partial writes in flash cache 9265
cell flash cache read hits 258127575

Now the ratio is actually 1.5:1 — the opposite of what it was on the ERP side! So there are more write operations than read operations, though not by a large margin.

This seems to be due to batch jobs writing heavily during nights and weekends. When I checked the writes in the evening and again in the morning, the value had grown a lot compared to daytime, when writes don’t grow so much.
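Since these v$sysstat counters are cumulative from instance startup, day/night patterns are easier to see from AWR snapshot history. A sketch along these lines (note that the dba_hist_* views require the Diagnostics Pack license):

```sql
-- Per-snapshot growth of flash cache writes, to spot night-time batch activity
select s.begin_interval_time,
       st.value - lag(st.value) over (order by st.snap_id) as fc_writes_delta
from   dba_hist_sysstat  st
join   dba_hist_snapshot s
  on   s.snap_id         = st.snap_id
 and   s.dbid            = st.dbid
 and   s.instance_number = st.instance_number
where  st.stat_name = 'cell writes to flash cache'
order  by st.snap_id;
```

A large delta between the evening and morning snapshots would confirm that the writes come from the overnight batch window.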

Just for comparison our OBIEE database:

SQL> /
old 1: select name, value sys_value from v$sysstat where lower(name) like lower('%&1%')
new 1: select name, value sys_value from v$sysstat where lower(name) like lower('%cell%flash%')

NAME VALUE
---------------------------------------------------------------- --------------------------
cell writes to flash cache 1849558161
cell overwrites in flash cache 1221403212
cell partial writes in flash cache 96084
cell flash cache read hits 8682813632

To sum it up:

Even though I thought the write-back flash cache would be used more on the ERP side, it is actually used more on the planning instance, where there are more write-heavy operations during the night. This is of course our case; it could be different for other EBS customers.

Oracle Open World & Intelligent Bot Service

Last week I attended Oracle Open World in San Francisco. I hadn’t been there for the last seven years, so it was good to see a lot of familiar faces again. As for the conference itself, there are always good presentations, but in my opinion the share of hit-and-miss sessions is a lot higher compared to other conferences.

One of the key topics this year was artificial intelligence and machine learning, which was on my interest list as well. These topics open up a lot of possibilities for companies, and I guess everybody is trying to figure out how their business could benefit from AI.

Of the sessions I attended, one of the most interesting was the Oracle Intelligent Bots hands-on lab. The Intelligent Bot Service hasn’t launched yet, but it should in the next few weeks. We still got a sneak peek at it and could build our own chatbot during the one-hour session.

Going through the dialog flow in the hands-on lab

The bot service cloud had a nice user interface for building your own chatbot service. My initial thought was that the layout was really simple, and there were some good features which make it easy to build a chatbot using Oracle’s own BotML language. They said the language is basically YAML with some differences, but they did not go through what the differences are.

I don’t remember exactly which modules had to be defined, but you build up your dialog flow, which is then picked up when the user input matches that flow. From the developer UI you could also easily add more sentences that match the required action, and this was clearly one thing they had put effort into.

After that you defined the actions further, and if more data was required, the bot would reply to the user with a new question.

In the lab we created a simple bot which checked the user’s account balance. This was a two-step action: once the keywords were matched, the bot asked which account (checking, credit card, etc.) the balance should be checked for, and then finally replied with the correct balance.

There is also the possibility to modify the BotML directly in the editor. Given that editing YAML is sometimes a bit frustrating, they had made it easier with line breaks going to the correct place and so on. In future versions they are going to add a new GUI editor as well, which Oracle was excited about.

We did not go through data sources, as they said there wasn’t enough time in a one-hour session, but from the cloud service you are supposed to be able to add your own sources and then build the service using them. How this will work out is the most interesting part.

Once the service is released I will definitely check it out further and see how they have priced it and what you can do with it. I’m excited that Oracle is looking at new, clever ways to improve the customer and user experience overall. Chatbots don’t need to be just for external users; there is definitely room to explore integrating this with your e-Business Suite, for example.

Supposedly it integrates with Facebook, Slack and so on.

More information on the service from Oracle’s pages: