Getting to know the basics of the Oracle Cloud Infrastructure Load Balancing service

Feels like I’m jumping a bit from one topic to another, but I had some testing ongoing with the OCI Load Balancing service, so I thought I’d write a post on it. I’ll also throw in a few comparisons with AWS ELB to give an idea of how Oracle has built its service.

In OCI the Load Balancing (LB) service is a regional service, either public or private within your Virtual Cloud Network (VCN) depending on your requirements, and it manages either TCP or HTTP traffic.

If you need a public LB, the service will create two LBs (primary and standby) in different Availability Domains (ADs) to provide high availability. This means you need to provide two different subnets from your VCN for the LB, and the two LBs will get two different private IPs. You can’t determine which one will be the primary.

The LB is also assigned a floating public IP, so if one of the ADs goes down the IP is transferred to the other LB, whereas in AWS the ELB’s DNS name is linked to its IP addresses.

If you instead need a private LB, it will be configured in one AD only, with a floating private IP assigned to it. If that AD goes down there is no failover to another AD, so that LB goes down with it.

Creating Load Balancer

For my test I created a public LB with three backend servers serving HTTP content on port 80. I used three servers so I could demonstrate some of the health check functionality Oracle has implemented.

Creating the Load Balancer screen

In the above LB creation screen you can see that when I select a public LB, I need to choose two public subnets which reside in different availability domains. I had created these beforehand.

Also, when you create the LB you need to select the correct shape for it. The options are 100 Mbps, 400 Mbps and 8 Gbps, so there is plenty of variety depending on your requirements and how much you are willing to pay!

The other components required for an LB are:
- a Backend Set, which is a grouping of backend servers with a health check and a load balancing policy defined
- Backend Servers, which are the destinations of your LB routing
- a Listener, which listens for incoming traffic on a specific port and routes it to a backend set

As mentioned, the backend set definition includes a health check policy. You define the path where your health check URL resides (for example /health.html in the case of HTTP) and the interval it is polled on. Things like the expected status code of the response are also defined here.
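
To get a feel for what such a health check does, here is a minimal Python sketch of a probe (my own illustration, not Oracle’s implementation; the backend URL in the comment is a hypothetical example):

```python
import urllib.request
import urllib.error

def probe(url, expected_status=200, timeout=3):
    """Return True if the health check URL answers with the expected status code."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == expected_status
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout etc. all count as an unhealthy backend.
        return False

# The LB would call something like probe("http://10.0.1.12/health.html")
# on every polling interval and mark the backend unhealthy on failure.
```

The real service also lets you tune things like timeout, retries and the expected response body; this sketch only covers the status code check.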

You also define the load balancing policy: round robin, IP hash or least connection. All of these are described in the documentation.
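
As a rough illustration of how the three policies differ (again my own Python sketch, not Oracle’s code; the backend IPs are made up):

```python
import hashlib
import itertools

backends = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]  # hypothetical backend IPs

# Round robin: cycle through the backends in order.
rr = itertools.cycle(backends)

def ip_hash(client_ip, servers):
    """IP hash: the same client IP always lands on the same backend."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

def least_connection(conn_counts):
    """Least connection: pick the backend with the fewest open connections."""
    return min(conn_counts, key=conn_counts.get)

print(next(rr), next(rr), next(rr))
print(ip_hash("203.0.113.7", backends))
print(least_connection({"10.0.1.11": 4, "10.0.1.12": 1, "10.0.1.13": 9}))
```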

Next, defining the actual backend servers with their ports is my biggest UI-related gripe in the whole service. You need to know either the OCID or the IP of the server, and while there is a nice link to view the instances, you can’t copy the IP or OCID from anywhere! This could be so easy if there were just a drop-down to choose the servers from.

Type in the OCID, which you have of course saved to a text file beforehand.

A good help when creating the backend servers is the checkbox that lets the system create proper security list rules so your servers can be accessed.

Once you have added the servers to your backend set, you can continue on to the listener. In the listener you define the port it listens on and the related backend set. Optionally, you can create a path route set which tells the listener which requests are routed to which backend set.
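
A path route set is essentially a list of path rules mapped to backend sets. A simplified Python sketch of the idea (the rule and backend set names are hypothetical, and real OCI rules have their own match types):

```python
# Hypothetical path route rules: the longest matching prefix wins.
path_routes = [
    ("/app", "app-backend-set"),
    ("/app/admin", "admin-backend-set"),
    ("/", "default-backend-set"),
]

def route(path, rules):
    """Pick the backend set whose path prefix is the longest match for the request."""
    matches = [(prefix, backend_set) for prefix, backend_set in rules
               if path.startswith(prefix)]
    return max(matches, key=lambda m: len(m[0]))[1]

print(route("/app/admin/users", path_routes))  # admin-backend-set
print(route("/index.html", path_routes))       # default-backend-set
```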

That’s almost it! You still need to edit your security lists so that traffic is allowed to the backend servers (if you didn’t do this already when creating them) and to and from the listener. This is what the documentation says about security lists:

To enable backend traffic, your backend server subnets must have appropriate ingress and egress rules in their security lists. When you add backend servers to a backend set, the Load Balancing service Console can suggest rules for you, or you can create your own rules using the Networking service.

And for the listener, be sure to check out this.

Everything has been set up and the health of the servers is OK

Each part of the LB service has different health categories, but I wanted to look more at the backend set during this test. The health is divided into the following categories: OK, WARNING, CRITICAL and UNKNOWN.

If everything checks out fine, the status is OK.

If one or more backends, but fewer than half, show up as CRITICAL, WARNING or UNKNOWN, the status is WARNING.

If more than half show up as CRITICAL, WARNING or UNKNOWN, the status is CRITICAL.

And the status is UNKNOWN if any of the following conditions are met: more than half of the backends show up as UNKNOWN, the listener is not properly configured, or the system could not retrieve metrics.
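
The rollup logic above can be sketched in a few lines of Python (a simplification of the documented rules, not the exact implementation — it ignores the listener configuration and metrics conditions):

```python
def backend_set_health(statuses):
    """Roll individual backend health up to a backend set status,
    roughly following the documented OCI rules."""
    total = len(statuses)
    unknown = sum(1 for s in statuses if s == "UNKNOWN")
    bad = sum(1 for s in statuses if s in ("CRITICAL", "WARNING", "UNKNOWN"))
    if unknown > total / 2:
        return "UNKNOWN"
    if bad > total / 2:
        return "CRITICAL"
    if bad >= 1:
        return "WARNING"
    return "OK"

print(backend_set_health(["OK", "OK", "OK"]))               # OK
print(backend_set_health(["OK", "OK", "CRITICAL"]))         # WARNING
print(backend_set_health(["OK", "CRITICAL", "CRITICAL"]))   # CRITICAL
```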

All this is explained here.

Example of health changing to WARNING when fewer than half of the servers are CRITICAL


If I were to say anything about the creation of an OCI LB compared to AWS ELB: creating an ELB has similar components, but the AWS UI is a bit more user friendly.

With OCI you need to know all the components you have to update, whereas in AWS you move one step forward at a time. All of this goes away if you use an orchestration tool, of course.

You should also consider how many actual load balancers you need versus creating one load balancer with multiple listeners. You can also use SSL certificates with your LB and terminate the connection at the LB, use backend SSL, or use end-to-end SSL, based on your requirements.

Finally, a link to the OCI Load Balancing service documentation.


OCI network with public and private subnets

When you create your VCN (Virtual Cloud Network) in Oracle Cloud Infrastructure to have a virtual network for your compute servers, you then create subnets under the VCN. Each subnet will contain part of the CIDR block you have allocated for the VCN.

If you are not familiar with VCNs, a good place to start is the VCN FAQ:

So, for example, your VCN is a /16 CIDR block (65,536 IPs) and you then create a /24 subnet (256 IPs) under it. Oracle reserves the first two IPs and the last IP of each subnet for its own use.
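
Python’s standard ipaddress module makes it easy to verify this kind of subnet math (the CIDR blocks below are hypothetical examples matching the sizes above):

```python
import ipaddress

# Hypothetical CIDR blocks; any /16 VCN with a /24 subnet behaves the same.
vcn = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.0.1.0/24")

print(vcn.num_addresses)     # 65536
print(subnet.num_addresses)  # 256

# Oracle reserves the first two IPs and the last IP of each subnet:
usable = subnet.num_addresses - 3
print(usable)                # 253
```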

Your instances will either face the public internet, or you will want to keep them private so that only you can access them, for example through your corporate network.

What do I need for my subnets?

If you need to create both public and private instances, you should create respective subnets: one subnet can be accessed from the internet and the other cannot.

For the public subnet you can then allocate a public IP address to your server (or, actually, to its interface). The server will need the public IP, a security list rule which allows traffic to specific ports, and an Internet gateway mapped into the route table assigned to the public subnet.

For the private subnet we don’t need a public IP or an Internet gateway in the route table. In fact, when you create a subnet and choose to make it private, public IP addresses won’t be allocated in that subnet.


With OCI you don’t need to add the VCN’s CIDR block to the route table; instead, if the security lists allow it, servers in subnets of the same VCN automatically have a route between each other. This is different compared to AWS!

The image below shows that I have used the default VCN route table for my subnets and that it has the Internet gateway assigned to it.


If I don’t specify a route table when creating a subnet, the default route table is allocated to it. You can’t change the subnet’s route table to another one after that! However, you can modify the routes in the existing route table.

So if you share a route table between multiple subnets, this could become an issue!

Now I have two subnets, public and private. Both have the default route table assigned, which has a route to the Internet gateway. I also have a security list which allows SSH traffic inside my subnets.

If I wanted to access my private or public subnet from a corporate network, I would need to add a route to a dynamic routing gateway (DRG) which would have a VPN tunnel to my corporate network.

Accessing your subnets

I also have two VMs: one in the public subnet (public1) and one in the private subnet (private1).

Two instances in different subnets

As you can see, one has a public IP address and the other does not, and they belong to different subnets.



During VM creation I created an SSH key which I will use to access my public and private VMs. When logging in I use the default VM user opc and supply the private key file I created.

[simo@mylinux ~] ssh opc@ -i s1.ppk
Last login: Wed Feb 14 09:54:46 2018 from
[opc@public1 ~]$

That’s it – I can access my public VM fine. Now, if I need to access my private VM, I can use my public VM as a jump server.

This is something you need to think about when creating your network: what is the method for accessing your private subnets, and how will they access the internet (to download packages etc.)? Jump servers and NAT gateways are options in these cases.

As I mentioned earlier, subnets within a VCN don’t need an explicit route between each other, so I should be able to access my private VM from my public VM without modifications to the route table. Let’s test!

[opc@public1 ~]$ ssh opc@ -i s1.ppk
Last login: Wed Feb 14 10:12:44 2018 from
[opc@private1 ~]$

Works smoothly! So, to summarize: you need to understand which servers you will place in the public subnet and which in the private one. Also think of NAT gateways for accessing the internet from your private subnet. In my example, even though the same Internet gateway is assigned to both subnets, I can’t access the internet from my private VM.

Oracle doesn’t have a NAT gateway as a service yet; instead you need to create your own NAT instance in the public subnet and route the private subnet’s traffic through that NAT instance to the internet.

A good example of deploying a NAT instance with Terraform:

After playing around I want to remove my subnets so they aren’t left there with no further use. Remember that subnets must be empty before you can delete them!

Using oci-cli for Oracle Cloud Infrastructure

Slow updates recently, as I was getting ready for two AWS exams. Happy to announce that I passed both the Solutions Architect Associate and the SysOps Administrator Associate!

As I’m now working with Oracle Cloud Infrastructure (OCI) as well, the next stop will be to pass the OCI Solutions Architect Associate exam. I think it will have a lot of similarities with the AWS ones, so it should be fairly easy to catch the different topics, especially on the networking side.

But this post is about oci-cli!

In addition to the console, in OCI you can use a Python-based command line interface named oci-cli, the same way you can use awscli in AWS. I thought a brief introduction to it would make a good post.

What do you need to use oci-cli?

First of all you need a user in OCI who has some permissions. You can define the permissions based on what the requirements are: the user might be allowed to create VMs, access IAM, etc., and that policy is assigned to the group the user belongs to.

Once you have an existing user, you will need to create an API key pair for them.

In the Oracle documentation they recommend using git-bash to generate the keys:

Once you have created the API keys, remember to save your private key to a safe place! You will need it soon. Then go to the OCI console and browse to Identity – Users – User Details. From there, click “Add Public Key” and paste in your public key contents. If it succeeds, you will see the fingerprint in the public key box.


Install and configure oci-cli

To install oci-cli you can follow the instructions from here:

You can install it on Windows or any computer with bash. Installation is quite straightforward: you define the installation directory and the bin directory for the executable. After that you are ready to use it!

In the later examples I’ve manually changed the OCIDs (Oracle Cloud IDs), so if you see some discrepancy, that is the reason.

Now I want to configure oci-cli so it has the necessary information stored. For this I run the following on Windows:

oci.exe setup config

Enter a location for your config [C:\Users\simo\.oci\config]: c:\software\oracle-cli\config

Enter a user OCID: ocid1.user.oc1..3465y5bhdgdgggngndgndgndgndgn

Enter a tenancy OCID: ocid1.tenancy.oc1..4tgreeegeggrgrreg535334343

Enter a region (e.g. eu-frankfurt-1, us-ashburn-1, us-phoenix-1): eu-frankfurt-1
Do you want to generate a new RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]: n

Enter the location of your private key file: c:\path\.oci\oci_api_key.pem

Enter the passphrase for your private key:

Fingerprint: b5:51:f0:ce:79:3d:f6:28:cd:f3:23:12:22:4a:c3:b1

Do you want to write your passphrase to the config file? (if not, you will need to supply it as an argument to the CLI)
[y/N]: y

Config written to c:\software\oracle-cli\config

There are a few things I must have when running the config: my user OCID, my tenancy OCID, the region I’m going to operate in, and finally the location of my recently created private key.

That’s it! Now I can run commands through oci-cli, as it has the necessary information in its config file. Let’s try.

To see the list of available options and commands you can just run oci.exe. The available commands are:

audit      Audit Service
bv         Block Volume Service
compute    Compute Service
db         Database Service
dns        API for managing DNS zones, records, and…
iam        Identity and Access Management Service
lb         Load Balancing Service
network    Networking Service
os         Object Storage Service
setup      Setup commands for CLI

So you always give the service first and then the subcommand for that service. For example, when running oci.exe iam:

availability-domain    One or more isolated, fault-tolerant Oracle…
compartment            A collection of related resources.
customer-secret-key    A `CustomerSecretKey` is an Oracle-provided…
dynamic-group          A dynamic group defines a matching rule.
group                  A collection of users who all need the same…
policy                 A document that specifies the type of access…
region                 A localized geographic area, such as Phoenix,…
region-subscription    An object that represents your tenancy’s…
tag                    A tag definition that belongs to a specific…
tag-namespace          A managed container for defined tags.
user                   An individual employee or system that needs…

So, to list my users I run oci.exe iam user list. And similarly, to see the subcommands of the iam user command, you just run that.

To get the list of my users I also need to supply the compartment-id along with the query. You can find it under Identity – Compartments. Remember, a compartment in OCI is a collection of your resources grouped into the compartment!

oci.exe iam user list --compartment-id ocid1.tenancy.oc1..aaaaaaa

{
  "data": [
    {
      "compartment-id": "ocid1.tenancy.oc1..aaaaaaaaj3ute3hbdfqtbosusfqoihv3rwiophci3433fdfddfdfddfdfv454",
      "defined-tags": {},
      "description": "This is the cloud admin account",
      "freeform-tags": {},
      "id": "ocid1.user.oc1..aaaaaaaaj3ute3hbdfqtbosusfqoihv3rwiophci3433fdfddfdfddfdfv235",
      "inactive-status": null,
      "lifecycle-state": "ACTIVE",
      "name": "cloud.admin",
      "time-created": "2018-02-13T08:54:49.231000+00:00"
    },
    {
      "compartment-id": "ocid1.tenancy.oc1..aaaaaaaaj3ute3hbdfqtbosusfqoihv3rwiophci3433fdfddfdfddfdfv454",
      "defined-tags": {},
      "description": "this is the test user",
      "freeform-tags": {},
      "id": "ocid1.user.oc1..aaaaaaaaj3ute3hbdfqtbosusfqoihv3rwiophci3433fdfddfdfddfdfv238",
      "inactive-status": null,
      "lifecycle-state": "ACTIVE",
      "name": "cloud.readonly",
      "time-created": "2018-02-13T10:32:52.872000+00:00"
    },
    {
      "compartment-id": "ocid1.tenancy.oc1..aaaaaaaaj3ute3hbdfqtbosusfqoihv3rwiophci3433fdfddfdfddfdfv454",
      "defined-tags": {},
      "description": "Simo V",
      "freeform-tags": {},
      "id": "ocid1.user.oc1..aaaaaaaaj3ute3hbdfqtbosusfqoihv3rwiophci3433fdfddfdfddfdfv458",
      "inactive-status": null,
      "lifecycle-state": "ACTIVE",
      "name": "",
      "time-created": "2018-02-13T08:36:06.617000+00:00"
    }
  ],
  "opc-next-page": "AAAAAAAAAAF0J19EgxQCxqtNSlWbUFrYYCgLLOIArstI-B7dqGJC7-DLBT-BcJEcKH2-rCTfS4r_c4utNr3RbYnsO2eqIXb9Yvz0

The output is displayed in JSON by default, but you have the option to use --output table for table-like output.
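
Because the output is JSON, it is also easy to post-process in a script. For example, with Python (using a trimmed-down sample of the user list output with placeholder OCIDs):

```python
import json

# A trimmed-down sample of `oci iam user list` output (placeholder OCIDs).
sample = """
{
  "data": [
    {"name": "cloud.admin",    "lifecycle-state": "ACTIVE", "id": "ocid1.user.oc1..aaa"},
    {"name": "cloud.readonly", "lifecycle-state": "ACTIVE", "id": "ocid1.user.oc1..bbb"}
  ]
}
"""

# Parse the JSON and pull out the names of all active users.
users = json.loads(sample)["data"]
active = [u["name"] for u in users if u["lifecycle-state"] == "ACTIVE"]
print(active)  # ['cloud.admin', 'cloud.readonly']
```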

That’s it – now you have oci-cli working and can start using it in addition to the console! There are a lot of different possibilities for using it, but this post only shows how to get it up and running.

More in the future!



FinnishingThoughts Podcast first episode is out!

For quite some time already I’ve thought it would be great to discuss interesting topics with some of the Oracle experts out there, and now I’m happy to say the first episode of the FinnishingThoughts Podcast is out!

Finnishing Thoughts Episode 1



In my first episode my guest is Julian Dontcheff, an Oracle ACE Director who works as Accenture’s Head of Database Management.

Julian shares his views on Oracle’s latest announcements and how he sees Oracle’s cloud positioning at the moment, with all the new products released to the cloud. He also talks about Accenture’s study on Oracle Cloud performance. And we discuss the future role of the DBA and how GDPR is impacting DBAs.

And if you didn’t know what Julian’s favourite database feature is, you will hear that as well!

You can read Julian’s blog at

Oracle Cloud performance study:

If you liked the episode, please subscribe to it in iTunes!




Attended Oracle Cloud Infrastructure seminar – part 2

In this post I will go through the rest of the services we discussed in the seminar. You can read part 1 from here.

5. Compute Services
6. Storage – Block volume & Object storage
7. Load Balancer
8. Database Services
9. A Lab

5. Compute Services

So this is the service where you create your own compute machines; it is comparable to Amazon EC2. Oracle has two main options here: Bare Metal (BM) or Virtual Machine (VM). A list of machines and their pricing is here.

If you want a machine which is not shared with anyone and has no virtualization overhead, you have the possibility to get a physical machine dedicated to you. But if you are fine with virtualization, you can just create a virtual machine.

Most of the machines do not come with local storage, but instead use block storage which you need to create and attach. However, for dense I/O operations there are a few machine types where you can select NVMe SSD devices. With those, you yourself take care of backups, RAID configuration etc.

After selecting the machine type you need to decide which image you use to create the machine. This is similar to Amazon AMIs. Images can be Oracle-provided, custom or BYOI (bring your own image).

Right now the Oracle-provided image options in OCI are Oracle Linux 6 & 7, CentOS 6 & 7, Ubuntu 16.04 LTS and Windows Server 2012 R2 (BM & VM). You can create custom images from these after configuring them further with your own configs. There is a 50 GB size limit on images, though.

When you launch an instance you need to provide a key pair. Compared to AWS, in Oracle you had to supply your own keys and could not create them on the fly. A small thing, but in AWS it’s quite handy to get the key immediately when you want to play around with instances. When you launch the instance you also define which compartment it belongs to, so only people who require access to this compute server have access to it.

When creating the instance you also define the Virtual Cloud Network and Availability Domain it belongs to, and after that the specific subnet you want the instance to run in. If the instance is in a public subnet, you can get a public IP so you can connect to the instance using SSH.

My OEL 7 instance, with private and public IP addresses, in a public subnet

One major thing which was not yet available in OCI was auto scaling, so you can’t configure a group of servers to scale out and in based on need. Once again, this was on the roadmap.

You can find more information on the compute service here.

6. Storage – Block volume & Object storage

When it comes to storage, there are two different types, similar to AWS. In AWS you have EBS for block storage and S3 for object storage; in OCI you have Block Volumes and Object Storage. Let’s start with block volumes.

Block Volume Service

These are the volumes you normally mount on your compute instance. You can dynamically attach them to different instances, clone them and take snapshots. After attaching one to an instance you can mount and use it like a regular volume.

They come in sizes from 50 GB to 2 TB, and data is encrypted at rest in both volumes and backups. They are also replicated automatically to prevent data loss. They didn’t go through how the replication works, but I would guess to a different AD at least.

One nice thing was that when you attach the created block volume to your compute instance, OCI displays the iSCSI commands for attaching the volume to your server. It would be nice if this could be automated somehow, though.

iSCSI commands to attach storage to your instance

And if you take a backup of your block volume, it is stored encrypted in the Object Storage Service.

More info on block volumes here.

Object Storage Service

Similar to Amazon S3, this is where you store your unstructured data. The OCI seminar mentioned the following use cases for Object Storage: big data, archive & storage, and content repository.

Guess where your data is stored? In buckets, of course, like in AWS! However, compared to AWS S3, where there is read-after-write consistency for new objects and eventual consistency for deletes and updates, in OCI Object Storage you get strong consistency: you always get the most recent copy from disk. And where AWS S3 bucket names are global across customers, in OCI the bucket name only needs to be distinct within the tenancy’s namespace.

The maximum object size is 10 TB, objects over 50 GB need to use multi-part upload, and you have the option to store your data in either the Standard tier or the Archive tier (four hours for the first byte to be retrieved). At the moment you cannot apply lifecycle policies like in AWS S3, but they said once again that it is on the roadmap.

The Object Storage FAQ can be found here.

7. Load Balancer

The OCI Load Balancer service supports load balancing inside a VCN and is a regional service. It supports SSL offloading, and you create either a public or a private load balancer.

Also, when you create the load balancer you define the bandwidth you want associated with it. The options were 100 Mbps, 400 Mbps and 8 Gbps.

When you configure the load balancer you need a listener, a backend set (a logical entity of backend servers), backend servers (the ones which serve the content) and health checks. The health checks support HTTP and TCP traffic.

You also had to configure the load balancing policy; the options were round robin, IP hash and least connection.

If I compare this to AWS ELB, there are obviously a few functionalities missing, and the setup was a bit more complicated than in AWS. Still, it was quite easy to set everything up if you had some prior knowledge of load balancing.

Load balancer FAQ.

8. Database Services

We actually ran out of time, so we didn’t go through this service in detail. However, this service is of course the bread and butter of Oracle’s offering.

Oracle offers three types of database systems in OCI:

-Bare Metal DB
-Exadata DB
-VM-based DB (new feature!)

What is great in the DB services is that you can dynamically scale the number of cores of your DB up or down based on requirements. Oracle manages the underlying infrastructure, but the customer takes care of the database & OS. I haven’t used this service before so I can’t say much about it, but some features recently published for it are:

-Support for Data Guard
-Integrated backup & restore
-BYOL (bring your own license model)

Also, on the Exadata DB side there are several interesting features: integration with IAM, database backups to the object store, and VCN use cases.

More details on the OCI database service here.

9. A Lab

The hands-on part consisted of creating a VCN, a subnet with an Internet gateway, and a compute instance with a mounted block volume. After that, you modified the security list with a port 80 rule to allow HTTP traffic and verified that the instance was accessible.

In the security list setup you can mark a rule as stateless if you want

A very basic setup, but the aim was to give people base knowledge of the services.

I also had some time to do a load balancer configuration without instructions, and I created one database on a VM. The load balancer was a good example of how, if you know the Amazon services, using OCI is simple.


Overall the services are much like Amazon Web Services, which I’ve stated a few times already. You see a lot of areas where they are working to get to the same level as AWS, so I would assume that if you have a large Oracle footprint, and considering the cloud licensing model Oracle offers, then Oracle is something you should consider!

If there is one thing to complain about, it is the actual user interface. Sure, everything is there and it’s simple to navigate, but I felt the content on the pages was so damn BIG. Making the actual content a bit smaller would make it easier to see everything you have provisioned.

The actual seminar had a good instructor who was clearly skilled with OCI, and they let us ask questions all the time, which was refreshing. 🙂

I wish Oracle would move more towards similarly open-minded seminars and cloud trials so people could easily adapt to using Oracle’s cloud. The 1-month/300 USD test account is a good start, but compare that to Amazon’s free tier, where you can play around for a whole year and get a more solid grasp of their services.

Let’s hope they offer a similar tier as well!


Attended Oracle Cloud Infrastructure seminar – part 1

I had the opportunity to attend a half-day seminar about the Oracle Cloud Infrastructure (OCI) offering. Here are some notes about it, and some comparison with Amazon Web Services, which I have also been using a lot lately.


The seminar was divided into a few different topics:

  1. Introduction to Infrastructure services
  2. Identity and Access Management (IAM)
  3. Virtual Cloud Network (VCN)
  4. Compute Services
  5. Storage – Block volume & Object Storage
  6. Load Balancer
  7. Database Services (DBCS)
  8. A Lab

I’ll describe what I learned from each area, and in this first post I’ll go through everything up to VCN.

1. Introduction to Infrastructure services

This was just a general walkthrough of the services and how they are built up. As with AWS, Oracle has divided OCI into different regions, and each region has multiple Availability Domains (ADs), the same as AWS has Availability Zones.

The services which are available can be seen at a high level in the picture below. These are only the infrastructure services; Oracle’s other cloud services were not discussed in this seminar.


2. Identity and Access Management (IAM)

Similar to AWS, when you sign up to OCI your account is the root account. After that you are free to create new user accounts following a least-privilege policy, so by default they don’t have access to anything.

OCI has IAM groups; you can assign a user to one or many groups, and the groups then dictate what access the user has. You could, for example, have a group for network admins who can modify network configurations.

OCI has a resource called a tenancy which contains all of your OCI resources. Under the tenancy there are compartments, which are logical containers used to isolate and organize your cloud resources. For example, you can have a specific compartment for your finance department. You can still share resources across compartments if needed.

Policies to access resources are written in an SQL-like format. This seemed like a nice way to get people to understand how to write them. The only thing I was wondering was whether it would have been easier to go with an already existing language.

Example on policy:

Allow group HR to read all-resources in tenancy Subcompany; (or at the compartment level)

Unfortunately, for cost management there is no fully matured consolidated billing available yet, but that is on the roadmap.

More info on IAM, tenancies and compartments here.

3. Virtual Cloud Network (VCN)

Again a concept which was easy to absorb after working with AWS. After you have selected your region and want to start building your infrastructure, you need to create your network. In OCI you have the VCN; in AWS the same concept is the VPC.

A VCN can cross multiple Availability Domains in a region. Usually when you create a VCN you reserve a specific private CIDR block for your use, and under it you create subnets.

For example, you create a VCN with one CIDR block and then two smaller subnets under it.

Subnets are specific to an Availability Domain and are either public or private. For OCI subnets, Oracle reserves the first two IP addresses and the last one for their use, whereas AWS reserves 4+1.

Access to your subnet is controlled by security lists, where you define what ports can be used in and out. With OCI you have the possibility to set a security list rule as stateful or stateless. In AWS you either use security groups (stateful) or network ACLs (stateless). It was nice to see this simplified!

To learn what stateless vs. stateful means, check it from here.

If you want to access the internet from your subnet, you need to create an Internet Gateway and add it to your subnet’s route table.

This shows a VCN with three public subnets. As you can see, all subnets have different CIDR blocks.


You need a Dynamic Routing Gateway (DRG), comparable to the AWS Virtual Private Gateway, if you have a requirement to access your on-premise datacenter over VPN. Again, if you know AWS, these concepts are really easy to pick up!

When you have higher bandwidth requirements between your on-premise datacenter and OCI, you can use Oracle FastConnect to achieve higher throughput. This matches AWS Direct Connect at a high level.

If there is a requirement to access the internet but you don’t want to make your server publicly visible, you can use a private IP as the target in your route table. That instance then acts as a NAT gateway for the servers.

You can connect multiple VCNs with VCN peering; however, at this point this is limited to the same tenant and the same region. Improving this was also on the roadmap.

More info on VCN from here.


The starting concepts were almost 1:1 with Amazon Web Services. Some of the things I mentioned are still behind what AWS offers today, but it was good to hear they had so many things on the roadmap, which should make things easier for customers in the future.

In part 2 I will go through the rest of the services from the seminar and also review the lab we did.


Loading customizations to e-Business Suite with Ansible

Happy new year! I’ll continue with Ansible topic with one more post as it has helped our small operations team quite a lot lately.

We still run Oracle e-Business Suite 12.1.3 and the amount of customizations we have is really high(No surprise there!). Due to some historical reasons initially we were loading custom objects to eBS via shell scripts. As time passed this was changed so we used concurrent requests to install those custom objects.

Basically what happens is the concurrent looks for specific named zip object from installation directory and if it exists it executes the shell script. This method is no way perfect but it has worked for us and nobody really had time to improve the script.

In addition, we always had to go to the version control system, check out the zips for a specific tag/release and then upload them to the installation directory. A lot of manual and quite static tasks!

So Ansible to the rescue. Using Ansible we found a really simple way to reduce manual tasks without touching (yet) the actual installation method. As of now we still use Subversion, but we are already in the process of switching to GitLab and making some changes to how we deploy the code.

Broken down, on a high level Ansible does the following:

  1. Install subversion & rsync to the server (these are needed)
  2. Delete old tag folder & export subversion tag folder defined in playbook
  3. Register all zips in tag to a variable
  4. Copy files to installation directory
  5. Grep and register the user home to a variable (apparently there is no easy way to get the become_user home directory)
  6. Run CONCSUB on application server to submit concurrent request based on zip file name
  7. Delete old tag folder

Once again, as there are a lot of passwords involved, we have used ansible-vault for the variables.

1. Install subversion & rsync

Here we first define the start date in case we want to schedule the requests; the next step installs subversion and rsync via yum. Really basic, but I think it is best to have these here because otherwise you would always need to check whether they exist.

- include: define_startdate.yml
  when: sch_date is defined

- name: Install subversion and rsync to server
  yum:
    name: "{{ item }}"
    state: present
  with_items:
    - subversion
    - rsync

2. Delete old tag folder & Export the subversion tag

Again really basic things. Remove directory and export the tag defined in the playbook.

- name: Delete tag folder {{ tag }} if it exists
  file:
    path: "/tmp/{{ tag }}"
    state: absent

- name: Export Subversion tag {{ tag }}
  subversion:
    repo: "svn://{{ testsvn }}/{{ tag }}"
    dest: "/tmp/{{ tag }}"
    export: true

3. Register all zips in tag to a variable

Here I look into the tag directory created earlier, find all the zip files it contains and register them into a variable.

- name: Register zips to a variable
  find:
    paths: "/tmp/{{ tag }}/"
    patterns: "*.zip"
  register: install_zip

4. Copy files to installation directory

As I only want to copy the zip files from the Subversion export directory to the installation directory, I found rsync (via the synchronize module) to be a good fit for that purpose. With synchronize you need to use the delegate_to parameter so the copy is done on the destination server.

- name: Copy files from tag to installation folder
  synchronize:
    mode: push
    src: "/tmp/{{ tag }}/"
    dest: "{{ ricef_dir }}"
    rsync_opts:
      # an include rule only has an effect when paired with an exclude
      - "--include=*.zip"
      - "--exclude=*"
  delegate_to: "{{ inventory_hostname }}"

5. Grep and register user home directory

As I don’t want to log in as the apps user and instead use become_user, this was one way to register the user’s home directory in a variable. It is needed so the concurrent request execution has the environment variables loaded.

- name: Grep and register user home
  shell: >
    egrep "^{{ ap_user }}:" /etc/passwd | awk -F: '{ print $6 }'
  changed_when: false
  register: user_home
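As a side note, grepping /etc/passwd only finds local accounts; getent queries all NSS sources (LDAP, NIS, etc.), so where getent is available on the target the same lookup could be done like this ("root" used here purely as a demo user):

```shell
#!/bin/sh
# Alternative home-directory lookup via getent, which also covers
# non-local (LDAP/NIS) accounts. AP_USER is a demo value.
AP_USER="root"
USER_HOME="$(getent passwd "$AP_USER" | awk -F: '{ print $6 }')"
echo "$USER_HOME"
```

The registered result would be used exactly the same way as user_home.stdout above.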

6. Run CONCSUB to submit concurrent requests

I have renamed our three customization types to TYPE1, TYPE2 and TYPE3 here; we have three different customization areas, and the zip files always contain their respective type in the name.

Each task now loops (with_items) through the files we registered earlier in the install_zip variable. When an item matches the specific type, it submits the CONCSUB request (the when part).

We submit this as the application owner user (ap_user) and source the shell profile before running CONCSUB. All other variables are defined in the group_vars file.

- name: Install TYPE1 zips
  shell: >
    source "{{ user_home.stdout }}"/.bash_profile &&
    CONCSUB apps/"{{ apps_pass }}" SYSADMIN "System Administrator" "{{ apps_user }}"
    WAIT=N CONCURRENT XBOL XBOCOD "{{ run_date | default('') }}"
    '$TYPE1_TOP/install' "{{ item.path | basename }}" "{{ type1_pass }}"
    '"{{ db_host }}"' '"{{ db_port }}"'
  with_items: "{{ install_zip.files }}"
  when: item.path | search("type1")
  become: true
  become_user: "{{ ap_user }}"

- name: Install TYPE2 zips
  shell: >
    source "{{ user_home.stdout }}"/.bash_profile &&
    CONCSUB apps/"{{ apps_pass }}" SYSADMIN "System Administrator" "{{ apps_user }}"
    WAIT=N CONCURRENT XBOL XBOCOD "{{ run_date | default('') }}"
    '$TYPE2_TOP/install' "{{ item.path | basename }}" "{{ type2_pass }}"
    '"{{ db_host }}"' '"{{ db_port }}"'
  with_items: "{{ install_zip.files }}"
  when: item.path | search("type2")
  become: true
  become_user: "{{ ap_user }}"

- name: Install TYPE3 zips
  shell: >
    source "{{ user_home.stdout }}"/.bash_profile &&
    CONCSUB apps/"{{ apps_pass }}" SYSADMIN "System Administrator" "{{ apps_user }}"
    WAIT=N CONCURRENT XBOL XBOCOD "{{ run_date | default('') }}"
    '$TYPE3_TOP/install' "{{ item.path | basename }}" "{{ type3_pass }}"
    '"{{ db_host }}"' '"{{ db_port }}"'
  with_items: "{{ install_zip.files }}"
  when: item.path | search("type3")
  become: true
  become_user: "{{ ap_user }}"
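Since the three tasks differ only in the type string, the routing logic itself is simple. Stripped down to plain shell (the file names below are made up for illustration), the dispatch the when clauses perform amounts to:

```shell
#!/bin/sh
# Demonstration of the per-type dispatch done by the "when" clauses:
# each zip is routed by the type substring in its file name.
set -e
for zip in XX_type1_fix.zip XX_type2_report.zip XX_type3_form.zip; do
    case "$zip" in
        *type1*) echo "$zip -> TYPE1_TOP/install" ;;
        *type2*) echo "$zip -> TYPE2_TOP/install" ;;
        *type3*) echo "$zip -> TYPE3_TOP/install" ;;
    esac
done > routed.txt
cat routed.txt
```

In Ansible the three near-identical tasks could probably be collapsed into one loop over the types, but keeping them separate makes the playbook easier to read for our team.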

7. Remove tag folder after execution

This is just for cleanup.

- name: Removing temporary files
  file:
    path: "/tmp/{{ tag }}"
    state: absent


We have another playbook for installing a single customization, but it follows the same logic. This one loads all customizations for a specific release and is called with ansible-playbook:

ansible-playbook tag2UAT.yml --ask-vault-pass --extra-vars "tag=REL_2017_12_1 apps_user=MY_APPS_USER"

So the only variables I pass are the tag and my own username. All passwords and other environment variables are defined in the ansible-vault file group_vars/UAT and so on. If I wanted to schedule the concurrents, I would pass one more variable for the date.

Even though this is just basic-level scripting, it has helped us a lot in reducing manual work and automating tasks that are error prone.