Different environments and Terraform workspaces with Oracle Cloud Infrastructure

While we want to provision most things through Infrastructure as Code and use Terraform with our Oracle Cloud Infrastructure, we also have to make sure each change is properly tested before it moves to production.

So what is a convenient way of deploying your infrastructure with different variables to different environments?

If you search around you will find a lot of opinions on how to do this. You might separate your environments into different folders so each has its own variables and calls your modules from there, or you might keep different tfvars files per environment and pass the right one when you run terraform plan/apply.

There is also the option we are looking at now: Terraform workspaces. A workspace is a sort of separate environment; when you run terraform plan/apply, Terraform uses a separate state file for each workspace. Sounds good? Yes!

What we did with workspaces

We have three different branches and environments:

  • default (basically dev)
  • pprd (pre-production)
  • prod

When you use workspaces, the default one is active when you start. If you want to use a different one you need to create it first in the folder you are running Terraform from:

terraform workspace new pprd

This also switches you to the workspace you created. Later on, if you want to switch between workspaces, you can use:

terraform workspace select prod

From a development perspective, I develop my change in the default workspace, execute it and verify everything is OK, and then the change gets moved to the next branch. There we use another workspace, and a new state file is created for that environment/branch.
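To make the flow concrete, a typical promotion cycle with these commands looks roughly like this (a sketch; the branch handling itself happens in your version control, not in Terraform):

# develop and verify in the default workspace
terraform workspace select default
terraform plan
terraform apply

# after the change is promoted, apply it against the pre-production state
terraform workspace select pprd
terraform plan
terraform apply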

So what is the point of all this?

The point is that we can easily use different values defined in the same variables.tf we always use. What you need to do is define the variable as a map like this:

variable "bucket_name" {

type="map"

default=

{"default" = "dev-admin-bucket-tf"

"pprd"="pprd-admin-bucket-tf"

"prod"="prod-admin-bucket-tf"

}

}
In this case I get a different bucket_name depending on which workspace is being used.

And when creating the resource I reference it like this:

bucket_name = "${lookup(var.bucket_name, terraform.workspace)}"

From the map I pick the value which matches the current terraform.workspace. Simple!
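Put together, a minimal sketch of an Object Storage bucket resource using this pattern could look like the following; the compartment and namespace variables are placeholders of my own, not from our actual code:

resource "oci_objectstorage_bucket" "CreateBucket" {
    compartment_id = "${var.compartment_ocid}"
    namespace      = "${var.bucket_namespace}"

    # resolves to dev-admin-bucket-tf, pprd-admin-bucket-tf or prod-admin-bucket-tf
    name = "${lookup(var.bucket_name, terraform.workspace)}"
}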

Conclusion

Using workspaces is an option you could consider if you have multiple environments.

One problem I noticed is that if you would like to have a list inside the map variable, that isn't easy to do. Lists would make it a lot easier to pass sets of values to your modules (if you use modules).
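One workaround I haven't properly tested yet, so treat it as a sketch, is to keep a delimited string per workspace and split it back into a list when calling the module (the variable and module names below are made up):

variable "subnet_cidrs" {
    type = "map"

    default = {
        "default" = "10.0.0.0/24"
        "pprd"    = "10.0.1.0/24,10.0.2.0/24"
        "prod"    = "10.0.3.0/24,10.0.4.0/24"
    }
}

module "network" {
    source = "../network"

    # split() turns the workspace-specific string back into a list
    subnet_cidrs = "${split(",", lookup(var.subnet_cidrs, terraform.workspace))}"
}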

While we gather more experience with workspaces I will probably have a more refined approach later on, but for now this seems like a feasible option for managing your infrastructure.


ODC Appreciation Day: Terraform

This is the first time I'm taking part in Oracle Development Community Appreciation Day, which Tim Hall created so that people post about technology they use daily.

My topic is Terraform, which is still a fairly new technology for me, having used it for only around a year now. But what a game changer it has been!

I like to point out that a lot of Oracle products are not that cloud-native, and if you want to get some benefit from the cloud with your Oracle solution you need to find ways to do it. Well, if you can't do it for the actual application, go with the infrastructure!

Terraform is an orchestration tool which records the state of your infrastructure. With it you can create, change and remove infrastructure, delivering the whole stack as Infrastructure as Code, as far as the provider supports it.

Just some months ago Oracle Cloud Infrastructure (OCI) became an official Terraform provider. There are providers for all the major cloud vendors, AWS, GCP and Azure, and for several others.

I've always liked coding and scripting, but running Oracle databases, E-Business Suite and Oracle BI there haven't been many opportunities for it. Some years ago we started to use Ansible, and now that we are on Oracle Cloud Infrastructure I think Terraform is really the way to deploy your infrastructure, continuing with Ansible, for example, for the components you can't deploy with Terraform.

If you are starting to migrate some of your workloads to OCI, my recommendation is to evaluate Infrastructure as Code and use it right from the start! Sure, you will need to put some effort in at the beginning, but that is where the benefits of versioning and automating your infrastructure come from.

The OCI provider still has some shortcomings, but Oracle has been very active in improving it. On the Terraform side, version 0.12 should be coming out in October with huge improvements to how you can manage everything!

I’ve written a lot of blog posts about OCI and Terraform in the past. Here are some for further reading:

https://finnishingthoughts.wordpress.com/2018/04/04/using-oracle-cloud-infrastructure-with-terraform-modules/

https://finnishingthoughts.wordpress.com/2018/04/27/deep-dive-into-oci-with-compartments-users-groups-and-policies/

https://finnishingthoughts.wordpress.com/2018/05/29/create-db-system-to-oracle-cloud-infrastructure-with-terraform/

https://finnishingthoughts.wordpress.com/2018/06/06/create-multiple-instances-with-one-terraform-module-in-oracle-cloud-infrastructure/

https://finnishingthoughts.wordpress.com/2018/09/24/oci-block-volumes-and-terraform/

Have a great ODC Appreciation Day!

 

OCI block volumes and Terraform

So we already know how to spin up instances with and without Terraform, but what about when you need to attach a block volume to the fresh instance you've created?

One option is to log in to the console and add either a paravirtualized or an iSCSI volume to your compute instance. A paravirtualized volume shows up immediately, but for an iSCSI volume you need to run additional commands before it is visible in your instance.

Depending on your use case, remember that paravirtualized volumes may have worse performance than iSCSI volumes.

In this post I'll show how to do it via the console, but the main topic is how to create a block volume and attach it to an instance with Terraform. At the end I'll mention some ways you could continue with automation, and one problem I noticed when using OCI.

You can also do this via oci-cli, but I won't go through that in this post.

Console

Just to show the console route: you can create and attach block volumes to an instance as below.

[Image: Create block volume]

[Image: Attach block volume]

Terraform

My initial idea was to create the volume and the volume attachment with two separate modules. However, I ran into an issue which forced me to do this in one module.

With a separate module for the block volume attachment, the problem is that when you create more than one volume, Terraform doesn't know the number of elements in the list passed to the attachment module and throws an error. The easy way around this was to combine the two modules. Usually you attach the volume to the instance when you create it anyway.

If you have a requirement to create block volumes without attachments, you could create a separate module for that, so you would use two different modules.

Terraform code calling modules

In the actual main.tf that creates the instance and the volume, I create the instance first by calling the create instance module. I've shown that in a previous blog post, so I will skip it and go straight to creating the volume.

This is what it looks like:

module "CreateVolume" {

source="../block_volume"

volume_count="${var.volume_count}"

tenancy_ocid="${var.tenancy_ocid}"

compartment_ocid="${lookup(data.oci_identity_compartments.GetCompartments.compartments[0],"id")}"

volume_availability_domain="${lookup(data.oci_core_subnets.GetPublicSubnet.subnets[0],"availability_domain")}"

volume_display_name= ["${var.volume_display_name}"]

volume_size_in_gbs= ["${var.volume_size_in_gbs}"]

instance_id="${module.CreatePublicInstance.instanceId[0]}"

volume_attachment_type= ["${var.volume_attachment_type}"]

}

Let’s break this down.

At the start I use the static variable volume_count to define how many volumes I will create.

I've used data sources to get the compartment OCID and the availability domain of the public subnet. Remember that you always create the block volume in the same availability domain where your instance resides, since it will be attached to the instance; one way of remembering this is that the volume needs to be physically close to it.

Next I pass two variables as lists to the module; that's the reason they are enclosed in []. This is how they are defined in variables.tf:

variable "volume_display_name" {
    type    = "list"
    default = ["MyVolume1", "MyVolume2"]
}

variable "volume_size_in_gbs" {
    type    = "list"
    default = ["50", "60"]
}

I want to make modules reusable, so you can pass as many values as needed via lists; if there is a requirement to create, for example, two volumes, it can be done by calling the module once.

Next I pass the instance OCID from the create instance module, and as I've created only one instance I use [0] to pick the correct OCID. If I had two instances I would probably call this module twice, with [0] and [1] respectively.

And finally I pass the volume attachment type, again as a list. Here I use paravirtualized as the type.

Terraform module code

The actual module looks like this:

variable "tenancy_ocid" {}

variable "compartment_ocid" {}

variable "volume_availability_domain" {}

variable "volume_display_name" {type= "list"}

variable "volume_size_in_gbs" {type = "list"}

variable "volume_count" {}

variable "instance_id" {}

variable "volume_attachment_type" {type = "list"}


resource "oci_core_volume" "CreateVolume" {

    count="${var.volume_count}"

    availability_domain = "${var.volume_availability_domain}"

    compartment_id = "${var.compartment_ocid}"

    display_name = “${var.volume_display_name[count.index]}”
    size_in_gbs = “${var.volume_size_in_gbs[count.index]}”
}

resource "oci_core_volume_attachment" "CreateVolumeAttachment" {

   
    count="${var.volume_count}"

    attachment_type = "${var.volume_attachment_type[count.index]}"

    instance_id = "${var.instance_id}"

    volume_id = "${oci_core_volume.CreateVolume.*.id[count.index]}"

}

A few pointers from this. When I use count inside a resource, it creates as many resources as the count value says. That's why some of the variables are passed as lists and indexed with [count.index], so each resource takes the correct value.

You can also notice the reference to the previously created oci_core_volume resource and the usage of the splat character there. Usage of the splat syntax is briefly mentioned here.
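For example, if you wanted the module to output all created volume OCIDs, an outputs.tf using the splat could look like this (my own sketch, not part of the module above):

output "volumeIds" {
    value = ["${oci_core_volume.CreateVolume.*.id}"]
}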

Running Terraform

Now when I run terraform init (note that OCI is now an official provider, so you no longer need to define the provider manually, just load it via init) and terraform apply, it creates five resources: one instance, two block volumes and two volume attachments.

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Creating the instance and two volumes sized 50GB and 60GB took around one minute, and I can see they are successfully attached to the instance.

[Image: Created & attached block volumes via Terraform]

How to go from here?

What I would like to do next is to also automate the disk creation on the operating system side and mount the disks automatically. While experimenting with this I noticed that you can't guarantee which device name each volume gets. The root volume can be sda, sdb etc., and the created block volumes can also come up in any order.

Even in the example above the disk assignments came out like this:

Disk /dev/sda: 64.4 GB, 64424509440 bytes, 125829120 sectors  <– 60GB volume

Disk /dev/sdb: 50.0 GB, 50010783744 bytes, 97677312 sectors <– Root volume

Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors <– 50GB volume

This means that scripting further provisioning is more complicated than expected. With the AWS provider you can define the EBS volume device name, which makes this easier.

This could also be a use case for Ansible, or for a cloud-init script in Terraform. I haven't tested yet how cloud-init would work in this case.
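As an untested sketch of the cloud-init route, the instance resource could pass a script through the metadata map; the file name setup_disks.sh here is hypothetical:

resource "oci_core_instance" "CreateInstance" {
    # ... other arguments as in the instance module ...

    metadata {
        # cloud-init user data needs to be base64 encoded
        user_data = "${base64encode(file("setup_disks.sh"))}"
    }
}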

I really like the module approach with Terraform, especially since, as more and more requirements to build different components come in, we can utilize the same code base for all of them. Definitely use Terraform right from the start with OCI!

 

Should I use tags in Oracle Cloud Infrastructure?

In Oracle Cloud Infrastructure you have two options to tag your resources: defined tags, which are set up by your administrator, or free-form tags, which can be applied by any user who has access to a resource.

So why should you tag resources?

In my opinion this is definitely a concept you shouldn't overlook when you start your OCI project! Even though you can separate your resources into different compartments, everyone should set up a set of predefined tags which are applied to resources.

With tags you can list and organize resources a lot more easily, and perhaps later on also do tag-based billing. It's an easy step to overlook, and definitely not the most attractive one, which is why I wanted to highlight it in a post.

I’d even go as far as saying having a tag strategy is critical for your cloud operations in the long run.

Free form tagging

With free-form tagging you just select your resource and enter the key-value pair you want to apply.

[Image: Adding a free form tag to a resource]

Free-form tags give a lot of freedom and you can use them as you like, but it's better to use defined tags, since then you can standardize the tagging used within the compartment.

Defined tags

As written earlier, these are predefined tags which you assign to a specific compartment, so users who have access to resources under that compartment can tag them.

Defined tags can be found under Menu => Governance => Tag Namespaces.

First you need to select your compartment and then click Create Namespace Definition. I will create the namespace resource-tags under my OracleEBS compartment.

[Image: Don't be fooled into adding tags here unless you want to tag your tag namespace!]

After creating the namespace definition you can add the actual tags into it. I've added two tags to my namespace: owner and environment.

[Image: Tags added to the namespace]

Now the tags are ready to be used. If I create an instance in the Oracle-EBS compartment I can choose the tag namespace I created and then define values for the owner and environment tags.

[Image: Selecting the tag namespace and defined tags during instance creation]

The problem I see is that even though I have a tag namespace with multiple tags, I can't force them to be used. If we have defined five tags that should be used within a namespace, a user can still choose only one tag and leave the rest blank.

That's why it's good to use Terraform to create your instances. You can have a module ready for instance creation, and in the Terraform template the tags are defined in the variables file.

Terraform

I wrote a simple module which creates the namespace and also the tags for the namespace from a list. That way you can create as many tags per namespace as you want, and the module will loop through them. The module looks like this:

variable "tenancy_ocid" {}
variable "compartment_ocid" {}
variable "tag_namespace_description" {}
variable "tag_namespace_name" {}
variable "is_retired" {}
variable "tag_names" {type = "list"}
variable "tag_description" {type = "list"}

resource "oci_identity_tag_namespace" "CreateTagNamespace" {

    compartment_id = "${var.compartment_ocid}"
    description = "${var.tag_namespace_description}"
    name = "${var.tag_namespace_name}"
    is_retired = "${var.is_retired}"

}

resource "oci_identity_tag" "CreateTag" {

    count = "${length(var.tag_names)}"
    description = "${var.tag_description[count.index]}"
    name = "${var.tag_names[count.index]}"
    tag_namespace_id = "${oci_identity_tag_namespace.CreateTagNamespace.id}"
    is_retired = false

}

The only things to highlight are the two variables coming in as lists and the use of count under the resource to loop through them. I had four tags in my variables.tf which then got created with this single module. Quite handy!
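A call to this module could then look something like the following; the module path and the descriptions are illustrative, but the idea of passing the tags as two matching lists is exactly what the count loop above expects:

module "CreateTags" {
    source = "../tags"

    tenancy_ocid              = "${var.tenancy_ocid}"
    compartment_ocid          = "${var.compartment_ocid}"
    tag_namespace_name        = "resource-tags"
    tag_namespace_description = "Standard tags for project resources"
    is_retired                = false

    # one description per tag name, consumed pairwise via count.index
    tag_names       = ["owner", "environment", "costcenter", "project"]
    tag_description = ["Resource owner", "Environment name", "Cost center", "Project name"]
}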

Next, if I want to use the tags in instance creation, I just add the variable as per the documentation:

defined_tags = {"Operations.CostCenter" = "42"}

To pass this as a variable you should define it in your variables.tf as a map:

variable "project-tag" {
type="map"
default = {project-tags.costcenter = "42"}

And in the module which creates the instance I take the variable in as a map as well and use defined_tags to assign it:

...
variable "project_tag" {type = "map"}

resource "oci_core_instance" "CreateInstance" {
    count               = "${var.server_count}"
    availability_domain = "${var.instance_availability_domain}"
    compartment_id      = "${var.compartment_ocid}"
    image               = "${var.image_id}"
    shape               = "${var.instance_shape}"
    defined_tags        = "${var.project_tag}"
}
When I now create instances through Terraform I can see the defined tag I assigned is applied.

[Image: The defined tag applied to the created instance]

And if there is a need to apply more defined tags, you can easily pass them in the map you define in variables.tf.
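For example, the map could carry several tags at once (the environment value here is made up):

variable "project-tag" {
    type = "map"

    default = {
        "project-tags.costcenter"  = "42"
        "project-tags.environment" = "dev"
    }
}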

Summary

As I wrote earlier, you can't force tags. That would be good, so there wouldn't be resources missing the tags you want to use. However, Terraform gives you better control over which variables are required when you create a resource.

Remember that you can't delete a namespace; you can only retire a namespace, including its tags, so it's no longer used.

Also, once you have created tags you can filter resources based on them.

[Image: Filtering resources by tag]

At the moment you can only add tags and filter your resources based on them, but hopefully in the long run you can also use them for cost distribution. Compartments already give you flexibility on the IAM side, so you can control access with them and don't need tags for that.

I highly recommend taking tags into use right from the start, as you will need them to group your resources, and you may get further advantages in the future if you have a proper tag setup!

 

Oracle Cloud Infrastructure Service Gateway

Recently Oracle announced the Service Gateway for Oracle Cloud Infrastructure (OCI). One of the problematic areas I have found with OCI is that if you use Object Storage, for example for your database backups, you have been required to have public internet access from your OCI subnets, either by placing the instance in a public subnet or by using a NAT instance in between.

The Service Gateway changes this: now you can access Object Storage from your private subnet just by setting a route rule towards the service gateway, with no need to access the public internet.

This is great news! I wanted to try this out with the example below.

Creating and testing Service Gateway

For this example I had created the following:

  • VCN with a private subnet
  • Empty routing table
  • Empty security list
  • One instance in the private subnet with oci-cli installed
  • A bucket in object storage

Below are the instance and private subnet details.

[Image: Instance is created in the Suomenlinna compartment without a public IP address]

[Image: Private subnet with its own route table and security list]

First I need to create the Service Gateway under Networking => My Test VCN, selecting Service Gateway from the left.

[Image: To create the SGW, just select the compartment, a name for the SGW and the available services. Currently only the ObjectStorage service is available.]

[Image: After creation the SGW shows up as available almost immediately.]

After creating the Service Gateway I need to create a route rule in the routing table of the private subnet. If you've done some VCN configuration earlier, this is no different from adding an Internet Gateway route for your public subnet.

[Image: When configuring the route rule you select Service Gateway as the target type, define the destination service and compartment, and select the SGW you created in the earlier step.]
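I did this through the console here, but the same setup should be possible with Terraform as well. Below is a rough, untested sketch based on the provider documentation; it assumes a provider version that supports the service gateway resource and the destination/destination_type route rule attributes:

data "oci_core_services" "AvailableServices" {}

resource "oci_core_service_gateway" "CreateServiceGateway" {
    compartment_id = "${var.compartment_ocid}"
    vcn_id         = "${var.vcn_id}"
    display_name   = "MyServiceGateway"

    services {
        # picking the first available service for brevity
        service_id = "${lookup(data.oci_core_services.AvailableServices.services[0], "id")}"
    }
}

resource "oci_core_route_table" "PrivateRouteTable" {
    compartment_id = "${var.compartment_ocid}"
    vcn_id         = "${var.vcn_id}"

    route_rules {
        # route the service CIDR label through the service gateway
        destination       = "${lookup(data.oci_core_services.AvailableServices.services[0], "cidr_block")}"
        destination_type  = "SERVICE_CIDR_BLOCK"
        network_entity_id = "${oci_core_service_gateway.CreateServiceGateway.id}"
    }
}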

So now that we have the routing in place, we are ready to test!

I'm logged into my OCI instance in the private subnet (through a jump server in between). I will use oci-cli to list the buckets in the Suomenlinna compartment.


[opc@instance-20180627-1115 ~]$ oci os bucket list --compartment-id ocid1.compartment.oc1..aaaaaaaafidv5tggg5lxxeb4tf35nkjyl5z4ehdauiiwgk4zhetoq2uehl7a
ServiceError:
{
"code": "NamespaceNotFound",
"message": "You do not have authorization to perform this request, or the requested resource could not be found.",
"opc-request-id": "07178F60467D445ABCB891E721B44A20",
"status": 404
}

What! Something is missing?

Remember always to configure the security lists too, as by default everything is denied. Accessing Object Storage is no different.

I will just make the necessary change to my security list, as shown below.

[Image: The updated security list]

Time for a new try!


[opc@instance-20180627-1115 ~]$ oci os bucket list --compartment-id ocid1.compartment.oc1..aaaaaaaafidv5tggg5lxxeb4tf35nkjyl5z4ehdauiiwgk4zhetoq2uehl7a
{
"data": [
{
"compartment-id": "ocid1.compartment.oc1..aaaaaaaafidv5tggg5lxxeb4tf35nkjyl5z4ehdauiiwgk4zhetoq2uehl7a",
"created-by": "ocid1.saml2idp.oc1..aaaaaaaab6mng7jcan6vncjxehd6mkhlobzm4redvlthq2l4nhmqrow7hnza/fivan.bscoperations@uponor.com",
"defined-tags": null,
"etag": "62809e66-083e-453e-bc78-916a54dc84a1",
"freeform-tags": null,
"name": "test-bucket",
"namespace": "simo",
"time-created": "2018-06-25T07:18:23.919000+00:00"
}
]
}

Working! Now I can see my test-bucket in my namespace!

Summary

The Service Gateway is a really good addition to the basic functionality you need with OCI. I think for a lot of people the necessity to use the public internet for database backups could have been an issue.

The next service I'm waiting for is a NAT Gateway service, so you wouldn't need to create your own NAT instances in the public subnet like you have to do today.

Create multiple instances with one Terraform module in Oracle Cloud Infrastructure

Since I've become a big fan of modules, a case where you might need them is creating multiple instances at the same time in your Terraform project.

One option is to call the module from the project's main.tf file multiple times, like this:

module "CreatePublicInstance1" {
    source = "../instance"
}

module "CreatePublicInstance2" {
    source = "../instance"
}

module "CreatePublicInstance3" {
    source = "../instance"
}

But there is another option, where you use the count variable in a resource. You can easily scale the number of resources by providing the count parameter to the module, and the resource will be looped through as many times as you have defined.

Note that at this time there are some limitations with count: its value cannot come from another module; you need to define it as a static variable.

Terraform project for creating instances

Again I have three files in my Terraform project named “Create_three_instances”. The files are:

  • variables.tf
  • main.tf
  • outputs.tf

Let's take a look at them.

In variables.tf I have defined the necessary variables for this project. Since I'm creating instances, I have variables pointing to existing resources related to the compartment, network and instance image/shape. These are required because in main.tf I need to look up existing OCIDs for subnets, ADs etc. I have also defined a variable server_count, which is the number of servers I want to create in this example.

In main.tf I fetch the existing data using those variables.

[Image: Data source definitions in main.tf]

As you can see, I first get the compartment information by passing the tenancy_ocid and filtering the result by compartment name. After that I use the compartment OCID to get the Availability Domains and VCNs.

With the compartment and VCN OCIDs I get the subnet information, limiting the result by the subnet name. Lookups are very easy to use to get data about already existing resources, in this case created by my network project earlier!
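The original screenshot is missing here, but the lookups follow the same pattern as in my other projects, roughly like this (the variable names are my own):

data "oci_identity_compartments" "GetCompartments" {
    compartment_id = "${var.tenancy_ocid}"

    filter {
        name   = "name"
        values = ["${var.compartment_display_name}"]
    }
}

data "oci_identity_availability_domains" "GetADs" {
    compartment_id = "${lookup(data.oci_identity_compartments.GetCompartments.compartments[0], "id")}"
}

data "oci_core_vcns" "GetVcns" {
    compartment_id = "${lookup(data.oci_identity_compartments.GetCompartments.compartments[0], "id")}"
}

data "oci_core_subnets" "GetPublicSubnet" {
    compartment_id = "${lookup(data.oci_identity_compartments.GetCompartments.compartments[0], "id")}"
    vcn_id         = "${lookup(data.oci_core_vcns.GetVcns.virtual_networks[0], "id")}"
    display_name   = "${var.subnet_display_name}"
}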

Next I call the actual module which creates the instances.

[Image: Calling the instance module from main.tf]

When calling this module I pass the server_count variable, some data through lookups, and some static variables defined in variables.tf.

One additional thing to note is instance_create_vnic_details_private_ip. As you can see, I actually pass the cidr_block of the subnet. Why? You will see when the actual module is used.
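The screenshot of the module call is also missing, but based on the description it goes along these lines (the argument names are my reconstruction):

module "CreatePublicInstance" {
    source = "../instance"

    server_count                 = "${var.server_count}"
    compartment_ocid             = "${lookup(data.oci_identity_compartments.GetCompartments.compartments[0], "id")}"
    instance_availability_domain = "${lookup(data.oci_core_subnets.GetPublicSubnet.subnets[0], "availability_domain")}"
    subnet_id                    = "${lookup(data.oci_core_subnets.GetPublicSubnet.subnets[0], "id")}"

    # note: the cidr_block of the subnet, not a ready IP address
    instance_create_vnic_details_private_ip = "${lookup(data.oci_core_subnets.GetPublicSubnet.subnets[0], "cidr_block")}"

    image_id       = "${var.image_id}"
    instance_shape = "${var.instance_shape}"
}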

Finally, in outputs.tf I just print out the instance names and the public and private IPs.

[Image: outputs.tf of the project]

Using count when creating the resource

Now, looking at the actual resource "oci_core_instance" "CreateInstance", you can see the usage of count.

[Image: The CreateInstance resource using count]

I pass the server_count variable to count at the start of the resource, and this determines how many times the resource will be looped through.

Since you may want, or have to, separate the naming of some resources, I've used ${count.index} in some places to get distinct names without maintaining a list of names in the project's variables.tf. In my opinion a list would be hard to keep updated, and by using the index numbering you still get different names.

If you take a look at the private_ip variable you can see the usage of the subnet CIDR block I mentioned earlier. I use the built-in function cidrhost to give a specific private IP address to each host. You could let OCI allocate them automatically, but in some cases you want to define them.

The definition of cidrhost is as follows:

cidrhost(iprange, hostnum) - Takes an IP address range in CIDR notation and creates an IP address with the given host number. If the given host number is negative, the count starts from the end of the range. For example, cidrhost("10.0.0.0/8", 2) returns 10.0.0.2 and cidrhost("10.0.0.0/8", -2) returns 10.255.255.254.

The reason I add +3 at the end is that Oracle reserves the first two IP addresses at the start of every subnet, so the first instance receives the .3 address and the numbering grows from there.
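Putting count, count.index and cidrhost together, the resource inside the module ends up looking roughly like this (my reconstruction, since the original screenshot is gone):

resource "oci_core_instance" "CreateInstance" {
    count               = "${var.server_count}"
    availability_domain = "${var.instance_availability_domain}"
    compartment_id      = "${var.compartment_ocid}"
    image               = "${var.image_id}"
    shape               = "${var.instance_shape}"

    # count.index gives each instance a distinct name: publicinstance-0, -1, -2
    display_name = "publicinstance-${count.index}"

    create_vnic_details {
        subnet_id = "${var.subnet_id}"

        # the first two addresses of the subnet are reserved, so start from .3
        private_ip = "${cidrhost(var.instance_create_vnic_details_private_ip, count.index + 3)}"
    }
}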

I also have an outputs.tf in the module. There I need to use the splat syntax (*) to send the list out.

output "instancePublicIP" {value = ["${oci_core_instance.CreateInstance.*.public_ip}"]}

Creating the resources

Now, when I'm ready to create the resources, I run the usual terraform init, plan and apply. If you want to know more about those, check my earlier blog post and video here.

When I run terraform plan it shows that it will create three new resources:

Plan: 3 to add, 0 to change, 0 to destroy.

That matches the server_count variable. Once I've executed apply, the outputs for the instances are:

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

instanceName = [
publicinstance-0,
publicinstance-1,
publicinstance-2
]
instancePrivateIP = [
10.0.2.3,
10.0.2.4,
10.0.2.5
]
instancePublicIP = [
130.61.XX.81,
130.61.XX.238,
130.61.XX.133
]

Video

Here is a video showing exactly the same thing as above. You can see the code as well as the Terraform project being run.

Summary

When you use Terraform modules the main goal is to make easily repeatable code for your infrastructure. As Terraform isn't always flexible for every case, using count is one way to scale your resources up based on need.

A lot of the valuable information about Terraform I've picked up and used here comes from Yevgeniy Brikman's book Terraform: Up and Running: Writing Infrastructure as Code. Definitely buy it if you are interested in learning more!

Create DB system to Oracle Cloud Infrastructure with Terraform

One of the most important aspects of Oracle Cloud Infrastructure is that you can easily utilize Oracle databases as part of your infrastructure. There are different options for provisioning the databases, but as Terraform is the supported orchestration tool for OCI, you should investigate whether using it is feasible for your project.

This way all the databases created in your tenancy are provisioned through Terraform in the same way, especially if you utilize Terraform modules, which are sort of pre-built parts written for standard operations. When different "projects" utilize those modules they always call them the same way, and there is no need to write the code again each time.

In this post I'll share a simple way of creating the module and using it to create a database.

Basics of DB systems in OCI

In OCI you have different options for your DB system. It can run on a VM, BM or Exadata instance. For each there is a variety of shapes available in terms of CPU, memory, available storage etc. (and different Exadata shapes, obviously).

For VMs you can create 2-node RAC configuration.

Database editions which are available:

  • Standard Edition
  • Enterprise Edition
  • Enterprise Edition – High Performance
  • Enterprise Edition – Extreme Performance (required for 2-node RAC DB systems)

And currently supported DB versions:

  • Oracle Database 18c Release 1 (18.1)
  • Oracle Database 12c Release 2 (12.2)
  • Oracle Database 12c Release 1 (12.1)
  • Oracle Database 11g Release 2 (11.2)

For licensing you have the option to either bring your own license (BYOL) or have the license included. This is reflected in the cost of your system.

One important thing to remember is that the database is deployed into a public subnet. You don't need to define an Internet Gateway, but the database system requires a public subnet. I would imagine some people don't like this idea much, even though you can completely limit access from outside.

The whole DB service documentation can be found here.

Terraform setup

I've placed the necessary Terraform files inside two folders. The module folder is named "database" and inside it I have two files:

  • main.tf – contains the code which calls the resource oci_database_db_system to create the DB system
  • outputs.tf – sends out the variables I want

The project folder is named "create_db_system" and it has three files inside.

  • main.tf – gets the necessary data from different sources and calls the database module, passing the variables specific to this case
  • variables.tf – all the custom variables are defined in this file
  • outputs.tf – outputs which I want to print after successfully running Terraform

Terraform module

Setting up the database module is fairly straightforward. A good example can be found in the OCI Terraform documentation here.

Note that the required parameter of the resource oci_database_db_system, data_storage_size_in_gb, is named elsewhere as data_storage_size_in_gbs, but it seems the correct way of calling it is without the s.

I've built the module so that I don't do any data gathering inside it, but rather pass only "ready" variables into it.

My resource looks like this:

 

resource "oci_database_db_system" "CreateDBSystem" {

    #Required

    availability_domain = "${var.db_system_availability_domain}"

    compartment_id = "${var.compartment_id}"

    cpu_core_count = "${var.db_system_cpu_core_count}"

    database_edition = "${var.db_system_database_edition}"

    db_home {

        #Required

        database {

            #Required

            admin_password = "${var.db_system_db_home_database_admin_password}"

            db_name = "${var.db_system_db_home_database_db_name}"

            character_set = "${var.db_system_db_home_database_character_set}"

            db_backup_config {

            ncharacter_set = "${var.db_system_db_home_database_ncharacter_set}"

            pdb_name = "${var.db_system_db_home_database_pdb_name}"

        }

        db_version = "${var.db_system_db_home_db_version}"

        #Optional

        display_name = "${var.db_system_db_home_display_name}"

    }

               hostname="${var.db_system_hostname}"

    shape = "${var.db_system_shape}"

    ssh_public_keys = ["${var.ssh_public_key}"]

    subnet_id = "${var.db_subnet_id}"

    data_storage_percentage = "${var.db_system_data_storage_percentage}"

    data_storage_size_in_gb = "${var.db_system_data_storage_size_in_gbs}" #For VMs only

    license_model = "${var.db_system_license_model}"

    node_count = "${var.db_system_node_count}"

}

Terraform project

For the project itself I've defined all the necessary variables inside variables.tf; these variables are either passed to the database module or used to get the specific OCIDs I require further on.

You define a variable with a default value like this:

variable "compartment_display_name" {default = "PrivDBCompartment"}

In main.tf I fetch data using the variables, like here to get the compartment data:

 

data "oci_identity_compartments" "GetCompartments" {

#Required - tenancy OCID

  compartment_id = "${var.tenancy_ocid}"

filter{

name="name"

values= ["${var.compartment_display_name}"]

}

}

This one is a bit misleading, as I pass the tenancy OCID as compartment_id to get a filtered list of available compartments.

The compartment is then used to get the specific DB subnet I want to use for the DB system:

 

data "oci_core_subnets" "GetDBSubnet" {

  #Required

  compartment_id = "${lookup(data.oci_identity_compartments.GetCompartments.compartments[0],"id")}"

vcn_id="${lookup(data.oci_core_vcns.GetVcns.virtual_networks[0],"id")}"

display_name="${var.db_subnet_name}"

}
To understand which variables you need to pass to get the list of subnets, see the documentation example. The same applies to all other list requests: check the documentation!

 

To use the data you then use lookups to get the specific values you want. I didn't understand at the start how this works, but all available attributes can be seen in the documentation, and then you can use them as above to get the required values for availability domains etc.

Another good example of how to use lookup is defining the required db_system_availability_domain variable from the GetDBSubnet data source above:

 


db_system_availability_domain = "${lookup(data.oci_core_subnets.GetDBSubnet.subnets[0], "availability_domain")}"

 

I defined GetDBSubnet earlier, and now I just fetch the availability_domain the subnet belongs to. This way you can get all the attributes related to a specific resource.

From variables.tf, a few more settings to point out for this exercise:

variable "db_system_database_edition" {default = "ENTERPRISE_EDITION"}
variable "db_system_db_home_db_version" {default = "12.1.0.2"}
variable "db_system_shape" {default = "VM.Standard2.1"}
variable "db_system_license_model" {default = "LICENSE_INCLUDED"}

My database will be a VM.Standard2.1 shape with a 12.1.0.2 Enterprise Edition database and the license included option.

Creating the Database system

Now I'm ready to create the DB system. As usual, I first run terraform init to initialize the modules and the OCI provider.

After that I run terraform plan to see what changes will be applied; this time there is only one change to add, as I created the compartment and network earlier using a different Terraform project.

Plan: 1 to add, 0 to change, 0 to destroy.

After that I run terraform apply to make the actual changes. Creating the database took around 70 minutes in total with my configuration.

module.cr_private_db.oci_database_db_system.CreateDBSystem: Creation complete after 1h12m42s (ID: ocid1.dbsystem.oc1.eu-frankfurt-1.abthe…34qsdaozzgyb254hnr7w6nlkpaz7tshpfdy2oa)

Since I've set my ssh keys, I can log in to the DB system with the opc user by providing my private key. If I look at the pmon processes:

[oracle@hostsimo dbhome_1]$ ps -ef|grep pmon|grep oracle
oracle 65038 47622 0 08:38 pts/0 00:00:00 grep pmon
oracle 76130 1 0 08:13 ? 00:00:00 ora_pmon_SIMO1

Besides the grid ASM instance, my database is now also running.

 

[Image: The created DB system in the OCI console]

One thing to remember: if you want to allow connections to the listener port, modify your security lists to allow the traffic! This should be part of your network setup.
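As a sketch of that network setup, an ingress rule for the listener could look like this in Terraform (the source CIDR is just an example):

resource "oci_core_security_list" "DBSecurityList" {
    compartment_id = "${var.compartment_ocid}"
    vcn_id         = "${var.vcn_id}"
    display_name   = "db-security-list"

    ingress_security_rules {
        # allow SQL*Net traffic to the listener from the application subnet
        protocol = "6"
        source   = "10.0.1.0/24"

        tcp_options {
            min = 1521
            max = 1521
        }
    }
}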

After the test I run terraform destroy to remove the database, and that completes in a bit over 7 minutes.

module.cr_private_db.oci_database_db_system.CreateDBSystem: Destruction complete after 7m19s

 

Summary

You can easily automate your database creation through Terraform modules. The initial module setup takes some time while you figure out the parameters you need, but once that is done all your databases get created in a standardized way!

This was a simple example; real-world modules will obviously be more complex and require more work.