Written by Antxon González, Support Engineer @ Abiquo

Introduction

Imagine that you need to develop a cloud-based application and that you will use Abiquo for its development, testing, and production. During the development and testing stages, you will have to configure, create, deploy, and tune the application resources many times until they reach the production stage. Once in production, you may have to update parameters and redeploy from time to time to deliver fixes or improvements. Doing this manually is painful and error-prone, but you can avoid the hassle by letting Terraform take care of these repetitive tasks so that you can concentrate on the application itself. In this article, we will show how you can use Terraform and Abiquo to achieve this.

Defining the example application

The sample application is a typical cloud application: a single VM image that will be deployed and scaled out into as many VMs as needed to meet demand. These VMs will provide their service through a load balancer, and they will run in an Abiquo Virtual Datacenter, which may be backed by a public cloud provider such as AWS or by a VMware-based private datacenter. So, for this example, the cloud resources involved are:

  1. An existing virtual datacenter, hardware profile, and VM template for the application
  2. A load balancer as the application public IP
  3. A virtual appliance for the application VMs
  4. A VM as the application master template VM
  5. A scaling group that will scale the application master VM on demand

Now let’s look at these resources one by one.
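
But first, a note on setup: the configuration also needs an abiquo provider block so that Terraform knows how to reach the Abiquo API. The block below is only a sketch; the argument names (endpoint, username, password) and values are assumptions, so check the Terraform Abiquo Provider documentation for the exact ones:

# Provider configuration sketch (argument names are an assumption; values are placeholders).
provider "abiquo" {
  endpoint = "https://abiquo.example.com/api"
  username = "terraform"
  password = "changeme"
}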

1. Existing resources

For the sake of simplicity, let’s assume that the virtual datacenter already exists, as well as the VM template and the hardware profile, and that they are based on an Azure public cloud region. The VDC name is devel, and the hardware profile and VM template names are Standard_A1 and myTemplate respectively. The corresponding resources would be these:

data "abiquo_vdc" "demo" { name = "devel" }
data "abiquo_hp" "demo" {
location = "${data.abiquo_vdc.demo.location}"
name = "Standard_A1"
}
data "abiquo_template" "demo" {
templates = "${data.abiquo_vdc.demo.templates}"
name = "myTemplate"
}

abiquo_vdc, abiquo_hp and abiquo_template are data sources, which represent read-only views of already existing resources. The information provided in the data sources allows Terraform to find these resources and use them in the definitions of other resources. Abiquo hardware profiles and VM templates change depending on the Virtual Datacenter being used, so they depend on the abiquo_vdc data source through references to its attributes.
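
As a quick sanity check (not part of the application itself), we can also expose the identifiers resolved by these data sources as Terraform outputs, so a plan or apply confirms that the existing resources were found:

# Optional outputs to verify that the data sources resolve correctly.
output "vdc_id"      { value = "${data.abiquo_vdc.demo.id}" }
output "hp_id"       { value = "${data.abiquo_hp.demo.id}" }
output "template_id" { value = "${data.abiquo_template.demo.id}" }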

2. Load balancer

Load balancers are powerful building blocks for cloud applications. Their configuration and capabilities are beyond the scope of this article, but Abiquo provides a common API to create them independently of the underlying technology. The Terraform provider lets you create load balancers in Abiquo with the abiquo_lb resource:

resource "abiquo_lb" "demo" {
virtualdatacenter = "${data.abiquo_vdc.demo.id}"
name = "demo"
internal = false
algorithm = "Default"
routingrules = [
{ protocolin = "TCP" , protocolout = "TCP" , portin = 80 , portout = 80 }
]
}

This load balancer will be created in the abiquo_vdc defined previously. It will be a public-facing load balancer that uses the default load-balancing algorithm of the underlying technology to dispatch incoming connections on port 80, and the VMs consuming these connections will provide their service on the same port. Abiquo will choose the public IP address automatically and report it to Terraform when the load balancer is created.
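
If the application also had to serve HTTPS, for example, the same rule structure could be reused, assuming the provider accepts more than one entry in the routingrules list; a sketch with illustrative port numbers:

# Hypothetical extension of the routing rules shown above.
routingrules = [
  { protocolin = "TCP", protocolout = "TCP", portin = 80,  portout = 80  },
  { protocolin = "TCP", protocolout = "TCP", portin = 443, portout = 443 }
]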

3. Virtual appliance

In Abiquo, all VMs are defined inside virtual appliances, which represent applications running in Abiquo Virtual Datacenters. The virtual appliance must be defined in the same abiquo_vdc as the abiquo_lb:

resource "abiquo_vapp" "demo" {
virtualdatacenter = "${data.abiquo_vdc.demo.id}"
name = "demo"
}

4. Application master VM

The application master VM is the heart of the cloud application. It will use the load balancer to provide service to the application users. The application is predefined by the VM template, which should already be configured to work as expected after deployment:

resource "abiquo_vm" "demo" {
deploy = true
label = "demo"
virtualappliance = "${abiquo_vapp.demo.id}"
hardwareprofile = "${data.abiquo_hp.demo.id}"
virtualmachinetemplate = "${data.abiquo_template.demo.id}"
lbs = [ "${abiquo_lb.demo.id}" ]
}

The VM will use the abiquo_hp and abiquo_template data sources and the abiquo_vapp and abiquo_lb resources that we declared previously. We also instruct Terraform to deploy the VM once it is created. Setting the deploy attribute to false instead would let us validate the Terraform configuration without waiting for a deployment, which saves time while tuning and testing the configuration during development.
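
One convenient way to switch between the two behaviours is to drive the deploy flag from a variable; a sketch (the variable name is ours):

# Hypothetical switch: set to true once the Terraform configuration itself has been validated.
variable "deploy_vm" { default = false }

# ... and in the abiquo_vm resource above, instead of deploy = true:
#   deploy = "${var.deploy_vm}"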

5. The scaling group

The scaling group is the most complex part from the cloud point of view. In Abiquo, an on-demand scaling group requires these resources:

  1. A scaling group based on a master VM.
  2. Alarms triggering the scaling alerts depending on the master VM metrics.
  3. Alerts triggering the scale-in and scale-out actions on the scaling group.
  4. The scale-in and scale-out actions themselves.

Scaling groups have a minimum and a maximum number of instances, a cooldown period between scaling actions, and the number of VMs to add or remove in each scaling action; they must also define the master VM and the virtual appliance they will run in. Our example scaling group is this:

resource "abiquo_sg" "demo" {
mastervirtualmachine = "${abiquo_vm.demo.id}"
virtualappliance = "${abiquo_vapp.demo.id}"
name = "demo"
cooldown = 60
min = 2
max = 8
scale_in = [ { numberofinstances = 1 } ]
scale_out = [ { numberofinstances = 1 } ]
}

The virtual appliance and master VM of the scaling group are the previously defined abiquo_vapp and abiquo_vm respectively. The scaling group will have at least 2 running VMs and at most 8, and it will scale one VM at a time, with a cooldown of 60 seconds between scaling actions. Alarms in Abiquo are conditions on resource metrics. As the load will be spread fairly evenly across the VMs thanks to the load balancer, we only need two alarms on the master VM load to trigger the scale-in and scale-out actions. Their definitions will mirror each other:

resource "abiquo_alarm" "decrease" {
target = "${abiquo_vm.demo.id}"
name = "demo decrease"
metric = "vcpu_load"
timerange = 2
statistic = "maximum"
formula = "lessthan"
threshold = 80
}
resource "abiquo_alarm" "increase" {
target = "${abiquo_vm.demo.id}"
name = "demo increase"
metric = "vcpu_load"
timerange = 2
statistic = "minimum"
formula = "greaterthan"
threshold = 90
}

These alarms trigger when the maximum vCPU load falls below 80% and when the minimum vCPU load rises above 90%, respectively. The load range between the two thresholds is the scaling group's stability band, in which its size will not change. The available metrics will depend on the underlying technology, but the rest of the configuration would remain the same. Alerts are conditions based on alarms that may trigger complex action plans in the cloud platform. We will need two alerts based on the previous alarms:

resource "abiquo_alert" "increase" {
virtualappliance = "${abiquo_vapp.demo.id}"
name = "demo increase"
alarms = [ "${abiquo_alarm.increase.id}" ]
subscribers = [ "developers@demo.com" ]
}
resource "abiquo_alert" "decrease" {
virtualappliance = "${abiquo_vapp.demo.id}"
name = "demo decrease"
alarms = [ "${abiquo_alarm.decrease.id}" ]
subscribers = [ "developers@demo.com" ]
}

We also indicate that the developers team will receive an email each time these conditions are met. The last pieces of the scaling group are the scale-in and scale-out action plans themselves:

resource "abiquo_plan" "increase" {
virtualmachine = "${abiquo_sg.demo.mastervirtualmachine}"
name = "increase"
entries = [ { type = "SCALE_OUT" } ]
triggers = [ "${abiquo_alert.increase.id}" ]
}
resource "abiquo_plan" "decrease" {
virtualmachine = "${abiquo_sg.demo.mastervirtualmachine}"
name = "decrease"
entries = [ { type = "SCALE_IN" } ]
triggers = [ "${abiquo_alert.decrease.id}" ]
}

The action plans are defined on top of the scaling group and the scale-in and scale-out alerts. To ensure that the action plans are created only once the scaling group containing the master VM exists, we point the action plans' virtualmachine attribute at the scaling group's mastervirtualmachine attribute. This dependency prevents Terraform from creating the plans until the scaling group has been created.
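
The same effect could also be achieved with Terraform's generic depends_on argument, which declares the dependency explicitly instead of deriving it from an attribute reference; a sketch for the scale-out plan:

resource "abiquo_plan" "increase" {
  virtualmachine = "${abiquo_vm.demo.id}"
  name           = "increase"
  entries        = [ { type = "SCALE_OUT" } ]
  triggers       = [ "${abiquo_alert.increase.id}" ]

  # Explicit dependency: do not create the plan before the scaling group exists.
  depends_on = ["abiquo_sg.demo"]
}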


The result

Once everything is in place, we can deploy the application by running terraform init and terraform apply. As a bonus, we can generate a graph of the dependencies between all the resources in the configuration with the terraform graph command.

Should we need to change any parameter during development, we could update the configuration and Terraform would take the dependencies into account when applying the change. For example, if the master VM template changed, Terraform would delete the VM and any resources depending on it, deploy it again from the new template, and automatically recreate the alarms, alerts, action plans, and scaling group that depend on the VM.

Next steps

This is only an example of the flexibility and power that customers can achieve by using Abiquo and Terraform together. Looking at it carefully, there are three main input parameters and one main output in this example: the VM template, the application VDC, and the hardware profile as inputs, and the load balancer public address as the output. All of this could be encapsulated inside a Terraform module that could be instantiated with different parameters and reused on demand (a sketch follows this paragraph), and the module could be extended by adding or combining firewall resources or other modules. This would let us deploy the same resources with different input parameters, and it could become a building block for larger applications. The same reasoning applies to the enterprise management we showed in the previous article of this series: customer onboarding and management workflows could be improved by defining customer templates containing the users, roles, pricing, and scope parameters inside an onboarding module. We will explore these possibilities in a future article, so keep an eye on our newsletter for more information!
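
As a sketch of the application module idea (the directory layout and names below are ours, not part of the configuration above), the resources described in this article could live in a module that is instantiated once per application:

# modules/scaled-app/variables.tf -- the module contains the resources from
# this article, parameterised by these hypothetical variables.
variable "vdc"             {}
variable "hardwareprofile" {}
variable "template"        {}

# Root configuration: instantiate the module with concrete values.
module "demo_app" {
  source          = "./modules/scaled-app"
  vdc             = "devel"
  hardwareprofile = "Standard_A1"
  template        = "myTemplate"
}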

Final note

For more information on the topics in the article, check the links below:

Finally, the sample Terraform configuration is based on the Terraform Abiquo Provider for the soon-to-be-released Abiquo 4.4. The same configuration will work with minor changes to the abiquo_alarm resources.

About Terraform

HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.