Using Terraform with Abiquo

By Antxon Gonzalez, Support Engineer @ Abiquo

What is Terraform?

Terraform is Infrastructure as Code software that lets you manage and provision resources across different providers, using a declarative language to describe the desired result. Terraform evaluates the current state of the resources in each provider and programmatically performs the actions required to reach the described state, allowing you to use version control and automation to manage the infrastructure you have in those providers.

In this article, we introduce the Abiquo Terraform Provider to control your Abiquo Cloud resources.

Terraform configuration files

Terraform configuration files can use two different formats: the native format and JSON. We will focus on the first, which is also the preferred format, as it is human friendly and easier to understand. Files in the native format use the .tf extension and are called Terraform configurations.

First, we need to configure the Abiquo provider with a provider block. Provider blocks define the parameters needed to work with the resources in a provider, such as API endpoints or credentials. In our case:

provider "abiquo" {
    endpoint = "https://my.abiquo.endpoint:443/api"
    username = "user"
    password = "pass"
    insecure = true
}

You will need to change the endpoint, username and password fields according to your environment. The insecure parameter specifies whether the endpoint certificate must be validated. In our case, as we are using a self-signed certificate, we set it to true.
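For comparison, the same provider configuration written in Terraform's JSON format (in a file with the .tf.json extension) would look like this, with the same placeholder values:

```json
{
  "provider": {
    "abiquo": {
      "endpoint": "https://my.abiquo.endpoint:443/api",
      "username": "user",
      "password": "pass",
      "insecure": true
    }
  }
}
```

The JSON form is mainly useful when configurations are generated by other tools; for hand-written files the native format remains easier to read.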

Once the provider is configured, we need to describe the resources we will manage with Terraform and their desired states. These can be any Abiquo API resources, such as enterprises, users, firewalls, VMs, scaling groups, and so on.

We will focus on the enterprise and limit resources to give you a general feel for the process. Let's suppose we are cloud administrators and we have to create a new enterprise, allow it to use two datacenters, and limit the amount of CPU and RAM it may use globally and in each datacenter. The enterprise resource block would look like this:

resource "abiquo_enterprise" "NewEnterprise" {
    name = "NewCustomer"
    cpusoft = 300
    cpuhard = 500
    ramsoft = 15000
    ramhard = 20000
}

The first string after the resource keyword identifies the provider and the resource type, separated by an underscore. The second string is the Terraform resource identifier. If it changes, Terraform will mark the resource under the old identifier for destruction and create another resource for the new identifier. Inside the block, we define its attributes; in this case, its name and some global resource limits.

Abiquo requires enterprises to have a limit resource associated with a location in order to allow the enterprise to use it. So, we will define those resources now:

resource "abiquo_limit" "NewEnterpriseLimit1" {
  enterprise = "${abiquo_enterprise.NewEnterprise.id}"
  location   =  "https://my.abiquo.endpoint:443/api/admin/datacenters/1"
 
  cpusoft  = 200
  cpuhard  = 300
  ramsoft  = 13000
  ramhard  = 14000
  vlansoft = 32
  vlanhard = 64
}
 
resource "abiquo_limit" "NewEnterpriseLimit2" {
  enterprise = "${abiquo_enterprise.NewEnterprise.id}"
  location   =  "https://my.abiquo.endpoint:443/api/admin/datacenters/2"
 
  cpusoft = 150
  cpuhard = 250
  ramsoft = 11000
  ramhard = 12000
}

The limit resources look like the enterprise resource. Their type is abiquo_limit, and their identifiers are NewEnterpriseLimit1 and NewEnterpriseLimit2. Among their attributes, two are worth looking at: enterprise and location.

The location attribute points to an Abiquo API resource directly by its URL. The enterprise attribute uses the Terraform interpolation syntax, which means its value will be the id of the NewEnterprise enterprise resource, creating a dependency on it. In the Terraform Abiquo provider, these IDs are the resource URLs. This is how we express relationships between Abiquo resources in Terraform.

Terraform will take care of all these dependencies automatically, and ensure the NewEnterprise resource exists before it creates the limit resources.

Using Terraform

Usually, the configuration files are kept in self-contained folders, which also hold some extra files that track the provider and resource states. This allows Terraform to detect any change to the defined resources, and to perform the operations required to make the resources match their definitions. So, the first step is to put all the code inside a .tf file in an empty folder and initialize Terraform:

terraform init

Once initialized, we can check the current plan. This shows which resources will be created, modified or destroyed to make the real resources match the configuration files:

terraform plan

To apply the changes, we execute the command below. Terraform will show us the plan again and ask for explicit confirmation before performing the changes:

terraform apply

Finally, we can destroy the resources to get rid of them in the provider. Again, Terraform will require explicit confirmation before performing any changes:

terraform destroy

Should we need to change the customer's global CPU limits, or the VLAN limits in the first datacenter, we just have to edit the relevant resources and run apply again. If the customer decides to stop using the second datacenter, we just need to delete the second limit resource and apply the changes again. Terraform will verify the current state of the resources, compare it to the desired state, and perform the required changes by itself, freeing us from the burden of doing it manually through the UI and letting us keep the infrastructure configuration as code.
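For example, raising the VLAN limits in the first datacenter is just an edit to the existing limit block; the new values below are purely illustrative:

```hcl
resource "abiquo_limit" "NewEnterpriseLimit1" {
  enterprise = "${abiquo_enterprise.NewEnterprise.id}"
  location   = "https://my.abiquo.endpoint:443/api/admin/datacenters/1"

  cpusoft  = 200
  cpuhard  = 300
  ramsoft  = 13000
  ramhard  = 14000
  vlansoft = 64    # raised from 32
  vlanhard = 128   # raised from 64
}
```

Running terraform plan after an edit like this shows the limit as an in-place update, and terraform apply performs it after confirmation.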

Next steps

Abiquo is a hybrid cloud provider which manages resources in heterogeneous environments with the same homogeneous API. As such, it provides many different API resources which depend on each other to create arbitrarily complex applications and environments. Terraform allows you to define heterogeneous resources in a homogeneous language, and it will keep track of any changes in those definitions to make the infrastructure match them.

The Terraform Abiquo provider is a recent development done by Antxon Gonzalez from Abiquo’s Engineering Team that allows you to get the benefits from both at the same time. It can be used by the Cloud administrators to keep track of the resources an enterprise and its users can use, or by the end users to define the applications they need to deploy and the VMs they will use. Let us know if you are interested in it, and we will be glad to elaborate further!

About Abiquo

Abiquo delivers the industry’s leading cloud orchestration software for service provider clouds; allowing customers to quickly build and monetise cloud services, whilst managing hybrid, private or public cloud infrastructure from one intuitive portal – adding value through greater efficiency, visibility, simplicity and control.

Abiquo is privately held, and operates from headquarters in the UK with offices in Europe, and through its extensive global partner network. For more information, contact us.

By Marc Cirauqui, Support Engineer @ Abiquo

You might have heard of Docker, right? The Wikipedia entry for it states it is "a software technology providing containers". Containers have been around for some time, but nowadays, thanks to Docker, they have become easier to use, with a set of tools built around them that allow new ways to develop, package and deploy applications.

Docker is installed on top of an already running operating system. So, with packages available for multiple platforms, you can just follow the installation instructions for your platform and you are ready to go. But there is also a way to automate the installation.

One of the many tools built by Docker is Docker Machine, which automates the deployment and installation of Docker hosts on compatible platforms. There are two main use cases for this tool.

1. Use Docker in an unsupported platform

Let's say you use a Mac or Windows laptop and want to use Docker. Docker Machine allows you to create a VM on your laptop (using VirtualBox, for example) and set up that VM to run the Docker daemon with a single command.

2. Create remote Docker hosts

Docker Machine has drivers for multiple public cloud offerings, which allows you to create and provision Docker hosts in multiple clouds using a single tool. Each of those machines can then be managed using the Docker client and other standard tools installed on your laptop. This is the use case we will focus on today.

We have been working on a driver for Docker Machine that allows the tool to provision Docker hosts on Abiquo clouds. The driver is available on GitHub and is open source and free to use.

Installing the driver is very simple. Provided you already have Docker Machine installed on your system, you just need to grab the latest release of the driver here and copy it somewhere in your PATH. Once you do that, Docker Machine will pick up the driver automatically.

In order to create a Docker host in your Abiquo cloud, you will need to provide some details, though. Let's see an example.

$ docker-machine create -d abiquo \
      --abiquo-api-insecure \
      --abiquo-api-username myuser \
      --abiquo-api-password mypass \
      --abiquo-api-url https://my.abiquo.com/api \
      --abiquo-vdc 'MyVDC' \
      --abiquo-template-name ubuntu1704 \
      --abiquo-public-ip \
      --abiquo-hwprofile 'medium' \
      --abiquo-ssh-key ~/.ssh/id_rsa \
      docker-test

So, we obviously have to tell Docker Machine to use the Abiquo driver (`-d abiquo`), then give the Abiquo API details (`abiquo-api-username` and `abiquo-api-password` for the credentials, `abiquo-api-url` for the API endpoint, and in this case `abiquo-api-insecure`, since this environment was using a self-signed certificate).

The next options specify the virtual datacenter to use (`abiquo-vdc`), the template used to create the VM (`abiquo-template-name`) and the hardware profile to use (`abiquo-hwprofile`). In this example we are also instructing Docker Machine to allocate a public IP to the VM so we can reach it, and the SSH key file to use. The reference of available options can be found in the GitHub repo readme. Finally, we provide the name by which Docker Machine will reference the Docker host.

Docker Machine will then take care of deploying the VM, connect to it through SSH, and perform all the necessary steps so the Docker daemon runs on the VM. There is also a command that will set up the client on our workstation to consume the Docker API from that daemon:

$ eval $(docker-machine env docker-test)

From this point, any `docker` commands we run will be run on the created machine.


By Marc Cirauqui, Support Engineer @ Abiquo

One of the questions we most frequently get from customers is about managing the templates catalogue. In Abiquo, every enterprise has its own appliance library where templates are stored, and templates can be added to the library in many different ways.

Templates consist of a default VM configuration (such as CPU, RAM or NIC drivers) and a set of disks. Although templates in Abiquo can have multiple disks, basic templates consist of only the boot disk for the guest OS, leaving further storage configuration to the user at deploy time.

Packer is a tool from HashiCorp that allows you to create such templates in a programmatic way. The basic idea is to provide the ISO image to install the OS along with a kickstart or preseed file (although you can also just type in commands using Packer's VNC capabilities) to get the OS installed, then run some scripts or provisioning steps, and finally process the resulting artifacts to get some valuable output. There are multiple builders that let you build templates on different systems, multiple kinds of provisioners and post-processors, and plenty of documentation on how to write your own custom plugins for Packer.

That last part is the one I want to talk about today.

As of now, you can find a Packer post-processor that takes the resulting artifact of a local build and uploads it to Abiquo as a template. You can find the code on GitHub, as usual. The readme file in the repo will show you how to compile the plugin yourself or where to grab the latest binary release.

Now, assuming you have some Packer template, adding the Abiquo post-processor is pretty straightforward. In the `post-processors` block of the template, you will need to add an entry like the following:

{
  "variables": {
    "abiquo_username": "{{env `ABIQUO_USERNAME`}}",
    "abiquo_password": "{{env `ABIQUO_PASSWORD`}}"
  },
  "builders": [
    {
      "type": "vmware-iso",
      "boot_command": [
        "root",
...
  ],
  "post-processors": [
...
    {
      "type": "abiquo",
      "api_url": "https://my.abiquo.com/api",
      "api_username": "{{user `abiquo_username`}}",
      "api_password": "{{user `abiquo_password`}}",
      "datacenter": "MyDC",
      "template_name": "{{user `vm_name`}}",
      "description": "{{user `info`}} {{timestamp}}",
      "category": "{{user `category`}}",
      "cpu": "{{user `cpu`}}",
      "ram_mb": "{{user `ram`}}",
      "login_user": "{{user `ssh_username`}}",
      "login_password": "{{user `ssh_password`}}",
      "eth_driver": "VIRTIO",
      "chef_enabled": "false",
      "icon_url": "{{user `icon`}}"
    }
  ]
}

As you can see in this example, there are two variables defined at the top to get the username and password for the Abiquo API from environment variables. In the post-processor configuration you can see entries to configure the Abiquo API endpoint and several parameters of the template (for a complete reference, check the README file in the GitHub repo).
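Since the template reads the credentials from the environment, you would export them before running the build; the values below are placeholders for illustration:

```shell
# Export the credentials that the template's {{env `...`}} lookups read.
# These values are placeholders, not real credentials.
export ABIQUO_USERNAME=myuser
export ABIQUO_PASSWORD=mypass

# Then run the build (assuming the template is saved as template.json
# and packer is on your PATH):
# packer build template.json
```

Keeping credentials out of the template file itself means the same template can be committed to version control safely.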

Now, when you run this build, the post-processor will connect to the specified API endpoint using the provided credentials and create a template in the apps library for the specified datacenter. That template will have one disk, which will be the VM disk built by Packer. Since Abiquo supports multiple disk formats and has a V2V conversion system, you can upload any supported disk format and Abiquo will run the necessary conversions so you can deploy your new image without issues.

Another template feature, introduced back in Abiquo 3.6, is the ability to replace the boot disk of a template. This allows you to do a kind of versioning on a template. Imagine you have a Windows 2012 R2 template: you run the Packer build and upload it to Abiquo using the name "Windows Server 2012 R2 base OS". At some point (probably next month) Microsoft will publish a new set of patches for Windows 2012 R2, so your template will need to be updated. With this feature, you keep the same template named "Windows Server 2012 R2 base OS" but replace the base disk with a new one that includes the new patches. From that point forward, VMs deployed from that template will already include these new patches.

The Packer post-processor for Abiquo also uses this feature: if it finds a template with the same name, it will replace its disk with the current build instead of creating a new template. That way, every time you run the Packer build, you get the template (or templates) updated in Abiquo.


By Marc Cirauqui, Support Engineer @ Abiquo

For those of you who don't know about Vagrant, it is an open-source software product for building and maintaining portable virtual software development environments. It helps you move towards a DevOps culture by making it possible for a developer to bring up environments that closely resemble those deployed in production.

Out of the box, Vagrant supports providers like VirtualBox, VMware (Fusion or Workstation) and Hyper-V. This means that Vagrant can automatically create the VMs your environment needs in these providers, then apply the configuration needed for your application to run. Well, as of now, you can add Abiquo as a provider for Vagrant, making it capable of deploying new VMs in Abiquo and applying the configuration to those VMs.

In order to work with Vagrant, you need a Vagrantfile where you define all the VMs your environment needs and, for every VM, the provisioning (configuration) steps required. Let's take a look at a very basic Vagrantfile.

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.synced_folder ".", "/vagrant", type: "rsync"

  config.vm.define "test" do |test|
    test.vm.provision "shell",
      inline: "echo Hello, World"
  end
end

If you just run vagrant up with this Vagrantfile, a VM will be created in VirtualBox (the default provider unless otherwise specified), and a single configuration step will be run: a simple Hello, World message will be displayed.

Now, getting the Abiquo plugin for Vagrant is quite straightforward. You need Vagrant installed; then launch a terminal and type:

vagrant plugin install abiquo_vagrant

This will download and install the necessary files so Vagrant can deploy VMs in Abiquo. Now we can modify our Vagrantfile a bit. Let's make it look like this:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.synced_folder ".", "/vagrant", type: "rsync"

  config.vm.define "test" do |test|
    test.vm.provision "shell",
      inline: "echo Hello, World"
  end

  config.vm.provider :abiquo do |provider, override|
    override.vm.box = 'abiquo'
    override.vm.box_url = "https://github.com/abiquo/vagrant_abiquo/raw/master/box/abiquo.box"
    override.vm.hostname = 'abiquo-test'

    provider.abiquo_connection_data = {
      abiquo_api_url: 'https://my.abiquo.cloud/api',
      abiquo_username: 'myself',
      abiquo_password: 'mypass'
    }
    provider.virtualdatacenter = 'Barcelona'
    provider.virtualappliance = 'Vagrant Tests'
    provider.template = 'Centos 7 x86_64'

    override.ssh.private_key_path = '~/.ssh/id_rsa'
  end
end

We have added a new block describing the configuration of the Abiquo provider in this environment. As you can see, we have to provide a placeholder ("fake") box, as we will be deploying one of the templates available in Abiquo instead of the regular boxes used with VirtualBox. Then we provide the connection info for the Abiquo API (its endpoint and credentials), and finally, the virtual datacenter and virtual appliance where we want to deploy the VM, and the template it shall use.

Great! Now it's as simple as running:

vagrant up --provider=abiquo

You will see Vagrant connect and check the availability of the template, the virtual datacenter, and so on. It then creates and deploys the VM, and finally connects to it and runs the provisioning step defined for the VM. Easy!

Now, this is a very basic example of how to get Vagrant to work with Abiquo, but there are a lot of examples of different application environments that can be deployed with Vagrant, which you can now modify so the application they describe runs in your Abiquo based cloud instead of on your development workstation.

For example, the guys from CoreOS have a Vagrantfile to deploy Kubernetes on top of CoreOS, which you can modify to launch the Kubernetes cluster over your Abiquo based cloud.

About Vagrant

Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the “works on my machine” excuse a relic of the past.

As well as by Abiquo, Vagrant is trusted by thousands of developers, operators, and designers every day.

You can find all the information about Vagrant on its website, vagrantup.com.


Break out of the silo

Since the inception of Docker, developers have been excited about the possibilities it offers for portable applications and scalability. Their businesses' IT departments, though, see what looks like another silo and another technology to manage.

Abiquo, the leaders in hybrid cloud management, added the ability to manage Docker hosts and enhance their usage with multi-NIC and multi-tenant networking and security early in 2015. Now, Abiquo is pleased to announce a new set of features for Docker integration in Abiquo 3.8 to enable companies to productize and commercialize container services to their workgroups.

IT infrastructure experts have been looking for enterprise-class monitoring, policy-based utilization controls and the same kind of anti-affinity configuration that can be used to great effect in hypervisors such as VMware ESX.

What’s coming in Abiquo 3.8

  • Monitor Docker host resources, container resources and application metrics
  • Create alarms based on monitoring information. Compare containers, send a mail based on conditions
  • Improved scheduler/allocator for Docker. Now you can define oversubscription strategies as well as use our anti-affinity functionality to prevent certain containers being deployed to the same Docker host
  • Updated Docker support up to version 1.9

This update follows Abiquo's strategy of enabling enterprises and service providers to deliver containers as a new technology capability in the data center, with the same enterprise capabilities that the product offers for virtualization technologies and public cloud. One interface, one API, your choice of physical infrastructure and many integration capabilities.

Follow us on Twitter, LinkedIn or at www.abiquo.com for the latest on the Abiquo Hybrid Cloud solution.

Ping us at contact@abiquo.com, @abiquo or on LinkedIn if you are at DockerCon Europe and want to meet up and discuss how we can support your needs to get Docker out from under the desk and into the data center!

Read more about Abiquo’s Docker support here

WizardCloud isn’t just about technology

Offering a cloud service is not only about deploying VM instances, managing templates or delivering DevOps agility to your organisation. When you decide to productise and customise your cloud service, you need to think about other collateral actions that will directly contribute to the success of your cloud project.

Features such as multi-branding and multi-language support are a big help and Abiquo has incorporated them from the very beginning. But styling and localizing the user interface might not be enough. There will be differences between the vocabulary used in a DevOps environment and that used in an enterprise, or an MSP service. So Abiquo has always enabled our customers to customise the vocabulary and terms their users will see. For example, a tenant in an MSP environment might be a department in an enterprise space. In other environments this could be a project or a unit.

Use your own business terms

In Abiquo 3.2 we have improved this feature by separating "default" terms from "customised" terms into two different files, enabling you to more easily preserve your customisations between releases and simplifying the maintenance process.

And our reseller model support enables you to manage different brands on the same Abiquo platform, by deploying different language files and other customisations in separate domains, thereby enabling full function delegation to reseller companies that will offer your cloud services to their end customers.

But now we’ve taken this a step further. In the new Abiquo 3.2 release we incorporate two important new capabilities to improve the take-up and efficiency of your cloud platform.

Wizards and Tutorials

Even if you think your platform is the easiest in the world to use, new users probably have a different view when presented with a rich set of portal functions. Abiquo has always had comprehensive documentation and "getting started" guides on the Abiquo wiki, but as nobody really likes to read manuals, we've added a key capability to help get your users familiar with the platform.

Abiquo 3.2 incorporates the ability to create custom wizards and tutorials for your users. Abiquo will include some of these out of the box, and you can develop your own step-by-step tutorials to enable users to learn about your specific features and processes, and wizards to guide them through tasks. Add videos, animated GIFs, HTML, and whatever else you need to guide the user through the platform. You can identify any Abiquo element by ID or name, and highlight it. You can also prevent the user from moving to the next step until they click a specific button, and much more cool stuff.

Even better, you can create specific tutorials for each role on the platform, as you will see in the following video.

You may like to use this to introduce new features, services or application templates to your users – it’s much more immediate and interactive than an email newsletter!

Add Javascript Snippets

Do you have a Support system? Do you want to know if your users like your latest feature? Do you want to incorporate a chat to interact with them? Nowadays there are hundreds (or even thousands) of SaaS companies that offer their advanced systems to incorporate in your site. Why not in Abiquo? Abiquo 3.2 enables you to organize all these snippets and show them in your Abiquo UI. When you do this, the result is really cool!

[Screenshot: snippets added to the Abiquo UI]

The above screenshot shows how you can add a UserVoice widget to ask your users about their satisfaction with the service, and enable them to add new ideas or send a message. On the right, a chat box provides direct on-line contact between your user and the service desk.

You can also use this with analytics tools such as Google Analytics to understand how your users interact with your Abiquo cloud platform.

These are two of the new features added in Abiquo 3.2 to boost your production cloud services, but there are more to come. For Abiquo, these kinds of capabilities are as important as supporting a new cloud provider or a new technology, because we understand that user satisfaction will be a key indicator in measuring the success of your service.

As always, you will see our new features very soon in Abiquo anyCloud.

Some time ago, we had a couple of inquiries about support for running KVM hypervisors with Open vSwitch (OVS). Older versions of OVS included the brcompat module, which made OVS work with regular Linux bridges instead of its own virtual switches. This meant Abiquo would behave as if it were using regular bridges. However, with recent versions of OVS, this brcompat module has been deprecated and it is not working as well as it should. Support for OVS is in our development roadmap, but in the meantime, we will explain how to “hack” an unsupported version of this feature using libvirt “domain events”.


In our latest release, Abiquo has introduced an Outbound Event Stream API. This lets you use events that are recorded in your Abiquo powered cloud platform for various purposes. While it's easy to see how these events can be used for monitoring and alerts, they can also be used to increase revenue from your platform.

Here are a couple of ideas to get you going:

Creating sales opportunities

In Abiquo, you can set 'soft limits' and 'hard limits' for each Enterprise (or "tenant" if you prefer) and each Virtual Datacenter. If you set the soft limit to, say, 80% of the hard limit, you could use the Outbound Event Stream API to send an email to your inside sales team when a customer reaches the soft limit. If your organisation uses a CRM solution such as Salesforce, you can even have this event automatically update your CRM records.

[Screenshot: Abiquo allows you to set 'hard' and 'soft' limits to monitor customers' usage]

Getting closer to the customer

One of the big challenges in a self-service environment is that your customers will be creating virtual machines and implementing new applications without you necessarily knowing about it. For a managed service provider, or even an internal IT provider, this reduces your understanding of customer projects.

Using the Outbound Event Stream API, you could alert a Solution Architect or Account Manager, perhaps through a CRM integration, when a customer implements a new Virtual Datacenter or Virtual Appliance. You could also do this with the Abiquo Reporting Enhancement Pack, of course, for an overview of this activity.

Learn more about Abiquo's Outbound Event Stream API and how it can benefit your business on the Abiquo Wiki.

Abiquo's award-winning user interface provides a simple, intuitive way for both end users and systems administrators to manage virtualized cloud environments. However, when you need to deliver business information to the people who need it, Abiquo Reports can be an ideal platform.

The recent release of Abiquo Reports contains a number of new reports that specifically aim to visualise key business metrics, helping you understand how your users are utilizing the Abiquo environment. For example, the new System Activity report delivers a wealth of summary information to help make business decisions and to target business activity, by identifying:

  • Growth information on the number of Users, Enterprises, Virtual Datacenters, Virtual Applications and Virtual Machines

  • Charts and heat maps to help visualize when the system is being used.

  • The top growing and shrinking Enterprises by various metrics (user count, VM count, and activity)

[Report screenshots: infrastructure growth and User Audit Report]

Additionally, the new Infrastructure Growth report uses Abiquo’s detailed accounting data to encapsulate key system level trending and growth information of the key resources, and can be used to easily summarize and visualize system growth within a single page, easy-to-read report.

[Report screenshot: Datacentre Planning report]

Abiquo Reports uses Jaspersoft, the most flexible, cost-effective and widely deployed Business Intelligence suite in the world, enabling better decision making through highly interactive, web-based reports, dashboards and analysis. Leveraging a commercial open source business model, Jaspersoft provides end-to-end BI capabilities at a fraction of the cost of other vendors. Visit www.jaspersoft.com to learn more.

The use of Jaspersoft allows for automated scheduling of reports in multiple formats, and provides a framework for customers that wish to develop their own reports.

Visit our Wiki for more information on Abiquo Reports with Jaspersoft

Abiquo supports two basic charging models for cloud resources:

  • Allocation – where customers are allocated a pool of cloud resources (vCPU, vRAM, storage, etc.) and are charged for those resources regardless of whether they are consumed or not

  • Consumption – where customers are charged for the resources they have actually consumed, typically on an hourly basis.

However, the needs of Abiquo customers are often more complex, with discounts, commitment, burst and many other models coming into effect. To meet those needs, and to deliver flexible accounting and billing solutions, Abiquo has developed a billing plugin integration.

This allows Abiquo's rich set of raw data (allocation and consumption) to be exported to an easy-to-read CSV file, or directly to a billing solution such as Aria, Zuora, Ubersmith or CloudCruiser. Billing data can be aggregated in a number of different ways by Abiquo in order to support a variety of different tenant models.

As well as standard resources such as vRAM, vCPU and storage, Abiquo can also provide accounting data for other services that may incur additional costs on the cloud platform. Functionality such as reserved hardware and high availability can be metered, along with additional resources such as specific template images or hypervisors.

The billing integration also supports custom connectors, allowing data from third-party sources (e.g. networking bandwidth) to be tracked and aggregated with the other usage data that will form the final bill.

Click here to learn more about the Abiquo billing integration or click here to find out more about Accounting in Abiquo.