This is one of those buzz terms that became all the rage with the advent of public clouds, but the idea has been in practice for a while in the VMware world (PowerCLI). It is the ability to programmatically provision resources using templates, commands, loops, and conditionals. It encompasses both template-based deployments and pure code, but the term is often used when referring to the simplified frameworks for template deployment. This article outlines where the template-based deployment frameworks generally fail when trying to act as “code”.
I have started referring to these template-based deployment frameworks as Infrastructure as Config. They fit the mold of being good at static configuration only: they are best at defining the current state of your resource configuration.
They work great if you are fine with managing separate templates for each of your deployed resources. For this to scale in a larger shop, you will need to find ways to create re-usable templates with conditional configuration blocks and resources.
Conditionals are the building blocks for making something flexible and re-usable: if something is true, create this resource; if not, ignore it and move on.
With the main cloud template frameworks, if you try to implement a lot of re-use using conditionals you quickly run into their limitations.
Developers live and breathe conditionals. They are the glue that provides flexibility to a program. Software could not exist without them. Computers could not exist without them. Micro-processors are made up of transistors arranged into logic gates, which are extremely simplified binary conditionals.
Terraform is close, and much closer now that version 0.12 has been released. Even so, its lack of dependency support between modules breaks a major requirement for re-usability.
Google Cloud Deployment Manager looks to be the perfect example of Infrastructure as Code. I haven’t had an opportunity to really put it through its paces to find all of its limitations. Too bad it only works for Google’s cloud and not the rest.
ARM templates are based on JSON and follow a schema that defines what can be included in the template. This strict adherence to the JSON structure and the schema is nice for maintaining order, but it limits how far Microsoft can expand the functionality that would allow ARM templates to work as “code”. Only recently have they strayed from that strict adherence by adding support for JSON with comments. JSON is not very readable, and dealing with the brackets and commas can be infuriating. I would say it is a step back from XML as a human-readable data format.
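To show what conditional logic looks like inside that JSON structure, here is an illustrative fragment (the parameter name `deployPublicIp` and the API version are placeholders) that only creates a public IP when a boolean parameter is set:

```json
{
  "condition": "[parameters('deployPublicIp')]",
  "type": "Microsoft.Network/publicIPAddresses",
  "apiVersion": "2019-11-01",
  "name": "[variables('publicIpName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "publicIPAllocationMethod": "Dynamic"
  }
}
```

Every expression has to be packed into bracketed strings, which is part of why pushing ARM templates toward “code” gets painful quickly.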
Some recently introduced resource types, such as Blueprints, cannot be deployed using ARM templates. You need to interface directly with the REST API to create them. This has caused me to question Microsoft’s future plans for the ARM template deployment framework.
Nested ARM templates have their use when deploying a larger collection of resources.
Nested deployments can only create resources in a maximum of five resource groups. Most use cases will not run into this limitation, but for large nested deployments it can be a problem.
Linked templates allow you to reference another ARM template from within your template. The referenced templates need to exist in a Storage Account.
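As a sketch, a linked template reference is itself just another resource of type `Microsoft.Resources/deployments` (the storage account URI below is a placeholder):

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2019-10-01",
  "name": "linkedNetworkDeployment",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "https://mystorageaccount.blob.core.windows.net/templates/network.json"
    }
  }
}
```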
CloudFormation is the most limited template framework available, but its biggest issue is how slowly it supports new AWS functionality. If you want to make use of the latest and greatest features in your CloudFormation templates, you will be waiting 3 to 12 months after a feature’s release for support to be added. Being proactive and submitting a feature request ticket seems to help speed things along.
CloudFormation is good at deployment management. The state of your deployment, or stack, is stored within your account, allowing controlled updates, rollbacks, and cross-referencing of existing stacks. This is its strongest feature.
In order to make CloudFormation templates manageable, I have given in and started using custom resources and macros to add advanced functionality. These are Lambda functions that are triggered by CloudFormation at deployment time: custom resources allow a Lambda function to handle all steps of a resource’s creation and rollback, while macros are Lambda functions that transform a template before deployment. These are advanced, poorly documented features of CloudFormation and not at all friendly to non-developers.
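To make this concrete, here is a minimal sketch of a custom resource handler in Python. The physical resource ID is a made-up placeholder, and a real handler would do actual work for each `RequestType`; the one non-negotiable part is that the handler must PUT a result to the pre-signed `ResponseURL`, or the stack will hang until it times out.

```python
import json
import urllib.request


def build_response(event, status, physical_id, data=None):
    """Build the payload CloudFormation expects at the pre-signed ResponseURL."""
    return {
        "Status": status,                # "SUCCESS" or "FAILED"
        "PhysicalResourceId": physical_id,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},              # values readable via Fn::GetAtt
    }


def handler(event, context):
    # event["RequestType"] is "Create", "Update", or "Delete"; the real
    # provisioning work for each case would go here.
    body = json.dumps(build_response(event, "SUCCESS", "my-resource-id"))
    req = urllib.request.Request(
        event["ResponseURL"],
        data=body.encode(),
        method="PUT",
        headers={"Content-Type": ""},
    )
    urllib.request.urlopen(req)  # signal CloudFormation to continue the stack
```

The amount of undifferentiated plumbing here, before any useful logic, is part of why these features are unfriendly to non-developers.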
There are other hurdles with these methods, though. The AWS SDKs available in the Lambda back-end are generally outdated by 3 to 9 months.
Nested stacks operate similarly to ARM’s linked templates, in that they allow you to reference another CloudFormation template from within your parent template. The referenced templates need to exist in an S3 bucket and can’t be local files.
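A nested stack is declared as a resource whose `TemplateURL` must point at S3 (the bucket, key, and parameter below are placeholders):

```yaml
Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      # TemplateURL must be an S3 HTTPS URL; local file paths are not accepted
      TemplateURL: https://my-bucket.s3.amazonaws.com/templates/network.yaml
      Parameters:
        VpcCidr: 10.0.0.0/16
```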
This feature (StackSets) is only useful if you are deploying identical resources across multiple regions or AWS accounts, so it really only benefits large, global-scale organizations.
Terraform uses a programmatic syntax while maintaining readability. However, a lot of its programming features are not elegantly implemented; they appear to be afterthoughts, added in a way that avoids structural changes. For example, to conditionally skip creating a resource, you add an inline condition check on its count property. Not very obvious.
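For illustration, here is what that idiom looks like in Terraform 0.12 (the variable and resource names are made up). Nothing about `count = ... ? 1 : 0` says “conditional resource” to a newcomer:

```hcl
variable "create_eip" {
  type    = bool
  default = false
}

resource "aws_eip" "web" {
  # The 0.12 idiom for "only create this resource when the flag is set":
  # an inline ternary on count, rather than a dedicated conditional keyword.
  count    = var.create_eip ? 1 : 0
  instance = aws_instance.web.id
}
```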
Some of the issues I had with Terraform were solved with the release of v0.12. My biggest gripe with Terraform is still the insecure state file. The remote state file option is a workaround, but it is still easy to implement poorly. Terraform really needs to support encrypted values in the state file, or references to protected values stored in Key Vault (Azure) or SSM parameters (AWS).
The v0.12 release was, as far as they were concerned, a re-write. I feel it doesn’t fully qualify as one, since they were not willing to break backwards compatibility. They really need to treat the current 0.x code-base as a test run and do a full re-think and re-write from the ground up for the 1.0 release.
This is a publicly or privately hosted management solution for your Terraform templates and state files. It also comes with a graphical interface for creating a deployment template containing only modules, called the Configuration Designer.
Modules allow for the re-use of a collection of variables and templates, and they can be local or exist in external cloud storage or a code repository. They have a major limitation: you can’t have them depend on other modules. Each module you deploy needs to be self-contained, or you will need to deploy modules separately, defeating the purpose of using them.
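Since module blocks in 0.12 do not accept `depends_on`, the common workaround (module names below are hypothetical) is to thread an output from one module into the next as an input, so Terraform can infer the ordering:

```hcl
module "network" {
  source = "./modules/network"
}

module "app" {
  source = "./modules/app"

  # No depends_on is allowed on a module block in 0.12; passing an output
  # through as an input is the usual way to force an implicit dependency.
  subnet_id = module.network.subnet_id
}
```

This only works when there is a natural value to pass; modules with no data to exchange are left with no clean way to express ordering.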
Google Cloud’s Deployment Manager is the only framework that I would label as Infrastructure as Code. It is based on Python 3.x, and you can create your templates in pure Python or Jinja2. Jinja2 templates are essentially YAML with support for inline scripting. This gives you the flexibility of code with the readability of YAML.
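As a sketch of what a pure-Python template looks like (the resource names and property keys below are illustrative): Deployment Manager imports the file and calls `GenerateConfig`, so a conditional resource is just an ordinary Python `if`.

```python
def GenerateConfig(context):
    """Deployment Manager calls this and deploys the returned resource list."""
    resources = [{
        "name": context.env["name"] + "-vm",
        "type": "compute.v1.instance",
        "properties": {
            "zone": context.properties["zone"],
            "machineType": context.properties["machineType"],
        },
    }]

    # An ordinary Python conditional: only create the address when asked to.
    if context.properties.get("staticIp"):
        resources.append({
            "name": context.env["name"] + "-ip",
            "type": "compute.v1.address",
            "properties": {"region": context.properties["region"]},
        })

    return {"resources": resources}
```

Compare this with the inline-string conditions of ARM or the `count` trick in Terraform: the logic here is plain, testable Python.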
You can import other templates from within your template. These referenced templates are pulled from the local folder.
In Azure, to properly support conditionals and save yourself from dependency hell, you will need to create PowerShell scripts to deploy your ARM templates. Once you get to this point, you will start to wonder why you are even using ARM templates, as ARM deployments offer limited deployment management. You might as well create the resources with pure PowerShell and the Az module.
The CDK is a set of tools for developing Python code to generate CloudFormation templates. It is very similar to Troposphere. Amazon must have come to the same conclusion: that CloudFormation isn’t truly going to pass as Infrastructure as Code. The setup of the CDK requires a bit of work and is still hamstrung by the limitations of CloudFormation. Since this is pure Python and the setup is very foreign to non-developers, you need to be a developer to manage and understand it. This will be a roadblock to adoption by the traditional infrastructure crowd.
This applies to all clouds that provide an SDK in your favourite language. It is the most flexible and efficient form of Infrastructure as Code. The downside is that you lose deployment tracking and the templates defining how resources were created, which may be required by managers or auditors.
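As a hedged sketch using AWS’s boto3 SDK (the bucket name and region are placeholders): the helper below builds the parameters for a bucket creation call, including the real S3 quirk that us-east-1 must omit the location constraint, with the actual deployment call left as a comment since it requires credentials.

```python
def bucket_params(name, region):
    """Build kwargs for s3.create_bucket; us-east-1 rejects a LocationConstraint."""
    params = {"Bucket": name}
    if region != "us-east-1":
        params["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return params


# The actual deployment step (requires boto3 and AWS credentials):
# import boto3
# s3 = boto3.client("s3", region_name="ca-central-1")
# s3.create_bucket(**bucket_params("my-example-bucket", "ca-central-1"))
```

Notice that any record of what was deployed now lives only in your own code and logs, which is exactly the tracking gap managers and auditors will ask about.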
All public clouds have an Application Programming Interface (API) that usually follows the REST communication style. For those who want to be on the cutting edge and have a lot of time on their hands, this is for you. With this method you will be recreating the same calls to the HTTPS REST APIs that the SDK would perform. You will not be limited by the release schedule of the SDK when adding support for new functionality. The documentation on how to properly interface with the APIs is sometimes incomplete or cryptic. If you ever need support, you will generally be directed to use the SDK instead.
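For example, every Azure Resource Manager REST call is ultimately just an HTTPS request against a URL of this shape (the IDs and API version in the usage comment are placeholders):

```python
def arm_resource_url(subscription, resource_group, provider,
                     resource_type, name, api_version):
    """Build the ARM management-plane URL for a single resource."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/{provider}/{resource_type}/{name}"
        f"?api-version={api_version}"
    )


# A PUT to this URL with a bearer token and a JSON body creates or updates
# the resource, which is exactly what the SDK does under the hood:
# import urllib.request
# req = urllib.request.Request(url, data=body, method="PUT",
#                              headers={"Authorization": "Bearer " + token,
#                                       "Content-Type": "application/json"})
# urllib.request.urlopen(req)
```

The hard part is not the request itself but tracking down the correct api-version and request body for each resource type, which is where the cryptic documentation bites.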
James started out as a web developer with an interest in hardware and open sourced software development. He made the switch to IT infrastructure and spent many years with server virtualization, networking, storage, and domain management.
After exhausting all challenges and learning opportunities provided by traditional IT infrastructure and a desire to fully utilize his developer background, he made the switch to cloud computing.
For the last 3 years he has been dedicated to automating and providing secure cloud solutions on AWS and Azure for our clients.