By Mirco Domenico Morsuillo, DevOps Engineer
Whether in industry or services, every company needs an IT infrastructure that supports the business. As the components of that infrastructure multiply, so does its complexity: when an organization runs hundreds of servers, users, networks, switches, and applications, each with its own version and environment, automation simplifies the overall management of the estate, with tangible savings in time and resources.
Building an IT process automation system starts with identifying the tools best suited to your needs. Terraform, for example, is a good ally for managing the infrastructure itself; for applying all middleware-side configurations, Ansible is an excellent choice, especially combined with Git, where you can version the parameters that Ansible then applies to the configurations, a topic we have also explored on this blog. It is also worth noting that both are completely open-source tools.
At Kirey Group, a team of professionals specialized in Ansible has been working side by side for years with enterprise companies on infrastructure-level automation, measuring the tool's potential across different architectures and technologies. Ansible is versatile and adapts to a very wide range of situations: it has been used to configure environments such as TXSeries, JBoss/WildFly, F5, Docker, and many others.
In a recent project, still under development, the Kirey Group team is supporting a major Public Administration client in automating the installation and configuration of middleware technologies such as WildFly, JBoss, and Apache. The project currently focuses on the WildFly service, managed via Ansible, and the next steps have already been defined:
First, we analyzed the client's infrastructure and server organization to understand how to manage, in the long run, both the development of all the Ansible roles (which contain the tasks that apply the configurations) and the variables of the individual application sites. Once familiar with the client's environment, we carried out a comprehensive analysis of the procedures needed to automate a WildFly installation and configuration, without leaving out edge cases. Each process to be automated was then broken down into a series of microprocesses, and those in turn into individual tasks. In the case of WildFly, for example, we can identify the following processes, shown here before being broken down:
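To give an idea of the granularity involved, a single microprocess such as "install the WildFly binaries" might end up as a handful of Ansible tasks. The sketch below is illustrative only; all variable names are hypothetical, not the client's actual role:

```yaml
# Sketch of one microprocess ("install the WildFly binaries") broken
# down into individual tasks. Variable names are placeholders.
- name: Create the wildfly service user
  ansible.builtin.user:
    name: "{{ wildfly_user }}"
    system: true

- name: Download the WildFly archive from the internal repository
  ansible.builtin.get_url:
    url: "{{ wildfly_download_url }}"
    dest: "/tmp/wildfly-{{ wildfly_version }}.tar.gz"

- name: Unpack WildFly into the installation path
  ansible.builtin.unarchive:
    src: "/tmp/wildfly-{{ wildfly_version }}.tar.gz"
    dest: "{{ wildfly_install_dir }}"
    remote_src: true
    owner: "{{ wildfly_user }}"
```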
Next, the processes were catalogued in Git to keep track of the work to be done. Having visibility of the target hosts makes it possible to create, in parallel, the inventory file (the file where the target systems are defined and grouped) and the variable files, both the defaults and those customizable by the end user. In the case of WildFly, for example, the default variables include the modules and drivers installed out of the box, while the user-customizable variables include the WildFly version, additional custom modules, and so on. Building the inventory and its associated variable files correctly is crucial: done well, it makes the automation easier for developers to use too.
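To make the split between inventory, role defaults, and user-customizable variables concrete, here is a minimal sketch; all host names, group names, and variable values are invented for illustration:

```yaml
# inventory.yml -- target systems defined and grouped per environment
all:
  children:
    wildfly_dev:
      hosts:
        dev-app-01.example.local:
    wildfly_prod:
      hosts:
        prod-app-01.example.local:
        prod-app-02.example.local:

# roles/wildfly/defaults/main.yml -- defaults shipped with the role,
# e.g. the modules and drivers installed out of the box
# wildfly_default_modules:
#   - jdbc-postgresql

# group_vars/wildfly_dev.yml -- values the end user can override,
# e.g. the WildFly version or extra custom modules
# wildfly_version: "26.1.3.Final"
# wildfly_custom_modules:
#   - custom-audit-logging
```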
After completing the analysis, deriving the inventory, and drafting the variable files, the actual development begins. Given the client's complex infrastructure and the need to reuse some of the code in future roles (which will configure other application technologies), we decided to develop the Ansible roles Galaxy-style. This means each role is developed as a modular component in a dedicated GitLab project, so that it can be reused and integrated into other roles and playbooks. This approach demands more time and care than the classic one, in which all roles live in a single project and are tightly coupled, because each role must be "dynamic" and adapt to the variables passed to it. For example, the role that installs the JDK may have to install OpenJDK or Oracle JDK, target a particular version, use a custom installation path, and handle other scenarios besides.
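A rough sketch of what this Galaxy-style setup can look like, with each role in its own GitLab project and its behaviour driven by variables; all URLs, versions, and variable names here are placeholders, not the client's actual code:

```yaml
# requirements.yml -- each role lives in a dedicated GitLab project
# and is pulled in as a versioned dependency
- name: jdk
  src: git+https://gitlab.example.com/automation/ansible-role-jdk.git
  version: "1.2.0"
- name: wildfly
  src: git+https://gitlab.example.com/automation/ansible-role-wildfly.git
  version: "2.0.1"

# roles/jdk/defaults/main.yml -- the role adapts to the variables
# it receives rather than hard-coding one scenario
# jdk_flavor: openjdk          # or "oracle"
# jdk_version: "11"
# jdk_install_dir: /opt/java
```

The roles are then fetched with `ansible-galaxy install -r requirements.yml` and composed in playbooks like any other role.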
In the first phase, the architecture we find is as follows:
Instead, we’re working on a target architecture as follows:
As the diagrams show, deploying AWX via Helm/Kubernetes makes it possible to manage user access via LDAP and to control in detail the permissions users have when applying configurations via Ansible, so that, for example, only certain people can make changes in production. All installation packages sit on Artifactory, which Ansible points to for downloads, while Git plays the central role in managing and versioning the code, the variables, and the inventory organization.
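For reference, an AWX instance on Kubernetes is typically declared through the AWX Operator (installable via its Helm chart) with a custom resource along these lines; the values below are placeholders, and the LDAP mapping and per-user permissions are then configured inside AWX itself:

```yaml
# Minimal sketch of an AWX custom resource for the AWX Operator;
# names and spec values are illustrative, not the client's setup.
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  service_type: ClusterIP
  admin_user: admin
```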
In this way, the client can configure and reconfigure all the technologies covered by the automation; because the code is saved and versioned in Git, the client can immediately roll back in case of problems and reapply past configurations. Even years later, it will be possible to trace how the architectural configuration has evolved and, through AWX, to know who applied which configurations and when. This is a significant benefit: being able to reconfigure an application from scratch in one simple step, recreating the application cluster with minimal effort after a total loss, should not be taken for granted. In addition, the process lets even those without deep product knowledge apply configurations.
These advantages, along with other important benefits that we summarize below, apply to all organizations that choose to automate IT processes using these tools:
In conclusion, it is clear why most organizations today are moving in this direction: the benefits far outweigh the cost of an IT process automation project, which pays off in the long term in security, functionality, and savings in time and resources.