Technology

Mar 28, 2019

Automating F5 BIG-IP Configuration With Ansible

Bradley Mickunas

Recently my teammate and I put our client's F5 configuration under source control, using the integration between F5 BIG-IP and Ansible to create and update BIG-IP objects. The client had multiple F5 BIG-IP devices across four application environments, and the configuration had diverged over time: ad-hoc virtual servers and inconsistent naming conventions in each environment (typical side effects of manually configuring anything). The F5 BIG-IP modules ship with Ansible, and Ansible 2.7 covered all the configuration we needed for virtual servers, iRules, pools, and nodes. The only exception was an SMTP monitor for one application, which we created manually in each environment. Overall the experience was good and worth repeating. Our approach is described below.

We considered environment segmentation, variable reuse, and simplifying bigip_* module execution. If you are leading a team or organization, source control your F5 BIG-IP configuration. It will improve repeatability of your environments, decrease or eliminate the cost of manually configuring the F5 BIG-IP (aka toil), and provide a source of truth for people relying on F5 configuration for their applications. Once you are done, you have the starting point for further automation and self-service tools.

Environment Segmentation

We had four application environments: DEV, INT, TEST, and PROD. We deployed new configuration to the first two environments, then let it sit in TEST for a few QA cycles before deploying to PROD. The release to production was a success: there were no issues with the deployed configuration after we switched the DNS entries to the new virtual IP addresses.

To protect the PROD environment from unexpected playbook execution, we had separate production and non-production user accounts, each with a corresponding Ansible agent for deploying configuration to the F5s. The non-production Ansible agent was therefore unable to execute playbooks against the production F5. This separation protected production from any playbook that might be run locally for development against the production Ansible inventory. In addition to the multiple Ansible agents, playbooks were executed via Jenkins Pipeline scripts that specified which agent would run each playbook, and we made those scripts available to app developers through non-prod and prod specific tabs in Jenkins.
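As a rough sketch of that separation (the host names, paths, and values below are illustrative rather than our client's actual layout), the non-production and production inventories lived in separate directories, and each Jenkins agent only had access to its own inventory and credentials:

# inventories/nonprod/group_vars/all.yml (illustrative values)
f5_server: "bigip-nonprod.example.com"
f5_partition: "Common"
f5_environment_prefix: "dev"
f5_validate_certs: "no"

# inventories/prod/group_vars/all.yml sits in a path that only the
# production Jenkins agent and the production user account can reach.
f5_server: "bigip-prod.example.com"
f5_partition: "Common"
f5_environment_prefix: "prd"
f5_validate_certs: "yes"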

Ansible Playbook Strategy

We divided the applications into logical subsets and created a playbook for each subset. Two subsets were complex in the sense that each had over 50 web applications. To manage that complexity, we wrapped BIG-IP modules with Ansible roles, looped through dictionaries to decrease the number of tasks, and divided the playbooks into object subsets to decrease execution time for small changes.

Simplifying BIG-IP Modules With Ansible Roles

Each logical subset of configuration had its own playbook for creating the necessary F5 configuration for the applications. We wrapped each bigip_* module with an Ansible role for the following reasons:

  1. Verify that required variables are defined.

  2. Reuse common values via role default variables (e.g., the snat attribute for bigip_virtual_server); a sketch of such role defaults follows this list.

  3. Enforce naming conventions for all F5 objects, such as

    vs_{{ f5_environment_prefix }}{{ virtual_server_name_app }}_{{ virtual_server_port }}, where the user of the role provides the application name and port while the Ansible role composes the parts according to the naming convention.
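For example, a defaults file along these lines covered the common values and left only the application-specific variables to the caller (the values shown here are illustrative, not our exact settings):

# roles/f5-virtual-server/defaults/main.yml (illustrative values)
virtual_server_description: "Managed by Ansible"
virtual_server_snat: "Automap"
virtual_server_state: "present"
virtual_port_translation: "yes"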

Verifying Required Variables

Some BIG-IP modules required unique variables, so we used the fail module with when statements to check that each required variable was defined and to print a helpful message when it was not.

Here is an example of the validation from roles/f5-virtual-server/tasks/main.yml:

---
- name: fail when the application name is undefined
  fail:
    msg: "ERROR: The application name (virtual_server_name_app) is undefined and expected whenever the role is included in a playbook"
  when: virtual_server_name_app is not defined

- name: fail when the virtual server port is undefined
  fail:
    msg: "ERROR: The virtual server port (virtual_server_port) is undefined and expected whenever the role is included in a playbook"
  when: virtual_server_port is not defined

- name: fail when the destination address is undefined
  fail:
    msg: "ERROR: The destination address (virtual_server_destination) is undefined and expected whenever the role is included in a playbook"
  when: virtual_server_destination is not defined

- name: Create virtual server for {{ virtual_server_name_app }} on {{ virtual_server_port }}
  bigip_virtual_server:
    description: "{{ virtual_server_description }}"
    destination: "{{ virtual_server_destination }}"
    irules: "{{ virtual_server_irules | default([]) }}"
    name: "vs_{{ f5_environment_prefix }}{{ virtual_server_name_app }}_{{ virtual_server_port }}"
    partition: "{{ f5_partition }}"
    pool: "{{ virtual_server_pool }}"
    profiles: "{{ virtual_server_profile_list | default([]) }}"
    provider:
      server: "{{ f5_server }}"
      validate_certs: "{{ f5_validate_certs }}"
    port: "{{ virtual_server_port }}"
    snat: "{{ virtual_server_snat }}"
    state: "{{ virtual_server_state }}"
    port_translation: "{{ virtual_port_translation }}"

Overriding Role Defaults With Role Specific Variables

Ideally you could scan through the roles of a playbook and easily determine the objects and values needed for that particular subset of configuration. Our complex subsets used the same role multiple times, which eventually revealed a misunderstanding on my part about the scope of role-specific variables. We expected variables listed under a role to apply only during that role's execution and then reset to the role default; instead, the syntax we used caused the variable to apply to that role and to every role or task that followed.

Look at the comment on line 19 for an example of my misunderstanding of the scope of the role variable:

 1. - role: f5-virtual-server
 2.   vars:
 3.     virtual_server_name_app: "{{ activemq_name }}"
 4.     virtual_server_description: "Virtual server for the ActiveMQ"
 5.     virtual_server_port: "{{ activemq_port }}"
 6.     virtual_server_destination: "{{ f5_destination_ip_mw }}"
 7.     virtual_server_pool: "pool_{{ f5_environment_prefix }}{{ activemq_name }}_{{ activemq_port }}"
 8.     virtual_server_profile_list:
 9.       - /Common/http
10.       - /Common/oneconnect
11.
12. - role: f5-virtual-server
13.   vars:
14.     virtual_server_name_app: "smtp-server"
15.     virtual_server_description: "Virtual server for the SMTP"
16.     virtual_server_port: "{{ smtp_port }}"
17.     virtual_server_destination: "{{ f5_destination_ip_mw }}"
18.     virtual_server_pool: "pool_{{ f5_environment_prefix }}{{ virtual_server_name_app }}_{{ smtp_port }}"
19.     virtual_server_profile_list: "" # This must be defined. If not, it will use the values defined for the previous virtual server

The vars: key below the role: key overrides the default value for every role and task that follows it. The virtual_server_profile_list no longer falls back to the role default (an empty value) as I expected; without the explicit override on line 19, the second role would apply the /Common/http and /Common/oneconnect profiles from lines 8 through 10 to the smtp-server virtual server defined on lines 12 through 19.

If you expect a variable to apply strictly to a single execution of a role, pass it as a role parameter using the curly-bracket syntax without the vars: key, or use the newer include_role syntax. Subsequent roles with the same variable name will then use the role's default value.

Here is an example of passing role-specific variables while preserving the role's default values, using curly brackets:

- { role: f5-virtual-server,
    virtual_server_name_app: "{{ activemq_name }}",
    virtual_server_description: "Virtual server for the ActiveMQ",
    virtual_server_port: "{{ activemq_port }}",
    virtual_server_destination: "{{ f5_destination_ip_mw }}",
    virtual_server_pool: "pool_{{ f5_environment_prefix }}{{ activemq_name }}_{{ activemq_port }}",
    virtual_server_profile_list: [ "/Common/http", "/Common/oneconnect" ]
  }

- { role: f5-virtual-server,
    virtual_server_name_app: "smtp-server",
    virtual_server_description: "Virtual server for the SMTP",
    virtual_server_port: "{{ smtp_port }}",
    virtual_server_destination: "{{ f5_destination_ip_mw }}",
    virtual_server_pool: "pool_{{ f5_environment_prefix }}{{ virtual_server_name_app }}_{{ smtp_port }}"
  }

Here is an example of passing role-specific variables while preserving the role's default values, using the include_role module:

- include_role:
    name: f5-virtual-server
  vars:
    virtual_server_name_app: "{{ activemq_name }}"
    virtual_server_description: "Virtual server for the ActiveMQ"
    virtual_server_port: "{{ activemq_port }}"
    virtual_server_destination: "{{ f5_destination_ip_mw }}"
    virtual_server_pool: "pool_{{ f5_environment_prefix }}{{ activemq_name }}_{{ activemq_port }}"
    virtual_server_profile_list: [ "/Common/http", "/Common/oneconnect" ]

- include_role:
    name: f5-virtual-server
  vars:
    virtual_server_name_app: "smtp-server"
    virtual_server_description: "Virtual server for the SMTP"
    virtual_server_port: "{{ smtp_port }}"
    virtual_server_destination: "{{ f5_destination_ip_mw }}"
    virtual_server_pool: "pool_{{ f5_environment_prefix }}{{ virtual_server_name_app }}_{{ smtp_port }}"

Reusing Application Attributes by Looping Through Web Applications

For the more complex subsets of configuration, we looped through dictionaries of web application attributes to create pools and monitors for each web application. We used the include_role module to avoid repeating the same role over and over again.

- include_role:
    name: f5-pool-member
  vars:
    node_group: "{{ item.value.node_group }}"
    pool_name: "pool_{{ f5_environment_prefix }}{{ layer_name }}-{{ item.value.instance | replace('_', '-') }}-{{ item.key | replace('_', '-') }}_{{ tc_instance_dict[item.value.instance].port }}"
    pool_member_port: "{{ tc_instance_dict[item.value.instance].port }}"
  loop: "{{ lookup('dict', http_monitor_webapps) }}"
  when: f5ConfigSubset is undefined or f5ConfigSubset == "pool-members"
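For reference, the loop above assumes dictionaries shaped roughly like the following; the application names, instance names, and ports here are made up for illustration:

# group_vars sketch of the dictionaries consumed by the loop above.
# Application names, instance names, and ports are hypothetical.
tc_instance_dict:
  storefront:
    port: 8080
  services:
    port: 8081

http_monitor_webapps:
  order_history:
    instance: storefront
    node_group: web_nodes
  customer_api:
    instance: services
    node_group: app_nodes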

Saving Time by Breaking up Playbooks by F5 Object

Some configuration subsets took 10 to 20 minutes to complete, so we added when statements on tasks and roles to limit the scope of execution. The when statements allowed a task or role to run based on the value of an extra variable called f5ConfigSubset. When we changed only a virtual server, we could execute a playbook and skip everything but the virtual servers, cutting execution time from 10 to 20 minutes down to one to two minutes. We favored when statements over Ansible tags because of a bad experience with tags and dynamic includes in the past, where a misunderstanding of tag inheritance allowed unintended changes to the PROD environment.
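As a sketch of the pattern (the subset name, playbook name, and inventory path here are illustrative), a guarded role and the command that limits the run looked roughly like this:

# Run only the virtual server subset of a playbook, for example:
#   ansible-playbook -i inventories/nonprod site.yml --extra-vars "f5ConfigSubset=virtual-servers"
- include_role:
    name: f5-virtual-server
  vars:
    virtual_server_name_app: "smtp-server"
    virtual_server_port: "{{ smtp_port }}"
    virtual_server_destination: "{{ f5_destination_ip_mw }}"
    virtual_server_pool: "pool_{{ f5_environment_prefix }}smtp-server_{{ smtp_port }}"
  when: f5ConfigSubset is undefined or f5ConfigSubset == "virtual-servers"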

Further Automation And Self-Service

Once you have F5 BIG-IP configuration in source control, explore further automation and self-service tools for your teams.

Regarding automation, you can deploy applications to production during business hours by disabling nodes with the bigip_node module, establishing zero-downtime deployments. Be sure to wait for existing connections to complete or time out before deploying to a disabled node.
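A minimal sketch of that pattern, reusing the provider variables from earlier; the node name node_app01 and the task ordering are illustrative:

- name: Disable the node so it stops taking new connections
  bigip_node:
    name: "node_app01"
    partition: "{{ f5_partition }}"
    state: disabled
    provider:
      server: "{{ f5_server }}"
      validate_certs: "{{ f5_validate_certs }}"

# Deploy the application here, after waiting for existing connections
# to complete or time out, then bring the node back into the pool.

- name: Re-enable the node after the deployment
  bigip_node:
    name: "node_app01"
    partition: "{{ f5_partition }}"
    state: enabled
    provider:
      server: "{{ f5_server }}"
      validate_certs: "{{ f5_validate_certs }}"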

Regarding self-service, create a template playbook for a standard load-balanced pool. With input parameters supplied by developers, the playbook can be tailored to a new application or experiment, enabling faster innovation on the BIG-IP platform. Developers can provision new virtual servers for load-balanced pools in minutes, with naming conventions and best practices already applied.
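A sketch of what such a template playbook might look like, building on the roles described earlier; the f5-pool role name and the app_name, app_port, and f5_destination_ip parameters are hypothetical inputs a developer would supply:

# self-service-pool.yml (hypothetical template playbook)
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - include_role:
        name: f5-pool        # hypothetical role wrapping bigip_pool
      vars:
        pool_name: "pool_{{ f5_environment_prefix }}{{ app_name }}_{{ app_port }}"

    - include_role:
        name: f5-virtual-server
      vars:
        virtual_server_name_app: "{{ app_name }}"
        virtual_server_port: "{{ app_port }}"
        virtual_server_destination: "{{ f5_destination_ip }}"
        virtual_server_pool: "pool_{{ f5_environment_prefix }}{{ app_name }}_{{ app_port }}"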

Speed, Consistency and Reliability

We had a good experience creating these playbooks, and they helped us accomplish a great deal in a short amount of time. We needed that additional time to coordinate the changes across the organization and their impact on vendors.

Consistency across environments was lacking prior to these playbooks, and troubleshooting and searching through objects became easier as a result of them. We were able to watch the playbooks run instead of click-configuring these objects in multiple environments, minimizing any contribution to carpal tunnel symptoms and reducing the risk of typos.

The playbooks also served as documentation of the application configuration. With all the configuration in source control, further automation and self-service tools can be folded into normal operations to achieve benefits like application deployments during the day. I therefore recommend automating all F5 BIG-IP configuration from source control with something like Ansible rather than relying on the F5 BIG-IP itself as your source of truth. F5 supports or contributes to several other automation and orchestration tools; you can see the list in Automating F5 Application Services: A Practical Guide.

Can we help you version control and automate your F5 configuration? If you have any questions, you can reach us at findoutmore@credera.com.
