Ansible Troubleshooting
Author: Jackson Chen
References
https://docs.ansible.com/developers.html
https://docs.ansible.com/ansible/latest/command_guide/intro_adhoc.html
Index of all modules and plugins
https://docs.ansible.com/ansible/latest/collections/all_plugins.html
Ansible command line tools
https://docs.ansible.com/ansible/latest/command_guide/index.html
Syntax Check
using the flag --syntax-check
ansible-playbook <playbook.yml> --syntax-check
Return Values
https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html
Ansible modules normally return a data structure that can be registered into a variable, or seen directly when output by the ansible program.
#***** Return Values
# Common
. backup_file
For those modules that implement backup=no|yes when manipulating files, a path to the backup file created.
"backup_file": "./foo.txt.32729.2020-07-30@06:24:19~"
. changed
A boolean indicating if the task had to make changes to the target or delegated host.
"changed": true
. diff
Information on differences between the previous and current state. Often a dictionary with entries before and after, which will then be formatted by the callback plugin to a diff view.
. failed
A boolean that indicates if the task failed or not.
"failed": false
. invocation
Information on how the module was invoked.
. msg
A string with a generic message relayed to the user.
"msg": "line added"
. rc
Some modules execute command line utilities or are geared for executing commands directly (raw, shell, command, and so on);
this field contains the return code of these utilities.
"rc": 257
. results
If this key exists, it indicates that a loop was present for the task and that it contains a list of the normal module ‘result’ per item.
. skipped
A boolean that indicates if the task was skipped or not
"skipped": true
. stderr
Error output from the command line utilities executed by the task (raw, shell, command, and so on).
. stderr_lines
When stderr is returned, this field is also provided: the stderr output split into a list, one item per line.
. stdout
Standard output from those command line utilities.
. stdout_lines
When stdout is returned, this field is also provided: the stdout output split into a list, one item per line.
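A minimal sketch (the command is arbitrary) showing how these common keys appear once a task result is registered:
- name: Run a command and register its result
  ansible.builtin.command: cat /etc/hostname
  register: cmd_result

- name: Inspect the common return values
  ansible.builtin.debug:
    msg: "rc={{ cmd_result.rc }} changed={{ cmd_result.changed }} stdout={{ cmd_result.stdout_lines }}"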
# Internal use
These keys can be added by modules but will be removed from registered variables; they are ‘consumed’ by Ansible itself
. ansible_facts
. exception
This key can contain traceback information caused by an exception in a module. It will only be displayed on high verbosity (-vvv)
. warnings
. deprecations
Special Variables
Magic variables
These variables cannot be set directly by the user; Ansible will always override them to reflect internal state.
# ansible_check_mode
Boolean that indicates if we are in check mode or not
# ansible_collection_name
The name of the collection the task that is executing is a part of. In the format of namespace.collection
# ansible_config_file
The full path of used Ansible configuration file
# ansible_dependent_role_names
The names of the roles currently imported into the current play as dependencies of other plays
# ansible_diff_mode
Boolean that indicates if we are in diff mode or not
# ansible_forks
Integer reflecting the number of maximum forks available to this run
# ansible_index_var
The name of the value provided to loop_control.index_var. Added in 2.9
# ansible_inventory_sources
List of sources used as inventory
# ansible_limit
Contents of the --limit CLI option for the current execution of Ansible
# ansible_loop
A dictionary/map containing extended loop information when enabled through loop_control.extended
# ansible_loop_var
The name of the value provided to loop_control.loop_var. Added in 2.8
# ansible_parent_role_names
When the current role is being executed by means of an include_role or import_role action,
this variable contains a list of all parent roles, with the most recent role (in other words,
the role that included/imported this role) being the first item in the list.
When multiple inclusions occur,
this list lists the last role (in other words, the role that included this role) as the first item in the list.
It is also possible that a specific role exists more than once in this list.
For example: When role A includes role B, inside role B, ansible_parent_role_names will equal to ['A'].
If role B then includes role C, the list becomes ['B', 'A'].
# ansible_parent_role_paths
When the current role is being executed by means of an include_role or import_role action,
this variable contains a list of all parent roles paths, with the most recent role (in other words,
the role that included/imported this role) being the first item in the list.
Please refer to ansible_parent_role_names for the order of items in this list.
# ansible_play_batch
List of active hosts in the current play run limited by the serial, aka ‘batch’.
Failed/Unreachable hosts are not considered ‘active’.
# ansible_play_hosts
List of hosts in the current play run, not limited by the serial. Failed/Unreachable hosts are excluded from this list.
# ansible_play_hosts_all
List of all the hosts that were targeted by the play
# ansible_play_name
The name of the currently executed play. Added in 2.8. (name attribute of the play, not file name of the playbook.)
# ansible_play_role_names
The names of the roles currently imported into the current play.
This list does not contain the role names that are implicitly included through dependencies.
# ansible_playbook_python
The path to the python interpreter being used by Ansible on the control node
# ansible_role_name
The fully qualified collection role name, in the format of namespace.collection.role_name
# ansible_role_names
The names of the roles currently imported into the current play,
or roles referenced as dependencies of the roles imported into the current play.
# ansible_run_tags
Contents of the --tags CLI option, which specifies which tags will be included for the current run.
Note that if --tags is not passed, this variable will default to ["all"].
# ansible_search_path
Current search path for action plugins and lookups,
in other words, where we search for relative paths when you do template: src=myfile
# ansible_skip_tags
Contents of the --skip-tags CLI option, which specifies which tags will be skipped for the current run.
# ansible_verbosity
Current verbosity setting for Ansible
# ansible_version
Dictionary/map that contains information about the current running version of ansible,
it has the following keys: full, major, minor, revision and string.
# group_names
List of groups the current host is part of, it always reflects the inventory_hostname and ignores delegation.
# groups
A dictionary/map with all the groups in inventory and each group has the list of hosts that belong to it
# hostvars
A dictionary/map with all the hosts in inventory and variables assigned to them
# inventory_dir
The directory of the inventory source in which the inventory_hostname was first defined.
This always reflects the inventory_hostname and ignores delegation.
# inventory_hostname
The inventory name for the ‘current’ host being iterated over in the play.
This is not affected by delegation, it always reflects the original host for the task
# inventory_hostname_short
The short version of inventory_hostname, which is the first section after splitting it on '.'.
As an example, for the inventory_hostname of www.example.com, www would be the inventory_hostname_short
This is affected by delegation, so it will reflect the ‘short name’ of the delegated host
# inventory_file
The file name of the inventory source in which the inventory_hostname was first defined.
Ignores delegation and always reflects the information for the inventory_hostname.
# omit
Special variable that allows you to ‘omit’ an option in a task,
for example - user: name=bob home={{ bobs_home|default(omit) }}
# playbook_dir
The path to the directory of the current playbook being executed.
NOTE: This might be different from the directory of the playbook passed to the ansible-playbook command line
when a playbook contains an import_playbook statement.
# role_name
The name of the role currently being executed.
# role_path
The path to the dir of the currently running role
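A quick way to see several of these magic variables for the current host is a debug task; a minimal sketch:
- name: Print a few magic variables
  ansible.builtin.debug:
    msg:
      - "Host: {{ inventory_hostname }} (short: {{ inventory_hostname_short }})"
      - "Groups: {{ group_names }}"
      - "Check mode: {{ ansible_check_mode }}, verbosity: {{ ansible_verbosity }}"
      - "Ansible version: {{ ansible_version.full }}"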
Facts
These are variables that contain information pertinent to the current host (inventory_hostname). They are only available if gathered first. When a playbook executes, each play runs a hidden task, called gathering facts, using the setup module. This gathers information about the remote node you're automating, and the details are available under the variable ansible_facts.
# ansible_facts
Contains any facts gathered or cached for the inventory_hostname. Facts are normally gathered by the setup module automatically in a play,
but any module can return facts.
# ansible_local
Contains any ‘local facts’ gathered or cached for the inventory_hostname.
The keys available depend on the custom facts created. See the setup module and facts.d or local facts for more details.
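Assuming facts have been gathered (gather_facts is enabled or the setup module has run), a minimal sketch of reading them from ansible_facts:
- name: Show a couple of gathered facts
  ansible.builtin.debug:
    msg: "{{ ansible_facts['os_family'] }} host running kernel {{ ansible_facts['kernel'] }}"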
Connection variables
Connection variables are normally used to set the specifics on how to execute actions on a target. Most of them correspond to connection plugins, but not all are specific to them; other plugins like shell, terminal and become are normally involved. Only the common ones are described as each connection/become/shell/etc plugin can define its own overrides and specific variables. See Controlling how Ansible behaves: precedence rules for how connection variables interact with configuration settings, command-line options, and playbook keywords.
# ansible_become_user
The user Ansible ‘becomes’ after using privilege escalation. This must be available to the ‘login user’.
# ansible_connection
The connection plugin actually used for the task on the target host.
# ansible_host
The ip/name of the target host to use instead of inventory_hostname.
# ansible_python_interpreter
The path to the Python executable Ansible should use on the target host.
# ansible_user
The user Ansible ‘logs in’ as.
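A hypothetical inventory snippet (INI format; host names, addresses and paths are placeholders) showing where these connection variables are typically set:
[webservers]
web01 ansible_host=192.0.2.10 ansible_user=devops ansible_python_interpreter=/usr/bin/python3

[webservers:vars]
ansible_connection=ssh
ansible_become_user=root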
Ansible Patterns
https://docs.ansible.com/ansible/latest/inventory_guide/intro_patterns.html#intro-patterns
Limitations of patterns
Patterns depend on inventory. If a host or group is not listed in your inventory, you cannot use a pattern to target it.
This table lists common patterns for targeting inventory hosts and groups.
Description              Pattern(s)                     Targets
------------------------------------------------------------------------------------------------
All hosts                all (or *)
One host                 host1
Multiple hosts           host1:host2 (or host1,host2)
One group                webservers
Multiple groups          webservers:dbservers
Excluding groups         webservers:!atlanta            all hosts in webservers except those in atlanta
Intersection of groups   webservers:&staging            any hosts in webservers that are also in staging
# Once you know the basic patterns, you can combine them. This example targets all machines in the
# groups webservers and dbservers that are also in staging, except any machines in the group phoenix:
webservers:dbservers:&staging:!phoenix
# You can use wildcard patterns with FQDNs or IP addresses, as long as the hosts are named in your inventory by FQDN or IP address:
192.0.*
*.example.com
*.com
# You can mix wildcard patterns and groups at the same time:
one*.com:dbservers
Pattern processing order
The processing is a bit special and happens in the following order
1. : and ,
2. &
3. !
# Negated limit. Note that single quotes MUST be used to prevent bash interpolation.
# --limit or "-l" to limit the target hosts
$ ansible all -m <module> -a "<module options>" --limit 'all:!host1'
Using variables in patterns
You can use variables in a play's hosts pattern and supply the group specifiers at run time with the -e argument to ansible-playbook:
# pattern placed in the play's "hosts:" line
webservers:!{{ excluded }}:&{{ required }}
# "-e" supplies the extra variables referenced by the pattern
ansible-playbook -e "excluded=excluded_hosts required=required_group" -i <inventory-file> -t <tag> playbook.yml
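A hedged sketch of the full picture (group names atlanta and staging are just examples): the pattern sits in the play's hosts: line and the concrete group names are supplied at run time.
# playbook.yml (illustrative)
- hosts: webservers:!{{ excluded }}:&{{ required }}
  gather_facts: false
  tasks:
    - name: Confirm which hosts matched the pattern
      ansible.builtin.ping:

# supply the pattern variables when running the playbook
ansible-playbook -e "excluded=atlanta required=staging" -i <inventory-file> playbook.yml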
ansible adhoc commands
# show ansible command options
ansible <ENTER>
# Syntax
ansible <host-pattern> -m <module> -a "<arguments>"
# example: create "testuser"
ansible <ansible-host> -m user -a "name=testuser"
# remove the user
ansible <ansible-host> -m user -a "name=testuser state=absent"
ansible localhost -m copy \
  -a 'content="Enter system maintenance\n" dest=/etc/motd' \
  -u devops --become
Run as specified user
# privilege escalation
ansible localhost -m command -a 'id' -u <run-as-user> --become
# privilege escalation
ansible <ansible-host> -i <inventory-file> -m <command/module> -a <argument> --become
To connect as a different user
# By default, /usr/bin/ansible will run from your user account
# to connect as a different user
ansible <ansible-host> -i <inventory-file> -m setup -u <username>
Prompt for the connection password and the become password
# "-k" (--ask-pass) prompts for the connection (SSH) password
# "-K" (--ask-become-pass) prompts for the privilege escalation (become) password
ansible-playbook -l <ansible-host> -i <inventory-file> -t <tag|tag1,tag2> <playbook.yml> -k -K
Run command as user, and escalate as root
# If you add --ask-become-pass, or -K,
# Ansible prompts you for the password to use for privilege escalation
# Note:
# By default, the "command" module is used
ansible <ansible-host> -i <inventory-file> -a "<command>" -u <username> --become [--ask-become-pass]
Run ansible command using a different module
# By default, ansible uses the "command" module; to use a different module,
# use -m with the module name
ansible <ansible-host> -i <inventory-file> -m shell -a 'echo $TERM'
ping the required ansible host
# ping managed hosts
# will receive "pong" if ping successful
ansible <ansible-hostname> -i <inventory-file> -m ping
Gathering facts
# Facts represent discovered variables about a system
# You can use facts to implement conditional execution of tasks
# To see all facts
ansible -m setup localhost
ansible <ansible-host|group> -i <inventory-file> -m setup
Check mode or Dry run
In check mode, Ansible does not make any changes to remote systems; modules that support check mode report the changes they would have made instead of making them.
# "-C" for check mode
ansible <ansible-host|group> -i <inventory-file> -m <module> -a "command option" -C
Ansible debug module
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/debug_module.html
ansible.builtin.debug module – Print statements during execution.
. This module prints statements during execution and can be useful for debugging variables or expressions without necessarily halting the playbook.
. Useful for debugging together with the 'when:' directive.
. This module is also supported for Windows targets.
Check variable value
# Check variable value as part of debugging
ansible <ansible-host> -i <inventory-file> -m debug -a var=<variable-name-to-verify>
# Examples
- name: Print the gateway for each host when defined
  ansible.builtin.debug:
    msg: System {{ inventory_hostname }} has gateway {{ ansible_default_ipv4.gateway }}
  when: ansible_default_ipv4.gateway is defined

- name: Get uptime information
  ansible.builtin.shell: /usr/bin/uptime
  register: result

- name: Print return information from the previous task
  ansible.builtin.debug:
    var: result
    verbosity: 2

- name: Display all variables/facts known for a host
  ansible.builtin.debug:
    var: hostvars[inventory_hostname]
    verbosity: 4

- name: Prints two lines of messages, but only if there is an environment value set
  ansible.builtin.debug:
    msg:
      - "Provisioning based on YOUR_KEY which is: {{ lookup('ansible.builtin.env', 'YOUR_KEY') }}"
      - "These servers were built using the password of '{{ password_used }}'. Please retain this for later use."
Start and Step
https://docs.ansible.com/ansible/2.9/user_guide/playbooks_startnstep.html
This shows a few alternative ways to run playbooks. These modes are very useful for testing new plays or debugging.
Start-at-task
If you want to start executing your playbook at a particular task, you can do so with the --start-at-task option:
ansible-playbook playbook.yml --start-at-task="install packages"
Step
Playbooks can also be executed interactively with --step. This will cause Ansible to stop on each task and ask if it should execute that task. Answering "y" will execute the task, answering "n" will skip the task, and answering "c" will continue executing all the remaining tasks without asking.
ansible-playbook playbook.yml --step
How to step through an ansible playbook
ansible-playbook -l <ansible-host> -i <inventory-file> -t <tag> <ansible-playbook.yml> --step
Run playbook with a vault file that supplies extra variables
ansible-playbook -e @vault.yml -l <ansible-host> -i <inventory-file> -t <tag|tag1,tag2> <playbook.yml> --ask-vault-pass
Run playbook and prompt for running user password
# Ask for the password - user who is running playbook
ansible-playbook -l <ansible-host> -i <inventory-file> -t <tag|tag1,tag2> <playbook.yml> --ask-pass
To reboot server or group of servers
# Reboot a single server
ansible <ansible-host> -i <inventory-file> -a "/sbin/reboot"
# Reboot a group of servers
# By default, Ansible only runs 5 simultaneous processes (forks)
# If there are more than 5 target hosts, increase the number of parallel forks
ansible <ansible-host-group> -i <inventory-file> -a "/sbin/reboot" -f <num of servers>
Useful ansible ad-hoc commands for operations
# Check system uptime
# by default, "command" module is used
ansible all -i <path/to/inventory/file> -m command -a uptime
ansible all -i <path/to/inventory/file> -a uptime
# Check free memory or memory usage
ansible all -i <inventory-file> -a "free -m"
# Check physical memory allocated to the hosts
# display the first two lines of the /proc/meminfo file on each host
ansible all -i <inventory-file> -m shell -a "cat /proc/meminfo|head -2"
# To list all running processes on a specific host in an inventory file
ansible specific_host -i inventory_file -m command -a 'ps aux'
# Check ansible hosts' free disk space
ansible <ansible host|group> -i <inventory> -a "df -h"
# To start the service
ansible <ansible host|group> -i <inventory> -m service -a "name=nginx state=started" --become
# To restart the service
ansible <ansible host|group> -i <inventory> -m service -a "name=nginx state=restarted" --become
Tell ansible to run the command as the superuser with sudo
# "-b" Tells ansible to run the command as the superuser, with sudo
ansible <ansible-host> -i <inventory-file> -b -m shell -a 'ls -la /home/ansible'
raw module in ansible - switch command and configuration
https://rayka-co.com/lesson/anisble-adhoc-comands-for-network-monitoring-troubleshooting/
The raw module does not translate our commands; Ansible purely transfers the command to the remote device via SSH. The raw module is mainly used for monitoring and troubleshooting and not for configuration.
# with "-k", ansible ask us to enter the password interactively
ansible switch01 -m raw -u admin -a 'show version' -k
ansible switch01 -m raw -u admin -a 'show version' -k | grep "CHANGED\|images"
ansible playbook log file
Certain settings in Ansible are adjustable with a configuration file. The stock configuration should be sufficient for most users, but there may be reasons you would want to change them.
# ansible configuration file
# Normally, it is in /etc/ansible/ansible.cfg
ansible.cfg
# Configure ansible playbook log file path
# It will be saved to your ~/Documents directory
[defaults]
log_path = ~/Documents/playbook.log.txt
# Other methods
Before running ansible-playbook run the following commands to enable logging:
1. Specify the location for the log file.
export ANSIBLE_LOG_PATH=~/Documents/ansible.log
2. Enable Debug
export ANSIBLE_DEBUG=True
3. To check that generated log file.
less $ANSIBLE_LOG_PATH
# Official plugins
Starting in Ansible 2.4, you can use the debug output callback plugin:
# ansible.cfg:
[defaults]
stdout_callback = debug
How to see more debug message
Add -v or --verbose to see more debug messages.
Adding multiple -v will increase the verbosity, the builtin plugins currently evaluate up to -vvvvvv. A reasonable level to start is -vvv, connection debugging might require -vvvv
Deep dive into ansible logging
https://blog.devgenius.io/a-deep-dive-intologging-mechanisms-in-ansible-f78b6466e82c
If you’re planning to heavily depend on Ansible for configuration management and deployments, it is important to gather logs of playbooks for debugging and auditing purposes.
1. Using the environment variable ANSIBLE_LOG_PATH. We can set the environment variable through the bash profile or even on individual commands.
export ANSIBLE_LOG_PATH=/tmp/ansible_log.log
Now, if we run the playbook with ansible-playbook my-playbook.yaml, the log will be written to the log file specified by ANSIBLE_LOG_PATH. If the file is not present, Ansible will create it for you.
2. Using ansible.cfg. Another way to configure logging is through the configuration file. When we install Ansible, it automatically creates a configuration file with the defaults. For Linux, it's usually present in /etc/ansible/ansible.cfg
[defaults]
log_path = ~/Documents/ansible_log.log
3. If we only want to see the changed status, we have to update our ansible.cfg file by disabling ok hosts using display_ok_hosts attribute.
[defaults]
log_path = ~/Documents/ansible_log.log
stdout_callback=default
display_skipped_hosts=no ## To skip logging when task is skipped
display_ok_hosts=no
4. Show Ansible log as JSON file.
To show Ansible log as JSON, all we need to do is set JSON as callback plugin.
[defaults]
stdout_callback=json
5. Enable profiling or add timestamps for the task and play execution
The callback plugins provide an extremely easy way to add profiling information to your playbooks.
Adding timestamp information is vital for logging continuous jobs.
Also, if you want to understand how much time is spent for each tasks, all we need to do is enable appropriate plugins.
The callback ansible.posix.timer adds timer to the playbook. Enable the callback plugin using ansible.cfg
[defaults]
log_path = ~/Documents/ansible_log.log
deprecation_warnings=False
stdout_callback=default
callbacks_enabled=ansible.posix.timer
The callback plugin ansible.posix.profile_tasks is a powerful callback,
that adds timestamps for each task along with the stats of time taken for each task.
[defaults]
log_path = /tmp/ansible_log_from_config.log
deprecation_warnings=False
callbacks_enabled=ansible.posix.profile_tasks
6. Maintain separate logs per host
If we need to separate logs per host, use log_plays .
The callback will create a log file per host in the /var/log/ansible/hosts directory.
To enable the plugin, use the callbacks_enabled=community.general.log_plays attribute.
To override the default logs directory, add a new configuration section for the plugin.
With the settings below, the logs are stored in a directory named "logs" relative to the ansible.cfg file.
[defaults]
log_path = /tmp/ansible_log_from_config.log
# enable the plugin
callbacks_enabled=community.general.log_plays

[callback_log_plays]
# create the per-host logs in the "logs" folder
log_folder = logs
We can list all the available Ansible callbacks by using the command
ansible-doc -t callback -l
To install the community.general collection (which provides the log_plays callback), use
ansible-galaxy collection install community.general
Use debug to output task running result for troubleshooting
1. Add register: results to the task
# keep running the next task even if the current task fails
- name: Task to do
  ansible.builtin.command: /path/to/some/command   # placeholder for the actual module/action being debugged
  ignore_errors: true
  register: results
2. Write a task to output the result or error
- name: Result or error output
  debug:
    var: results
Or
- name: Result or error output
  debug:
    msg: "{{ results }}"
Using "tee" to output to the log file at the same time
ansible all -m ping | tee /tmp/ansible.log
cat /tmp/ansible.log
Validating tasks - Check Mode and Diff Mode
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_checkmode.html
Ansible provides two modes of execution that validate tasks: check mode and diff mode. These modes can be used separately or together. They are useful when you are creating or editing a playbook or role and you want to know what it will do. In check mode, Ansible runs without making any changes on remote systems. Modules that support check mode report the changes they would have made. Modules that do not support check mode report nothing and do nothing. In diff mode, Ansible provides before-and-after comparisons. Modules that support diff mode display detailed information. You can combine check mode and diff mode for detailed validation of your playbook or role.
Using check mode
Check mode is just a simulation. It will not generate output for tasks that use conditionals based on registered variables (results of prior tasks). However, it is great for validating configuration management playbooks that run on one node at a time. To run a playbook in check mode:
ansible-playbook foo.yml --check
Using diff mode
https://blog.devops.dev/writing-ansible-modules-with-support-for-diff-mode-cae70de1c25f
The --diff option for ansible-playbook can be used alone or with --check. When you run in diff mode, any module that supports diff mode reports the changes made or, if used with --check, the changes that would have been made. Diff mode is most common in modules that manipulate files (for example, the template module) but other modules might also show ‘before and after’ information (for example, the user module).
ansible-playbook foo.yml --check --diff --limit foo.example.com
tasks:
  - name: This task will report a diff when the file changes
    ansible.builtin.template:
      src: secret.conf.j2
      dest: /etc/secret.conf
      owner: root
      group: root
      mode: '0600'
    diff: true
# Run adhoc command
ansible <host-pattern> -m copy -CD -a "src=<your local file> dest=<remote file or location>"
-C # Check mode: check whether a change would occur, rather than performing the change
-D # Diff mode: report the changes that would be (or were) made
Note: we could also compare two files on the target system with the diff command
- name: Generate diff
  command: diff /tmp/abc.txt /tmp/def.txt
  register: diff_result
  failed_when: diff_result.rc > 1   # diff exits 1 when the files differ; only rc > 1 is a real error

- name: Show diff result
  debug:
    var: diff_result
Ansible creates a python script locally and copies it to the remote host
Ansible creates the Python module code locally and copies it to the remote host, either as a single .py script or as a packed set of Python scripts.
# How to keep a copy of the python scripts
# execute Ansible with ANSIBLE_KEEP_REMOTE_FILES set to 1
ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook playbook.yml
Check the temporary directory on the target machine,
by default under the $HOME/.ansible/tmp/ for the connecting user
If the files were packed, there are instructions in the comments inside the file on how to expand the set.
Requires gtar/unzip command on target host
# using explode
~/.ansible/tmp$ ansible-tmp-1536719125.6725047-209276989541110/setup.py explode
Controlling where tasks run - delegation and local actions
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_delegation.html
By default, Ansible gathers facts and executes all tasks on the machines that match the hosts line of your playbook.
Tasks that cannot be delegated
Some tasks always execute on the control node. These tasks, including include, add_host, and debug, cannot be delegated. You can determine if an action can be delegated from the connection attribute documentation. If the connection attribute indicates support is False or None, then the action does not use a connection and cannot be delegated.
Delegating tasks
If you want to perform a task on one host with reference to other hosts, use the delegate_to keyword on a task. This is ideal for managing nodes in a load-balanced pool or for controlling outage windows. You can use delegation with the serial keyword to control the number of hosts executing at one time.
---
- hosts: webservers
  serial: 5

  tasks:
    - name: Take out of load balancer pool
      ansible.builtin.command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1

    - name: Actual steps would go here
      ansible.builtin.yum:
        name: acme-web-stack
        state: latest

    - name: Add back to load balancer pool
      ansible.builtin.command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1
The first and third tasks in this play run on 127.0.0.1, which is the machine running Ansible.
Note:
We could use the shorthand syntax "local_action" for "delegate_to: 127.0.0.1"
tasks:
  - name: Send summary mail
    local_action:
      module: community.general.mail
      subject: "Summary Mail"
      to: "{{ mail_recipient }}"
      body: "{{ mail_body }}"
    run_once: True
Local playbooks
It may be useful to use a playbook locally on a remote host, rather than by connecting over SSH. This can be useful for assuring the configuration of a system by putting a playbook in a crontab. This may also be used to run a playbook inside an OS installer, such as an Anaconda kickstart.
To run an entire playbook locally, just set the hosts: line to hosts: 127.0.0.1 and then run the playbook like so:
ansible-playbook playbook.yml --connection=local
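As a minimal sketch (file name and task contents are illustrative), a playbook intended to run locally might look like this; with connection: local set in the play, the --connection=local flag becomes optional:
# local.yml
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Leave a marker so we know the playbook ran
      ansible.builtin.file:
        path: /tmp/configured_locally
        state: touch

# run it on the host itself
ansible-playbook local.yml --connection=local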
How to run python scripts
https://toptechtips.github.io/2023-06-10-ansible-python/
script module:
. It copies your local python script (from the Ansible controller host) to the remote/managed host and executes it remotely.
. This module is best for running scripts that are stored alongside your playbook.
- hosts: docker-server
  tasks:
    # Example 1 - Script Module
    - name: Execute Python Script using the script module
      ansible.builtin.script:
        cmd: ../../lib/example.py
        executable: /usr/bin/python3
      register: result

    - debug:
        msg: "{{ result }}"
shell module
. Executes command(s) using the default shell environment of the remote host, just like you would run a command in the shell terminal of that remote machine.
. Useful for running commands that require shell-specific features,
  e.g. having access to variables like $HOSTNAME and operations like "*", "<", ">", "|", ";" and "&".
. Better suited for executing simple commands or even lines of commands.
. Can run into security risks - it's important to validate and sanitize user inputs to prevent command injection attacks.
- name: Copy local python script to remote
  ansible.builtin.copy:
    src: ../../lib/example.py
    dest: /home/user/projects/example.py

- name: Execute Python Script using the shell module
  ansible.builtin.shell:
    cmd: python3 /home/user/projects/example.py
  register: result

- debug:
    msg: "{{ result }}"
command module
. Executes commands on the remote host without involving the shell.
. Has a similar use case to shell, but is the more secure alternative.
. You won't be able to use shell-like features and syntax.
- name: Copy local python script to remote
  ansible.builtin.copy:
    src: ../../lib/example.py
    dest: /home/user/projects/example.py

- name: Execute Python Script using the command module
  ansible.builtin.command:
    cmd: python3 /home/user/projects/example.py
  register: result

- debug:
    msg: "{{ result }}"
raw module
. Executes command(s) directly on the remote host without going through the module subsystem.
. Use this with caution as it bypasses Ansible's built-in safety features.
. Useful when you need maximum flexibility over the commands you want to use.
. Some useful use cases include installing python on a remote host that does not have python already installed, or speaking to remote hosts like routers which will not have python installed.
- name: Copy local python script to remote
  ansible.builtin.copy:
    src: ../../lib/example.py
    dest: /home/user/projects/example.py

- name: Execute Python Script using the raw module
  ansible.builtin.raw: python3 /home/user/projects/example.py
  register: result

- debug:
    msg: "{{ result }}"
Controlling playbook execution
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_strategies.html
Running on a single machine with run_once
If you want a task to run only on the first host in your batch of hosts, set run_once to true on that task
- name: task to run
  run_once: true
Ansible executes this task on the first host in the current batch and applies all results and facts to all the hosts in the same batch. This approach is similar to applying a conditional to a task such as:
- command: /opt/application/upgrade_db.py
  when: inventory_hostname == webservers[0]
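A hedged sketch of a complete run_once task (the script path comes from the example above; delegating to a specific host is optional and the hostname is a placeholder):
- name: Run the database upgrade once for the whole batch
  ansible.builtin.command: /opt/application/upgrade_db.py
  run_once: true
  delegate_to: web01.example.com   # optional: pin the task to a particular host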
Setting the number of forks
By default, Ansible uses 5 forks. If you have the processing power available and want to use more forks, you can set the number in ansible.cfg:
[defaults]
forks = 30
Using keywords to control execution
In addition to strategies, several keywords also affect play execution. You can set a number, a percentage, or a list of numbers of hosts you want to manage at a time with the serial keyword.
Other keywords that affect play execution include ignore_errors, ignore_unreachable, and any_errors_fatal. These options are documented in Error handling in playbooks.
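A minimal sketch showing how these keywords might be combined in a play (the host group and command path are placeholders):
- hosts: webservers
  any_errors_fatal: true            # abort the whole play if any host fails a fatal task
  tasks:
    - name: Best-effort cache cleanup, failures are tolerated
      ansible.builtin.command: /usr/bin/cleanup-cache   # hypothetical command
      ignore_errors: true

    - name: Ping hosts, tolerating unreachable ones
      ansible.builtin.ping:
      ignore_unreachable: true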
Setting the batch size with serial
By default, Ansible runs in parallel against all the hosts in the pattern you set in the hosts: field of each play. If you want to manage only a few machines at a time, for example during a rolling update, you can define how many hosts Ansible should manage at a single time using the serial keyword:
- name: test play
  hosts: webservers
  serial: 3            # run on 3 remote hosts at a time
  gather_facts: False
  tasks:
    - name: first task
      command: hostname
    - name: second task
      command: hostname
How to speed up your ansible playbook
https://www.redhat.com/sysadmin/faster-ansible-playbook-execution
How to identify slow ansible tasks with callback plugins
A specific task in a playbook might look simple, but it can be why the playbook is executing slowly. You can enable callback plugins such as timer, profile_tasks, and profile_roles to find a task's time consumption and identify which jobs are slowing down your plays.
# Configure ansible.cfg with the plugins:
[defaults]
inventory = ./hosts
callbacks_enabled = timer, profile_tasks, profile_roles
Configure parallelism
Ansible uses batches for task execution, which are controlled by a parameter called forks. The default value for forks is 5, which means Ansible executes a task on the first five hosts, waits for the task to complete, and then takes the next batch of five hosts, and so on. Once all hosts finish a task, Ansible moves to the next task with a batch of five hosts again.
You can increase the value of forks in ansible.cfg, enabling Ansible to execute a task on more hosts in parallel:
[defaults]
inventory = ./hosts
forks=50
You can also change the value of forks dynamically while executing a playbook by using the --forks option (-f for short):
$ ansible-playbook site.yaml --forks 50
A word of warning: When Ansible works on multiple managed nodes, it uses more computing resources (CPU and memory). Based on your Ansible control node machine capacity, configure forks appropriately and responsibly.
Configure SSH optimization
Establishing a secure shell (SSH) connection is a relatively slow process that runs in the background. The global execution time increases significantly when you have more tasks in a playbook and more managed nodes to execute the tasks.
You can use ControlMaster and ControlPersist features in ansible.cfg (in the ssh_connection section) to mitigate this issue.
. ControlMaster allows multiple simultaneous SSH sessions with a remote host to use a single network connection. This saves time on an SSH connection's initial processes because later SSH sessions use the first SSH connection for task execution.
. ControlPersist indicates how long SSH keeps an idle connection open in the background. For example, ControlPersist=60s keeps the connection idle for 60 seconds:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
Disable host key checking in a dynamic environment
By default, Ansible checks and verifies SSH host keys to safeguard against server spoofing and man-in-the-middle attacks. This also consumes time. If your environment contains immutable managed nodes (virtual machines or containers), then the key is different when the host is reinstalled or recreated. You can disable host key checking for such environments by adding the host_key_checking parameter in your ansible.cfg file and setting it to False:
[defaults]
host_key_checking = False
Use pipelining
When Ansible uses SSH, several SSH operations happen in the background for copying files, scripts, and other execution commands. You can reduce the number of SSH connections by enabling the pipelining parameter (it's disabled by default) in ansible.cfg:
# ansible.cfg
[ssh_connection]
pipelining = True
Use execution strategies
By default, Ansible waits for every host to finish a task before moving to the next task, which is called linear strategy.
If you don't have dependencies on tasks or managed nodes, you can change strategy to free, which allows Ansible to execute tasks on managed hosts until the end of the play without waiting for other hosts to finish their tasks:
- hosts: production_servers
  strategy: free
  tasks:
You can develop or use more strategy plugins as needed, such as Mitogen, which uses Python-based executions and connections.
Use async tasks
When a task executes, Ansible waits for it to complete before closing the connection to the managed node. This can become a bottleneck when you have tasks with longer execution times, such as disk backups, package installation, and so on, because it increases global execution time. If the following tasks do not depend on this long-running task, you can use async mode with an appropriate poll interval to tell Ansible not to wait and to proceed with the next tasks:
- name: Async Demo
  hosts: nodes
  tasks:
    - name: Initiate custom snapshot
      shell: "/opt/diskutils/snapshot.sh init"
      async: 120   # Maximum allowed time in seconds
      poll: 5      # Polling interval in seconds
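If you prefer to fire the job off and check on it later, a hedged sketch using the ansible.builtin.async_status module (poll: 0 starts the job without waiting; the retry numbers are arbitrary):
- name: Start the snapshot without waiting
  shell: "/opt/diskutils/snapshot.sh init"
  async: 600
  poll: 0
  register: snapshot_job

- name: Wait for the snapshot to finish
  ansible.builtin.async_status:
    jid: "{{ snapshot_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 60       # check up to 60 times
  delay: 10         # wait 10 seconds between checks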
Interactive input - prompts
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_prompts.html
If you want your playbook to prompt the user for certain input, add a ‘vars_prompt’ section. Prompting the user for variables lets you avoid recording sensitive data like passwords. In addition to security, prompts support flexibility. For example, if you use one playbook across multiple software releases, you could prompt for the particular release version.
. Hashing values supplied by vars_prompt
. Allowing special characters in vars_prompt values
- hosts: all
  vars_prompt:

    - name: username
      prompt: What is your username?
      private: false

    - name: password
      prompt: What is your password?

  tasks:

    - name: Print a message
      ansible.builtin.debug:
        msg: 'Logging in as {{ username }}'
If you have a variable that changes infrequently, you can provide a default value that can be overridden:
vars_prompt:
  - name: release_version
    prompt: Product release version
    default: "1.0"
Hashing values supplied by vars_prompt
You can hash the entered value so you can use it, for example, with the user module to define a password:
vars_prompt:
  - name: my_password2
    prompt: Enter password2
    private: true
    encrypt: sha512_crypt
    confirm: true
    salt_size: 7
If Passlib is not installed, Ansible uses the crypt library as a fallback. Depending on your platform, at most the following four crypt schemes are supported:
. bcrypt - BCrypt
. md5_crypt - MD5 Crypt
. sha256_crypt - SHA-256 Crypt
. sha512_crypt - SHA-512 Crypt
Allowing special characters in vars_prompt values
Some special characters, such as { and % can create templating errors.
If you need to accept special characters, use the unsafe option:
vars_prompt:
  - name: my_password_with_weird_chars
    prompt: Enter password
    unsafe: true
    private: true
Ansible vault
https://docs.ansible.com/ansible/latest/vault_guide/index.html
Ansible Vault encrypts variables and files so you can protect sensitive content such as passwords or keys rather than leaving it visible as plaintext in playbooks or roles. To use Ansible Vault you need one or more passwords to encrypt and decrypt content.
Choosing between a single password and multiple passwords
If you have a small team or few sensitive values, you can use a single password for everything you encrypt with Ansible Vault. Store your vault password securely in a file or a secret manager as described below.
If you have a larger team or many sensitive values, you can use multiple passwords. For example, you can use different passwords for different users or different levels of access. Depending on your needs, you might want a different password for each encrypted file, for each directory, or each environment. You might have a playbook that includes two vars files, one for the dev environment and one for the production environment, encrypted with two different passwords. When you run the playbook, you can select the correct vault password for the environment you are targeting using a vault ID
We could set ansible.cfg to ask for the vault password automatically, without needing to type --ask-vault-pass:
[defaults]
ask_vault_pass = True
# Without the ansible.cfg setting
ansible-playbook -e @vault.yml -l <ansible host|group> -i <inventory-file> -t <tag|tag1,tag2> playbook.yml --ask-vault-pass --ask-pass
# with ansible.cfg settings
ansible-playbook -e @vault.yml -l <ansible host|group> -i <inventory-file> -t <tag|tag1,tag2> playbook.yml --ask-pass
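For completeness, the encrypted file itself (vault.yml in the examples above; other file names below are illustrative) is created and maintained with the ansible-vault command:
# create a new encrypted vars file
ansible-vault create vault.yml
# edit or view it later
ansible-vault edit vault.yml
ansible-vault view vault.yml
# encrypt an existing plaintext file
ansible-vault encrypt secrets.yml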
List ansible hosts and tasks
ansible-playbook playbook.yml --syntax-check
ansible-playbook playbook.yml --list-hosts
ansible-playbook playbook.yml --list-tasks
ansible playbook verbose mode
ansible-playbook playbook.yml -vvv