Ansible Automation

Author
    Jackson Chen

Ansible Automation

https://www.ansible.com/

Ansible Documentation

https://access.redhat.com/documentation

Ansible collections

#*********** Introducing Ansible
Ansible is a Python program (Python based)

Modules are the workers (about 3,000 modules)

# Input
command input or playbook (YAML, .yml)

Playbook (yaml file)
    Play1
     -> task1   # every task refers to a module; the module does the work
     -> task2   # every task is a Python file
    Play2
     -> task1
     -> task2

a task refers to a module
    input: arguments (also called parameters or options)


# inventory (managed hosts)
    # we define managed hosts in the inventory, which can be a .ini or .yml file
    static, and dynamic inventory
    
    # dynamic inventory -> dynamiclist.py   (python program file)

The default connection is SSH, which connects to the managed hosts

# ansible control node (can be a RHEL system, or a Mac system)

#********* example command
mkdir dir1
mkdir dir1  <------- error, cannot create directory, file exists
mkdir -p dir1   # with "-p" it will run successfully without error
echo $?     # look at the previous return code; if it returns "0", the command was successful
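The same idempotency idea can be sketched in Python (illustrative only; `os.makedirs(..., exist_ok=True)` mirrors `mkdir -p`):

```python
import os
import tempfile

base = tempfile.mkdtemp()
target = os.path.join(base, "dir1")

os.mkdir(target)            # first attempt succeeds, like `mkdir dir1`
try:
    os.mkdir(target)        # second attempt fails: the directory already exists
except FileExistsError:
    print("mkdir: cannot create directory 'dir1': File exists")

# exist_ok=True mirrors `mkdir -p`: no error if the directory is already there
os.makedirs(target, exist_ok=True)
print("success, return code 0")
```

This "make the desired state true without failing if it already holds" behaviour is the same idempotency Ansible modules aim for.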

rpm -q ansible  # verify ansible installation
which ansible   # show the installation path

# type ansible and press Tab; it shows the ansible related commands
ansible
    ansible     ansible-connection  ansible-doc <------------ read ansible module documentation, like a readme
    ansible-inventory   ansible-pull
    ansible-config      ansible-console     ansible-galaxy
    ansible-playbook    ansible-vault

ansible-doc -l  # list all the documented modules
ansible-doc user    # read the documentation for the user module

ansible-doc

https://docs.ansible.com/ansible/latest/cli/ansible-doc.html

Displays information on modules installed in Ansible libraries. It displays a terse listing of plugins and their short descriptions, provides a printout of their DOCUMENTATION strings, and it can create a short “snippet” which can be pasted into a playbook.

ansible-doc is a command-line tool that works like any other Linux command. While creating a playbook, if we are unsure about a module or plugin, we can use ansible-doc to read the documentation of the related plugin or module. This helps us understand what a plugin or module does and the possible ways to use it.

Ansible-doc help

ansible-doc -h
  ansible-doc --help

ansible-doc <module name>   # get information about a module
ansible-doc --type <plugin type>   # get details of a plugin type

Ansible-doc available parameters and options

# Below is the list of available parameters and related options.
"-v" or "--version"
  To display the version of installed Ansible
"-F" or "--list_files"
  To show plugin names and their related source files
"-M" or "--module_path"
  To specify the module library path;
  if multiple, separate them with ':'. Default is '~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules'.
"-j" or "--json"
  To display the output in JSON
"-l" or "--list"
  To display the list of available plugins
"-s" or "--snippet"
  For the specified plugins, display a playbook snippet
"-t" or "--type"
  To choose the plugin type; default is module;
  The available plugin types are
      vars
      strategy
      module
      shell
      netconf
      lookup
      inventory
      httpapi
      connection
      cliconf
      callback
      cache
      become

Ansible-doc usage example

ansible-doc -t connection -l    # get information about connection plugin, and plugin usage
ansible-doc -t connection -s ssh

# get information about snippet
ansible-doc -s reboot     # check the snippet output of the Ansible reboot module

# check the output in JSON format
ansible-doc fetch -j

# Check the installed Ansible package configuration on the controller system
ansible-doc --version

# list all the modules available
ansible-doc -l    # list

# get all the module with source file location
ansible-doc -F

Ansible operation

1. shell    # run shell commands, using the "shell" module.
              It will always run the shell command, even if it has run before

2. command
3. raw      # raw - does not use Python, such as managing switches
# ansible version
ansible --version     # show ansible version

# If you want to create the ansible environment variable
export ANSIBLE_CONFIG=/ANSIBLE/ansible.cfg

# ansible-doc
ansible-doc --list | wc -l    # find the number of ansible modules

ansible-doc  --list   # List ansible modules
  ansible-doc -t <plugin>  --list   
    # Example:  ansible-doc  -t lookup --list   # similar to "man lookup"
ansible-doc  <module-name>    # help about the ansible module, such as setup, user

ansible-doc  yum
    /EXAMPLES         # read about yum module, and search "EXAMPLES"

# ansible-config
ansible-config dump   # List all ansible configuration
ansible-config  list

# ansible commands
ansible <Enter>       # show ansible command options
    # example:  ansible -m setup  localhost   # show ansible FACTS about the localhost

ansible <host-pattern> -m <module> -a "<arguments>"
    # example:  ansible  localhost  -m  user  -a "name=testuser"     # create "testuser"
                ansible  localhost  -m  user  -a "name=testuser  state=absent"    # remove the user
    ansible  localhost  -m copy  
                        -a 'content="Enter system maintenance\n"  dest=/etc/motd'
                        -u devops
                        --become

ansible  --list-hosts  all     # list all managed hosts
ansible  --list-hosts  ungrouped   # list all ungrouped hosts

ansible  localhost  -m  command  -a 'id'  -u  <run-as-user>
ansible  localhost  -m  command  -a 'id'  -u  <run-as-user>  --become     # privilege escalation

# ping managed hosts
ansible  all  -m ping       # will receive "pong" if ping successful

# inventory
ansible-inventory  --graph    # graph ansible inventory
ansible-inventory --graph -i <path-to-inventory-file>
ansible  all  -i  inventory  --list-hosts   # list all managed hosts from the inventory

# ansible-playbook
ansible-playbook  <playbook.yml>   --check    # dry run / check playbook



# an ansible module can be PowerShell, to interact with Windows

ansible is agentless; it is only installed on the ansible control system. No agent is required to be installed on the managed hosts

There are different connection plug-ins for different systems and network devices

# windows managed hosts -> winrm, Powershell


#********* How to install ansible
sudo yum install ansible    # install the ansible package and its dependencies

ansible --version   # show ansible version installed
    ansible 2.8.0       # shows the installed ansible version
    config file = /etc/ansible/ansible.cfg      # using the ansible configuration file

# *********** ansible ad-hoc command
# run the -m "module" on localhost
ansible -m setup localhost | grep ansible_python_version    
    ansible_python_version:  3.6.8      
        Note: It will grep the line contains "ansible_python_version"

#***************** Deploying Ansible
file inventory      # file <filename>

# content of "inventory" file - static inventory file
-----------------------------------------------
servera.lab.test.lab

[webservers]         # grouping the servers together - section
serverb.lab.test.lab        # server[b:c].lab.test.lab
serverc.lab.test.lab

[dbservers]     # grouping
serverd.lab.test.lab

[hypervisors]
192.168.10.[21:29]  # groups of servers with IP address range

# the group "boston" has children; the children are groups, so it is a nested group
[boston:children]   
webservers
dbservers
------------------------------------------------
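The range shorthand used above (`server[b:c]...`, `192.168.10.[21:29]`) can be illustrated with a small Python sketch; `expand_range` is a hypothetical helper, not Ansible's real inventory parser:

```python
import re
import string

def expand_range(pattern):
    """Expand one [start:end] range in an inventory-style host pattern.
    Illustrative sketch only, not Ansible's actual parser."""
    m = re.search(r"\[(\w+):(\w+)\]", pattern)
    if not m:
        return [pattern]
    start, end = m.group(1), m.group(2)
    if start.isdigit():                      # numeric range, e.g. [21:29]
        values = [str(n) for n in range(int(start), int(end) + 1)]
    else:                                    # alphabetic range, e.g. [b:c]
        letters = string.ascii_lowercase
        values = list(letters[letters.index(start):letters.index(end) + 1])
    return [pattern[:m.start()] + v + pattern[m.end():] for v in values]

print(expand_range("server[b:c].lab.test.lab"))
# ['serverb.lab.test.lab', 'serverc.lab.test.lab']
print(len(expand_range("192.168.10.[21:29]")))   # 9 hosts
```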

ansible --list-hosts all        # list all ansible inventory servers

ansible --list-hosts ungrouped      # list inventory server not in group

ansible --list-hosts hypervisors    # list the group members
ansible --list-hosts boston     

Note:
type "ansible" and press Tab, to show all commands related to "ansible"

ansible-inventory [options] [host|group]

ansible-inventory --graph   # list ansible inventory in graph output

tail /etc/ansible/hosts     

# take inventory file and list all the hosts
ansible all -i inventory --list-hosts       

^all^ungrouped      # run the previous command, and replace "all" with "ungrouped"
    ansible ungrouped -i inventory --list-hosts <----------- actual command
^ungrouped^development  # run the previous command, and replace "ungrouped" with "development"
    ansible development -i inventory --list-hosts

#**************** manage ansible configuration file
Note: ansible can be used by normal user, not just root user

# 1. default ansible.cfg
/etc/ansible/ansible.cfg    # default ansible.cfg file location

# 2. non-root user ansible.cfg
~/.ansible.cfg      # a non-root user's ~/.ansible.cfg overrides the default ansible.cfg file

# 3. current directory ansible.cfg
./ansible.cfg
    # an ansible.cfg in the current working directory overrides the user's "~/.ansible.cfg"

# 4. ansible variable
ANSIBLE_CONFIG=
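The four locations above are searched in reverse order and the first match wins. A minimal sketch of that lookup logic (`pick_config` is a hypothetical helper, not Ansible's implementation):

```python
def pick_config(env, cwd_has_cfg=False, home_has_cfg=False):
    """Return the ansible.cfg that would be used: first match wins.
    Sketch of the lookup order only, not Ansible's code."""
    if env.get("ANSIBLE_CONFIG"):            # 4. environment variable (strongest)
        return env["ANSIBLE_CONFIG"]
    if cwd_has_cfg:                          # 3. ./ansible.cfg in the current directory
        return "./ansible.cfg"
    if home_has_cfg:                         # 2. ~/.ansible.cfg of the login user
        return "~/.ansible.cfg"
    return "/etc/ansible/ansible.cfg"        # 1. system-wide default

print(pick_config({}))                                        # /etc/ansible/ansible.cfg
print(pick_config({"ANSIBLE_CONFIG": "/ANSIBLE/ansible.cfg"},
                  cwd_has_cfg=True, home_has_cfg=True))       # /ANSIBLE/ansible.cfg
```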

# verify the ansible installed version, and show the ansible configuration file location
ansible --version   
#--- output of ansible --version
    ansible 2.8.0
    config file = /etc/ansible/ansible.cfg
    configured module search path = ['/root/.ansible/plugins/modules','/usr/share/ansible/plugins/modules']
    ansible python module location = /usr/lib/python3.6/site-packages/ansible
    executable location = /usr/bin/ansible
    python version = 3.6.8

Note:
1. /etc/ansible/ansible.cfg can only be edited by root user

#*** for non root user
ls -la      # run the command as the non-root user
    .ansible.cfg
        # it shows the non-root user has its own ansible.cfg file, which that user has permission to edit

# When the user runs "ansible --version", it will use the non-root user's ansible.cfg in that user's directory
#---------- output of non root user "ansible --version"
    ansible 2.8.0
    config file = /home/student/.ansible.cfg
    configured module search path = ['/home/student/.ansible/plugins/modules','/usr/share/ansible/plugins/modules']
    ansible python module location = /usr/lib/python3.6/site-packages/ansible
    executable location = /usr/bin/ansible
    python version = 3.6.8

#*** preferred method
1. Create a directory, such as "ansible_test", to house the following two files
cd ansible_test/    # navigate to ansible_test directory
ls -l   # list the content of the "ansible_test" directory; there are two ansible files
    ansible.cfg
    inventory

When running "ansible --version", it shows the following content
    ansible 2.8.0
    config file = /home/student/ansible_test/ansible.cfg
    configured module search path = ['/home/student/.ansible/plugins/modules','/usr/share/ansible/plugins/modules']
    ansible python module location = /usr/lib/python3.6/site-packages/ansible
    executable location = /usr/bin/ansible
    python version = 3.6.8

ls -l /ANSIBLE/     # list the content of directory ANSIBLE     <------   /<directory name>/
    ansible.cfg

then, create a variable that points to the above ANSIBLE directory and the ansible configuration file
export ANSIBLE_CONFIG=/ANSIBLE/ansible.cfg

No matter where we are, when we type the command "ansible --version", it will always reference the variable ANSIBLE_CONFIG
    ansible 2.8.0
    config file = /ANSIBLE/ansible.cfg
    configured module search path = ['/home/student/.ansible/plugins/modules','/usr/share/ansible/plugins/modules']
    ansible python module location = /usr/lib/python3.6/site-packages/ansible
    executable location = /usr/bin/ansible
    python version = 3.6.8


ls -l .ansible.cfg
    Note: even if there is a .ansible.cfg in the current login user's directory,
        the ANSIBLE_CONFIG variable takes precedence over the current user's .ansible.cfg

grep ^[^#]  /etc/ansible/ansible.cfg    # grep the lines that do not start with #
    Note: It shows ansible.cfg is an ini file
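Since ansible.cfg is INI format, Python's stdlib `configparser` can read it; a quick sketch with an inline sample (values match the example ansible.cfg later in these notes):

```python
import configparser

cfg_text = """\
[defaults]
inventory = inventory
remote_user = devops

[privilege_escalation]
become = true
become_method = sudo
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)

print(parser.get("defaults", "remote_user"))                 # devops
print(parser.getboolean("privilege_escalation", "become"))   # True
```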

unset ANSIBLE_CONFIG    # remove the global variable ANSIBLE_CONFIG

#********* example of ansible.cfg
[defaults]
inventory = inventory       # inventory = ./inventory   another location for the inventory file
remote_user = devops    # when ansible ssh to the managed host, using "devops" as the remote_user
            # ensure the service account "devops" exists in all managed hosts

[privilege_escalation]
become = true       # using lower case
become_method = sudo
become_user = root
become_ask_pass = false
-------------------------

Under the [defaults] section, "inventory" references an inventory file.
The path is relative; an absolute path would look like /.../inventory.
In the same directory as the ansible.cfg file, there is an "inventory" file.


ansible-inventory --graph   # it will use the current user's ansible inventory file
ansible-inventory --graph -i /etc/ansible/hosts     # it will use the input file at /etc/ansible/hosts

ansible-config list # command that will list the configuration directive/information <-------- good command

# ansible uses the "remote_user" specified in ansible.cfg
    remote_user = root
        # default; it should be configured to use a different user/service account,
          as root quite often does not have ssh permission

# When running commands on the managed host, we need privilege escalation after connecting to the managed host
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False       # we need to install the public key on the managed host
                # the private key stays on the ansible control host

sudo !!     # "!!" refers to the previous command

Note:
True = true = Yes = yes = 1 
    # they all mean true, use the same value declaration in ansible config file for consistency
False = false = No = no = 0 
    # they all mean false
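A small Python stand-in for that truthy/falsy table (illustrative only; Ansible's own boolean coercion lives in its source):

```python
def to_bool(value):
    """Normalize the truthy/falsy spellings listed above (sketch only)."""
    s = str(value).strip().lower()
    if s in {"true", "yes", "1"}:
        return True
    if s in {"false", "no", "0"}:
        return False
    raise ValueError(f"not a recognized boolean: {value!r}")

assert all(to_bool(v) for v in ["True", "true", "Yes", "yes", 1])
assert not any(to_bool(v) for v in ["False", "false", "No", "no", 0])
print("all spellings normalized consistently")
```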

#** To see all the implied/default values that will be used in ansible.cfg
ansible-config dump

# How to use in practice
ssh devops@servera  # using an ssh key, so it does not prompt for the user password
            # deploy/inject the public key to the new servers,
              so no password is required when remotely managing the servers

ansible myself --list-hosts # list hosts in the group called "myself"
                # localhost is in the implicit group called "myself"

 
#********* Running ansible ad hoc commands
vim ansible.cfg
ansible [inventoryhost|inventorygroup]  -m <module>  -a "<parameters|arguments>"
    # the name must exist in the inventory file

ansible system1.lab.test.lab -m command -a date     # -m module, -a argument
# rc=0      # return zero   successful

ansible-doc -l      # list all the modules, about 3,000 modules
ansible-doc -l | wc -l  # count the number of lines, i.e. the number of modules
ansible-doc user    # look at the user module documentation, similar to a man page
    /usr/lib/python3.6/site-packages/ansible/modules/system/user.py
            # this is where the module's python file lives

id <username>   # find out whether the user exist

ansible server1.lab.com -m user -a name=<username>  # create a user <username>; -a passes module arguments
id <username>   # it will then be able to find the user


ansible server1.lab.com -m user -a "name=<username> state=absent"   # remove the user

# raw module - for managed hosts that do not have a shell or python installed

# it will show the list of options/arguments/parameters (the same thing)
# ad hoc command options can override the ansible configuration file entries
ansible <enter>     
    -k  --ask-pass  ask for connection password
    -u  REMOTE_USER --user=REMOTE_USER
    -c  CONNECTION  --connection=CONNECTION


sudo cat /etc/sudoers.d/devops
    devops ALL=(ALL) NOPASSWD: ALL      # output of the sudo command

#*** Example of the inventory file
[control_node]
localhost

[intranetweb]
servera.lab.com
-----------------------------------

#**** Note - output color
# yellow/mustard - change successful
# green - no change required, as the change already exists
# red - failed

ansible all -m ping         # all   - against all the managed hosts in the inventory file
                # -m ping   ping module, different from "ping 1.2.3.4"
    "ping": "pong"     # return result if successful

ansible localhost -m command -a 'id'    # run ansible command against localhost,
                    # -m command, using the command module; (-m command) can be omitted
                    # when "-u <username>" is omitted, it uses the current login user as the remote user
ansible localhost -a 'id'   # works the same as the above command
ansible localhost -m command -a 'id' -u devops  # -u devops, using the remote user (devops)

ansible localhost -m copy -a 'content="Managed by Ansible\n" dest=/etc/motd' -u devops --become     
        # --become, elevation privilege
ansible all -m copy -a 'content="Managed by Ansible\n" dest=/etc/motd' -u devops --become   

# To verify the changes
ansible all -m command -a 'cat /etc/motd' -u devops

# using ansible to install nmap package
ansible server1.lab.com -m yum -a "name=nmap state=latest" # -m yum    using the yum module
    # state=latest   install or upgrade to the latest version if not installed or not the latest

rpm -q nmap # verify nmap installation

#********** YAML is very sensitive to indentation; use spaces, NOT tabs
yaml file as ansible playbook file  # yaml indentation is very important

# using yaml syntax check program to verify the syntax
ansible-playbook about.yml --syntax-check

visual studio code # good editor for yaml

boolean # style guide: name elements consistently
True = true = Yes = yes = 1
False = false = No = no = 0

#******* ansible playbook syntax
# Task should have name
# For every task in the playbook, the ansible control node sends a python script to the managed host to be executed

--- # first line in the yaml file, start of the yaml document, optional
- name:  This is the first play     # - name:   There is a space between "-" and "name", then ":" and then a space
  hosts:  serverq.lab.com        # inventory host, must exist in the inventory file
  tasks:
    
    # - name   two spaces of indentation; consistent indentation is very important
    - name: This is first task
      debug:
      msg:  This is task 1 of play 1    
                    # leave a blank line between tasks, for easier troubleshooting
    - name: This is second task
      debug:
      msg:  This is task 2 of play 1
...                 # The end of the yml file ("..." is the end-of-document marker)
-------------------------------------------------

ansible-playbook about.yml --syntax-check


vim
:help modelines

#*** example of ~/.vimrc    file
autocmd FileType yaml setlocal ai ts=2 sw=2 et nu cuc
autocmd FileType yaml colo desert
-----------------------------------------

#*** Do a dry run of the playbook, similar to PowerShell -WhatIf
ansible-playbook example.yml -C     # capital "C"
    # ansible stops processing when it encounters the 1st error; any task after the error will NOT be executed

#*** How to debug ansible
ansible-playbook example.yml -v     
ansible-playbook example.yml -vv
ansible-playbook example.yml -vvv   # mostly used for basic troubleshooting
ansible-playbook example.yml -vvvv  # up to 4 "v", the highest debug level


#*** example playbook
# There are two plays inside one playbook
---
-  name:  Install and start Apache HTTPD    # using two spaces in between for visibility
   hosts:  web
   tasks:
     -  name:  httpd package is present
        yum:                    # using "yum" module
          name:  httpd              # key - Name, value - httpd
          state:  present

     -  name:  Create firewall rules
        firewalld:
          service:  http
          state:  enabled
          immediate:  true
          permanent:  true 

- name: test connectivity to web servers
  # test the web page from the localhost of ansible control node, not from the web servers.
  hosts: localhost  
            # This is important to ensure firewall is opened
  become:  false
  tasks:

    -  name: Connect to web server
       uri:
         url:  http://servera.lab.com
         return_content:  yes
         status_code:  200
----------------------------------------------------

ansible-playbook playbook.yml --syntax-check    # check the playbook syntax
ansible-playbook playbook.yml       # run the playbook


less <filename>
    gg  navigate to the top of the page
    /^<word>    Search line starts with word
    /^EXAMPLES  show/search examples


#***** example of playbook, single line and multiple lines
---
-  name:  Lines examples
   hosts: server1.lab.com
   tasks:
    
     -  name: lots of lines
        copy:
          content:  |       # "|" literal block: every line break is preserved
            This is line 1
            This is line 2  
          dest:  /var/temp/lots_of_lines

     -  name: one long line
        copy:
          content:  >       # ">" folded block: all lines are merged into one line
            This is line 1
            This is line 1 continue 
          dest:  /var/temp/one_lines
---------------------------------------
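What `|` and `>` produce can be sketched without a YAML parser (a rough approximation; real YAML folding has extra rules for blank lines and indentation):

```python
def literal_block(lines):
    # "|" literal style: every line break is preserved
    return "\n".join(lines) + "\n"

def folded_block(lines):
    # ">" folded style: line breaks are folded into single spaces
    return " ".join(lines) + "\n"

lines = ["This is line 1", "This is line 2"]
print(repr(literal_block(lines)))   # 'This is line 1\nThis is line 2\n'
print(repr(folded_block(lines)))    # 'This is line 1 This is line 2\n'
```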


#**************** Managing variables and facts *******************************
Variable names must begin with a letter and may contain letters, digits, and "_"  # characters such as "." are not valid

# Reference
Documentation - Ansible -> using variables


#**** Example of variable in ansible
# simple_var.yml
#-------------
 - name: Simple variable example
   hosts: dev
   vars:   <---------------- declare variables

     packages:  <------------ variable declaration
       - nmap
       - httpd
       - php
       - mod_php
       - mod_ssl

   tasks:

     - name: Install software
       yum:
         name: "{{ packages }}"     
            # "packages" is variable, refer using inside {{1 space <var-name> 1 space}}, include in " "
         state: present
#----------------------------

rpm -q nmap # verify whether nmap has been installed

# Note: variable can be declared in a variable file
variables have precedence:
    4. inventory group_vars/all
        # weak variable; it comes from the "all" file in the group_vars directory, next to the inventory file
    5. playbook group_vars/all
        # one step stronger than "4"; from the "all" file in the group_vars directory, next to the playbook
    6. inventory group_vars/*
        # one step stronger than "5"; from any file in the group_vars directory, next to the inventory file
    12. play vars
        # the ansible playbook "vars" variable has precedence value "12", stronger than precedence 1 to 11
    ...
    21. Include params
    22. extra vars (always win precedence)
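The "higher-precedence source wins" idea behind this ladder can be sketched as an ordered merge (simplified; real Ansible resolves many more sources and scopes):

```python
# Sources listed weakest-first; merging in order means the last write wins,
# which is how a higher-precedence source (e.g. extra vars) overrides the rest.
sources = [
    ("inventory group_vars/all", {"pkg": "httpd", "port": 80}),   # precedence 4
    ("play vars",                {"port": 8080}),                 # precedence 12
    ("extra vars",               {"pkg": "nginx"}),               # precedence 22
]

merged = {}
for _name, vars_ in sources:
    merged.update(vars_)

print(merged)   # {'pkg': 'nginx', 'port': 8080}
```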


# *** important: keep it simple, consistent

getent passwd <username>    # verify whether <username> exist, get entry

#******* How to use a variable to debug the playbook
# in ansible playbook file .yml

#-----------
 - name: Test connectivity to webservers    <--- play name
   hosts: localhost      <--- apply to localhost
   become: false        <--- no privilege escalation required
   tasks:

     - name: Connect to webserver
       uri:
         url: http://server1.lab.com    <-------- test by accessing the url
         return_content: yes
         status_code: 200
       register: output
            # using the "register" keyword, with "output" as the variable name,
                to capture the task result in the variable "output"

     - name: Show the content of the captured output    
            # create a task to show the content of the captured output
       debug:       <-------- using "debug" module
         var: output     <----- using "var" variable, and the actual variable used is "output"
                <------   var: output.<selected_element>, such as var: output.content

            # alternatively, we can use  
            msg: "{{ output }}" <---- results in more typing, rather than var: output

#--------------------------------------------------------------------------------

# another way to declare the variable in the playbook
    vars:
      web_pkg: httpd
      firewall_pkg: firewalld
      web_service: httpd
      firewall_service: firewalld
      python_pkg: python3-PyMySQL
      rule: http

        tasks:
          - name: Required packages are installed and up to date
            yum:
              name:
                - "{{ web_pkg }}"       <--------- using "-"
                - "{{ firewall_pkg }}"
                - "{{ python_pkg }}"
              state: latest

          - name: The {{ firewall_service }} service is started and enabled
            service:
              name: "{{ firewall_service }}"
              enabled: true
              state: started
#------------------------------------------------------------------------------


#********* Managing secrets

# login to the system using ssh
ssh -o PreferredAuthentications=password fred@servera       
    # it will then prompt for the password, rather than using ssh key

#-------- playbook to create a user with a visible (plain text) password
---
- name: Create user
  hosts: servera.lab.com

  vars:
    username: user1
    password: testing

  tasks:

    - name: Create users
      user:
        name: "{{ username }}"
        password: "{{ password | password_hash('sha512') }}"
        state: present
#-----------------------------------------------------------------
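Why `password_hash` is needed at all: passwords must be stored as salted one-way hashes. The sketch below uses stdlib `hashlib` to show the salting idea only; Ansible's `password_hash('sha512')` actually emits a crypt(3)-style `$6$...` hash, which is a different format:

```python
import hashlib
import os

def salted_sha512(password, salt=None):
    """Illustrative salted digest (NOT the crypt(3) format Ansible emits)."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha512(salt + password.encode()).hexdigest()
    return salt.hex() + "$" + digest

h1 = salted_sha512("testing", b"fixedsalt")
h2 = salted_sha512("testing", b"fixedsalt")
h3 = salted_sha512("testing")              # fresh random salt

assert h1 == h2        # same password + same salt -> same hash
assert h1 != h3        # a different salt changes the hash entirely
print("the password is never stored in clear text")
```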

# To use secret, we create a password file

ansible-vault   # encrypt the file
ansible-vault encrypt <my_variable_file_name>       
    # encrypt the variable file, and prompt for the encrypted password

# To run the playbook with the encrypted password file
ansible-playbook --vault-id @prompt playbook.yml    
    # run ansible playbook and prompt for encrypted password

# how to change the encrypted password
ansible-vault rekey <path/encrypted-file-name>
Vault password: <enter the existing encrypted password>
New Vault password: <enter the new encrypt password>
Confirm New Vault password: <enter the new encrypt password>
Rekey successful


# ansible-vault edit
ansible-vault edit <playbook.yml>   # edit encrypted playbook file
    ansible-vault edit test.yml
Vault password: <enter the vault password>  # it will prompt for the encrypted password

ansible-vault view test.yml # view the encrypted playbook file
Vault password: <enter the vault password>  # it will prompt for the encrypted password
username: user1
pwhash: xxxxxxxxxxxxxx      # password hash


# ********** create user using encrypted variable file
---
- name: Create user account for the systems
  hosts: devservers
  become: true
  remote_user: devops

  vars_files:   <------------- variable files
  - <userinfo>.yml

  tasks:

  - name: Create user from userinfo.yml
    user:
      name: "{{ username }}"
      password: "{{ pwhash }}"
#---------------------------------------------------------

# Method 1 (without using ansible tower)
vi /<path>/vault-pass       # create a vault password file
chmod 0600 vault-pass       # secure it so only the owner can see it
ansible-playbook create-user.yml --vault-password-file=vault-pass   
    # run playbook without manually type encrypted password


#*********** managing facts
# facts are variables gathered from the managed hosts

ansible server1.lab.com -m setup | less
    # the setup module connects to server1 and finds out the variables (facts)

# there are different ansible facts/variables
"ansible_facts":
    ansible_all_ipv4_addresses:
    ansible_cmdline:
    ansible_devices:
        # and more variables; we can use these facts to configure system components when required

#**** How to refer to the facts
---
- name: Display some basic facts
  hosts: server1.lab.com
  tasks:

   - name: Show some properties of {{ inventory_hostname }}
     debug:
       msg: >
         My FQDN is {{ ansible_facts['fqdn'] }}     <----- ansible_facts['<property_name>/key']
         and my default IP address is {{ ansible_facts['default_ipv4']['address'] }}

#-------------------------------------

# You can create your own custom facts
cat /etc/ansible/facts.d/dev.fact   # must be in the facts.d directory, with a .fact file extension

[redhat_training]
environment: dev
ssh_port: 22
root_allowed: yes
groups_allowed: wheel
passwords_allowed: yes

# Some default variables are set automatically; we do not need to declare them
    hostvars, groups, group_names, inventory_hostname

    hostvars    lets you access variables for another host,
            including facts that have been gathered about that host.
            You can access host variables at any point in a playbook.

            {{ hostvars['test.lab.com']['ansible_facts']['distribution'] }}

    groups      is a list of all groups (and hosts) in the inventory.

        {% for host in groups['app_servers'] %}
        {% endfor %}
#---------------------------------------------------------------------
---
- name: Install remote facts
  hosts: webserver

  vars:
    remote_dir: /etc/ansible/facts.d
    facts_file: custom.fact

  tasks:

  - name: Create the remote directory
    file:
      state: directory
      recurse: yes
      path: "{{ remote_dir }}"

  - name: Install the new facts
    copy:
      src: "{{ facts_file }}"
      dest: "{{ remote_dir }}"
#-----------------------------------------------------------------

#******************* chapter 4 - task control

# loops and control
loops using lookup plugin

ansible-doc -t lookup -l
    # list the "lookup" plugins in ansible-doc; "-l" lists them, "-t" is important to specify the plugin type

ansible-doc -t lookup items # read the doc for the "items" lookup plugin

#** Example
---
- name: Create users
  hosts: server1.lab.com
  vars:
    myusers:
      - user1
      - user2

  tasks:

    - name: Create users
      user:
        name: "{{ item }}"  <--- item is the current item in myusers
        state: present
      loop: "{{ myusers }}" <------- loop at the same indentation level as "user"
#-----------------------------------------------------------

#**** Example using loop, and with_dict (lookup dict -    ansible-doc -t lookup dict  )
---
- name: Create users in the appropriate groups
  hosts: all
  tasks:

    - name: Create groups
      group:
        name: "{{ item }}"
      loop:
        - group1
        - group2

    - name: Create users in their appropriate groups
      user:
        name: "{{ item.name }}"
        groups: "{{ item.groups }}"
      with_dict:            <-------------- "with"  begin loop
        - { name: 'user1', groups: 'group1' }
        - { name: 'user2', groups: 'group2' }
#----------------------------------------------------------

# ***** Example of loop using with_nested, and item[x]
  - name: Install required packages
    yum:
        # using yum to install packages; yum accepts and understands multiple items as arguments,
            such as yum install nmap httpd
     name:
       - "{{ '@mariadb' }}"
       - "{{ '@mariadb-client' }}"

  - name: Include the passwords from ansible_vault
    include_vars:
      file: passwords.yml

   - name: Give users access to their databases
     mysql_user:
       name: "{{ item[0] }}"    <------- item[0] refer to 1st entry in item
       priv: "{{ item[1] }}"    <----- item[1] refer to 2nd entry in item
       append_privs: yes
       password: "{{ db_pass }}"    <------- it will use the password variable from passwords.yml
     with_nested:
       - "{{ users }}"      <--- item[0] is users variable, it will loop through the users
       - "{{ category_db }}"    <--- item[1] is category_db variable, it will loop through the category_db
#-----------------------------------------------------------
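`with_nested` walks the cross product of the two lists; `itertools.product` does the same thing in Python (the users/db values here are hypothetical stand-ins for the vaulted variables):

```python
from itertools import product

users = ["user1", "user2"]                 # would come from the users variable
category_db = ["sales_db", "hr_db"]        # would come from category_db

# with_nested iterates every (user, db) pair:
# item[0] is the user, item[1] is the database
pairs = list(product(users, category_db))
for item in pairs:
    print(f"grant {item[1]} to {item[0]}")

print(len(pairs))   # 2 users x 2 dbs = 4 task iterations
```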

#**** Example
  vars:
    source: /Users/user1/dir1
    zipfile: /users/user1/dir1.tar.gz
    zipformat: zip

  tasks:
   
    - name: Create a tar archive of the source directory
      archive:
        format: "{{ zipformat }}"   <--------- refer to variable "zipformat"
        path: "{{ source }}"
        dest: "{{ zipfile }}"
    
     - name: Check to see if the archive exists 
            # this is the task level, name of the task to run
       stat:
         path: "{{ zipfile }}"      <------- use "stat" module to check whether the path exist for the zip file
       register: archive    <--------- store the task information using "register" in variable "archive"
                 Important:   "register" only exists at the "task" level

    - name: Show the archive dictionary
      debug:
        var: archive
                # show the "archive" variable/dictionary content that captures the archive status
                # could use "msg", but then need to use "{{ }}"

    - name: Make sure that the archive exists before proceeding
      assert:      <-------- using the "assert" module to ensure a condition holds
        that: "'zip' in archive.stat.mimetype"
                #  check that "zip" appears in the archive's stat mimetype
#------------------------------------------------------------------------------------------

#************ Example
---
- name: Test variable is defined
  hosts: all
  vars:
    my_service: httpd

  tasks:
    - name: "{{ my_service }} package is defined"
      yum:
        name: "{{ my_service }}"
      when: my_service is defined   
        #  the task to install my_service only when my_service is defined
#--------------------------------------------------

Note:
  ansible_machine == "x86_64"
        # ==  equality test; use " " to enclose "x86_64" so it is the string value, not an integer
  min_memory is defined         # variable exists
  min_memory is not defined     # variable does not exist
  memory_available
        # Boolean variable is true; the values 1, true, yes, Yes evaluate to true
  not memory_available
        # Boolean variable is false; the values 0, false, no evaluate to false
  ansible_distribution in supported_distros
        # First variable's value is present as a value in the second variable's list

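As an illustration of the loose boolean handling described above (a sketch only, not Ansible's actual implementation), in Python:

```python
def truthy(value):
    """Rough sketch of how loose YAML/Ansible-style booleans evaluate.
    Illustration only, not Ansible's real boolean parsing."""
    if isinstance(value, str):
        # "yes", "Yes", "true", "1" and friends evaluate to true
        return value.lower() in ("1", "true", "yes", "y", "on")
    return bool(value)  # 1 -> True, 0 -> False

print(truthy("Yes"))   # True
print(truthy(0))       # False
```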

#*** Example
---
- name: Demonstrate the "in" keyword
  hosts: all
  gather_facts: yes <----------- run fact gathering to collect facts from the managed hosts
  vars:
    supported_distros:
      - RedHat          <------ list items use "-"
      - Fedora

  tasks:
    - name: Install httpd using yum, where supported
      yum:
        name: httpd
        state: present
      when: ansible_distribution in supported_distros   
            # look up "distribution" value is in supported_distros
#---------------------------------------------------------------------------

   when: ansible_distribution_version == "7.5" and ansible_kernel >= "3.10"     # "and", "or"
   when: >                  <------------ ">" folds the following lines into one line, for readability
       ( ansible_distribution == "RedHat" and
         ansible_distribution_major_version == "7" )
       or
       ( ansible_distribution == "Fedora" and
         ansible_distribution_major_version == "28" )

#*** Example
  - name: Install mariadb-server if enough space on root
    yum:
      name: mariadb-server
      state: latest
    loop: "{{ ansible_mounts }}"    
        # ansible_mounts come from facts that collects from the managed hosts
    when: item.mount == "/" and item.size_available > 300000000
#------------------------------------------

   command: /usr/bin/systemctl is-active postfix
   ignore_errors: yes
        # important: without this, ansible stops the playbook when it encounters the error
   register: result     <------ capture the result

 - name: Restart Apache httpd based on Postfix status
   service:
     name: httpd
     state: restarted
   when: result.rc == 0
        # only when the exit code of the systemctl command is "0", i.e. successful
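
The result.rc test above is the same idea as checking $? in the shell. A small Python sketch of the pattern, using the standard subprocess module:

```python
import subprocess
import sys

# Run a command and inspect its exit code, like registering a result
# in Ansible and testing result.rc in a "when" condition
ok = subprocess.run([sys.executable, "-c", "pass"])
bad = subprocess.run([sys.executable, "-c", "import sys; sys.exit(1)"])

print(ok.returncode)    # 0  -> the command succeeded
print(bad.returncode)   # 1  -> the command failed
```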


ansible <inventory_group_name> -m command -a 'cat /etc/redhat-release' -u devops --become


#************* Implementing handlers
#*** handlers - "notify" triggers the handlers

#** Example
---
- name: Setup the support infrastructure to manage system
  hosts: all
  force_handlers: true
        # with "force_handlers: true", notified handlers run even if a later task fails
  tasks:
    - name: Create the user support as a member of the group wheel
      user:
        groups: wheel
        name: support

    - name: Install the sudo configuration which allows passwordless execution of commands as root
      copy:
        src: support.sudo
        dest: /etc/sudoers.d/support

    - name: Install the ssh key
      authorized_key:
        manage_dir: yes
        user: support
        key: "{{ lookup('file', 'id_rsa.pub') }}"

    - name: Limit ssh usage to members of group wheel
      lineinfile:   <---------- call the lineinfile module
        state: present
        dest: /etc/ssh/sshd_config
        line: AllowGroups wheel
                # ensure the line "AllowGroups wheel" is present;
                # if the file changed, restart the ssh daemon
      notify: Restart the ssh daemon
                # notify has the same indentation as the task "name",
                # it calls the handler named "Restart the ssh daemon"

    - name: Disallow password authentication
      lineinfile:
        state: present
        dest: /etc/ssh/sshd_config
        line: PasswordAuthentication no
      notify: Restart the ssh daemon

  handlers:
    # handlers are at the end of the playbook;
    # if any task fails before the handlers, the notified handlers will NOT run
                Note: we can force the handlers to run with "force_handlers: true"

    - name: Restart the ssh daemon
      service:
        name: sshd
        state: restarted

Note: handlers will not be called or run if there is no change.
A handler task runs after all the tasks have successfully run.
      The notify does not immediately execute the handler;
      it runs the handler after all tasks have run.

#************ handling task failure
    ignore_errors: yes
        # use the "ignore_errors" keyword in a task; "yes" is a boolean, can be "true, yes, y, Y"
                    # forces execution to continue after a task failure
                    # at task level

# Normally when a task fails and the play aborts on that host,
any handlers that had been notified by earlier tasks in the play will NOT run.
    force_handlers: yes <-------- force execution of handlers after task failure
        # The notified handlers will run even if the play aborted because a later task failed.
                <----- it is at play level

# failed_when
You can run a script/shell command and mark the task failed when an error message occurs in the output

  tasks:
    - name: Run user creation script
      shell: /usr/local/bin/create_users.sh
      register: command_result      <------------ register the shell output to the "command_result" dictionary
      failed_when: "'Password missing' in command_result.stdout"    
                <-----------  'Password missing' string in the stdout
                # command_result.stdout     
                    # in "command_result" dictionary at the key "stdout"

  # Note: to find out about .stdout and the other entries,
    use the "debug" module to inspect all the output in the registered variable

We could write the ansible playbook with the following code to do the same as above
    tasks:
      - name: Run user creation script
        shell: /usr/local/bin/create_users.sh
        register: command_result
        ignore_errors: yes  <------------- continue to run if there is error

      - name: Report script failure
        fail:
            # call the ansible "fail" module;
            # "ansible-doc fail" to find out more about the "fail" module
          msg: "The password is missing in the output"      
            # Use the "fail" module to provide a clear failure message for the task.
        when: "'Password missing' in command_result.stdout" 
            # This approach enables/allows you to run intermediate tasks to complete or roll back changes

A shell command task normally always reports "changed" when it runs. We can suppress that with
            "changed_when: false"

it will then only report ok or failed

   - name: get kerberos credentials as "admin"
     shell: echo "{{ krb_admin_pass }}" | kinit -f admin
     changed_when: false

The following example uses the "shell" module to report "changed" based on the output of the module that is collected by a registered variable.

   tasks:
     - name: Upgrade database
       shell:
         cmd: /usr/local/bin/upgrade-database
       register: command_result
       changed_when: "'Success' in command_result.stdout"
            # it will only report "changed" when the output contains "Success"
       notify:
         - restart_database

   handlers:
     - name: restart_database
       service:
         name: mariadb
         state: restarted

#*** "block"
block are clauses that logically group tasks, and can be used to control how tasks are executed.
The same condition can be applied to all tasks within the block. 
So, there is no need to repeat the same condtion for every tasks in the block.

   - name: Block example
     hosts: all
     tasks:
       - name: Installing and configuring the Yum versionlock plugin
         block:
           - name: package needed by yum
             yum:
               name: yum-plugin-versionlock
               state: present

           - name: lock the version of tzdata
             lineinfile:
               dest: /etc/yum/pluginconf.d/versionlock.list
               line: tzdata-2016j-1
               state: present
         when: ansible_distribution == "RedHat"
             # both tasks in the block will run when the distribution is RedHat
             # "when" is at the same indentation as "block"

#**** rescue and always
When a task in the block fails, the tasks defined in the "rescue" clause are executed; the "always" clause runs in every case

    tasks:
      - name: Upgrade DB
        block:      <----------- block of task(s)
          - name: Upgrade the database
            shell: 
              cmd: /usr/local/lib/upgrade-database

        rescue:
          - name: Revert the database upgrade
            shell: 
              cmd: /usr/local/lib/revert-database

        always:
          - name: always restart the database
            service:
              name: mariadb
              state: restarted

Note: The "when" condition on a "block" clause also applies to its "rescure" and "always" clauses if present.


#************ Example
 
min_ram_mb: 256

web_service: httpd
web_package: httpd
ssl_package: mod_ssl

fw_service: firewalld
fw_package: firewalld



   vars_files:
     - vars.yml

   tasks:
   # failed fast message
   - name: Show failed system requirement message
     fail:
       msg: "The {{ inventory_hostname }} did not meet minimum requirements."
     when: >
       ansible_memtotal_mb < min_ram_mb or
       ansible_distribution != "RedHat"


   # Block of config tasks
  - name: setting up the SSL cert directory and config files
    block:
      - name: Create SSL cert directory
        file:
          path: "{{ ssl_cert_dir }}"
          state: directory          <-------------- state: directory  "rather than present !!"

      - name: Copy Config Files
        copy:
          src: "{{ item.src }}"
          dest: "{{ item.dest }}"
        loop: "{{ web_config_files }}"
        notify: restart web service

    rescue:

      - name: configuration error message
        debug:
          msg: >
            One or more of the configuration changes failed, but
            the web service is still active.

  # configure the firewall
  - name: ensure web server ports are open
    firewalld:
      service: "{{ item }}"
      immediate: true
      permanent: true
      state: enabled
    loop:
      - http
      - https

   # Add handlers
   handlers:

     - name: restart web service
       service:
         name: "{{ web_service }}"
         state: restarted

# testings
  curl -k -vvv https://server1.lab.com      # "-k" accepts a self-signed ssl certificate


#**************** Deploy files to managed hosts
# commonly used file modules
 blockinfile    # insert, update, or remove a block of multiline text surrounded by customizable marker lines
 copy       # copy a file from the local or remote machine to a location on a managed host; it can set file attributes, including SELinux context
 fetch      # works like the copy module, but in reverse. Fetches files from remote managed hosts to the ansible control node, storing them in a file tree organized by host name
 file       # set attributes, such as permissions, ownership, SELinux context, symlinks, hard links, and directories
            # verify file present or absent
 lineinfile # ensure a particular line is in a file, or replace an existing line using a back-reference regular expression.
            # This module is primarily useful when you want to change a single line in a file
            # verify a line is present or absent
 stat       # retrieve status information for a file, similar to the linux "stat" command
 synchronize    # powerful module, a wrapper around the "rsync" command to make common tasks quick and easy.
            # does not provide the full power of the rsync command
            # you can still call the "rsync" command directly via the "command" module


# Example to create new file
  - name: Touch a file and set permission
    file:           <---------- file module
      path: /path/to/file
      owner: user1
      group: group1
      mode: 0640
      state: touch  <---- touch command


#*** SELinux command
mkdir samba
chcon -t samba_share_t samba/   <------ this SELinux change will survive a reboot, but won't survive a restorecon
ls -ldZ samba/      


# ansible SELinux modules
ansible-doc -l      # /selinux   <--- search SELinux
selinux
selinux_permissive

ansible-doc sefcontext  # show sefcontext 

sefcontext  # manage SELinux file context mapping definitions, similar to the "semanage fcontext" command
        # this is ansible selinux module to use


# copying and editing files on managed hosts - copy
   - name: copy a file to managed hosts
     copy:
       src: file    <------ file on controller node
       dest: /path/to/file  <-------- copy to managed hosts

Note: By default, this module assumes that "force: yes" is set; it overwrites the remote file if it exists
    but contains different content from the file being copied.
    If "force: no" is set, it only copies the file to the managed host if it does not already exist.

# fetch the rsa public key from a managed host to the ansible control node
    fetch:
      src: "/home/{{ user }}/.ssh/id_rsa.pub"
      dest: "files/keys/{{ user }}.pub"


ansible-doc synchronize     # show synchronize in ansible-doc "documentation"

#********* Deploy custom files with jinja2 template
# jinja2 language
# file name:  x.j2  <--------------- reading more !!!!

# Example

   - name: Render the configuration from a j2 template
     template:
       src: vhost.template.j2       # <template-name>.j2   
            # template in ansible controller node
       dest: /etc/httpd/conf.d/{{ ansible_hostname }}.conf
       owner: apache
       group: root
       mode: 0644 <------ file permissions (not SELinux)

    - name: Install sample content
      copy:
        content: "This is {{ ansible_fqdn }}\n"
        dest: /vhost/{{ ansible_fqdn }}/html/index.html
        setype: httpd_sys_content_t        <------- SELinux context type

#*** Jinja2 language is powerful
# loops  - uses "for" statement
   {% for user in users %}  # "users" already defined in the code or as variable
          {{ user }}    <------ display the user name
   {% endfor %}


#**** syntax
   {% statement %}
     body of for loop
   {% endfor %}

# Example
  {% for myuser in users if not myuser == "root" %} <---- "users" is a variable
  User number {{ loop.index }} - {{ myuser }}       <------- output:   User number 1 - user1
  {% endfor %}
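
Jinja2's loop.index is 1-based and counts only the iterations that pass the inline filter. The same logic sketched in Python with enumerate (hypothetical users list):

```python
users = ["root", "user1", "user2"]  # hypothetical users variable

# Mirror "for myuser in users if not myuser == 'root'":
# filter first, then number the remaining items starting at 1,
# which is what Jinja2's loop.index does
filtered = [u for u in users if u != "root"]
for index, myuser in enumerate(filtered, start=1):
    print(f"User number {index} - {myuser}")
```

Note that "root" is skipped entirely, so the numbering starts at 1 with user1 rather than leaving a gap.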


#*** Example -  templates/hosts.j2 template
It constructs the file from all hosts in the group "all"; it iterates over each host in the group and
gets three facts for the /etc/hosts file

 {% for host in groups['all'] %}
 {{ hostvars[host]['ansible_facts']['default_ipv4']['address'] }} {{ hostvars[host]['ansible_facts']['fqdn'] }} {{ hostvars[host]['ansible_facts']['hostname'] }}
 {% endfor %}


#*** Conditional - if statement
  {% if finished %}
   {{ result }}
  {% endif %}

# **** Jinja2 provides filter which change the output format for template expressions, 
# for example to JSON, or to YAML
  {{ output | to_json }}
  {{ output | to_yaml }}

# output in human readable format, using to_nice_json or to_nice_yaml filters
  {{ output | to_nice_json }}
  {{ output | to_nice_yaml }}
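
The to_json / to_nice_json pair maps onto compact versus indented serialization. A Python sketch with the standard json module (hypothetical output dictionary):

```python
import json

# Hypothetical "output" variable being filtered
output = {"service": "httpd", "state": "started"}

# to_json  ~ compact, single-line dump
print(json.dumps(output))

# to_nice_json ~ indented, human-readable dump
print(json.dumps(output, indent=4))
```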



#**** How to customise ansible.cfg file
[defaults]
inventory = inventory   <--------- define the inventory file
ansible_managed = Ansible managed: modified on %Y-%m-%d %H:%M:%S    # define your message

#---------------------------------------------------------------------

# How to find out the ansible collected facts
ansible <server-name> -m setup   <--------- it will show all the facts
ansible <server-name> -m setup  | grep -E 'processor|memtotal'
    #  output the "processor" or "memtotal" lines

# How to display the message or output
   - name: Check file exists
     stat:
       path: /etc/motd
     register: motd

   - name: Display stat results
     debug:
       var: motd

# How to create symbolic link
   - name: Copy custom /etc/issue file
     copy:
       src: files/issue
       dest: /etc/issue
       owner: root
       group: root
       mode: 0644
 
    - name: Ensure /etc/issue.net is a symlink to /etc/issue
      file:
        src: /etc/issue
        dest: /etc/issue.net
        state: link
        owner: root
        group: root
        force: yes

#*********************  Managing complex plays and playbooks / manage large projects
# How to list all the hosts in the inventory
ansible all --list-hosts    
ansible '*' --list-hosts    # '*' need to be in ' ' (single quote, encapsulation in ' ')

ansible '*.test.lab' --list-hosts

ansible 'dev,&webservers' --list-hosts      
    # list hosts in [dev] and [webservers] groups, must be in two groups
ansible 'dev,!webservers' --list-hosts      
    # list host that is part of [dev], but not part of [webservers] group

# How to customize playbook hosts
---
- name: This is an example playbook
  hosts:
    - server1.lab.com
    - test  <----- [test] group servers
    - dev,&boston    <----- servers in both [dev] and [boston] group
    - prod,!webservers    <------- server in [prod], but not in [webservers] group
---
# How to find out if a host in the inventory file
ansible <host-name or IP-address> -i <inventory-file-name> --list-hosts
ansible all -i <inventory-file> --list-hosts    # list all the hosts in the inventory file

# list all the host in test.lab, but not in lab.test.lab in the inventory file
ansible '*.test.lab,!*.lab.test.lab' -i inventory1 --list-hosts
        # inventory file, such as "inventory1"

ansible '172.25.*' -i inventory1 --list-hosts

# list all hosts start with 's', or groups start with 's' but list the members
ansible 's*' -i inventory1 --list-hosts
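
Host patterns behave like shell-style globs with optional exclusions. A rough Python sketch using fnmatch, with hypothetical host names (Ansible's real pattern engine also understands groups, ranges, and regexes):

```python
from fnmatch import fnmatch

# Hypothetical inventory hosts
hosts = ["servera.test.lab", "serverb.test.lab", "db1.lab.test.lab"]

# '*.test.lab' selects by suffix; a '!' pattern then excludes matches,
# sketched here as "matches the include glob and not the exclude glob"
matched = [h for h in hosts
           if fnmatch(h, "*.test.lab") and not fnmatch(h, "*.lab.test.lab")]
print(matched)
```

Here db1.lab.test.lab is selected by the first glob but removed by the exclusion, leaving only the two servers.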


#************* Including and importing files - Better to use "import_"
tasks in different yml files, such as RHEL 7 and RHEL 8 tasks in separate yml files


ansible-doc -l
/import     # search for import module
    import_playbook
    import_role
    import_tasks

there are also include_role, include_tasks
    #  what are the differences between import and include

#-------------------------------------
  - name: include tasks from another file
    include_tasks: rhel8tasks.yml       <------ include_tasks example
#-------------------------------------

The rhel8tasks.yml file is not a playbook; it only includes tasks.

Important:
1. a playbook begins with "hosts", which specifies the managed hosts the tasks will run on
2. The include_tasks entries will NOT be checked by "syntax-check", but
   the import_tasks entries WILL be checked by "syntax-check"
    # Therefore, it is better to use "import_tasks"
3. when you run "ansible-playbook <filename>.yml --list-tasks",
    it will also list the tasks in the "import_tasks" file

#************* analysis
In the "ansible.cfg" file, the entry "inventory = inventory" specifies the inventory directory
"ls inventory" shows there is an "inventory.py" python file; we need to make it executable
    chmod 755 inventory/inventory.py
# To show the list of hosts in the inventory.py in json format
    inventory/inventory.py --list

    ansible server*.lab.com --list-hosts    <---- list all the hosts

Then modify "playbook.yml" for the "hosts" entry to (head -3 playbook.yml)
---
- name: Install and configure web service
  hosts: server*.lab.com
        # The original entries were
            hosts:
              - servera.lab.com
              - serverb.lab.com
              - serverc.lab.com

                        
#*********  Simplifying playbooks with roles
using playbook roles to re-use code

Roles are directories; this skill is learned and improves with time

tree user.example   # list the tree view of the directory

    /vars/main.yml  # internal variables
    /defaults/main.yml  # another location for the variables, default values for the variables
                # allow user to change the values
    /tasks/main.yml


# how to use playbook roles
# this is a playbook
- hosts: remote.example.lab
  roles:
    - role1 # using the roles
    - role2


# we can use playbook handlers, such as pre_tasks, roles, post_tasks
- name: Play to illustrate order of execution
  hosts: remote.example.lab
  pre_tasks:
    - debug:
        msg: 'pre-task'
      notify: my handler
  roles:
    - role1
  tasks:
    - debug:
        msg: 'first task'
      notify: my handler
  post_tasks:
    - debug:
        msg: 'post-task'
      notify: my handler

# RHEL system roles
yum install rhel-system-roles

#***** reusing content with system roles
ansible-galaxy list

    /home/<userid>/role-system/roles
    /usr/share/ansible/roles

# check example
ansible-doc timezone | grep -A 4 "EXAMPLES"


ansible database_servers -m shell -a date   # run ansible for database_servers group

ansible-config dump | grep -i roles # verify where the roles are created
ansible-galaxy      # these are not rhel-shipped roles; they are created by the community
            # it can be used to create role directories
ansible-galaxy init # to start/create your role, include the directories and files


#******* verify variable precedence
There are 22 levels of variable precedence; the higher the number, the higher the precedence


https://galaxy.ansible.com  # community roles, not rhel official support roles


namespace.collection_name # standard/recommended naming
    Example
        redhat.satellite
        community.kubernetes

    https://console.redhat.com  # ---> Collection   # verify Collections available

ansible-config dump | grep -i collections
ansible-galaxy collection install --help | tail -5

ansible-galaxy collection install -r requirements.yml    # -r   requirements file

ansible-galaxy install -r roles/requirements.yml -p roles   # -p  path


#*************  troubleshooting ansible
Using good editor, such as visual studio code, and plug-in


By default, ansible does not store logs, edit ansible.cfg and add log_path
# ansible-config dump | grep -i log # see the log configuration

Then edit ansible.cfg and add the line for log_path
[defaults]
log_path=~/Documents/ansible.log       # add this line; use logrotate to rotate this log file
                             # Note: need to create the file first
                                touch ~/Documents/ansible.log

# debug -v, -vv, -vvv, -vvvv    4 level debug

or, create a variable and keep the remote file in the remote host for debug
export ANSIBLE_KEEP_REMOTE_FILES=1 # Use "export" to create the variable

then, when run the ansible playbook with "-v", it will create a sub-directory under 
    cd .ansible/tmp
for every task that it runs

cat AnsiballZ_setup.py
    # examine this python script for further information about how the ansible python scripts run


# using "debug"
    var
    msg

# verify tasks of playbook
ansible-playbook <filename>.yml --list-tasks    # list playbook tasks

# run playbook step by step
ansible-playbook <filename>.yml --step

# run playbook at particular task
ansible-playbook <filename>.yml --start-at-task="<task name>"

last, use ~/.vimrc  # for ansible indentation
# cat ~/.vimrc
autocmd FileType yaml setlocal ai ts=2 sw=2 et nu cuc
autocmd Filetype yaml colo desert


ansible-playbook --check playbook.yml
    # dry run: reports what would change, without making any changes

using " uri " module to test URL website

"assert" module, checking certain condition is satisfied

using "ping" module, to test the managed hosts, 
    and see whether the managed host can process the ping.py, and response with "pong"


# run the ad hoc command
ansible servera.lab.com -u devops -b -a "head /etc/postfix/main.cf"
    -u devops   # run as user "devops"
    -b      # privilege mode
    -a      # argument, or command to run


#*********************   Automate Linux administration tasks
# yum group install "Development Tools"
- name: Install Development Tools
  yum:
    name: '@Development Tools'      # using "@" for group
    state: present


# yum module install perl:5.26/minimal
- name: Install perl AppStream module
  yum:
    name: '@perl:5.26/minimal'
    state: present

# ansible-doc yum   
    # for additional parameters and playbook examples


# gathering facts about installed packages
The package_facts Ansible module collects the installed package details on the managed hosts,
set the ansible_facts.packages variable with the package details

---
- name: Display installed packages
  hosts: servera.lab.com
  tasks:
    - name: Gather info on installed packages
      package_facts:
        manager: auto

    - name: List installed packages
      debug:
        var: ansible_facts.packages

    - name: Display NetworkManager version
      debug:
        msg: "version {{ ansible_facts.packages['NetworkManager'][0].version }}"
      when: "'NetworkManager' in ansible_facts.packages"


wait_for module
command module
shell module


# verify when the system was last reboot
ansible webservers -u devops -b -a "who -b" # against "webservers"; "who -b"
    # shows when the system was last booted


Ansible - idempotency

More with examples

# *************** Introduction ********************
Ansible controller - Install on Linux (called control node) 
    # Only need to install on Ansible Controller (control node)
    - requires Python package installed

Ansible - agentless

Ansible can manage network switches, Windows, Linux

Ansible playbook
    - yaml  (in YAML format)    .yml
    
    playbook.yml
        play 1
            -task1  (every task is a py script)
            -task2

        play 2
            -task1
            -task2

Note:
Every task refers to a module
    Module
        - parameters    (can be name one of the following)
        - options
        - arguments

Inventory
    - reference a system, or a group of systems
    - static inventory, such as input file contains a list of systems
    - dynamic inventory (python program)  - dyninv.py   (python program will find the systems)

connection plugin   
    (ssh is the connection plugin - Linux, container plugin, microsoft plugin - .Net, winrm or PowerShell, Hypervisor, etc)


inventory host(s) - includes in the inventory file

echo $? (verify the return code, "0" is successful)

# register the machine, then subscribe to enable repo

Redhat CDN - 

rpm -q ansible  # verify whether ansible is installed
which ansible

ansible ENTER   (will list the ansible commands)

#
sudo yum install ansible    (Install Ansible)
ansible --version   # verify the ansible version installed
    It will also show the config file (config file = /etc/ansible/ansible.cfg)

ansible -m setup localhost | grep ansible_python_version
    run ansible using module "setup" on localhost   # ansible -m setup


# *************** Deploying Ansible ********************
Deploying Ansible

Goal:
    - Configure Ansible to manage hosts and run ad hoc Ansible commands

Objectives:
    - Describe Ansible inventory concepts and manage a static invenory file
    - Describe where Ansible configure files are located, how Ansible select them, 
        and edit them to apply changes to default settings
    - Run a single Ansible automation task using an ad hoc command and explain some use cases for ad hoc commands


"inventory" text file - list the servers, contains "sections" describe inventory groups
servera.test.lab

[webservers]
serverb.test.lab        <----- or server[b:d].test.lab
serverc.test.lab
serverd.test.lab

[dbservers]
servere.test.lab

[hypervisors]
172.25.250.[11:19]      <----- range [x:y]  
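
The [x:y] range syntax expands to one host per value, for both numeric and alphabetic ranges. A sketch of that expansion in Python (illustration only, not Ansible's parser; the expand helper is hypothetical):

```python
import string

def expand(prefix, start, end, suffix):
    """Hypothetical sketch of how an inventory range like
    server[b:d].test.lab or 172.25.250.[11:19] expands."""
    if start.isdigit():
        # numeric range, inclusive on both ends
        values = [str(n) for n in range(int(start), int(end) + 1)]
    else:
        # alphabetic range, inclusive on both ends
        letters = string.ascii_lowercase
        values = list(letters[letters.index(start):letters.index(end) + 1])
    return [f"{prefix}{v}{suffix}" for v in values]

print(expand("server", "b", "d", ".test.lab"))
print(expand("172.25.250.", "11", "19", ""))
```

server[b:d].test.lab becomes three hosts (b, c, d), and 172.25.250.[11:19] becomes nine addresses.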


# list all inventory hosts
ansible --list-hosts all    # it will list all the inventory hosts
ansible --list-hosts ungrouped      # list the hosts which are ungrouped, not in groups
ansible --list-hosts <group-name>   # Example,  ansible --list-hosts hypervisors


How to nest groups in the inventory, using ":children"
[boston:children]
webservers
dbservers


ansible <Tab>   # list ansible related commands
ansible-inventory   # 
    ansible-inventory --graph   # show the inventory in graph format, visual format, "@" as group

# Global default ansible inventory file (Not the one that you normally will use)
tail /etc/ansible/hosts     # this is the global ansible hosts file


mkdir deploy-inventory      # create a directory "deploy-inventory"
cd deploy-inventory
vim inventory           # create "inventory" file


ansible all -i inventory --list-hosts   
        # run ansible against all the hosts in the inventory file
        # we are not using the default global inventory file /etc/ansible/hosts
        # The operations that are performing is "list-hosts"

^all^ungrouped      # replace the previous command "all" with "ungrouped"   



# Order of ansible configuration
1. /etc/ansible/ansible.cfg  # 1st, weakest
2. ~/.ansible.cfg   # 2nd - in the user's home directory
3. ./ansible.cfg    # in the current directory; it overrides the previous two ansible configuration files
4. ANSIBLE_CONFIG=      # create a variable; it takes precedence globally
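
The order above can be read as a lookup: ANSIBLE_CONFIG wins, then the current directory, then the home directory, then /etc/ansible/ansible.cfg. A Python sketch of that resolution (pick_config is a hypothetical helper, not Ansible code):

```python
import os

# Weakest-last search order from the note above, checked strongest-first
DEFAULT_PATHS = ["./ansible.cfg",
                 os.path.expanduser("~/.ansible.cfg"),
                 "/etc/ansible/ansible.cfg"]

def pick_config(env, candidates=None):
    """Sketch of Ansible's config lookup order; illustration only."""
    # ANSIBLE_CONFIG, when set, overrides every file on disk
    if env.get("ANSIBLE_CONFIG"):
        return env["ANSIBLE_CONFIG"]
    if candidates is None:
        candidates = DEFAULT_PATHS
    # otherwise the first existing file in priority order wins
    for path in candidates:
        if os.path.exists(path):
            return path
    return None

print(pick_config({"ANSIBLE_CONFIG": "/ANSIBLE/ansible.cfg"}))
```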


It is preferable to create a directory to host the ansible configuration file and the inventory in that directory
mkdir ANSIBLE   # create a directory namely "ANSIBLE"
ls -l /ANSIBLE/
    ansible.cfg
export ANSIBLE_CONFIG=/ANSIBLE/ansible.cfg  # this variable will override any other ansible config file

ls -l .ansible.cfg  # verify the ansible file
Note:  /etc/ansible/ansible.cfg     it has quite some comments and configuration for reference

grep ^[^#]  /etc/ansible/ansible.cfg    # list any line that does not start with "#" comment line

# to remove the global variable ANSIBLE_CONFIG
unset ANSIBLE_CONFIG

Then, run
    ansible --version   # it will now use other ansible config file

# content of /ANSIBLE/ansible.cfg
[defaults]
inventory = inventory       
    # use the inventory file in the same directory as /ANSIBLE/ansible.cfg


When specify the ansible inventory file
ansible-inventory --graph -i /etc/ansible/hosts

ansible-config list # list the ansible configuration options

grep remote_user /etc/ansible/ansible.cfg
    remote_user = root      
        # default remote user is "root", bad practice, should use other user


# How to configure ansible.cfg to use specified remote user
[defaults]
inventory = inventory
remote_user = devops        # use "devops" as remote user


[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false

# !!    repeat the previous command
sudo !!     # run sudo with previous command,  "tail /etc/shadow" as the previous command

# Learning, how to declare True or False, keep consistent, "true", and "false"
True = true = Yes = yes = 1
False = false = No = no = 0

ansible-config list
ansible-config dump # dump the entire configuration

ansible localhost --list-hosts  
        # run ansible on the localhost, and run command "list-hosts"


# ************** Running Commands ********************
Running ad hoc commands

Syntax
ansible <inventory-host or inventory-group> -m <module-name> -a <argument/options/parameters>

Note:
1. inventory-host or inventory-group must exist in the inventory file
2. module-name need to be valid ansible module name
3. argument/options/parameters    need to be valid for the required module

Example
    ansible servera.test.lab -m command -a date 
        # the default module is "command" if no module is specified
    
    # output
    servera.test.lab | CHANGED | rc=0 >>
    Mon Jun 3 15:00:43 EDT 2019         # rc=0   return code 0 means successful


# list all ansible modules
ansible-doc -l  # about 3,000 modules
ansible-doc <module-name>
    Example:
        ansible-doc user    # show ansible module "user", similar to "man"
    ansible-doc debug   # show ansible module "debug"

id testuser # verify whether testuser exists

ansible servera.test.lab -m user -a name=testuser   # create user on servera
ansible servera.test.lab -m user -a "name=testuser state=absent"        # remove the testuser

Note:
1. If output is green, the command ran successfully, but no change was made
2. If output is yellow, the command ran successfully, and a change was made
3. If output is red, an error occurred


ansible-doc -l | wc -l      # list number of ansible modules
2834        



# verify sudoers
sudo cat /etc/sudoers.d/devops
devops ALL=(ALL)  NOPASSWD: ALL


ansible all -m ping # run ping module on all inventory hosts
        # return 
            "changed": false,
            "ping": "pong"

ansible localhost -m command -a 'id'
ansible localhost -a 'id'
ansible localhost -m command -a 'id' -u devops  # using devops to run the "id" command
ansible localhost -m copy -a 'content="Managed by Ansible\n" dest=/etc/motd' -u devops
ansible localhost -m copy -a 'content="Managed by Ansible\n" dest=/etc/motd' -u devops --become     
            # if privilege escalation is required


# run ansible on all inventory hosts
ansible all -m copy -a 'content="Managed by Ansible\n" dest=/etc/motd' -u devops --become   
ansible all -m command -a 'cat /etc/motd' -u devops


# ***************  Writing and running playbooks *******************
rpm -q nmap     # check whether nmap is installed

cat ansible.cfg     # verify ansible config file

ansible servera.test.lab -m yum -a "name=nmap state=latest" 
    # install the latest version of nmap
    # if nmap is already installed, update it to the latest version from the repository

# in the vi editor, use the following command to make whitespace (indentation) visible
    :set list

# YAML (like Python) is indentation sensitive; do not use TAB, use spaces


# example of a yaml file
---
- english:

    - numbers:
        1: one
        2: two

    - words:
        one: 1
        two: 2
---

# run ansible playbook
ansible-playbook test.yml --syntax-check    
        # check ansible playbook file (yaml file) for syntax

visual studio code (yml editor)

# ensure ansible playbook readability
a. style guide: consistent booleans, "true" and "false"
b. blank line between tasks
c. every task should have a "- name:" description

# Example of ansible playbook
---
- name: This is the first play
  hosts: servera.test.lab
  tasks:

    - name: This is first task
      debug:
        msg: This is task 1 of play 1

    - name: This is second task
      debug:
        msg: This is task 2 of play 1

- name: This is second play
  hosts: localhost
  tasks:

    - name: This is first task (of second play)
      debug:
        msg: This is task 1 of play 2

    - name: This is second task (of second play)
      debug:
        msg: This is task 2 of play 2
---

vim ~/.vimrc
# Example of yaml color code
autocmd FileType yaml setlocal ai ts=2 sw=2 et nu cuc
autocmd FileType yaml colo desert


# How to dry run ansible playbook
ansible-playbook playbook.yml -C    # -C "uppercase" C for dry run

# Important: 
1. when ansible-playbook encounters the first error,
    it stops there, and processes no further
2. For troubleshooting and debugging a playbook, use verbose mode
    -vvv    enough verbose information
    -vvvv   too much verbose information
3. Every task calls a module, with parameters. Every task generates a python script


# HTTP Site ansible playbook
---
- name: Install and start Apache HTTPD
  hosts: web
  tasks:
    - name: httpd package is present
      yum:
        name: httpd
        state: present

    - name: correct index.html is present
      copy:
        src: files/index.html
            # the "files" directory is expected on the ansible control node
        dest: /var/www/html/index.html

    - name: httpd is started
      service:
        name: httpd
        state: started
        enabled: true

    - name: Create firewall rules
      firewalld:
        service: http
        state: enabled
        immediate: true
        permanent: true

- name: Test connectivity to web servers
  hosts: localhost
  become: false
  tasks:

    - name: Connect to web server
      uri:
        url: http://servera.test.lab
        return_content: yes
---


Alternative way to create a new index.html
    - name: Install sample content
      copy:
        dest: /var/www/html/index.html
        content: "Hello World\n"

# Note
When the ansible playbook runs, an implied task named "Gathering Facts" runs first
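If facts are not needed, fact gathering can be disabled at the play level to speed up the run; a minimal sketch (host name illustrative):

```yaml
---
- name: Play that skips the implied "Gathering Facts" task
  hosts: servera.test.lab
  gather_facts: false   # skip the setup module; ansible_* facts will be undefined
  tasks:

    - name: Quick reachability check
      ping:
```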

# test website
curl servera.test.lab

# If the site is https, then using openssl for site testing
openssl s_client -connect servera.test.lab:443
        Syntax:  openssl s_client -connect <hostname>:<port>
             openssl s_client -connect <hostname>:<port> -showcerts
        To view the complete list of s_client options: openssl s_client -help


# ********* Implementing Multiple Plays ************

---
- name: Idempotent
  hosts: servera.test.lab
  tasks:

    - name: Good example of module usage
      copy:
        content: nameserver 8.8.8.8
        dest: /etc/resolv.conf


# Example of how to create file with lots of lines, and a single line
# "|" preserves line breaks
# ">" folds the lines into one line

---
- name: Lines example
  hosts: servera.test.lab
  tasks:

    - name: lots of lines    <--------- Name can have spaces
      copy:
        content: |   <--------- pipe character "|" preserves line breaks
          This is line 1
          This is line 2
          This is line 3
        dest: /var/tmp/lots_of_lines

    - name: one long line
      copy:
        content: > <---------- ">" folds the lines into one long line
          This is one really long line
          that looks like it may be formatted as many lines
          but because of the > character, it will be collapsed into one line
        dest: /var/tmp/one_line
---

---
- name: Enable intranet services
  hosts: servera.test.lab
  become: true   <------ this is set at the play (global) level
  tasks:

    - name: latest version of httpd and firewalld installed
      yum:
        name:        <---- name is a key, and it has two values: httpd, firewalld
          - httpd
          - firewalld
        state: latest

    - name: test html page is installed
      copy:
        content: "Welcome to the test.lab intranet!\n"
        dest: /var/www/html/index.html

    - name: firewalld enabled and running
      service:
        name: firewalld
        enabled: true
        state: started

    - name: firewalld permits access to httpd service
      firewalld:        <-------- firewalld module has four (4) arguments
        service: http
        permanent: true <------ keep consistent "true" as the boolean value
        state: enabled
        immediate: true <------ keep consistent "true" as the boolean value

    - name: httpd enabled and running
      service:
        name: httpd
        enabled: true
        state: started

- name: Test intranet web server  
        # up / top level, this a play level, not a task
  hosts: localhost
  become: no
  tasks:

    - name: connect to intranet web server
      uri:
        url: http://servera.test.lab
        return_content: yes
        status_code: 200
---


# do syntax check
ansible-playbook intranet.yml --syntax-check
ansible-playbook intranet.yml -v    (or -vvv)


---
- name: Enable internet services
  hosts: serverb.test.lab
  become: true
  tasks:

    - name: latest version of all required packages installed
      yum:
        name:
          - firewalld
          - httpd
          - mariadb-server
          - php
          - php-mysqlnd
        state: latest

    - name: firewalld enabled and running
      service:
        name: firewalld
        enabled: true
        state: started

    - name: firewalld permits http service
      firewalld:
        service: http
        permanent: true
        state: enabled
        immediate: true

    - name: httpd enabled and running
      service:
        name: httpd
        enabled: true
        state: started

    - name: mariadb enabled and running
      service:
        name: mariadb
        enabled: true
        state: started

    - name: test php page is installed
      get_url:
        url: http://materials.test.lab/labs/index.php  # download the php file to serverb
        dest: /var/www/html/index.php
        mode: '0644'

- name: Test intranet web server
  hosts: localhost
  become: false
  tasks:    <---------- tasks is a key word at play level

    - name: connect to internet web server
      uri:
        url: http://serverb.test.lab
        status_code: 200
---

# ***************  Managing variables and facts *****************
# variable
1. must start with a letter, not a digit
2. can contain underscores "_"

https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html
It shows the variable precedence levels, where 1 is the weakest and 22 is the strongest
The strongest always wins (overrides any weaker)


# variable playbook yaml file - playbook.yml

---
- name: Simple variable example
  hosts: dev
  vars:

    packages:
      - nmap
      - httpd
      - php
      - mod_php
      - mod_ssl

  tasks:

    - name: Install software
      yum:
        name: "{{ packages }}"
            # best practice is to have a space between {{ }} and the variable name
        state: present
---

Note:
variables declared under "vars" in a play are "play vars", precedence level 12



# Using variable outside playbook, in variable file
---
- name: Simple variables example
  hosts: all

  tasks:

    - name: Install software
      yum: 
        name: "{{ packages }}"
        state: present
---

Note: In the ansible playbook directory, variable files can live in sub-directories, such as
    group_vars

cat group_vars/dev
packages:
  - httpd
  - php
  - mod_ssl
  - mod_php

cat group_vars/all
packages:
  - nmap

The packages can also be declared per host in a host_vars file
cat host_vars/serverb.test.lab
packages:
  - sysstat


Where, the inventory file has inventory groups:
[dev]
servera.test.lab
serverb.test.lab

[prod]
serverc.test.lab
serverd.test.lab

[boston:children]
dev
prod


Question:
When, apply the ansible playbook, to install the packages, which packages will be installed?
Answer: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html
    Look at the variable applying order, and see which one has higher precedence

1. command line values (eg "-u user")   <-- weakest
2. role defaults
3. inventory file or script group vars
4. inventory group_vars/all <----- the "all" file inside "group_vars" inventory directory
5. playbook group_vars/all
6. inventory group_vars/*
7. playbook group_vars/*
8. inventory file or script host vars
9. inventory host_vars/*
10. playbook host_vars/*
11. host facts / cached set_facts
12. play vars   <-------------------------- when declare in playbook, normally used
13. play vars_prompt
14. play vars_files
...
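Applied to the inventory and variable files above, a sketch of how precedence resolves (assuming only those three variable files exist):

```yaml
# group_vars/all              ->  packages: [nmap]                           (level 4)
# group_vars/dev              ->  packages: [httpd, php, mod_ssl, mod_php]   (level 6)
# host_vars/serverb.test.lab  ->  packages: [sysstat]                        (level 9)
#
# For serverb.test.lab, the host_vars definition wins,
# so "{{ packages }}" expands to [sysstat].
# For servera.test.lab (no host_vars file), group_vars/dev wins.
```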


# Array playbook example

---
- name: Array example
  hosts: servera.test.lab
  vars:     <------------ precedence 12

    users:
      test01:
        uname: test01
        fname: Tester
        lname: test01
        home: /home/test01
        shell: /bin/zsh

      test02:
        uname: test02
        fname: Tester
        lname: test02
        home: /home/test02
        shell: /bin/test02

  tasks:

    - name: Create a user from an array
      user:
        name: "{{ users.test01.uname }}"
        comment: "{{ users.test01.fname }} {{ users.test01.lname }}"
        shell: "{{ users.test01.shell }}"
        state: present
---

# How to verify the user is present/created
getent passwd test01    # getent  - get entries from all users available on the system


# How to debug ansible, by using the "register" keyword to save a task's result in an "output" variable

---
- name: Test connectivity to web servers
  hosts: localhost
  become: false
  tasks:
 
   - name: Connect to web server
     uri:
       url: http://servera.test.lab
       return_content: true
       status_code: 200
     register: output   <--------- "register" is a task keyword, and "output" is a variable
                                    store the task result in the "output" variable

   - name: Show the content of the captured output
     debug: <------------- using "debug" module
       var: output  <----------- using "var" to call the "output"
---

   Alternatively, we can use "msg"; then we need {{ }} to refer to the variable
    msg: "{{ output }}"

When running the playbook
    ansible-playbook webserver.yml

It will show "output" variable and all the information. There are many fields, it will show the entire dictionary.

If we only want to show output.content, then 
   - name: Show the content of the captured output
     debug: <------------- using "debug" module
       var: output.content  <----------- using "var" to call the "output", and show only output.content, 
                         rather than the entire dictionary


# Using variable in playbook
---
- name: Deploy and start Apache HTTPD service
  hosts: webserver
  vars:
    web_pkg: httpd
    firewall_pkg: firewalld
    web_service: httpd
    firewall_service: firewalld
    python_pkg: python3-PyMySQL
    rule: http
    web_page: '/var/www/html/index.html'
    web_content: 'Example web content'
    url_link: 'http://servera.test.lab'

  tasks:
    - name: Required packages are installed and up to date
      yum:
        name:
          - "{{ web_pkg }}"
          - "{{ firewall_pkg }}"
          - "{{ python_pkg }}"
        state: latest      <----- default is "present"; we use "latest"

    - name: The {{ firewall_service }} service is started and enabled
      service:
        name: "{{ firewall_service }}"
        enabled: true
        state: started

    - name: The {{ web_service }} service is started and enabled
      service:
        name: "{{ web_service }}"
        enabled: true
        state: started

    - name: Web content is in place
      copy:
        content: "{{ web_content }}"
        dest: "{{ web_page }}"

    - name: The firewall port for {{ rule }} is open
      firewalld:
        service: "{{ rule }}"
        permanent: true
        immediate: true
        state: enabled

- name: Verify the Apache service   <---------- 2nd play
  hosts: localhost
  become: false
  vars:
    url_link: 'http://servera.test.lab'
        # play vars do not carry over between plays, so url_link must be defined here too

  tasks:
    - name: Ensure the webserver is reachable
      uri:
        url: "{{ url_link }}"
        status_code: 200
---

# ********* Managing Secrets *******************

---
- name: Create our users
  hosts: servera.test.lab

  vars:
    username: test01
    password: Password01    <--------- credential in the playbook, not good practice

  tasks:

    - name: Create users
      user:
        name: "{{ username }}"
        password: "{{ password | password_hash('sha512') }}"
        state: present
---

# Using ansible vault to store credential   <-------- ansible vault
# In ansible directory, create my_variables file with the credential
cat my_variables
---
username: test01
password: Password01
---

# How to use ansible vault  (default it is using AES256 encryption)
ansible-vault encrypt my_variables
New Vault password:
Confirm New Vault password:
Encryption successful

Now, when you run "cat my_variables", it will show the encrypted content of the file

# How to view the content of the ansible vault encrypted file
ansible-vault view my_variables
Vault password: 
    # after successfully entering the vault password, the decrypted content of the file is shown

# How to update/edit the encrypted ansible vault file content, using "edit"
ansible-vault edit my_variables     # Alternatively to run the previous command -  ^view^edit
                    # replace "view" with "edit"


# How to use the ansible vault file - one of the methods
mv my_variables vault   # rename the filename to "vault"
mkdir host_vars
mkdir host_vars/servera.test.lab
mv vault host_vars/servera.test.lab # Move the "vault" file to new location

tree host_vars/     # view directory structure

# How to run the ansible playbook with vault file and prompt for the vault credential
ansible-playbook --vault-id @prompt playbook.yml

# How to change the ansible vault file password, using "rekey"
ansible-vault rekey host_vars/servera.test.lab/vault
Vault password:
New Vault password:
Confirm New Vault password:


# ***************** Implementing Task Control ***************
Implementing loops
Implementing a task that runs only when another task changes the managed host
Controlling what happens when a task fails, and what conditions cause a task to fail

# ansible lookup plugin - lookup facilitates loops
ansible-doc -t lookup -l    # show the list of lookup plugins
                # "t"  type of plugin

ansible-doc -t lookup items     # look at the "items" lookup plugin

# Example to see documentation about lookup "dict" (dictionary)
ansible-doc -t lookup dict
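A minimal sketch of calling a lookup plugin from a playbook (the env lookup is a standard plugin; the message is illustrative):

```yaml
---
- name: Lookup plugin example
  hosts: localhost
  tasks:

    - name: Show the control node's HOME directory via the env lookup
      debug:
        msg: "Home is {{ lookup('env', 'HOME') }}"
```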


# Create users using loop
---
- name: Create users
  hosts: servera.test.lab
  vars:
    myusers:        # myusers is a key with multiple values
      - test01
      - test02

  tasks:
    
    - name: Create users
      user:
        name: "{{ item }}"
        state: present
      loop: "{{ myusers }}"  <---- loop is at task level
---

# list the users
tail /etc/passwd


---
- name: Create users in the appropriate groups
  hosts: all
  tasks:

    - name: Create groups
      group:
        name: "{{ item }}"
      loop:
        - group1
        - group2

    - name: Create users in their appropriate groups
      user:
        name: "{{ item.name }}"
        groups: "{{ item.groups }}"
      loop:       <--------------- looping over a list of dictionaries
        - { name: 'test01', groups: 'group1' }
        - { name: 'test02', groups: 'group2' }
---


---
- name: Create the users, and give them access to their databases
  hosts: all
  vars:

    users:
      - test01
      - test02

    category_db:
      - db01
      - db02
      - db03

  tasks:

    - name: Create the users
      user:
        name: "{{ item }}"
      loop: "{{ users }}"

    - name: Install mariadb-server
      yum:         # yum install -y package1 && yum install -y package2,
                   # but we can use "yum install -y package1 package2"
        name: "{{ item }}"
      loop:
        - "{{ '@mariadb' }}"
        - "{{ '@mariadb-client' }}"

    - name: Start and enable the mariadb
      service:
        name: mariadb
        state: started
        enabled: true

    - name: Include the password from ansible vault
      include_vars:
        file: passwords.yml

    - name: Give the users access to their database
      mysql_user:
        name: "{{ item[0] }}"
        priv: "{{ item[1] }}.*:ALL"
        append_privs: true
        password: "{{ db_pass }}"
      with_nested:
        - "{{ users }}"
        - "{{ category_db }}"

---

# better style for yum install
    - name: Install mariadb-server
      yum:
        name:
          - "{{ '@mariadb' }}"
          - "{{ '@mariadb-client' }}"


# use "register" to troubleshoot loops; it stores the per-item results in a dictionary you can output
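A sketch of registering a loop's results, assuming a myusers list as in the earlier example; with a loop, the registered variable contains a results list, one dictionary per item:

```yaml
    - name: Check each user account
      command: id {{ item }}
      loop: "{{ myusers }}"
      register: id_output
      changed_when: false

    - name: Show the per-item results
      debug:
        var: id_output.results
```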

# **** Check to see whether the task has been successful
---
- name: Create and verify a zip file
  hosts: localhost
  become: false

  vars:
    source: /tmp/testing/scratch
    zipfile: /tmp/testing/scratch.zip   # extension matches the "zip" format below
    zipformat: zip

  tasks:
   
    - name: Create an archive of the source directory
      archive:
        format: "{{ zipformat }}"
        path: "{{ source }}"
        dest: "{{ zipfile }}"

    - name: Check to see if the archive exists
      stat:
        path: "{{ zipfile }}"
      register: archive 
            # store the information/result/output in a register variable/dictionary namely "archive"

    - name: Show the archive dictionary
      debug:
        var: archive

    - name: Make sure that the archive exists before proceeding
      assert:
        that: "'zip' in archive.stat.mimetype"


# ****** Conditional Task Syntax
The "when" statement is used to run a task conditionally.

---
- name: Simple boolean task
  hosts: all
  vars:
    run_task_when_true: true

  tasks:
    - name: httpd package is installed
      yum:
        name: httpd
      when: run_task_when_true       <------ when at the task level


---
- name: Run the task when the variable is defined
  hosts: all
  vars:
    required_service: httpd

  tasks:
    - name: "{{ required_service }} package is installed"
      yum:
        name: "{{ required_service }}"
      when: required_service is defined


# Example of the conditionals
Operation                        Example
------------------------------------------------------------------
Equal (value is string)          ansible_machine == "x86_64"
Equal (value is numeric)         max_memory == 512
Less than                        min_memory < 128
Greater than                     min_memory > 64
Not equal to                     min_memory != 64
Variable exists                  min_memory is defined
Variable does not exist          min_memory is not defined
Boolean variable is true (1)     memory_available
Boolean variable is false (0)    not memory_available
First variable's value is        ansible_distribution in supported_distros
  present as a value in the
  second variable's list


Note:
The managed host setup module creates variables, such as ansible_distribution
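As a sketch, a gathered fact can then be referenced in a task like any other variable:

```yaml
    - name: Show which distribution the managed host runs
      debug:
        msg: "{{ ansible_distribution }} {{ ansible_distribution_version }}"
```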


---
- name: Demonstrate the "in" keyword
  hosts: all
  gather_facts: true
    # run the setup module to collect the facts
  vars:
    supported_distros:
      - RedHat
      - Fedora

  tasks:
    - name: Install httpd using yum on supported distributions
      yum:
        name: httpd
        state: present
      when: ansible_distribution in supported_distros


Example
    ansible_distribution == "RedHat" or ansible_distribution == "Fedora"
    ansible_distribution_version == "8.0" and ansible_kernel == "xxxxxxx"

or
   when:
     - ansible_distribution_version == "8.0"
     - ansible_kernel == "xxxxxxx"


   when: >
     ( ansible_distribution == "RedHat" and
       ansible_distribution_major_version == "7" )
     or
     ( ansible_distribution == "Fedora" and
       ansible_distribution_major_version == "28" )


---
- name: Install mariadb-server if enough space on root
  yum:
    name: mariadb-server
    state: latest
  loop: "{{ ansible_mounts }}"            <--------- ansible_mounts facts dictionaries
  when: item.mount == "/" and item.size_available > 300000000   <------- 300MB

# use the setup module to see all the facts collected from a managed host
ansible servera.test.lab -m setup | less

---
- name: Restart httpd if postfix is running
  hosts: all
  
  tasks:
    - name: Get postfix server status
      command: /usr/bin/systemctl is-active postfix
      ignore_errors: true
      register: result

    - name: Restart Apache httpd based on postfix status
      service:
        name: httpd
        state: restarted
      when: result.rc == 0

# To see more information, add a "debug" task after registering the result
    - name: Get postfix server status
      command: /usr/bin/systemctl is-active postfix
      ignore_errors: true
      register: result

    - name: Show the registered result
      debug:
        var: result  <------- check "result" for detailed information



# ********* Implementing Handlers

A good playbook uses "copy" rather than "echo" to update/write content to a file.
If using echo, every time the playbook runs, it will report a "changed" state.
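A sketch of the difference (path and content illustrative):

```yaml
    # idempotent: reports "changed" only when the file content actually differs
    - name: Update motd with copy
      copy:
        content: "Managed by Ansible\n"
        dest: /etc/motd

    # not idempotent: the command runs, and reports "changed", on every play run
    - name: Update motd with echo
      shell: echo "Managed by Ansible" > /etc/motd
```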

Handlers run at the end of the tasks. If a task fails, the handlers won't run.
But with "force_handlers: true", handlers will run even if a task fails.

---
- name: Setup the support infrastructure to manage systems
  hosts: all
  force_handlers: true

  tasks:
    - name: Create the user support as a member of the group wheel
      user:
        groups: wheel
        name: support

    - name: Install the sudo configuration which allows passwordless execution of commands as root
      copy:
        src: support.sudo
        dest: /etc/sudoers.d/support

    - name: Install the ssh key
      authorized_key:
        manage_dir: true
        user: support
        key: "{{ lookup('file', 'id_rsa.pub') }}"

    - name: Limit ssh usage to members of the group wheel
      lineinfile:
        state: present
        dest: /etc/ssh/sshd_config
        line: AllowGroups wheel
      notify: Restart the ssh daemon

    - name: Disallow password authentication
      lineinfile:
        state: present
        dest: /etc/ssh/sshd_config
        line: PasswordAuthentication no
      notify: Restart the ssh daemon

  handlers:

    - name: Restart the ssh daemon
      service:
        name: sshd
        state: restarted

# ********** Handle task failure
1. By default, if a task fails, the play stops.
However, this can be overridden with
    ignore_errors: true
        # This is at the task indentation level (Note: it can also be set at play level)

2. Forcing execution of handlers after task failure
Normally when a task fails and the play aborts on that host,
    any handlers that have been notified by earlier tasks
in the play will not run. If you set the
    force_handlers: true    keyword on the play,
then notified handlers are called even if the play aborted because a later task failed.

---
- hosts: all
  force_handlers: yes   
        # this is at the play level, you could also "ignore_errors: yes"

  tasks:
    - name: A task that always notifies its handler
      command: /bin/true
      notify: restart the database


3. Specifying task failure conditions
Using "failed_when" keyword on a task to specify which conditions indicate that the task has failed.
This is often used with command modules that may successfully execute a command, 
    but the command's output indicates a failure.

  tasks:
    - name: Run user creation script
      shell: /usr/local/bin/create_users.sh
      register: command_result
      failed_when: "'Password missing' in command_result.stdout"

# ansible "fail" module
ansible-doc fail    # to see more information

  tasks:
    - name: Run user creation script
      shell: /usr/local/bin/create_users.sh
      register: command_result
      ignore_errors: true

    - name: Report script failure
      fail:
        msg: "The password is missing in the output"
      when: "'Password missing' in command_result.stdout"

Use the fail module to provide a clear failure message for the task.
    This enables delayed failure, allowing
intermediate tasks to complete or changes to be rolled back.



4. Using changed_when
Normally, the shell module reports "changed" every time it runs;
    suppress that, so it only reports "ok" or "failed"

 - name: Get kerberos credentials as "admin"
   shell: echo "{{ krb_admin_pass }}" | kinit -f admin
   changed_when: false

report changed based on the output of the module that is collected by a registered variable

 tasks:
   - name: Report success when upgrading the database
     shell:
       cmd: /usr/local/bin/upgrade-database
     register: command_result
     changed_when: "'Success' in command_result.stdout"
     notify:
       - restart_database
            # this refers to a handler by name; names can contain spaces

 handlers:
   - name: restart_database
     service:
       name: mariadb
       state: restarted

# *** ansible block
5. Ansible blocks and error handling
In a playbook, blocks are clauses that logically group tasks,
    and can be used to control how tasks are executed.

 - name: Block example
   hosts: all

   tasks:
    
     - name: Installing and configuring yum versionlock plugin
       block:   <------- using ansible block
         - name: Package needed by yum
           yum:
             name: yum-plugin-versionlock
             state: present

         - name: Lock version of tzdata
           lineinfile:
             dest: /etc/yum/pluginconf.d/versionlock.list
             line: tzdata-2016j-1
             state: present
       when: ansible_distribution == "RedHat"
            # same conditional will apply to both tasks in the block

Note:
Without a block, we would need two conditionals, one for each task
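For comparison, a sketch of the same two tasks without a block, each needing its own conditional:

```yaml
     - name: Package needed by yum
       yum:
         name: yum-plugin-versionlock
         state: present
       when: ansible_distribution == "RedHat"

     - name: Lock version of tzdata
       lineinfile:
         dest: /etc/yum/pluginconf.d/versionlock.list
         line: tzdata-2016j-1
         state: present
       when: ansible_distribution == "RedHat"
```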


6. If the tasks in the block clause fail, tasks defined in the "rescue" and "always" clauses are executed.
  
  tasks:
    - name: Upgrade database
      block:
        - name: Upgrade the database
          shell:
            cmd: /usr/local/lib/upgrade-database
      rescue:
        - name: Revert the database upgrade
          shell:
            cmd: /usr/local/lib/revert-database
      always:
        - name: Always restart the database
          service:
            name: mariadb
            state: restarted

Note: The "when" condition on a block clause also applies to its rescue and always clauses if present.
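A minimal sketch: the block's "when" gates the rescue and always tasks too, so on a non-RedHat host none of these tasks run:

```yaml
  tasks:
    - name: Conditional block
      block:
        - name: Runs only on RedHat hosts
          debug:
            msg: in block
      rescue:
        - name: Also skipped entirely on non-RedHat hosts
          debug:
            msg: in rescue
      always:
        - name: Also skipped entirely on non-RedHat hosts
          debug:
            msg: in always
      when: ansible_distribution == "RedHat"
```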


---
- name: Task failure exercise
  hosts: databases
  vars:
    web_package: httpd
    db_package: mariadb-server
    db_service: mariadb

  tasks:
    - name: Check local time
            # this runs before the "block", and results in a "changed" state
      command: date
      register: command_result

    - name: Print local time
      debug:
        var: command_result.stdout     

*****************************************
Better to use the following so it does not report a "changed" state unless it fails
  tasks:
    - name: Check local time
        # with changed_when: false, this no longer reports a "changed" state
      command: date
      register: command_result
      changed_when: false

    - name: Print local time
      debug:
        var: command_result.stdout 

********************************************

    - name: Attempt to set up a webserver
      block:
        - name: Install {{ web_package }} package
          yum:
            name: "{{ web_package }}"
            state: present
          failed_when: web_package != "httpd"

      rescue:
        - name: Install {{ db_package }} package
          yum:
            name: "{{ db_package }}"
            state: present

      always:
        - name: Start {{ db_service }} service
          service:
            name: "{{ db_service }}"
            state: started
---

---
- name: Playbook Control
  hosts: webservers
  vars_files: vars.yml      <--------- contains the variables

  tasks:
  # Fail fast message
  - name: Show failed system requirements message
    fail:
      msg: "The {{ inventory_hostname }} did not meet minimum requirements."
    when: > 
      ansible_memtotal_mb < min_ram_mb or
      ansible_distribution != "RedHat"

  # Install all packages
  - name: Ensure required packages are present
    yum:
      name: "{{ packages }}"
      state: latest

  # Enable and start services
  - name: Ensure services are started and enabled
    service:
      name: "{{ item }}"
      state: started
      enabled: true
    loop: "{{ services }}"

  # Block of config tasks
  - name: Setting up the SSL cert directory and config files
    block:

      - name: Create SSL cert directory
        file:
          path: "{{ ssl_cert_dir }}"
          state: directory

      - name: Copy Config Files
        copy:
          src: "{{ item.src }}"
          dest: "{{ item.dest }}"
        loop: "{{ web_config_files }}"
        notify: restart web service

    rescue:
   
      - name: Configuration error message
        debug:
          msg: > 
            One or more of the configuration
            changes failed, but the web service
            is still active.

  # Configure the firewall
  - name: Ensure web server ports are open
    firewalld:
      service: "{{ item }}"
      immediate: true
      permanent: true
      state: enabled
    loop:
      - http
      - https

  # Add handlers
  handlers:

    - name: restart web service
      service:
        name: "{{ web_service }}"
        state: restarted

---

# vars.yml
min_ram_mb: 256

web_service: httpd
web_package: httpd
ssl_package: mod_ssl

fw_service: firewalld
fw_package: firewalld

services:
 - "{{ web_service }}"
 - "{{ fw_service }}"

packages:
 - "{{ web_package }}"
 - "{{ ssl_package }}"
 - "{{ fw_package }}"

ssl_cert_dir: /etc/httpd/conf.d/ssl

web_config_files:
  - src: server.key
    dest: "{{ ssl_cert_dir }}"
  - src: server.crt
    dest: "{{ ssl_cert_dir }}"


curl -k -vvv https://serverb.test.lab
    # "-k" tells curl to skip certificate validation, so a self-signed certificate is accepted


# ****************** Deploy Files to Managed Hosts ********************

ansible_LinuxAutomation_5.DeployFiles2ManagedHosts-1

# Commonly used file modules
    # used to manage files, directories, and links: ownership, permissions, SELinux contexts
blockinfile
    # insert, update, or remove a block of multiline text surrounded by customizable marker lines
copy
fetch   # fetch files from remote machines to the control node
file
lineinfile
    # ensure that a particular line is in a file,
    # or replace an existing line using a back-reference regular expression
stat
    # retrieve status information for a file, similar to the Linux "stat" command
synchronize
    # a wrapper around the "rsync" command
        Note: It does not expose the full power of the rsync command,
            but makes the most common invocations easier to implement.
            For advanced options you still need to call "rsync" directly via the "command" module


# create a file and set permission
 - name: Touch a file and set permissions
   file:
     path: /path/to/file
     owner: user1
     group: group1
     mode: '0640'
     state: touch


# modify file attributes
Use the file module to ensure that a new or existing file has the correct permissions or SELinux type.
ls -Z file1 # verify file SELinux context   
    # Security Enhanced Linux

 - name: SELinux type is set to samba_share_t
   file:
     path: /path/to/samba_file
     setype: samba_share_t

Note: 
Similar to "chcon" command
    mkdir samba
    chcon -t samba_share_t samba/
    ls -ldZ samba/  
        # SELinux context for directory

Note:
If someone runs "restorecon", the SELinux context will be reset to the policy default
    ("chcon" changes are not persistent; use the "sefcontext" module for persistent changes)
SELinux modules: sefcontext, selinux, selinux_permissive, selogin
    ansible-doc selinux     # see more information about the module


#********* Copying and Editing files on managed hosts
The copy module can be used to copy a file located in the ansible directory on the control node to selected managed hosts.

By default, this module assumes "force: true", which overwrites the remote file if it exists and differs.
If "force: false" is set, the file is only copied if it does not already exist on the remote host.

 - name: Copy a file to managed hosts
   copy:
     src: file
     dest: /path/to/file


# fetch is used to retrieve a file from a reference system before distributing it to other managed hosts.
 - name: Retrieve SSH key from reference host
   fetch:
     src: "/home/{{ user }}/.ssh/id_rsa.pub"
     dest: "files/keys/{{ user }}.pub"

# Add a line to an existing file, using "lineinfile" module
 - name: Add a line of text to a file
   lineinfile:
     path: /path/to/file
     line: 'Add this line to the file'
     state: present

# Add a block of text to an existing file, use the "blockinfile" module
 - name: Add additional lines to a file
   blockinfile:
     path: /path/to/file
     block: |   <------------- "|" is a literal block scalar that preserves line breaks
       First line to be added
       Second line to be added
     state: present


---
- name: Copy file from ansible control node to remote host
  hosts: localhost
  become: false
  vars:
    - source: /home/user01/ansible
    - destination: https://transfer.sh/ansible.zip
    - zipfile: /home/user01/ansible.zip

  tasks:
 
    - name: Check whether the source directory exists
      stat:
        path: "{{ source }}"
      register: verify_result

    - name: Make sure the directory exists before proceeding
      assert:  <---------- assert can have multiple conditions
        that: "'directory' in verify_result.stat.mimetype"

    - name: Create a zip archive of the source directory
      archive:
        format: zip
        path: "{{ source }}"
        dest: "{{ zipfile }}"

    - name: Check whether the archive exists
      stat:
        path: "{{ zipfile }}"
      register: archive

    - name: Make sure the archive exists before proceeding
      assert:
        that: "'zip' in archive.stat.mimetype"

    - name: Upload the zip archive to the remote host
      shell: curl --upload-file {{ zipfile }} {{ destination }}
      args:
        warn: false
      register: result

    - name: Display the download path
      debug:
        msg: "Your download URL is {{ result.stdout }} and will be available for 2 weeks from {{ ansible_date_time.date }}"

    - name: Validate that the upload is downloadable
      get_url:
        url: "{{ result.stdout }}"
        dest: /var/tmp/

    - name: Remove the downloaded copy
      file:
        path: /var/tmp/ansible.zip
        state: absent
---

# synchronize module
---
- name: synchronize local file to remote files
  synchronize: 
    src: file
    dest: /path/to/file

Note: There are many ways to use the synchronize module and its many parameters,
    including synchronizing directories.
    ansible-doc synchronize     # view documentation
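For example, a directory can be kept in sync with a sketch like the following (the paths here are illustrative assumptions, not from the course material):

```yaml
# Recursively push a local directory to the managed hosts.
# A trailing "/" on src copies the directory contents rather than the directory itself.
- name: Synchronize local web content to the managed hosts
  synchronize:
    src: files/webcontent/
    dest: /var/www/html
    delete: true       # remove remote files that no longer exist in src
    recursive: true
```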


# Retrieve log files
---
- name: Use fetch module to retrieve log files
  hosts: all
  remote_user: support
  become: true

  tasks:
 
  - name: Fetch the /var/log/secure log file from the managed hosts
    fetch:
      src: /var/log/secure
      dest: secure-backups
      flat: no


# verify the backup
tree -F secure-backups  # it will show the backup file structure

 
 tasks:
   - name: Copy a file to managed hosts and set attributes/permissions
     copy:
       src: files/test.txt
       dest: /home/devops/test.txt
       owner: devops
       group: devops
       # mode: 0630
        mode: u+rw,g-wx,o-rwx   # symbolic mode must not contain spaces
       setype: samba_share_t
    
   - name: Persistently set the SELinux context on the file
     sefcontext:
       target: /home/devops/test.txt
       setype: samba_share_t
       state: present

# verify
ansible all -m command -a 'ls -Z' -u devops


---
- name: Using the file module to ensure SELinux file context as default
  hosts: all
  remote_user: devops
  become: true

  tasks:
 
  - name: SELinux file context is set to defaults
    file:
      path: /home/devops/test.txt
      seuser: _default
      serole: _default
      setype: _default
      selevel: _default

# verify SELinux change
ansible all -m command -a 'ls -Z' -u devops

Note: verify the default file attributes of "unconfined_u:object_r:user_home_t:s0"

# verify content of the file
ansible all -m command -a 'cat test.txt' -u devops


# Delete a file from the remote host
---
- name: Remove a file
  hosts: all
  remote_user: devops

  tasks:

  - name: Remove a file from the managed hosts
    file:
      path: /home/devops/test.txt
      state: absent


#*************** Deploy custom files with Jinja2 templates ****************
Using "template" module

# ansible config file
cat ansible.cfg

[defaults]
inventory = inventory
ansible_managed = Ansible managed: modified on %Y-%m-%d %H:%M:%S


# Message of the day Jinja2 file
cat motd.j2

This is the system {{ ansible_facts['fqdn'] }}.
This is a {{ ansible_facts['distribution'] }} version {{ ansible_facts['distribution_version'] }} system.

Only use this system with permission.
You can request access from {{ system_owner }}.


Where
    system_owner is defined as a play variable ("vars" in the playbook)


# Create a playbook file namely motd.yml in the current working directory.
Include a task for the template module that maps the motd.j2 Jinja2 template to the remote file /etc/motd
on the managed hosts.

---
- name: Configure SOE
  hosts: all
  remote_user: devops
  become: true
  vars:
    - system_owner: admin@test.lab

  tasks:
 
  - name: Configure /etc/motd
    template:
      src: motd.j2
      dest: /etc/motd
      owner: root
      group: root
      mode: 0644

# Linux file permissions (octal mode)
    Each digit is the sum of: 4 (read) + 2 (write) + 1 (execute)
    The three digits apply to owner, group, and other/world respectively
    Example: 0644 = owner read+write, group read, other read


# gather some information
ansible serverb.test.lab -m setup | grep -E 'processor|memtotal'

# cat motd.j2
System total memory: {{ ansible_facts['memtotal_mb'] }} MiB.
System processor count: {{ ansible_facts['processor_count'] }}


---
- name: Configure system
  hosts: all
  remote_user: devops
  become: true

  tasks:

    - name: Configure a custom /etc/motd
      template:
        src: motd.j2
        dest: /etc/motd
        owner: root
        group: root
        mode: 0644

    - name: Check file exists
      stat:
        path: /etc/motd
      register: motd

    - name: Display stat results
      debug:
        var: motd

    - name: Copy custom /etc/issue file
      copy:
        src: files/issue
        dest: /etc/issue
        owner: root
        group: root
        mode: 0644

    - name: Ensure /etc/issue.net is a symlink to /etc/issue
      file:
        src: /etc/issue
        dest: /etc/issue.net
        state: link
        owner: root
        group: root
        force: true
---


#**************** Managing large projects - complex hosts *****************
ansible all --list-hosts    
    # it will return all hosts, if host appears in multiple groups, it will only report once.

Note: Same command using wildcard *
ansible '*' --list-hosts
    # the pattern needs single quotes to stop shell expansion
ansible '*.test.lab' --list-hosts
    # wildcard patterns can also apply to IP addresses

# AND operator
ansible 'dev,&webservers' --list-hosts      
    # ",&" with comma and & for AND operator

# part of dev, but NOT part of webservers
ansible 'dev,!webservers' --list-hosts
    # "!" selects machines in "dev" but NOT in the webservers group

# When constructing the hosts list, patterns can combine inclusion and exclusion
 hosts:
   - servera.test.lab
   - dev,&webservers
   - prod,!webservers



# The playbook.yml does not have entries for "hosts"
---
- name: Resolve host patterns
  hosts:
  tasks:
    - name: Display managed host name
      debug:
        msg: "{{ inventory_hostname }}"

Note: inventory_hostname is a "magic" variable taken from the inventory; it is available even when fact gathering is disabled

# when there are multiple inventory files in the ansible directory
# need to specify the inventory file
# Note: The host in the command need to exist in the inventory file
ansible db1.test.lab -i inventory1 --list-hosts 
ansible 172.25.2.10 -i inventory1 --list-hosts
ansible all -i inventory1 --list-hosts
ansible '*.test.lab' -i inventory1 --list-hosts
ansible '*.test.lab, !*.test.lab' -i inventory1 --list-hosts
ansible '172.25.*' -i inventory1 --list-hosts
ansible 's*' -i inventory1 --list-hosts

# List hosts that are in the prod group, hosts with an IP address
#   beginning with 172, and hosts whose name contains "lab"
# The result is the union (logical OR) of all matching hosts
ansible 'prod,172*,*lab*' -i inventory1 --list-hosts


# Apply inventory groups with playbook
ansible-playbook -i inventory2 playbook.yml


#***************** Including and importing files *********************
This is similar to "include" in programming,
    so we can keep RHEL 8 related tasks in one file and RHEL 7 tasks in another, or
    networking tasks in their own file, etc.


ansible-doc -l
/^import    # search modules starting with "import"

import_playbook
import_role
import_tasks


include
include_role
include_tasks
include_vars


# example of "include"

 - name: Include tasks from another file
   include_tasks: network_tasks.yml     
        # The network tasks in different yml file


 - name: Import tasks from another file
   import_tasks: network_tasks.yml      
        # The network tasks in different yml file


Important:
1. network_tasks.yml is NOT a playbook; it is a YAML file that contains only tasks
2. When you run "ansible-playbook mainplaybook.yml --syntax-check",
    it only checks the main playbook and skips any file pulled in with "include"
Note:
    We need to run syntax-check against the included file separately, to ensure there is no error
    ansible-playbook include.yml --syntax-check
3. But if we use "import", then running syntax-check against the main playbook
    will report an error if the imported file has an error.

---
- name: This is a network related task
  debug:
    msg: xxxxx


# to list the playbook tasks
ansible-playbook hasimport.yml --list-tasks 
    # including the import tasks

ansible-playbook hasinclude.yml --list-tasks    
    # does NOT list the tasks in the "include.yml"


Note:
1. Imported files are static tasks: they are expanded at parse time, so syntax-check also checks the imported file
2. Included files are dynamic tasks: they are processed at runtime, so they need a separate syntax-check in advance
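A practical consequence of this difference: because include_tasks is processed at runtime, it can be combined with a loop, which the parse-time import_tasks cannot do. A sketch, reusing the environment.yml task file shown later in these notes:

```yaml
# Dynamic include: environment.yml is processed at runtime, once per loop item
- name: Include the environment task file for each package
  include_tasks: tasks/environment.yml
  vars:
    package: "{{ item }}"
    service: "{{ item }}"
  loop:
    - httpd
    - firewalld
```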


# environment.yml   - Not a playbook, it is a task file (a YAML list of tasks)
---
  - name: Install the {{ package }} package
    yum:
      name: "{{ package }}"
      state: latest
  
  - name: Start the {{ service }} service
    service:
      name: "{{ service }}"
      enabled: true
      state: started

# firewall.yml      - not a playbook, it is a task file (a YAML list of tasks)
---
  - name: Install the firewall
    yum:
      name: "{{ firewall_pkg }}"
      state: latest

  - name: Start the firewall
    service:
      name: "{{ firewall_svc }}"
      enabled: true
      state: started

  - name: Open the port for {{ rule }}
    firewalld:
      service: "{{ item }}"
      immediate: true
      permanent: true
      state: enabled
    loop: "{{ rule }}"
  

# placeholder.yml
---
  - name: Create placeholder file
    copy:
      content: "{{ ansible_facts['fqdn'] }} has been customized using Ansible.\n"
      dest: "{{ file }}"

# test.yml
---
  - name: Test web service
    hosts: localhost
    become: false

    tasks:
    
      - name: Connect to internet web server
        uri:
          url: "{{ url }}"
          status_code: 200

# playbook.yml
---
- name: Configure web server
  hosts: servera.test.lab

  tasks:

    - name: Include the environment task file and set the variables
      include_tasks: tasks/environment.yml
      vars:
        package: httpd
        service: httpd
      when: ansible_facts['os_family'] == "RedHat"

    - name: Import the firewall task file and set the variables
      import_tasks: tasks/firewall.yml
      vars:
        firewall_pkg: firewalld
        firewall_svc: firewalld
        rule:
          - http
          - https

    - name: Import the placeholder task file and set the variable
      import_tasks: tasks/placeholder.yml
      vars:
        file: /var/www/html/index.html

# Play level, not task level - import_playbook must be at the top level of the playbook
- name: Import test play file and set the variable
  import_playbook: plays/test.yml
  vars:
    url: 'http://servera.test.lab'
---



# More example

ls inventory/       # list the files in the inventory/ directory
chmod 755 inventory/inventory.py    # set executable
inventory/inventory.py --list       # list the inventory
ansible server*.test.lab --list-hosts


---
- name: Install and configure web service
  hosts: server*.test.lab
  serial: 2 <----------------- using serial

  tasks:

    - name: Import the web_tasks.yml task file
      import_tasks: tasks/web_tasks.yml

    - name: Import the firewall_task.yml task file
      import_tasks: tasks/firewall_tasks.yml

  handlers:
    - name: restart httpd
      service:
        name: httpd
        state: restarted

# ********************************* Roles ***********************************************
# simplifying playbooks with roles
Roles allow you to reuse Ansible code easily

Important Note:
A role is a directory structure with resources that can be used to deliver a number of solutions.
A role is NOT a playbook; we still need to create a playbook to run it.

Roles/directory structure could contain: (share the code with others, or other projects)
 Jinja 2 template files
 static file
 tasks files
 variable file, etc

Then package them into a directory structure.

** Important: Roles need to be heavily parameterized, so they can be shared with other projects/teams.

# example of Ansible role
tree user.example
user.example/
 - defaults
    - main.yml
 - files
 - handlers
    - main.yml
 - meta
    - main.yml
 - README.md
 - tasks
    - main.yml
 - templates
 - tests
    - inventory
    - test.yml
 - vars
    - main.yml
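As an illustration of the layout above, a minimal sketch of what two of those files might contain (the variable names here are assumptions for illustration):

```yaml
# user.example/defaults/main.yml - default variables (lowest precedence,
# meant to be overridden by the calling playbook)
---
user_name: example
user_state: present

# user.example/tasks/main.yml - the tasks the role executes
---
- name: Ensure the {{ user_name }} user is {{ user_state }}
  user:
    name: "{{ user_name }}"
    state: "{{ user_state }}"
```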

*********************************
Using roles in a playbook, this is one way to call Ansible roles.

---
- hosts: remote.test.lab
  roles:
    - role1
    - role2

# Important
For each role specified, the role tasks, role handlers, role variables, 
    and role dependencies will be imported into the playbook, in order.
Any copy, script, template, or include_tasks/import_tasks in the role can reference the relevant files, 
    templates, or task file in the role
without absolute or relative path names.
Ansible looks for them in the role's files, templates, or tasks subdirectories respectively.

When you use a roles section to import roles into a play, 
    the roles will run first, before any tasks that you define for that play.
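The overall ordering within a play is pre_tasks, then roles, then tasks, then post_tasks (notified handlers are flushed after each of these stages). A sketch:

```yaml
---
- hosts: remote.test.lab

  pre_tasks:            # run before any roles
    - debug:
        msg: runs first

  roles:                # imported roles run next
    - role1

  tasks:                # play tasks run after all roles
    - debug:
        msg: runs after the roles

  post_tasks:           # run last
    - debug:
        msg: runs at the very end
```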


ansible-galaxy <enter>  # It will show options that can be used
ansible-galaxy --help

# Search ansible-galaxy roles matching the <topic>
ansible-galaxy search <topic>       
    # ansible-galaxy search ssh
ansible-galaxy search ssh --platforms EL
    # narrow the ssh search to roles for the Enterprise Linux (EL) platform
ansible-galaxy search ssh --platforms EL --galaxy-tags system
    # narrow further to roles tagged "system"

ansible-galaxy search ssh --platforms EL --galaxy-tags system --author <author name>


# How to install ansible-galaxy role
ansible-galaxy install <role name>      
    # Example, ansible-galaxy install geerlingguy.ssh-chroot-jail
ansible-galaxy install <role name> -p <path/directory>  
    # Example, ansible-galaxy install geerlingguy.ssh-chroot-jail -p roles


# verify ansible-galaxy installation
cd roles/
ls -l
ansible-galaxy list 

Note: list the roles; if the newly installed role(s) do not appear,
    then update the roles_path in the Ansible config file,
    or install the galaxy role into the "/root/.ansible/roles" directory (a default role search path)

# How to update ansible galaxy config with the newly installed galaxy role
mkdir /root/.ansible/roles
ansible-galaxy install geerlingguy.ssh-chroot-jail -p /root/.ansible/roles/
ansible-galaxy list 
    # it will now show the newly installed ansible galaxy role

ls -l /root/.ansible/roles/geerlingguy.ssh-chroot-jail  
    # to see the role directory structure


# ******* Installing roles using a requirements file
You can also use ansible-galaxy to install a list of roles based on definitions in a text file.
You can create a roles/requirements.yml file in the project directory that specifies which roles are needed.
This file acts as a dependency manifest for the playbook project,
    which enables playbooks to be developed and tested separately from any supporting roles.


Example
# requirements.yml to install geerlingguy.redis
 - src: geerlingguy.redis
   version: "1.5.0"

The src attribute specifies the source of the role; the version attribute is optional.


ansible-galaxy install -r roles/requirements.yml -p /root/.ansible/roles/


# Example
cat roles/requirements.yml
# from Ansible Galaxy, using the latest version
 - src: geerlingguy.redis   <--------- role downloaded from Ansible Galaxy

# from Ansible Galaxy, overriding the name and using a specific version
 - src: geerlingguy.redis
   version: "1.5.0"
   name: redis_prod

# from any git-based repository, using https
 - src: https://gitlab.com/guardianproject-ops/ansible-nginx-acme.git
    # role from a GitLab repository
   scm: git
   version: 56e00a54
   name: nginx-acme

# from any git-based repository, using SSH
 - src: git@gitlab.com:guardianproject-ops/ansible-nginx-acme.git
   scm: git
   version: master
   name: nginx-acme-ssh

# from a role tar ball, given a URL
# supports http, https, or file protocols
 - src: file:///opt/local/roles/testrole.tar
    # "file://" is the URL scheme; the third "/" starts the absolute path from the root file system
   name: testrole

The "src" keyword specifies the Ansible Galaxy role name.
    If the role is not hosted on Ansible Galaxy, the src keyword indicates the role's URL.


# How to remove Ansible Galaxy role
ansible-galaxy list # list the galaxy roles
ansible-galaxy remove <ansible-galaxy role name>

# Example
https://docs.ansible.com/ansible/latest/galaxy/user_guide.html


vim roles/requirements.yml
cat roles/requirements.yml
# requirements.yml
- src: git@workstation.test.lab:test01/bash_env
  scm: git  <----------- "scm" source code management
  version: master   <-------- the branch is "master"
  name: test01.bash_env


ansible-galaxy install -r roles/requirements.yml -p roles
ls roles/
    Note: There are now two entries under the "roles/" directory: requirements.yml & test01.bash_env/

# verify Ansible roles after installing roles
ansible-galaxy list
    # Not able to see the newly installed role, as the installation is not in the default Ansible roles directory
ansible-galaxy list -p roles
    # list ansible roles in the "roles/" directory; we will now see the newly installed roles
    # The installation is not in the default ansible roles directory, it is in the custom "roles" directory

# Create a playbook to use the new role
---
- name: use test01.bash_env role playbook
  hosts: devservers
  vars:
    default_prompt: '[\u on \h in \W dir]\$' # <login-user> on <login-host> in <working-directory>

  pre_tasks:

  - name: Ensure test user does not exist
    user:
      name: test02
      state: absent    <---- delete the user if it exists
      force: true
      remove: true

  roles:
  - test01.bash_env

  post_tasks:
  - name: Create the test user
    user:
      name: test02
      state: present
      password: "{{ 'redhat' | password_hash('sha512', 'mysecretsalt') }}"

ansible-playbook use-bash_env-role.yml

# Create bash profile Jinja2 template
cat roles/test01.bash_env/templates/_bash_profile.j2
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
       . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH

#*********************** Getting roles and modules from Content Collections *************
Ansible has many plugins, such as 
module plugins
inventory plugins
callback plugins
vars plugins
lookup plugins
connection plugins
become plugins
etc..

A content collection is a distribution format for a set of plugins and roles
A developer can create a content collection containing plugins and roles,
    which is versioned and maintained independently of the plugins and roles shipped with Ansible itself


Content can come from "Red Hat Automation Hub - https://console.redhat.com" or "Ansible Galaxy"
where
    Automation Hub
        # has certified and supported "collections"; needs a valid Red Hat support subscription
    Ansible Galaxy
        # community supported (it has roles and collections)

Naming standard - Collection:
    namespace.collection_name

where
    namespace:      developer / team / vendor
    collection_name:    friendly name

You can configure "ansible.cfg" to use "Red Hat Automation Hub" as a content source
https://www.ansible.com/blog/getting-started-with-automation-hub
https://www.redhat.com/sysadmin/get-started-private-automation-hub

a. Prerequisites
- Obtain the API token from the Automation Hub server.

b. Procedure
1. Add "server_list" option under [galaxy] section, and provide one or more server names
2. Create a new section for each server name
    [galaxy_server.<server_name>]
3. Set the url option if necessary. The community Ansible Galaxy does not require any "auth_url"
4. Set the "auth_url" option for each server name
5. Set the API token for the Automation Hub server.
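The resulting ansible.cfg [galaxy] section might look like the following sketch (the server names, URLs, and token value are placeholders, not real values):

```ini
[galaxy]
server_list = automation_hub, galaxy

[galaxy_server.automation_hub]
url=https://console.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=<your API token>

[galaxy_server.galaxy]
url=https://galaxy.ansible.com/
```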


# How to check the ansible collections configuration
ansible-config dump | grep -i collections

Example output
COLLECTIONS_PATHS(default) = ['/home/test01/.ansible/collections', '/usr/share/ansible/collections']

# How to install ansible collection
Using the "ansible-galaxy collection" command   # ansible-galaxy collection --help
ansible-galaxy collection install --help | tail -5
    -p COLLECTION_PATH  # The path to the directory containing the collections
    -r REQUIREMENTS     # requirement-file
                    A file containing a list of collections to be installed

# Install Ansible galaxy collection
ansible-galaxy collection install http://materials.test.lab/lab/role-collections/gls-utils.0.0.1.tar.gz

Note down the installation location

Read/show the first section of the README.md file
head -9 ~/.ansible/collections/ansible_collections/<namespace>/<collection>/roles/<role_name>/README.md

If the collection comes with any plugins, note them down, example
ls ~/.ansible/collections/ansible_collections/<namespace>/<collection>/plugins/modules/<filename.py>

Example:
ls ~/.ansible/collections/ansible_collections/gls/utils/plugins/modules/newping.py

Note: This is not ICMP ping; this is a trivial test module that requires Python on the remote node.
For Windows targets, use the [ansible.windows.win_ping] module
For network targets, use the [ansible.netcommon.net_ping] module

Note: module file in this case is python file

To read the documentation about the collection
ansible-doc <ansible_collection_module_name>
    # Fully qualified collection name - FQCN  (namespace.collection_name.<module_name>)

# Example of usng the installed collection and role
---
- name: Backup the system configuration
  hosts: servera.test.lab
  become: true
  gather_facts: true

  tasks:
    - name: Ensure the machine is up
      gls.utils.newping:
            # newly installed collection module, referenced by FQCN
        data: pong
            # expected return value on success

    - name: Ensure configuration files are saved
      include_role:
        name: gls.utils.backup
      vars:
        backup_id: backup_etc
        backup_files:
          - /etc/sysconfig
          - /etc/yum.repos.d


# cat requirements.yml
---
collections:
  - name: http://materials.test.lab/labs/role-collections/redhat-insights-1.0.5.tar.gz
  - name: http://materials.test.lab/labs/role-collections/redhat-rhel_system_roles-1.0.1.tar.gz


# Install collection using the requirements file
ansible-galaxy collection install -r requirements.yml


# Playbook to configure the system
---
- name: Configure the system
  hosts: servera.test.lab
  become: true
  gather_facts: true

  tasks:
    - name: Ensure the system is registered with Insights
      include_role:
        name: redhat.insights.insights_client
      vars:
        auto_config: false
        insights_proxy: http://proxy.test.lab:8080

    - name: Ensure SELinux is enforcing
      include_role:
        name: redhat.rhel_system_roles.selinux
      vars:
        selinux_state: enforcing

# Run playbook verification using "check"
ansible-playbook new_system.yml --check     <---- No changes are actually made, just "check"

# Simplify playbooks with roles
cat web_dev_server.yml
- name: Configure Dev Web Server
  hosts: dev_webserver
  force_handlers: yes

mkdir -p roles
cat roles/requirements.yml
- name: infra.apache
  src: git@workstation.test.lab:infra/apache
  scm: git
  version: v1.4

# install role with requirements
ansible-galaxy install -r roles/requirements.yml -p roles

sudo yum install -y rhel-system-roles | tail -4     # verify  the roles installation
tail roles/apache.developer_configs/meta/main.yml

cat web_dev_server.yml
- name: Configure Dev Web Server
  hosts: dev_webserver
  force_handlers: true

  roles:
    - apache.developer_configs

# Need to configure SELinux as the dev web servers use non-standard ports: 9081, 9082 (the standard HTTP port is 80)
---
- name: Configure Dev Web Server
  hosts: dev_webserver
  force_handlers: true

  roles:
    - apache.developer_configs

  pre_tasks:
  - name: Check SELinux configuration
    
    block:
      - include_role:
          name: rhel-system-roles.selinux

    rescue:
      # Fail if failed for a different reason than selinux_reboot_required
      - name: Check for general failure
        fail:
          msg: "SELinux role failed."
        when: not selinux_reboot_required

      - name: Restart managed host
        reboot:
          msg: "Ansible rebooting system for updates"

      - name: Reapply SELinux role to complete changes
        include_role:
          name: rhel-system-roles.selinux


# cat selinux.yml
---
# variables used by rhel-system-roles.selinux

selinux_policy: targeted
selinux_state: enforcing

selinux_ports:
  - ports:
      - "9081"
      - "9082"
    proto: 'tcp'
    setype: 'http_port_t'
    state: 'present'


# copy the selinux.yml to the required location, so it can be used by the playbook
cp selinux.yml group_vars/dev_webserver

# carry out syntax-check
ansible-playbook web_dev_server.yml --syntax-check

# run playbook
ansible-playbook web_dev_server.yml # SELinux now allows the customized ports 9081 & 9082



#************** Troubleshooting Ansible *************
Troubleshoot playbooks and managed hosts
1. Troubleshoot generic issues with a new playbook and repair them
2. Troubleshoot failures on managed hosts when running a playbook


Visual Studio Code - Microsoft (Editor)


# 1. verify ansible log - Update ansible log path
ansible-config dump | grep -i log
DEFAULT_LOG_PATH(default) = NONE

# Update "ansible.cfg" with "log_path=<path>"
# Example
[defaults]
inventory=inventory
remote_user=devops
log_path=logs/ansible.log

[privilege_escalation]
become=False
become_method=sudo
become_users=root
become_ask_pass=False

# Create the ansible log directory and log file
mkdir -v logs
touch logs/ansible.log

Note:
Once the log path is set, output is still displayed on screen, and is also saved to the log file


# 2. **** verbose level - v, vv, vvv, vvvv
ansible-playbook intranet.yml -v    (-vv, -vvv, -vvvv)

# 3. Create a script
Create a variable

# create an environment variable and assign the value "1"
# This instructs Ansible to keep the Python module scripts it copies to the remote host,
# under the remote user's ".ansible/tmp" directory, instead of deleting them
export ANSIBLE_KEEP_REMOTE_FILES=1

If you know Python, you can debug the Python script. To verify the Python script
SSH to the remote host as the devops user (The remote_user), navigate to ".ansible/tmp" directory

ssh devops@servera  # ssh to the remote host
cd .ansible/tmp
ls -l   # There are directories created, "ansible-tmp-xxxxx"
There are files in the directory, namely "AnsiballZ_<module_name>.py"


# 4. Capture the output of a variable with the debug module, to find out the keys in the dictionary, or the sub-dictionaries
Output the variable to see the information

ansible-doc debug   # find out the options that debug module has

Example:
1) Example 1
- shell: /usr/bin/uptime
  register: result     # capture the result of the shell command "/usr/bin/uptime" in a variable named "result"

- debug:
    var: result
    verbosity: 2    # only display "result" when running with verbosity 2 (-vv) or higher

2) Example 2
- name: Display all variables/facts known for a host
  debug:
    var: hostvars[inventory_hostname]
    verbosity: 4

# 5. ansible-lint   Cool troubleshooting tool
It not only does a syntax check, but also tells you whether the playbook follows best practices.
It also verifies whether each task has a "name"; it is best practice that every Ansible task is named.

# 6. List the playbook tasks, to verify logic and implementation
ansible-playbook intranet.yml --list-tasks


# 7. Run playbook with "step", so can step through the playbook
ansible-playbook intranet.yml --step

# 8. Run playbook from "task name/particular task"
ansible-playbook intranet.yml --start-at-task=<task name>

ansible-playbook intranet.yml --start-at-task="firewalld enabled and running"


# 9. Check playbook (not actually run), "run smoke test"
ansible-playbook playbook.yml --check

Note: A task inside the playbook can override check mode
   tasks:
     - name: task always in check mode
       shell: uname -a
       check_mode: yes  <------------ always runs in check mode, even without --check

# 10. "--diff" option
ansible-playbook playbook.yml --check --diff    # show what changes would be made on the remote host

# 11. Testing the modules
Some modules can provide additional information about the status of the managed host.

a. "uri" module
It provides a way to check that a RESTful API is returning the required content

   tasks:
     - name: Check api return version content
       uri:
         url: http://api.test.com
         return_content: true
       register: apiresponse

     - fail:
         msg: 'version is not provided'
       when: "'version' not in apiresponse.content"


b. "script" module supports executing a script on managed hosts, and fails if the return code for that script is nonzero.
Note: The script must exist on the control node, and it is transferred to and executed on the managed hosts.
   tasks:
     - script: check_free_memory

c. "stat" module gathers facts for a file much like the "stat" command.
You can use it to register a variable and then test to determine if the file exists or to get other information about the file.
If the file does not exist, the stat task will not fail, but its registered variable will report "false" for 
  *.stat.exists


d. "assert" module
It is an alternative to the "fail" module. The assert module supports a "that" option that takes a list of conditionals.
If any of those conditionals is false, the task fails.
You can use the "success_msg" and "fail_msg" options to customize the message it prints on success or failure

Example
   tasks:
     - name: check if /var/run/app.lock exists
       stat:
         path: /var/run/app.lock
       register: lock

     - name: Fail if the application is running
       assert:
         that:
           - not lock.stat.exists
         fail_msg: /var/run/app.lock exists, the application appears to be running

#*** Troubleshooting connections
Ansible works best using SSH keys. Copy the control node's public key to the managed hosts.
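One common way to copy the key, assuming the devops remote user used throughout these notes:

```shell
# Generate a key pair on the control node (if none exists), then copy the public key
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id devops@servera.test.lab     # prompts for the password once
```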

Many common problems when using Ansible to manage hosts are associated with
1. connections to the managed host
2. configuration problems around the remote user and privilege escalation

If you are having problems authenticating to a managed host
1. make sure you have "remote_user" set correctly in ansible configuration file or in your playbook
2. confirm that you have the correct SSH keys setup or are providing the correct password for the "remote_user"
3. Make sure that "become" is set properly
a. Using the correct "become_user" (that is root by default)
b. You are entering the correct "sudo" password and that "sudo" on the managed host is configured correctly
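These settings usually live in the project's ansible.cfg; a minimal sketch, with the inventory path and user names assumed:

```ini
[defaults]
inventory = ./inventory
remote_user = devops

[privilege_escalation]
become = true
become_method = sudo
become_user = root
; set become_ask_pass = true if sudo requires a password
become_ask_pass = false
```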

A more subtle problem has to do with "inventory" settings.
1. With a complex server with multiple network addresses, 
    you may need to use a particular address or DNS name when connecting to that system
You may not want to use that address as the machine's inventory name for better readability.
You could set a host inventory variable, "ansible_host",
    that overrides the inventory name with a different name or IP address and is
used by Ansible to connect to that host.
This variable could be set in the "host_vars" file or directory for that host, 
    or could be set in the inventory file itself.


Example:
    web4.phx.test.lab   ansible_host=192.168.2.10


# 12. Testing managed hosts using ad hoc commands
a. Use "ping" module to test whether you can connect to the managed hosts.
Depending on the options you pass, 
    you can also use it to test whether privilege escalation and credentials are correctly configured

ansible testhost -m ping

ansible testhost -m ping --become

output:
  testhost | FAILED! => {
      "ansible_facts": {
           "discovered_interpreter_python": "/usr/libexec/platform-python"
       },
       "changed": false,
       "module_stderr": "sudo: a password is required\n",
       "module_stdout": "",
       "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
       "rc": 1
}
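If the failure above is caused by a missing sudo password, one option is to have Ansible prompt for it:

```shell
# Prompt for the privilege escalation (sudo) password at run time
ansible testhost -m ping --become --ask-become-pass
```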


b. verify the remote host's free disk space and memory
ansible remotehost -m command -a 'df'
ansible remotehost -m command -a 'free -m'

ansible remotehost.test.lab -u devops -b -a "head /etc/postfix/main.cf"


---
# start of mailrelay playbook
- name: create mail relay servers
  hosts: mailrelay
  user: devops
  become: true

  tasks:
    
    - name: install postfix package
      yum:
        name: postfix
        state: installed

    - name: install mail config files
      template:
        src: postfix-relay-main.config.j2
        dest: /etc/postfix/main.cf
        owner: root
        group: root
        mode: 0644
      notify: restart postfix

    - name: check main.cf file
      stat: path=/etc/postfix/main.cf
      register: maincf

    - name: start and enable mail services
      service:
        name: postfix
        state: started
        enabled: true

    - name: check for always_bcc
      command: /usr/sbin/postconf always_bcc
      register: bcc_state
      ignore_errors: true

    - name: email notification of always_bcc config
      mail:
        to: user01@serverb.test.lab
        subject: 'always_bcc setting is not empty'
        body: "always_bcc is {{ bcc_state.stdout }}"
      when: bcc_state.stdout != 'always_bcc='

    - name: postfix firewalld config
      firewalld:
        state: enabled
        permanent: true
        immediate: true
        service: smtp

  handlers:
    - name: restart postfix
      service:
        name: postfix
        state: restarted


# end of mailrelay play


# Verify the playbook changes
ansible servera.test.lab -u devops -b -a "firewall-cmd --list-services"


# Example code fix
---
- name: create secure web service
  hosts: webservers
  #remote_user: students    
    # need to be fixed
  remote_user: devops
  # privilege escalation is not set in ansible.cfg
  become: true
  vars:
    #random_var: This is colon: test    
        # need to quote with " " to prevent the ":" from being interpreted as a mapping
    random_var: "This is colon: test"

  tasks:
    - block:
        - name: install web server packages
          yum:
            #name: {{ item }}
            name: "{{ item }}"
            state: latest
          notify:
            - restart services
          loop:
            - httpd
            - mod_ssl

        - name: install httpd config files
          copy:
            src: vhosts.conf
            dest: /etc/httpd/conf.d/vhosts.conf
            backup: yes
            owner: root
            group: root
            mode: 0644
          register: vhosts_config
          notify:
            - restart services

        - name: create ssl certificate
          command: openssl req -new -nodes
            -x509 -subj "/C=US/ST=North Carolina/L=Raleigh/O=Example Inc/CN=serverb.test.lab"
            -days 365 -keyout /etc/pki/tls/private/serverb.test.lab.key
            -out /etc/pki/tls/certs/serverb.test.lab.crt -extensions v3_ca
          args:
            creates: /etc/pki/tls/certs/serverb.test.lab.crt

        #- name: start and enable web services  <----- indentation issue
        - name: start and enable web services
          service:
            name: httpd
            state: started
            enabled: true

        - name: deliver content
          file:     # the copy module has no "path"/"state: absent" options; the file module removes the file
            path: /etc/httpd/conf.d/vhosts.conf
            state: absent
          notify:
            - restart services

        - name: email notification of httpd config status
          mail:
            to: test@serverb.test.lab
            subject: 'httpd config is not correct'
            body: "httpd syntax is {{ httpd_conf_syntax.stdout }}"
          when: httpd_conf_syntax.stdout != 'Syntax OK'

  handlers:
    - name: restart services
      service:
        name: httpd
        state: restarted

# end of secure web play

ansible-playbook secureweb.yml      # test the updated yml playbook

ansible all -u devops -b -m command -a 'systemctl status httpd' 
    # verify the httpd service status


#****************** Automation admin tasks ***********************

Ansible Task                Yum Command
 - name: Install or update httpd    
    # yum update httpd, or yum install httpd (if the package is not yet installed)
   yum:
     name: httpd
     state: latest

 - name: Update all packages        yum update
   yum: 
     name: '*'
     state: latest

 - name: Remove httpd           yum remove httpd
   yum:
     name: httpd
     state: absent
            
 - name: Install Development Tools  yum group install "Development Tools"
   yum:
     name: '@Development Tools'
     state: present 

Note:
 With the Ansible yum module, you must prefix the group name with @
 To retrieve the list of groups, use the "yum group list" command
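The group list can also be checked ad hoc from the control node; the host and devops user are carried over from earlier examples:

```shell
# List available package groups on a managed host
ansible servera.test.lab -u devops -b -m command -a 'yum group list'
```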

  - name: Remove Development Tools  yum group remove "Development Tools"
    yum:
      name: '@Development Tools'
      state: absent


 - name: Install perl AppStream module      yum module install perl:5.26/minimal
   yum:
     name: '@perl:5.26/minimal'
     state: present

Note:
To list the available Yum AppStream modules:  
    yum module list



---
- name: Configure the company Yum repositories
  hosts: servera.test.lab
 
  tasks:
    - name: Deploy the GPG public key
      rpm_key:
        key: http://materials.test.lab/yum/repository/RPM-GPG-KEY-example

    - name: Ensure Example Repo exists
      yum_repository:
        file: example
        name: example-internal
        description: Example Inc Internal YUM repo
        baseurl: http://materials.test.lab/yum/repository/
        enabled: true
        gpgcheck: true  <--------- the previous task has downloaded the gpg key
        state: present


#********* Setup support infrastructure to manage systems
---
- name: Setup the support infrastructure to manage systems
  hosts: all
  
  tasks:
    - name: Create the user support as a member of the group wheel
      user:
        groups: wheel
        name: support
 
    - name: Install the sudo configuration which allows passwordless execution of commands as root
      copy:
        src: support.sudo
        dest: /etc/sudoers.d/support

    - name: Install the ssh key
      authorized_key:
        manage_dir: yes
        user: support
        key: "{{ lookup('file', 'id_rsa.pub') }}"
 
    - name: Limit ssh usage to members of the group wheel
      lineinfile:
        state: present
        dest: /etc/ssh/sshd_config
        line: AllowGroups wheel
      notify: Restart the ssh daemon


# cat support.sudo
support ALL=(ALL) NOPASSWD: ALL


#******* Managing Users and Authentication
---
- name: Create multiple local users
  hosts: webservers

  vars_files:
    - vars/users_vars.yml

  handlers: 
    # even though the handlers appear here,
    # they only execute after all tasks have run; a handler is otherwise a normal task
    - name: Restart sshd
      service:
        name: sshd
        state: restarted

  tasks:
    - name: Add webadmin group
      group:
        name: webadmin
        state: present

    - name: Create user accounts
      user:
        name: "{{ item.username }}"
        group: webadmin
      loop: "{{ users }}"

    - name: Add authorized keys
      authorized_key:
        user: "{{ item.username }}"
        key: "{{ lookup('file', 'files/'+ item.username + '.key.pub') }}"   
            # lookup module, lookup file
      loop: "{{ users }}"

    - name: Modify sudo config to allow webadmin users sudo without a password
      copy:
        content: "%webadmin ALL=(ALL) NOPASSWD: ALL"
            # %webadmin refers to the group
        dest: /etc/sudoers.d/webadmin
        mode: 0440

    - name: Disable root login via SSH
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "^PermitRootLogin"
        line: "PermitRootLogin no"
      notify: "Restart sshd"
            # notify the handler, run the handler task "Restart sshd"


# cat vars/users_vars.yml
---
users:
  - username: user1
    groups: webadmin
  - username: user2
    groups: webadmin
  - username: user3
    groups: webadmin


# ls -l files
user1.key.pub
user2.key.pub
user3.key.pub


#*********** Managing the boot process and scheduled processes
# cd ~/system-process
# cat create_crontab_file.yml

---
- name: Recurring cron job
  hosts: webservers
  become: true

  tasks:
  
    - name: Crontab file exists
      cron:
        name: Add date and time to a file
        minute: "*/2"   <----- every 2 minutes
        hour: 9-16
        weekday: 1-5
        job: date >> /home/devops/my_date_time_cron_job
        cron_file: add-date-time    
            # make note of the cron job file, it will be referenced later
        state: present


ansible-playbook create_crontab_file.yml --syntax-check
ansible-playbook create_crontab_file.yml

# verify the cron job file
ansible webservers -u devops -b -a "cat /etc/cron.d/add-date-time"  
    # "-b" become (escalation as root)
*/2 9-16 * * 1-5 devops date >> /home/devops/my_date_time_cron_job


# How to remove the cron job
# cat remove_cron_job.yml
---
- name: Remove scheduled cron job
  hosts: webservers
  become: true

  tasks:
    - name: Cron job removed
      cron:
        name: Add date and time to a file
        user: devops
        cron_file: add-date-time
        state: absent


#** Schedule a one time job ("at" module)
# cat schedule_at_task.yml
---
- name: Schedule at task    # uses the "at" module
  hosts: webservers
  become: true
  become_user: devops

  tasks:

    - name: Create date and time file
      at:
        command: "date > ~/my_at_date_time\n"
        count: 1
        units: minutes      <---- define the at job "1 minute"
        unique: yes
        state: present

Note:
The play schedules an "at" job that runs after one minute:
    sleep 60    <------ wait for 1 minute
    ansible webservers -u devops -b -a "ls -l my_at_date_time"  
        # Verify the scheduled at job created the file

Note:
The at job file will be created after one minute


#******* create default boot target
---
- name: Change default boot target
  hosts: webservers
  become: true

  tasks:

    - name: Default boot target is graphical
      file:
        src: /usr/lib/systemd/system/graphical.target
        dest: /etc/systemd/system/default.target    <------- create this link file
        state: link

# verify the default target
ansible webservers -u devops -b -a "systemctl get-default"
    graphical.target    <------ the default has been changed/set to the graphical target


# Reboot the hosts, and verify the default graphical login is set
# cat reboot_hosts.yml
---
- name: Reboot hosts
  hosts: webservers
  become: true

  tasks:
 
    - name: Hosts are rebooted
      reboot:

# verify
ansible webservers -u devops -b -a "who -b" 
    # verify when the system was last rebooted


#*** Set default boot target to multi-user
# cat set_default_boot_target_multi-user.yml

---
- name: Change default runlevel target
  hosts: webservers
  become: true

  tasks:
   
    - name: Default runlevel is multi-user target
      file:
        src: /usr/lib/systemd/system/multi-user.target
        dest: /etc/systemd/system/default.target    
            # undo the graphical target back to default multiuser
        state: link


# ************* Managing storage
Parameter   Description
Name
------------------------------------------
align       Configure partition alignment
device      block device
flags       Flags for the partition
number      The partition number
part_end    Partition size from the beginning of the disk, specified in "parted" supported units
state       Create or remove the partition
unit        Size units for the partition
-------------------------------------------------------

# create a new 10GB partition

- name: Create new 10GB partition
  parted:
    device: /dev/vdb
    number: 1
    state: present  <-------- ensure the partition is available
    part_end: 10GB



#*** lvg and lvol modules
"lvg" "lvol" modules support the creation of logical volumes, 
    including the configuration of physical volumes, and volume groups.
The "lvg" module takes as parameters the block devices to configure as the backing physical volumes for the volume group.

lvg:    logical volume group
lvol:   logical volume

Parameter   Description
Name
------------------------------------------
pesize      The size of the physical extent. Must be a power of 2, or a multiple of 128 KiB
pvs     List of comma separated devices to be configured as physical volumes for the volume group
vg      The name of the volume group
state       Create or remove the volume


- name: Create a volume group
  lvg:
    vg: vg-data     <-------- volume group name "vg-data"
    pvs: /dev/vda1  <-------- physical volume for the volume group
    pesize: 32      <---  physical extent size "32"


- name: Resize a volume group
  lvg:
    vg: vg-data
    pvs: /dev/vdb1, /dev/vdc1

The "lvol" module creates logical volumes, and supports the resizing and shrinking of those volumes, and of the filesystems on them.


Parameter   Description
Name
------------------------------------------------------
lv      The name of the logical volume
resizefs    Resizes the filesystem with the logical volume
shrink      Enable logical volume shrink
size        The size of the logical volume
vg      The parent volume group for the logical volume


# Create a logical volume of 2GB
  - name: Create a logical volume of 2GB
    lvol:
      vg: vg-data   <------- volume group
      lv: lv-data   <------ logical volume
      size: 2g

# ********* The filesystem module
The filesystem module supports both creating and resizing a file system, for ext2, ext3, ext4, ext4dev, f2fs, lvm, xfs, and vfat


Parameter   Description
Name
------------------------------------------------------
dev     block device name
fstype      filesystem type
resizefs    Grows the filesystem size to the size of the block device

# Create a filesystem on a partition
  - name: Create an XFS filesystem
    filesystem:
      fstype: xfs
      dev: /dev/vdb1

# ************** The mount module ********************


Parameter   Description
Name
------------------------------------------------------
path        mount point path
src     device to be mounted
state       Specify the mount status.
        If set to "mounted", the system mounts the device, and configures /etc/fstab with that mount information.
        To unmount the device and remove its /etc/fstab entry, set it to "absent";
        to unmount without touching /etc/fstab, set it to "unmounted".
# Example: mount a device with a specific UUID
  - name: Mount device with ID
    mount:
      path: /data   <------------ mount point
      src: UUID=xxxxxxxxxxxxxx  <------------ mount the device with the required UUID
      fstype: xfs
      state: mounted

It mounts the device at /data (the mount point), and configures /etc/fstab accordingly
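To undo such a mount, a sketch using the same mount point; with "state: absent" the mount module also removes the /etc/fstab entry:

```yaml
- name: Unmount /data and remove its /etc/fstab entry
  mount:
    path: /data
    state: absent
```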

# mount the NFS share available at 172.25.250.100:/share on the managed host

   - name: Mount NFS share
     mount: path=/nfsshare src=172.25.250.100:/share fstype=nfs opts=defaults dump=0 passno=2 state=mounted

#******** Configure swap with modules
Ansible Engine does not currently include modules to manage swap memory.
To add swap memory to a system with Ansible using logical volumes,
    you need to create a new volume group and logical volume with the lvg and lvol modules.

When ready, format the new logical volume using the "command" module with the "mkswap" command,
    then activate the new swap device using the command module with the "swapon" command.

Use the "ansible_swaptotal_mb" variable to trigger swap configuration and enablement when swap memory is low.

# Create swap memory - volume group and logical volume
 - name: Create new swap VG
   lvg:
     vg: vgswap
     pvs: /dev/vda1   <------ pvs  physical volume
     state: present

 - name: Create new swap LV
   lvol:
     vg: vgswap
     lv: lvswap
     size: 10g

 - name: Format swap LV
   command: mkswap /dev/vgswap/lvswap
   when: ansible_swaptotal_mb < 128
   when: ansible_swaptotal_mb < 128


 - name: Activate swap LV
   command: swapon /dev/vgswap/lvswap
   when: ansible_swaptotal_mb < 128


#*** Ansible FACTS for storage configuration
Ansible uses facts to retrieve information about the configuration of the managed hosts to the control node.
You can use the "setup" module to retrieve all the Ansible facts of the managed host.

ansible webservers -m setup # use Ansible "setup" module to retrieve all the Ansible facts

The "filter" option for the setup module supports fine-grained filtering based on shell-style wildcards
    ansible webservers -m setup -a 'filter=ansible_devices'     
        # filter all storage devices, or dictionary called "ansible_devices"



ls ~/system-storage
    ansible.cfg
    inventory
    storage_vars.yml
    storage.yml

# cat storage_vars.yml
---
partitions:
  - number: 1       <-------------- partition 1 will be created
    start: 1MiB     <--- look at Ansible module "unit" for storage
    end: 257MiB

volume_groups:
  - name: apache-vg
    devices: /dev/vdb1

logical_volumes:
  - name: content-lv
    size: 64M
    vgroup: apache-vg
    mount_path: /var/www

  - name: logs-lv
    size: 128M
    vgroup: apache-vg
    mount_path: /var/httpd


Then update storage_vars.yml, then run storage.yml again to see the changes
---
partitions:
  - number: 1       <-------------- partition 1 will be created
    start: 1MiB     <--- look at Ansible module "unit" for storage
    end: 257MiB
  - number: 2
    start: 257MiB
    end: 513MiB

volume_groups:
  - name: apache-vg
    devices: /dev/vdb1, /dev/vdb2

logical_volumes:
  - name: content-lv
    size: 128M
    vgroup: apache-vg
    mount_path: /var/www

  - name: logs-lv
    size: 256M
    vgroup: apache-vg
    mount_path: /var/httpd



# cat storage.yml
---
- name: Ensure Apache storage configuration
  hosts: webservers
  vars_files:
    - storage_vars.yml

  tasks:

    - name: Correct partitions exist on /dev/vdb
      parted:
        device: /dev/vdb
        state: present
        number: "{{ item.number }}"
        part_start: "{{ item.start }}"
        part_end: "{{ item.end }}"
      loop: "{{ partitions }}"

    - name: Ensure volume groups exist
      lvg:
        vg: "{{ item.name }}"
        pvs: "{{ item.devices }}"
      loop: "{{ volume_groups }}"

    - name: Create each logical volume (LV) if needed
      lvol:
        vg: "{{ item.vgroup }}"
        lv: "{{ item.name }}"
        size: "{{ item.size }}"
      loop: "{{ logical_volumes }}"
      when: item.name not in ansible_lvm["lvs"]
        # only create the logical volume when it does not exist yet

    - name: Ensure xfs filesystem exists on each LV
      filesystem:
        dev: "/dev/{{ item.vgroup }}/{{ item.name }}"
        fstype: xfs
      loop: "{{ logical_volumes }}"

    - name: Ensure the correct capacity for each LV
      lvol:
        vg: "{{ item.vgroup }}"
        lv: "{{ item.name }}"
        size: "{{ item.size }}"
        resizefs: yes
            # an xfs filesystem can NOT be shrunk, it will error out if the size is smaller
        force: yes
      loop: "{{ logical_volumes }}"

    - name: Each logical volume is mounted
      mount:
        path: "{{ item.mount_path }}"
        src: "/dev/{{ item.vgroup }}/{{ item.name }}"
        fstype: xfs
        opts: noatime  <----- mount options, comma separated
        state: mounted
      loop: "{{ logical_volumes }}"


# Verify storage
ansible all -a pvs
ansible all -a vgs
ansible all -a lvs
ansible all -a lsblk


#******************** Managing Network Configuration *******************
# System Roles
RHEL 8 includes a collection of Ansible system roles to configure RHEL.
The "rhel-system-roles" package installs those system roles, including roles for time synchronization and networking.

# To view list of currently installed system roles
    ansible-galaxy list

Roles are located in /usr/share/ansible/roles   directory
A role beginning with "linux-system-roles" is a symlink to the matching "rhel-system-roles" role

The network system role supports the configuration of networking on managed hosts:
ethernet interfaces, bridge interfaces, bonded interfaces, VLAN interfaces, MAC vLAN interfaces, and infiniband interfaces

The "network_provider" variable configures the backend provider, either "nm" (NetworkManager), or initscripts

In RHEL 8, the network role uses "nm" (NetworkManager) as the default networking provider.
The "initscripts" provider is used for RHEL 6, and requires the "network" service to be available.

The "network_connections" variable configures the different connections, 
    specified as a list of dictionaries, using the interface name as the connection name.


---
 network_provider: nm
 network_connections:
   - name: ens4
     type: ethernet
     ip:
       address:
         - 192.168.10.10/24


network_connections:
  - name: eth0
    persistent_state: present
    type: ethernet
    autoconnect: true   
        # automatically starts the connection at boot, this is the default value
    mac: 00:00:5e:00:53:53  
        # Ansible checks the MAC address on the managed host; the profile is applied only to the NIC with this MAC
    ip:
      address:
        - 192.168.10.20/24
    zone: external


- name: NIC configuration
  hosts: webservers
  vars:
    network_connections:
      - name: ens4 <--------- ens4 profile
        type: ethernet
        ip:
          address:
            - 192.168.10.30/24
  roles:
    - rhel-system-roles.network

You can specify variables for the network role with the "vars" clause, 
    or create a YAML file with those variables under the "group_vars" or "host_vars" directories, 
    depending on your use cases.


#*********** Configuring networking with modules
As an alternative, Ansible engine includes a collection of modules which support the network configuration on a system.
The "nmcli" module supports the management of both network connections and devices, 
    supports the configuration of both teaming and bonding for network interfaces, 
    as well as IPv4 and IPv6 addressing.


# nmcli module parameters
Parameter       Description
Name
--------------------------------------------------
conn_name       Configures the connection name
autoconnect     Enable automatic connection activation on boot
dns4            Configure DNS servers for IPv4 (up to 3)
gw4         Configure the IPv4 gateway for the interface
ifname          Interface to be bound to the connection
ipv4            IP address (IPv4) for the interface
state           Enables or disables the network interface
type            Type of device or network connection


  - name: NIC configuration
    nmcli:
      conn_name: ens4-conn
      ifname: ens4
      type: ethernet
      ipv4: 192.168.20.30/24
      gw4: 192.168.20.1
      state: present


All network interfaces for a managed host are available under the "ansible_interfaces" element.
Use "gather_subset=network" parameter for the setup module to restrict the facts to those included in the network subset.
The "filter" option for the setup module supports fine-grained filtering based on shell-style wildcards.

ansible webservers -m setup \
 -a 'gather_subset=network filter=ansible_interfaces'


You can retrieve additional information about the configuration 
    for a network interface with the "ansible_NIC_name" filter for the setup module.

# Retrieve the configuration for the "ens4" network interface.
ansible webservers -m setup \
 -a 'gather_subset=network filter=ansible_ens4'


# verify ansible network roles installation
ansible-galaxy list # verify "linux-system-roles.network" and "rhel-system-roles.network"


# Setup TCP/IP configuration
---
  network_connections:
    - name: eth0
      type: ethernet
      ip:
        route_metric4: 100
        dhcp4: no
        #dhcp4_send_hostname: no
        gateway4: 192.168.2.1

        dns:
          - 192.168.2.10
          - 192.51.100.10

        route_metric6: -1   <------- "-1" selects the default route metric
        auto6: no
        gateway6: 2001:db8::1

        address:
          - 192.0.2.3/24
          - 10.10.0.3/26


# check documentation
less /usr/share/doc/rhel-system-roles/network/README.md


# cat group_vars/webservers/network.yml
---
network_connections:
  - name: enp2s0
    type: ethernet
    ip:
      address:
        - 192.168.10.10/24


# network configuration

network_connections:
  - name: eth0
    type: ethernet
    ip:
      route_metric4: 100
      dhcp4: no
      #dhcp4_send_hostname: no
      gateway4: 192.168.10.1

      dns:
        - 192.0.2.2
        - 198.51.10.2
      dns_search:
        - test.lab
        - subdomain.test.lab

      route_metric6: -1
      auto6: no
      gateway6: 2001:db8::1

      address:
        - 192.0.2.3/24
        - 198.51.100.3/26

# cat playbook.yml
---
- name: NIC Configuration
  hosts: webservers

  roles:
    - rhel-system-roles.network

# verify result
ansible webservers -m setup -a 'filter=ansible_enp2s0'


# cat repo_playbook.yml
---
- name: Repository Configuration
  hosts: webservers
  tasks:

    - name: Ensure example repo exists
      yum_repository:
        name: example-internal
        description: Example Inc. Internal Yum repo
        file: example
        baseurl: http://materials.test.lab/yum/repository
        gpgcheck: true

    - name: Ensure Repo RPM key is installed
      rpm_key:
        key: http://materials.test.lab/yum/repository/RPM-GPG-KEY-example
        state: present

    - name: Install Example motd package
      yum:
        name: example-motd
        state: present


# Create users
mkdir vars
vim vars/users_vars.yml
cat vars/users_vars.yml

---
users:
  - username: ops1
    groups: webadmin
  - username: ops2
    groups: webadmin


# users.yml
---
- name: Create multiple local users
  hosts: webservers
  vars_files:
    - vars/users_vars.yml

  tasks:
  
    - name: Add webadmin group
      group:
        name: webadmin
        state: present

    - name: Create user accounts
      user:
        name: "{{ item.username }}"
        groups: webadmin
      loop: "{{ users }}"

# cat storage_vars.yml
---
partitions:
  - number: 1
    start: 1MiB
    end: 257MiB

volume_groups:
  - name: apache-vg
    devices: /dev/vdb1

logical_volumes:
  - name: content-lv
    size: 64M
    vgroup: apache-vg
    mount_path: /var/www

  - name: logs-lv
    size: 128M
    vgroup: apache-vg
    mount_path: /var/log/httpd

---
- name: Ensure Apache storage configuration
  hosts: webservers
  vars_files:
    - storage_vars.yml

  tasks:
    - name: Correct partitions exist on /dev/vdb
      parted:
        device: /dev/vdb
        state: present
        number: "{{ item.number }}"
        part_start: "{{ item.start }}"
        part_end: "{{ item.end }}"
      loop: "{{ partitions }}"

    - name: Ensure volume groups exist
      lvg:
        vg: "{{ item.name }}"
        pvs: "{{ item.devices }}"
      loop: "{{ volume_groups }}"

    - name: Create each logical volume (LV) if needed
      lvol:
        vg: "{{ item.vgroup }}"
        lv: "{{ item.name }}"
        size: "{{ item.size }}"
      loop: "{{ logical_volumes }}"

    - name: Ensure XFS filesystem exists on each LV
      filesystem:
        dev: "/dev/{{ item.vgroup }}/{{ item.name }}"
        fstype: xfs
      loop: "{{ logical_volumes }}"

    - name: Ensure the correct capacity for each LV
      lvol:
        vg: "{{ item.vgroup }}"
        lv: "{{ item.name }}"
        size: "{{ item.size }}"
        resizefs: true
      loop: "{{ logical_volumes }}"

    - name: Ensure logical volume is mounted
      mount:
        path: "{{ item.mount_path }}"
        src: "/dev/{{ item.vgroup }}/{{ item.name }}"
        fstype: xfs
        state: mounted
      loop: "{{ logical_volumes }}"

# Create_crontab_file.yml
---
- name: Recurring cron job
  hosts: webservers
  become: true

  tasks:
    - name: Crontab file exists
      cron:
        name: Add date and time to a file
        minute: "*/2"
        hour: 9-16
        weekday: 1-5
        job: df >> /home/devops/disk_usage
        cron_file: disk_usage
        state: present


ansible-galaxy list # verify network roles installed

# network_playbook.yml
---
- name: NIC configuration
  hosts: webservers
  
  roles:
    - rhel-system-roles.network

mkdir -pv group_vars/webservers
# cat group_vars/webservers/network.yml
---
network_connections:
 - name: enp2s0
   type: ethernet
   ip:
     address:
       - 172.25.205.40/24

# cat ftpclient.yml
---
- name: Ensure FTP client configuration
  hosts: ftpclients
  tasks:

    - name: latest version of lftp is installed
      yum:
        name: lftp
        state: latest

# ansible-vsftpd.yml
---
- name: FTP server is installed
  hosts: ftpservers

  vars_files:
    - vars/default-template.yml
    - vars/vars.yml

  tasks:
    - name: Packages are installed
      yum:
        name: "{{ vsftpd_package }}"
        state: present

    - name: Ensure service is started
      service:
        name: "{{ vsftpd_service }}"
        state: started
        enabled: true

    - name: Configuration file is installed
      template:
        src: templates/vsftpd.conf.j2
        dest: "{{ vsftpd_config_file }}"
        owner: root
        group: root
        mode: 0600
        setype: etc_t
      notify: restart vsftpd

    - name: firewalld is installed
      yum:
        name: firewalld
        state: present

    - name: firewalld is started and enabled
      service:
        name: firewalld
        state: started
        enabled: true
  
    - name: FTP port is open
      firewalld:
        service: ftp
        permanent: true
        state: enabled
        immediate: yes

    - name: FTP passive data port is open
      firewalld:
        port: 21000-21020/tcp
        permanent: true
        state: enabled
        immediate: yes

  handlers:

    - name: restart vsftpd
      service:
        name: "{{ vsftpd_service }}"
        state: restarted


# cat site.yml
---
# FTP server playbook
- import_playbook: ansible-vsftpd.yml

# FTP client playbook
- import_playbook: ftpclients.yml



#***************** more example
# cat ansible.cfg
[defaults]
remote_user=devops
inventory=inventory/    # inventory points to the "inventory" directory


[privilege_escalation]
become=true
become_method=sudo
become_user=root
become_ask_pass=false



# ansible-inventory --list
{
   "_meta": {
       "hostvars": {
          "servera.test.lab": {},
          "serverb.test.lab": {},
          "serverc.test.lab": {},
          "serverd.test.lab": {}
       }  
   },
   "all": {
      "children": [
          "ftpclients",
          "ftpservers",
          "ungrouped"
        ]
    },
    "ftpclients": {
       "hosts": [
           "servera.test.lab",
           "serverc.test.lab"
       ]
    },
    "ftpservers": {
       "hosts": [
           "serverb.test.lab",
           "serverd.test.lab"
       ]
    }
}
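For reference, a static inventory that would produce the listing above could be laid out like this (a hypothetical inventory/inventory file; the filename is an assumption):

```
[ftpclients]
servera.test.lab
serverc.test.lab

[ftpservers]
serverb.test.lab
serverd.test.lab
```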


# Convert ansible playbook to role
mkdir -v roles
cd roles/
ansible-galaxy init ansible-vsftpd  
    # using Ansible galaxy to create initial role, it will create the directories and initial files
mv -v default-template.yml roles/ansible-vsftpd/defaults/main.yml
mv -v vars.yml roles/ansible-vsftpd/vars/main.yml
mv -v vsftpd.conf.j2 roles/ansible-vsftpd/templates/
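For orientation, `ansible-galaxy init` creates the standard role skeleton, so the `mv` commands above drop the existing files into the right places. The generated layout looks like this:

```
ansible-vsftpd/
├── README.md
├── defaults/
│   └── main.yml
├── files/
├── handlers/
│   └── main.yml
├── meta/
│   └── main.yml
├── tasks/
│   └── main.yml
├── templates/
├── tests/
│   ├── inventory
│   └── test.yml
└── vars/
    └── main.yml
```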


# cat vsftpd-configure.yml
---
- name: Install and configure vsftpd
  hosts: ftpservers
  vars:
    vsftpd_anon_root: /mnt/share/
    vsftpd_local_root: /mnt/share/

  roles:
    - ansible-vsftpd

  tasks:
   
    - name: /dev/vdb1 is partitioned
      command: >
        parted --script /dev/vdb mklabel gpt mkpart primary 1MiB 100%
      args:
        creates: /dev/vdb1

    - name: XFS file system exists on /dev/vdb1
      filesystem:
        dev: /dev/vdb1
        fstype: xfs
        force: yes

    - name: anon_root mount point exists
      file:
        path: '{{ vsftpd_anon_root }}'
        state: directory

    - name: /dev/vdb1 is mounted on anon_root
      mount:
        name: '{{ vsftpd_anon_root }}'
        src: /dev/vdb1
        fstype: xfs
        state: mounted
        dump: '1'
        passno: '2'
      notify: restart vsftpd

    - name: Make sure permissions on mounted fs are correct
      file:
        path: '{{ vsftpd_anon_root }}'
        owner: root
        group: root
        mode: '0755'
        setype: "{{ vsftpd_setype }}"
        state: directory

    - name: Copy README to the ftp anon_root
      copy:
        dest: '{{ vsftpd_anon_root }}/README'
        content: "Welcome to the FTP server at {{ ansible_fqdn }}\n"
        setype: '{{ vsftpd_setype }}'


# cat site.yml
# FTP servers playbook
- import_playbook: vsftpd-configure.yml

# FTP client playbook
- import_playbook: ftpclients.yml


#************************** Automating Linux administration tasks
Task: subscribe systems, configure software channels and repositories, and manage RPM packages on managed hosts

ansible-doc yum     # verify parameters and playbook examples


# Install multiple packages
---
- name: Install the required packages on the web server
  hosts: servera.test.lab
  tasks:

    - name: Install the packages
      yum:
        name:
          - httpd
          - mod_ssl
          - httpd-tools
        state: present      <--------- latest, absent


# Gathering facts about the installed packages
The "package_facts" module collects details about the packages installed on managed hosts.
It sets the ansible_facts.packages variable with the package details.

---
- name: Display installed packages
  hosts: servera.test.lab
  
  tasks:
      
    - name: Gather info on installed packages
      package_facts:
        manager: auto   <------ auto-detect the package manager; on RHEL 8 it is yum
  
    - name: List installed packages
      debug:
        var: ansible_facts.packages

    - name: Display Network Manager version
      debug:
        msg: "Version {{ ansible_facts.packages['NetworkManager'][0].version }}"
      when: "'NetworkManager' in ansible_facts.packages"


---
- name: Install the required packages on the web servers
  hosts: webservers
  tasks:
    - name: Install httpd on RHEL
      yum:
        name: httpd
        state: present
      when: "ansible_distribution == 'RedHat'"

    - name: Install httpd on Fedora
      dnf:
        name: httpd
        state: present
      when: "ansible_distribution == 'Fedora'"
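When a playbook must support several distributions, the generic "package" module is an alternative to branching on facts; it selects the platform's package manager automatically. A minimal sketch of the same install:

```yaml
- name: Install the required packages on the web servers
  hosts: webservers
  tasks:
    - name: Install httpd with the platform's package manager
      package:
        name: httpd
        state: present
```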


# Access Red Hat CDN - Content Delivery Network
- name: Playbook to access the Red Hat CDN
  hosts: servera.test.lab
  vars:
     cdn_username:  <----------- store the sensitive credentials in Ansible Vault
     cdn_password:
     repos:
       - rhel-8-for-x86_64-baseos-rpms
       - rhel-8-for-x86_64-baseos-debug-rpms

  tasks:
    
    - block:
   
      - name: Register the server
        redhat_subscription:
           username: "{{ cdn_username }}"
           password: "{{ cdn_password }}"
           auto_attach: true

      - name: Disable all repositories
        rhsm_repository:
          name: '*'
          state: disabled

      - name: Enable core RHEL repositories
        rhsm_repository:
          name: "{{ repos }}"   <------------- Refer to the variable "repos"
          state: enabled

      when:
        - cdn_username != ""



# cat /etc/yum.repos.d/rhel_dvd.repo
[rhel-8.0-for-x86_64-baseos-rpms]
baseurl = http://content.test.lab/rhel8.0/x86_64/dvd/BaseOS
enabled = true
gpgcheck = false
name = Red Hat Enterprise Linux 8.0 BaseOS (dvd)
[rhel-8.0-for-x86_64-appstream-rpms]
baseurl = http://content.test.lab/rhel8.0/x86_64/dvd/AppStream
enabled = true
gpgcheck = false
name = Red Hat Enterprise Linux 8.0 AppStream (dvd) 



--- 
- name: Enable external repository
  hosts: servera.test.lab
  vars:
    repo_name: baseos_repository
    repo_url: http://content.test.lab/rhel8.0/x86_64/dvd/BaseOS
    repo_enabled: true
    repo_gpgcheck: false
    repo_desc: Red Hat Enterprise Linux 8.0 BaseOS (dvd)

  tasks:

    - name: Define "{{ repo_name }}"
      yum_repository:
        file: "{{ repo_name }}"
        name: "{{ repo_name }}"
        description: "{{ repo_desc }}"
        baseurl: "{{ repo_url }}"
        gpgcheck: "{{ repo_gpgcheck }}"
        enabled: "{{ repo_enabled }}"
        state: present


#************ Schedule with the "at" module
Quick one-time scheduling is done with the "at" module.
    You create a job to run at a future time, and it is held until that time arrives.

The parameters
-------------------------------------------------
Parameter   Options     Comments
------------------------------------------------------------------------
command     null        A command that is scheduled to run
count       null        The count of units (must run with units)
script_file null        An existing script file to be executed in the future
state       absent, present     The state adds or removes a command or script
unique      yes, no     If a matching job is already scheduled, a new one will not be added
units       minutes/hours/days/weeks    The time denominations


- name: Remove tempuser
  at: 
    command: userdel -r tempuser
    count: 20
    units: minutes
    unique: yes


- name: Flush Bolt
  cron:
    user: "root"
    minute: "45"
    hour: "11"
    job: "php ./app/nut cache:clear"


# ******* cron module
The parameters
-------------------------------------------------
Parameter   Options     Comments
------------------------------------------------------------------------
special_time    reboot, yearly,     A set of recurring times
        annually, monthly
        weekly, daily
        hourly
state       absent, present     "present" - create
                    "absent" - remove
cron_file   null            If there is a large bank of servers to maintain,
                    it is better to have a pre-written crontab file
backup      yes, no         Back up the crontab file prior to editing
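A sketch combining several parameters from the table above (the script path and crontab file name are hypothetical):

```yaml
- name: Run a backup script at every reboot
  cron:
    name: Backup at reboot            # identifies the entry in the crontab
    special_time: reboot              # replaces the minute/hour/weekday fields
    job: /usr/local/bin/backup.sh     # hypothetical script
    cron_file: backup                 # written to /etc/cron.d/backup
    user: root                        # required when cron_file is used
    backup: yes                       # keep a copy of the crontab before editing
```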




# ********** systemd and service modules
For managing "services" or reloading "daemons", Ansible has the "systemd" and "service" modules.
The "service" module offers a basic set of options: start, stop, restart, enable.
The "systemd" module offers more configuration options and allows a daemon reload, which the service module does not.

Note:
The init daemon is being replaced by systemd, so in many cases systemd is the better option.
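A minimal sketch contrasting the two modules (assumes an httpd service on the managed hosts):

```yaml
- name: Restart httpd with the basic service module
  service:
    name: httpd
    state: restarted

- name: Restart httpd with systemd, reloading unit files first
  systemd:
    name: httpd
    state: restarted
    enabled: true
    daemon_reload: true    # only the systemd module supports this
```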



# ********* The "reboot" module
Considered safer than using the shell module to initiate shutdown.

- name: Reboot after patching
  reboot:
    reboot_timeout: 180     <------ 180 seconds

- name: Force a quick reboot
  reboot:   <------ no timeout specified; the default is used


#*********** The "wait_for" module

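The "wait_for" module pauses a play until a condition is met, such as a port opening or a file appearing. A sketch that waits for sshd to come back after a reboot:

```yaml
- name: Wait until sshd is reachable on the managed host
  wait_for:
    port: 22
    delay: 10       # wait before the first check
    timeout: 300    # give up after 5 minutes
```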

#*********** The "shell" and "command" modules
The "command" module is considered more secure, but some environment variables are not available and stream operators (redirection, pipes) do not work.
In those cases, the "shell" module must be used.

# Shell module example
- name: Run a templated variable (always use quote filter to avoid injection)
  shell: cat {{ myfile|quote }}

To sanitize any variables, it is suggested that you use "{{ var | quote }}" instead of just "{{ var }}".


# The command module
- name: The command example
  command: /usr/bin/scrape_logs.py arg1 arg2
  args:
    chdir: scripts/
    creates: /path/to/script

#** environment variable
"ansible_env" has the environment variables inside it

---
- name: Environment
  hosts: webservers
  vars:
    local_shell: "{{ ansible_env }}"    <-- create a variable named "local_shell" holding the value of "ansible_env"

  tasks:
    - name: Printing all the environment variables in Ansible
      debug:
        msg: "{{ local_shell }}"

Note: You can isolate the variable you want to return by using the lookup plugin.
msg: "{{ lookup('env', 'USER', 'HOME', 'SHELL') }}"



---
- name: Schedule at task
  hosts: webservers
  become: true
  become_user: devops

  tasks:
 
    - name: Create date and time file
      at: 
        command: "date > ~/my_at_date_time\n"
        count: 1
        units: minutes
        unique: yes
        state: present

# Ansible sudoers file

# Ansible playbooks require the ansible user to ssh without a password
# Create an ansible sudoers file

0600  /etc/sudoers.d/ansible      # create a sudoers file named "ansible" with mode 0600


# Create ansible playbook sudoer
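A sketch of such a playbook; it assumes the automation account is named "ansible" and grants it passwordless sudo:

```yaml
- name: Deploy a sudoers file for the ansible user
  hosts: all
  become: true
  tasks:
    - name: Allow the ansible user passwordless sudo
      copy:
        dest: /etc/sudoers.d/ansible
        content: "ansible ALL=(ALL) NOPASSWD: ALL\n"
        owner: root
        group: root
        mode: '0600'
        validate: /usr/sbin/visudo -cf %s    # syntax-check before installing
```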

Ansible Utilities for Operational State Management and Remediation

https://www.ansible.com/blog/using-new-ansible-utilities-for-operational-state-management-and-remediation/

Comparing the current operational state of your IT infrastructure to your desired state is a common use case for IT automation. This allows automation users to identify drift or problem scenarios to take corrective actions and even proactively identify and solve problems.

How to write ansible module

https://spacelift.io/blog/ansible-modules

Ensure that a similar module doesn’t exist to avoid unnecessary work. Additionally, you might be able to combine different modules to achieve the functionality you need. In this case, you might be able to replicate the behavior you want by creating a role that leverages other modules. Another option is to use plugins to enhance Ansible’s basic functionality with logic and new features accessible to all modules.