Ansible ROLE Explained With a Real Use Case


What is Ansible?

Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.

Ansible works by connecting to your nodes and pushing out small programs, called modules, to them. Modules are used to accomplish automation tasks in Ansible. These programs are written to be resource models of the desired state of the system. Ansible then executes these modules and removes them when finished.
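For example, a single module can be pushed to a group of managed nodes ad-hoc, straight from the command line. The one-liners below are a minimal sketch; the host group name webservers and the package name are assumptions for illustration:

# ansible webservers -m ping

# ansible webservers -m package -a "name=httpd state=present"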


Then What is a ROLE?

Roles provide a framework for fully independent or interdependent collections of files, tasks, templates, variables, and modules. The role is the primary mechanism for breaking a playbook into multiple files: it simplifies writing complex playbooks and splits them into reusable components. Each role is limited to a particular functionality or desired output, with all the necessary steps to provide that result contained either within the role itself or in other roles listed as dependencies.

Roles are not playbooks. A role is a small unit of functionality that can be reused independently across playbooks, and it carries no setting for which hosts it will apply to. Top-level playbooks are the bridge between the hosts in your inventory file and the roles that should be applied to those hosts.

The command used for creating a role:

ansible-galaxy role init <role_name>
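Running this command scaffolds a standard directory layout for the role, roughly like the tree below (the exact layout may vary slightly between Ansible versions):

<role_name>/
├── README.md
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
├── tests/
└── vars/main.yml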

To list the roles available on our system:

ansible-galaxy role list

Now let's configure some infrastructure using ROLES:

Problem Statement: I will create 2 Ansible ROLES, one for configuring an Apache HTTPD server and another for configuring an HAProxy load balancer. Finally, I will use these two roles to control the webserver version and to solve the challenge of adding each managed node's IP address dynamically to the haproxy.cfg file.

Solution:

  • Create a workspace myroles & initialize 2 roles there:
# mkdir myroles/

# cd myroles/

# ansible-galaxy init myapache

 
# ansible-galaxy init myloadbalancer

  • Update the roles path in the Ansible configuration file:
[defaults]

inventory=/etc/ansible/ip.txt

host_key_checking=false

roles_path= /root/myroles/
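The inventory file referenced above, /etc/ansible/ip.txt, has to define the host groups that the roles and the final playbook will target. A minimal sketch (the IP addresses are placeholders) could look like this; note that the group name webservers must match both groups['webservers'] in the HAProxy template and hosts: in the final playbook:

[webservers]
192.168.43.101
192.168.43.102

[loadbalancer]
192.168.43.103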

  • Working in the myapache role:
# cd myapache/

# vim tasks/main.yml


---
# tasks file for myapache
- name: "installing apache software"
  package:
    name: "{{ p_name }}"
    state: present

- name: "webpage"
  copy:
    src: files/index.html
    dest: /var/www/html/index.html
  register: x

- name: "starting the web service"
  service:
    name: "{{ s_name }}"
    state: started
  when: x.changed


# vim vars/main.yml

---
# vars file for myapache
p_name: "httpd"
s_name: "httpd"

	 

# vim files/index.html

<br/>
<h1>
   <marquee> Index Page Configured By Ansible ROLE :) </marquee>
</h1>
<br/>
<h2> BYE !! </h2>
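A quick note on the register/when pattern in the tasks file above: the service is started only in runs where the web page content changes. An alternative sketch (not what the original role uses) keeps the service task unconditional and lets the copy task notify a handler defined in handlers/main.yml, so httpd is restarted whenever the page is updated:

# vim handlers/main.yml

---
# handlers file for myapache
- name: "restart web service"
  service:
    name: "{{ s_name }}"
    state: restarted

With this in place, the copy task would carry notify: "restart web service" instead of register: x, and the conditional on the service task would no longer be needed.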

  • Working in the myloadbalancer role:
# cd ../myloadbalancer/

# vim tasks/main.yml


---
# tasks file for myloadbalancer
- name: "installing haproxy software"
  package:
    name: "{{ p_name }}"
    state: present

- name: "setting up configuration file"
  template:
    src: "files/haproxy.cfg.j2"
    dest: "/etc/haproxy/haproxy.cfg"

- name: "starting the firewalld service"
  service:
    name: "firewalld"
    state: started

- name: "exposing the port of proxy"
  firewalld:
    port: "{{ port_no }}/tcp"
    state: enabled
    permanent: yes
    immediate: yes

- name: "disabling selinux"
  command: "setenforce 0"

- name: "starting the haproxy service"
  service:
    name: "{{ s_name }}"
    state: started
    enabled: yes
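A side note on the selinux task: "setenforce 0" works for a demo, but it is neither idempotent nor persistent across reboots. A declarative alternative (just a sketch, assuming the libselinux Python bindings are installed on the managed node) is the selinux module:

- name: "setting selinux to permissive"
  selinux:
    policy: targeted
    state: permissive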

# vim vars/main.yml
---
# vars file for myloadbalancer
p_name: "haproxy"
s_name: "haproxy"
port_no: 8080


# vim files/haproxy.cfg.j2

#---------------------------------------------------------------------
	# Example configuration for a possible web application.  See the
	# full configuration options online.
	#
	#   https://www.haproxy.org/download/1.8/doc/configuration.txt
	#
	#---------------------------------------------------------------------
	

	#---------------------------------------------------------------------
	# Global settings
	#---------------------------------------------------------------------
	global
	    # to have these messages end up in /var/log/haproxy.log you will
	    # need to:
	    #
	    # 1) configure syslog to accept network log events.  This is done
	    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
	    #    /etc/sysconfig/syslog
	    #
	    # 2) configure local2 events to go to the /var/log/haproxy.log
	    #   file. A line like the following can be added to
	    #   /etc/sysconfig/syslog
	    #
	    #    local2.*                       /var/log/haproxy.log
	    #
	    log         127.0.0.1 local2
	

	    chroot      /var/lib/haproxy
	    pidfile     /var/run/haproxy.pid
	    maxconn     4000
	    user        haproxy
	    group       haproxy
	    daemon
	

	    # turn on stats unix socket
	    stats socket /var/lib/haproxy/stats
	

	    # utilize system-wide crypto-policies
	    ssl-default-bind-ciphers PROFILE=SYSTEM
	    ssl-default-server-ciphers PROFILE=SYSTEM
	

	#---------------------------------------------------------------------
	# common defaults that all the 'listen' and 'backend' sections will
	# use if not designated in their block
	#---------------------------------------------------------------------
	defaults
	    mode                    http
	    log                     global
	    option                  httplog
	    option                  dontlognull
	    option http-server-close
	    option forwardfor       except 127.0.0.0/8
	    option                  redispatch
	    retries                 3
	    timeout http-request    10s
	    timeout queue           1m
	    timeout connect         10s
	    timeout client          1m
	    timeout server          1m
	    timeout http-keep-alive 10s
	    timeout check           10s
	    maxconn                 3000
	

	#---------------------------------------------------------------------
	# main frontend which proxys to the backends
	#---------------------------------------------------------------------
	frontend main
	    bind *:{{ port_no }}
	    acl url_static       path_beg       -i /static /images /javascript /stylesheets
	    acl url_static       path_end       -i .jpg .gif .png .css .js
	

	    use_backend static          if url_static
	    default_backend             app
	

	#---------------------------------------------------------------------
	# static backend for serving up images, stylesheets and such
	#---------------------------------------------------------------------
	backend static
	    balance     roundrobin
	    server      static 127.0.0.1:4331 check
	

	#---------------------------------------------------------------------
	# round robin balancing between the various backends
	#---------------------------------------------------------------------
	backend app
	    balance     roundrobin
	   {% for ip in groups['webservers'] %}
	    server  app{{ loop.index }} {{ ip }}:80 check
	   {% endfor %}
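This {% for %} loop is what solves the dynamic IP challenge: when the template task runs, Ansible expands the webservers inventory group, so with two hypothetical webserver IPs the rendered section of /etc/haproxy/haproxy.cfg would look roughly like this:

backend app
    balance     roundrobin
    server  app1 192.168.43.101:80 check
    server  app2 192.168.43.102:80 check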
	

  • Now just create the final playbook & execute it:
# vim setup.yml

- hosts: webservers
  roles:
    - role: "myapache"

- hosts: loadbalancer
  roles:
    - role: "myloadbalancer"

That's all from my side, have a fantastic read _/\_

Thank You Guys :)