
Ansible Ninja

Building Change Reports with the Ansible Controller API

Posted on September 20, 2022 - September 26, 2022 by jimmy

One of the main inconveniences with Ansible Controller is that while every playbook it runs is logged, boiling those logs down to just the data you need, in a usable format, can be difficult. There is no real way to customize the reporting built into the system; instead, you are generally expected to buy a third-party tool (some of the recommended ones can be super expensive) to export the data to and build reports from. That might not be a tenable solution for a lot of people, so today we will explore ways of utilizing Ansible with the Ansible Controller API to pull the data we need and build our own reports.

We will start with a simple scenario.

“I have multiple playbooks that I periodically run against my hosts, but I only want a report on hosts where something was changed, and for it to tell me what actually changed in a usable format (a CSV or HTML file).”

This is a very common scenario. Customers have a Security Hardening playbook that they run daily on a schedule against their hosts, and they just want to be alerted when something is out of compliance. If Ansible Controller allowed you to create a notification on “Changed” this would be easier, but alas, it only allows notifications on Success or Failed. So instead we are going to replicate this concept ourselves.

As with most things in Ansible, there are many routes we could take to accomplish this. Since we may have any number of playbooks we want to report against, it makes the most sense to keep the reporting agnostic of the original playbook: we will create all the reporting in its own playbook and simply point it at the Job we want to report on. Again, there are multiple ways to do this, but the simplest is to utilize Workflows, which let us run playbooks one after another, making that “pointing” trivial. As an example, our simple Workflow may look like this: the first node is our Security Hardening Job Template, while the last node is the Reporting Job Template.
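If you prefer to define the Workflow as code rather than clicking it together, a sketch along these lines can create it. This assumes the awx.awx collection is installed, and the Job Template names here are hypothetical placeholders for your own:

- name: Build a two-node workflow - hardening first, then reporting
  awx.awx.workflow_job_template:
    name: Security Hardening Audit
    organization: Default
    state: present
    workflow_nodes:
      - identifier: hardening
        unified_job_template:
          name: Security Hardening       # hypothetical hardening Job Template
          type: job_template
          organization:
            name: Default
        related:
          success_nodes:
            - identifier: reporting
      - identifier: reporting
        unified_job_template:
          name: Change Report            # the reporting Job Template we build below
          type: job_template
          organization:
            name: Default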

Instead of scheduling the individual Job to run, we will schedule this Workflow, which now includes our reporting.
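Creating that schedule can itself be automated; a minimal sketch using the awx.awx.schedule module (the workflow name and rrule values are placeholders) might look like:

- name: Schedule the workflow to run daily at 2 AM
  awx.awx.schedule:
    name: Daily Hardening Audit
    unified_job_template: Security Hardening Audit
    rrule: "DTSTART;TZID=America/Chicago:20220926T020000 RRULE:FREQ=DAILY;INTERVAL=1"
    state: present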

When running jobs in a Workflow, each node receives a few special variables from Controller. We will be utilizing one of them, tower_workflow_job_id, to call the Controller API and pull the full listing of jobs running in that workflow. From there we can look up the job and all its events to create our report. So now we will start building out our playbook.

To start, since we know we will be running this from Controller and accessing the API, you will want to create a Credential of the type “Red Hat Ansible Automation Platform” and fill it out with access to your Controller. We will then pull those credentials into variables to make them easier to use later (as we won’t be using the Ansible Controller modules). We will also add a few variables that determine the maximum amount of data we want to pull (to reduce load on the Controllers).

- name: Check previous job for change entries
  hosts: localhost
  gather_facts: no
  connection: local
  vars:
    # These environment variables are injected by the attached
    # "Red Hat Ansible Automation Platform" credential
    tower_server: '{{ lookup("env", "TOWER_HOST") }}'
    tower_username: '{{ lookup("env", "TOWER_USERNAME") }}'
    tower_password: '{{ lookup("env", "TOWER_PASSWORD") }}'
    max: 50        # maximum number of event pages to pull
    page_size: 200 # events per API page

Before we start building out our report, we need to set a few things up, which we will do in pre_tasks. Putting these in pre_tasks isn’t strictly necessary for this playbook since we are just running against localhost, but I have other reporting playbooks that run against their own hosts, so I try to stay consistent across all of them. The first thing we check is whether our special variable exists; if it doesn’t, someone ran the Job Template by itself or from the command line, and we don’t want that. Next, since we will be assembling our report from multiple files, we build out the temporary directory structure they will be created in, first wiping any leftovers from a previous run. If you are running a version of Controller that uses Execution Environments, the wipe isn’t necessary (the workspace always starts empty).

  pre_tasks:
    - name: Precheck
      ansible.builtin.assert:
        that:
          - tower_workflow_job_id is defined
          - tower_workflow_job_id > 0
        fail_msg: "This playbook must be run from a workflow within Controller"

    # The file module does not expand globs, so to empty the fragments
    # directory we remove it entirely and then recreate it
    - name: Ensure fragments directory is empty
      ansible.builtin.file:
        state: absent
        path: "{{ playbook_dir }}/reports/fragments/"
      delegate_to: localhost
      run_once: true

    - name: Ensure Reports directories exist
      ansible.builtin.file:
        state: directory
        path: "{{ playbook_dir }}/reports/fragments/"
      delegate_to: localhost
      run_once: true

Now that we are set up, we can start writing the main portion of our playbook. Our first task uses the uri module. Take note that we aren’t using the Controller modules for this: while they are good for creating or changing things, they aren’t very handy for pulling data. There is a Controller API lookup plugin that can be used, but it can be a bit confusing for newcomers. So in our first task, we connect to the API using our special variable (and the Tower credential variables we set up) and pull all the nodes for the workflow.

In the second task, I don’t expect you to fully understand what is happening: I am dropping into the Jinja2 language to parse the data we received, looking for the Job ID that is not currently running (because the job still running is the Audit job itself) and that has a type of “job” (so it doesn’t grab Project Syncs, etc., if we had those in our Workflow). In the end, this gives us the Job ID of the Hardening Job that ran before our current Audit run. I use Jinja2 a lot in my playbooks; YAML itself is not a programming language, so Jinja2 lets me do some very complex things in a single task, where doing the same in YAML with modules would take multiple tasks.
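For reference, the lookup plugin route would look something like the sketch below. This is purely illustrative of awx.awx.controller_api, not what we use here, so check the plugin documentation before relying on it:

    - name: Pull the workflow nodes via the lookup plugin instead of uri
      ansible.builtin.set_fact:
        nodes: "{{ lookup('awx.awx.controller_api',
                          'workflow_jobs/' ~ tower_workflow_job_id ~ '/workflow_nodes/',
                          host=tower_server,
                          username=tower_username,
                          password=tower_password,
                          verify_ssl=false) }}"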

Once we have the Job ID, we will want to pull the events of that Job run. Here we add some limits to what is pulled, so that we get only what we need and don’t put excessive load on the Controller. So again we use the uri module, and we set the URL to include some query parameters that tell it to only pull events that were task executions that made a change. This should give us exactly the events we want to look at. We set the page size to the maximum (via the variable above), but we may still receive multiple pages, so we will have to account for that possibility.

  tasks:
    - name: Get the Job ID for the first playbook run
      ansible.builtin.uri:
        url: https://{{ tower_server }}/api/v2/workflow_jobs/{{ tower_workflow_job_id }}/workflow_nodes/
        method: GET
        user: "{{ tower_username }}"
        password: "{{ tower_password }}"
        body_format: json
        validate_certs: false
        force_basic_auth: true
        status_code:
          - 200
      register: response

    - name: Find the Job ID of the job that ran before us
      ansible.builtin.set_fact:
        job_id: "{%- set job = namespace(job=0) -%}\
                 {%- for c in response.json.results -%}\
                   {%- if c.job != tower_job_id and c.summary_fields.job.status != 'running' and c.summary_fields.job.type == 'job' and c.job > job.job -%}\
                     {%- set job.job = c.job -%}\
                   {%- endif -%}\
                 {%- endfor -%}\
                 {{- job.job -}}"

    - name: Get the Job Results for the first playbook run
      ansible.builtin.uri:
        url: https://{{ tower_server }}/api/v2/jobs/{{ job_id }}/job_events/?event=runner_on_ok&changed=true&page_size={{ page_size }}
        method: GET
        user: "{{ tower_username }}"
        password: "{{ tower_password }}"
        body_format: json
        validate_certs: false
        force_basic_auth: true
        status_code:
          - 200
      register: jdata

Before we start building the actual report, we want to do a quick check to ensure that we need to. If we didn’t find any changes, there is no reason to continue. So here we display a message (nobody likes a playbook that just stops without warning) and then use the meta module to end the play. I prefer using the meta module over wrapping the rest of the playbook in a block with a when statement.

    - name: Report that no changes were found
      ansible.builtin.debug:
        msg: No Changes were detected
      when: jdata.json.count == 0

    - name: End Play if no changes
      ansible.builtin.meta: end_play
      when: jdata.json.count == 0

Now we are going to start writing the report files. First we create a variable telling us how many pages we need to pull, and another telling us which page to start on. That may seem like overkill, but the data is stored oldest-first in the API, so if the number of events exceeds our maximum pull size, we want to skip ahead and grab the newest pages rather than starting on page 1 (page 1 has the oldest data). Ideally, since we are pulling changes for a single job here and really want all the data, we shouldn’t be limiting ourselves; I included the concept so you will understand how to do it in other reports. I have other playbooks that look at all jobs at once and parse them, so it is an important concept to keep in mind.

Now that our variables are set, we need to loop and pull each page one at a time. We will use the include_tasks module with the range filter to handle this, and set the loop variable to the page number we are pulling.

    - name: Set Page Count
      ansible.builtin.set_fact:
        pages: "{{ ((jdata.json.count / page_size) | round(0, 'ceil') | int) + 1 }}"

    - name: Set Start Page Count
      ansible.builtin.set_fact:
        start: "{{ 1 if pages|int < max|int else ((pages|int) - (max|int)) }}"

    - name: Loop over the event pages
      ansible.builtin.include_tasks: tasks/event_tasks.yml
      loop: "{{ range(start|int, pages|int) | list }}"
      loop_control:
        loop_var: page
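To make the math concrete: with 950 matching events and a page_size of 200, pages works out to ceil(950 / 200) + 1 = 6, and since range() excludes its end value, we pull pages 1 through 5. With 20,000 events, pages would be 101, start would become 101 - 50 = 51, and we would pull only the newest 50 pages.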

In the tasks/event_tasks.yml file, we use two simple tasks to build out the report page by page. First we use the uri module again to pull the events on the particular page, then we use the template module to build the report fragment for that page’s events.

- name: Get the events on page {{ page }}
  ansible.builtin.uri:
    url: https://{{ tower_server }}/api/v2/jobs/{{ job_id }}/job_events/?event=runner_on_ok&changed=true&page_size={{ page_size }}&page={{ page }}
    method: GET
    user: "{{ tower_username }}"
    password: "{{ tower_password }}"
    body_format: json
    validate_certs: false
    force_basic_auth: true
    status_code:
      - 200
  register: events

- name: Create report of changes on page {{ page }}
  ansible.builtin.template:
    src: templates/changes.html.j2
    dest: "{{ playbook_dir }}/reports/fragments/page-{{ page }}.html"

The template may look a bit complex, as we have a lot of data to loop through and need to determine which parts to display. For our purposes, we will use the Date, Hostname, Playbook Name, Task Name, Module, and a diff of what was changed. For your Hardening Template, if you checked the “Show Changes” box, your Job output will display a nice diff of what was changed. In the API, though, you don’t get the rendered diff, just the raw before-and-after data of the change, and you don’t get that data at all unless “Show Changes” is enabled, so be sure to enable it if you want it. The raw data is far too much to display on its own, so for my own playbook I hacked together a lookup plugin that lets me do diffs with HTML output for the colors. You can find it in my repo, linked later in the article. We will also convert it all to YAML so that it displays a bit nicer in our report.
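My actual plugin (in the repo linked at the end of the article) produces colorized HTML, but to give you an idea of the shape of such a plugin, here is a minimal sketch that returns a plain unified diff instead. Drop something like it in a lookup_plugins/ directory next to the playbook:

# lookup_plugins/diff.py - a minimal sketch, not the HTML-producing plugin
# used for the report below
from difflib import unified_diff

from ansible.plugins.lookup import LookupBase


class LookupModule(LookupBase):
    def run(self, terms, variables=None, before='', after='', header='', **kwargs):
        # Diff the "before" and "after" text that a module reported
        result = unified_diff(
            str(before).splitlines(),
            str(after).splitlines(),
            fromfile=header,
            tofile=header,
            lineterm='',
        )
        # Lookup plugins must always return a list
        return ['\n'.join(result)]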

{% for c in events.json.results %}

<tr><td>{{ c.modified[:19] | replace("T", " ") }}</td><td>{{ c.host_name }}</td><td>{{ c.playbook }}</td><td>{{ c.task }}</td><td>{{ c.event_data.resolved_action }}</td><td><pre>
{% if c.event_data.res.diff is defined %}
{% for d in c.event_data.res.diff %}
{% if d.before_header is defined and d.before is defined %}
{{ lookup('diff', before=d.before, after=d.after, header=d.before_header) | default("") | to_nice_yaml(width=1337) | replace ("\\n", "\n") }}
{% else %}
{{ d | default("") | to_nice_yaml(width=1337) | replace ("\\n", "\n        ") }}
{% endif %}
{% endfor %}
{% endif %}
</pre></td></tr>

{% endfor %}

Now that our report fragments are created, we need to combine them all back together, so we use the assemble module. If we were creating a CSV file, we would just prepend a header containing the column names. Since we are creating an HTML report, we want both a header and a footer, which we add with the lineinfile module.

  post_tasks:
    - name: Concat all the html files
      ansible.builtin.assemble:
        src: "{{ playbook_dir }}/reports/fragments/"
        dest: "{{ playbook_dir }}/reports/changes.html"

    - name: Prepend the header to the html file
      ansible.builtin.lineinfile:
        dest: "{{ playbook_dir }}/reports/changes.html"
        insertbefore: BOF
        line: "{{ lookup('ansible.builtin.template', 'templates/header.html.j2') }}"

    - name: Append the footer to the html file
      ansible.builtin.lineinfile:
        dest: "{{ playbook_dir }}/reports/changes.html"
        insertafter: EOF
        line: "{{ lookup('ansible.builtin.template', 'templates/footer.html.j2') }}"

The header.html.j2 file will look like this:

<html>
  <head>
    <style>
      body { background-color: #efefef; }
      table { border-spacing: 0px; }
      table, th, td { border: 1px solid #ccc;  background-color: white;}
      th { background-color: #ccc; font-weight: bold; }
      td { padding: 10px; }
    </style>
  </head>
  <body><center><br><h3>{{ jdata.json.results[0].summary_fields.job.name }}</h3><br><br>
    <table>
      <tr><th>Date</th><th>Server</th><th>Playbook</th><th>Task</th><th>Module</th><th>Change</th></tr>

and the footer.html.j2:

</table></center></body></html>

Now we just need one task to complete our playbook: sending out the report we built. We will send it via email using the mail module. I am specifying four variables here that we did not define earlier, because I typically set these in the Workflow Extra Vars so that I can email each report to different users as needed (they could also come from survey questions, etc.). In the body of the email, we include links to both the Workflow and the Job in our Controller.

    - name: Mail Report
      community.general.mail:
        host: "{{ smtp_server | default('127.0.0.1') }}"
        port: "{{ smtp_port | default(25) }}"
        subject: Change Report - {{ jdata.json.results[0].summary_fields.job.name }}
        body: |
              Attached is the change report for <b>{{ jdata.json.results[0].summary_fields.job.name }}</b><br><br>
              <a href="https://{{ tower_server }}/#/jobs/workflow/{{ tower_workflow_job_id }}/output">Workflow</a><br>
              <a href="https://{{ tower_server }}/#/jobs/playbook/{{ job_id }}/output">Job</a><br>
        from: "{{ from_address }}"
        to:
          - "{{ to_address }}"
        attach:
          - "{{ playbook_dir }}/reports/changes.html"
        subtype: html
      ignore_errors: true

The report we receive will then look something like this.

Here is a full example of the report run against a test system.


You can find the full playbook, including the lookup plugin, in my Github repo located here:

https://github.com/cigamit/ansible_misc/tree/master/change_audit


Building Reports with Ansible

Posted on September 20, 2022 by jimmy

While the Ansible Automation Platform is great at a lot of things, Reporting is generally not considered one of them. So today we will explore how to use Ansible to collect data from our hosts, and use that data to build out a simple CSV report to email to us.

The types of reports and the data required will vary greatly depending on the use case, so we will pick a simple use case and dive into it.

“I have lots of Windows Servers, and I want to create a report of all their shares and their share permissions”

The concept behind building this report is fairly simple: we connect to some hosts, pull data from them, store that data in a CSV file, and email it out. Doing all of this in Ansible is fairly easy, but since YAML isn’t really a programming language, it can get a bit complex depending on the data and how you want to manipulate it.

With the way Ansible works, each host runs in its own fork, so building reports in Ansible generally consists of creating a small report per host (or per host per share, as in this case) and then combining them all back together at the end.

To begin our playbook, we start with some pre_tasks that wipe any fragments left over from a previous run and ensure we have a directory to store the new report fragments in. The wipe isn’t strictly necessary if we are running from an Execution Environment. All the pre_tasks are delegated to localhost and set to run once, instead of once per host.

- name: Create Report
  hosts: all

  pre_tasks:
    # The file module does not expand globs, so to empty the fragments
    # directory we remove it entirely and then recreate it
    - name: Ensure fragments directory is empty
      ansible.builtin.file:
        state: absent
        path: "{{ playbook_dir }}/reports/fragments/"
      delegate_to: localhost
      run_once: true

    - name: Ensure Reports directories exist
      ansible.builtin.file:
        state: directory
        path: "{{ playbook_dir }}/reports/fragments/"
      delegate_to: localhost
      run_once: true

Next comes the tasks section of our playbook. This section is fairly small, as we only need to do two things. The first task runs a PowerShell command to grab the shares on the server, selecting just the properties that we want. Most importantly, we tell PowerShell to convert the results to JSON, so that we can easily parse the data.

The second task loops over the shares, pulling in more tasks via the include_tasks module. You will notice that even though we told PowerShell to convert the output to JSON, we still have to tell Ansible to parse it via the from_json filter. That is because the first task returns the JSON as a plain string, so we must convert it into an object.

  tasks:
    - name: Powershell | Get-SMBShare
      ansible.windows.win_shell: Get-SMBShare | Select-Object -Property Name,ScopeName,Path,Description,CimSystemProperties | ConvertTo-JSON
      register: shares

    - name: include Permissions report
      ansible.builtin.include_tasks: tasks/share_permissions.yml
      loop: "{{ shares.stdout | from_json }}"
      loop_control:
        loop_var: share
        label: "{{ share.Name }}"

Now, inside our tasks/share_permissions.yml file, we start to build out the report per share, so each share creates its own file to be added to the report. Our first task gets the share permissions, which just requires a simple PowerShell command whose output we again convert to JSON for parsing. The second task simply turns that JSON string into a parsed variable.

The third task gets a bit complicated, because I am utilizing Jinja2 to do a bit of programming. While this might not be necessary in your report, I am doing it to format the data exactly as I want (removing some strings and removing duplicates). Note the check for permission.AccountName at the top: when a share has a single permission entry, ConvertTo-JSON emits a bare object instead of a list, so we wrap it in a list before looping. This step could be done with Ansible modules and some loops, but I find it easier and faster to leave the fancy programming to Jinja2 and let Ansible do what it does best.

- name: Powershell | Get-SMBShareAccess
  ansible.windows.win_shell: Get-SMBShareAccess -Name "{{ share.Name }}" | Select-Object -Property AccountName,AccessRight | ConvertTo-JSON
  register: permission

- name: Set Permission variable
  ansible.builtin.set_fact:
    permission: "{{ permission.stdout | from_json }}"

- name: Combine Share Permissions
  ansible.builtin.set_fact: 
    pcom: '{%- set r = [] -%}
           {%- if permission.AccountName is defined -%}
               {%- set permission = [permission] -%}
           {%- endif -%}
           {%- for item in permission -%}
               {%- set v = item.AccountName | replace("NT AUTHORITY\\", "") | replace("BUILTIN\\", "") | replace("NT SERVICE\\", "") -%}
               {%- if v not in r -%}
                   {{- r.append(v) -}}
               {%- endif -%}
           {%- endfor -%}
           {{- r | sort | join(";") -}}'

We then want to do the same for any file system ACLs on the share paths. We wrap all of this in a block with a rescue, since the ACL lookup sometimes fails when there are no ACL permissions for the share, and we want to handle that gracefully. The steps are basically the same as before: get the ACLs, then parse and format them, with an additional step in the rescue that sets the ACL variable to blank if we fail.

- block: 
  - name: Powershell | Get-ACL
    ansible.windows.win_shell: Get-ACL "{{ share.Path }}" | Select -ExpandProperty Access | Select -ExpandProperty IdentityReference | ConvertTo-JSON
    register: acl

  - name: Combine ACL Permissions
    ansible.builtin.set_fact: 
      pacl: '{%- set r = [] -%}
            {%- for item in acl.stdout | from_json  -%}
                {%- set v = item.Value | replace("NT AUTHORITY\\", "") | replace("BUILTIN\\", "") | replace("NT SERVICE\\", "") -%}
                {%- if v not in r -%}
                    {{- r.append(v) -}}
                {%- endif -%}
            {%- endfor -%}
            {{- r | sort | join(";") -}}'
  rescue:
  - name: Set ACL Permission Blank
    ansible.builtin.set_fact: 
      pacl: ""

Now that we have both our share permissions and ACL permissions, we can insert them into the report. First, though, I want to set one more variable so we can create a column that tells me whether the share is Open or Secured. We do this by searching for “Everyone” in the share permissions (ideally we would do a bit more checking here). We then create the report fragment using the template module; notice that we name the file after the hostname and share name to ensure it is unique, and that the task is delegated to localhost.

- name: Check for Everyone Permission
  ansible.builtin.set_fact: 
    open: "{% if pcom is search('Everyone') %}Open{% else %}Secured{% endif %}"

- name: Render the Host Report Template
  ansible.builtin.template: 
    src: "templates/share.csv.j2" 
    dest: "{{ playbook_dir }}/reports/fragments/{{ inventory_hostname }}-{{ share.Name }}.csv"
  delegate_to: localhost

If we take a look at our template at templates/share.csv.j2, you will see that it is just a single line with all the information we have gathered.

{{ inventory_hostname }},{{ share.Name }},{{ open }},\\{{ share.CimSystemProperties.ServerName}}\{{ share.Name }},{{ share.Path }},{{ pcom }},{{ pacl }},{{ share.Description }}

From this point we head back to the main playbook and create some post_tasks to build the main report. We use the assemble module to combine all the files we created into a single file; again this is delegated to localhost and set to run only once. We now have all the data in the report, but we need to know which column is which, so we use the lineinfile module to add a header at the beginning of the file (BOF).

    - name: Assemble all the csv files 
      ansible.builtin.assemble: 
        src: "{{ playbook_dir }}/reports/fragments/"
        dest: "{{ playbook_dir }}/reports/shares.csv"
      delegate_to: localhost
      run_once: true 
 
    - name: Append the header to the csv file 
      ansible.builtin.lineinfile: 
        dest: "{{ playbook_dir }}/reports/shares.csv"
        insertbefore: BOF 
        line: "Hostname,Share_Name,Share_Value,FullSharePath,Share_Mapping,Share_Permissions,NTFS_Permissions,Description" 
      delegate_to: localhost
      run_once: true

Now our report is created. We have a lot of options for what to do with the report file, but for this scenario we will email it out via the mail module; take note of the additional variables required when you do this. We could also copy it to a remote file share and rename it based upon the date, as sketched after the mail task below.

    - name: Mail Report
      community.general.mail:
        host: "{{ smtp_server | default('127.0.0.1') }}"
        port: "{{ smtp_port | default(25) }}"
        subject: Windows Share Report
        body: Here is the report of all Windows Shares
        from: "{{ from_address }}"
        to:
          - "{{ to_address }}"
        attach:
          - "{{ playbook_dir }}/reports/shares.csv"
      ignore_errors: true
      delegate_to: localhost
      run_once: true
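As an example of that alternative, archiving a dated copy of the report might look like the following sketch (the destination path is hypothetical):

    - name: Archive a dated copy of the report
      ansible.builtin.copy:
        src: "{{ playbook_dir }}/reports/shares.csv"
        dest: "/mnt/reports/shares-{{ '%Y-%m-%d' | strftime }}.csv"
      delegate_to: localhost
      run_once: true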

That wraps up the reporting playbook. The concepts used here carry over to virtually any report you need to build: in the end, you are just collecting data from servers (or maybe an API), creating files per server, and then assembling those files back together. The CSV output will look something like this.
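Here are a couple of illustrative rows to show the shape of the output (the hostnames, shares, and permissions are made up):

Hostname,Share_Name,Share_Value,FullSharePath,Share_Mapping,Share_Permissions,NTFS_Permissions,Description
WIN01,Public,Open,\\WIN01\Public,C:\Shares\Public,Administrators;Everyone,Administrators;Users,Public drop box
WIN01,Finance,Secured,\\WIN01\Finance,C:\Shares\Finance,Administrators;Finance,Administrators;Finance,Finance team share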


The repo containing this example playbook is also available on Github at

https://github.com/cigamit/ansible_misc/tree/master/windows_share_report


Ansible Tower Log Aggregation Parsing

Posted on March 25, 2020 - August 25, 2020 by jimmy

Ansible Tower has a nifty little feature that allows it to ship its logs from playbook runs, in real time, to a log aggregator. Typically this would be something like Splunk or Elastic Stack. I, on the other hand, wanted to utilize the data for my own needs; in particular, I wanted all the fact data for a reporting engine I was creating. I typically use PHP to rapidly prototype projects, as I can write it super fast, and then go back and rewrite the projects that proved interesting in Python.

So I created a RHEL 8 VM to start testing on. The first thing I needed to do was create a ‘server’ block in nginx listening on a port besides 80/443, since my reporting website would be running on those. I chose port 5000, as that seems to be the default for a few of the other log aggregation products. I wanted to encrypt the data stream, so I created SSL certificates, etc., to run with it. I am using all self-signed certs, but you can give it a real cert if you want. If using self-signed, be sure to disable the certificate check in Ansible Tower. If you don’t want to use SSL, you will have to explicitly put http:// in front of the server name in Ansible Tower.

What you will mainly notice is different in this config is that I set the index to parse.php, which is the script I wrote to parse the output, so all output sent to this port is handled by my script by default.

server {
        listen 5000 http2 ssl;

        server_name _;
        root /opt/app/html;


        ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
        ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_ecdh_curve secp384r1;
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off;
        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 5s;

        # Disable preloading HSTS for now.  You can use the commented out header line that includes
        # the "preload" directive if you understand the implications.
        #add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;

        access_log  /opt/app/logs/parse-access.log  main;
        error_log   /opt/app/logs/parse-error.log;

        location / {
                index parse.php;
        }

        location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }
}

The next thing I did was create my parse.php. Absolutely everything that happens in Ansible Tower is sent to this script, and you can easily grab that data via the PHP input stream. The data is JSON, and it can be in a slightly different format depending on the log aggregator type configured in Tower; I just set mine to “other”.

Lots and lots of data will be sent to this script. If you find that your parser server is getting hammered, you can limit what Tower sends by modifying the “LOGGERS SENDING DATA TO LOG AGGREGATOR FORM” setting. For my use case, we are looking for the job_events in particular.

You will find a quick sample of the PHP code below. There are some pieces left out, such as my functions for storing the data, but it gives you the general idea.

  1. We grab the data from the input stream and check that it’s not blank, just for good measure.
  2. Next we attempt to decode the JSON data into an array we can use. If this fails (not valid JSON), we want to exit cleanly.
  3. Now that we have an array of data, we look for a few key pieces. Namely, we check the logger_name field to see whether this is a job_event and where it came from.
  4. If it matches “awx.analytics.job_events”, we do a little more validation to ensure it’s the right type of job event, specifically data from tasks run against hosts. I noticed that the host_name field is always present for the data I wanted (since I want the fact data from hosts), so we look for that.
  5. Lastly, we grab the facts from the stream and parse them into a usable format. From here you can do whatever you want with the data; I stuck it all in a MySQL DB so that I could create reports off of it.
<?php

// Grab the raw request body that Ansible Tower sent us
$data = file_get_contents('php://input');
if ($data == '') {
	exit;
}

// json_decode() does not throw on invalid input (unless JSON_THROW_ON_ERROR
// is set), so check for a null result rather than using a try/catch
$data = json_decode($data, true);
if ($data === null) {
	exit;
}

if (isset($data['logger_name'])) {
	switch ($data['logger_name']) {
		case 'awx.analytics.job_events':
			if (isset($data['event_data']['res']['ansible_facts'])) {
				if (isset($data['host_name'])) {
					$f = $data['event_data']['res']['ansible_facts'];
					$t = $data['event_data']['task_action'];
					$fs = parse_facts($f);
					if (count($fs)) {
						// INSERT FACTS INTO DB HERE
					}
				}
			}
			break;
	}
}
// Recursively flatten the nested facts array into dotted key names
function parse_facts($f, $fs = array(), $n = '') {
	foreach ($f as $k => $v) {
		if (is_array($v)) {
			$s = ($n != '' && substr($n, -1, 1) != '.' ? '.' : '');
			$fs = parse_facts($v, $fs, "$n$s$k.");
		} else {
			$fs[$n . $k] = $v;
		}
	}
	return $fs;
}
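To illustrate what parse_facts() produces, here is a quick usage example with made-up fact data (assuming the function above is in scope); nested keys are flattened into dotted names:

$facts = array(
	'ansible_distribution' => 'RedHat',
	'ansible_default_ipv4' => array('address' => '192.0.2.10'),
);
print_r(parse_facts($facts));
// Array
// (
//     [ansible_distribution] => RedHat
//     [ansible_default_ipv4.address] => 192.0.2.10
// )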

Now that you have the ability to process the data from Ansible Tower, there are a lot of neat things you can do with it. Another function I wrote into my reporting engine is a change logger. Tower records every job run, but it’s not easy to check and see everything that has changed across all your servers, so I record all these changes myself and present them in an easy-to-view format. I also made this data searchable, so I can see which playbook changed a particular file with just a few key presses.

In my demo Reporting Engine, an example view of all changes looks a little like this.

You can then view the individual changes with pertinent data about the playbook that made the change (and a link back to the Ansible Tower log for that change). One thing to note: I convert the JSON results data to YAML for easier viewing and searching, using Spyc.
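That conversion is essentially a one-liner; a sketch of it, assuming the Spyc library is loaded (for example via Composer), looks like:

// Render a job event's result data as YAML for the report view
$yaml = Spyc::YAMLDump($data['event_data']['res'], 2, 80);
echo '<pre>' . htmlspecialchars($yaml) . '</pre>';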

