The list secrets role rule is no longer needed for container groups
SUMMARY
The list secrets role rule is no longer needed for container groups. This was simplified to just get the single matching secret using the read_namespaced_secret() method.
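A minimal sketch of the simplified lookup, assuming the standard kubernetes Python client; the secret and namespace names here are illustrative:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
core_v1 = client.CoreV1Api()

# Fetch the single matching secret directly; no "list secrets" role rule needed.
secret = core_v1.read_namespaced_secret(name="automation-pull-secret", namespace="default")
print(secret.metadata.name)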
Reviewed-by: Seth Foster <None>
Converts Options lists and other modal lists to tables
SUMMARY
This addresses #10162. Note: I found issue #10181 while working on this one.
Kicked Off E2E
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
UI
AWX VERSION
ADDITIONAL INFORMATION
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Keith Grant <keithjgrant@gmail.com>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
Fixes bug where activity stream changes were displaying as [object Object]
SUMMARY
Resolves #10138
In this case the value passed to the VariablesDetail component is a plain JSON object (not a stringified one), which that component does not handle in its current state.
Rather than changing the VariablesDetail implementation, I decided to just stringify the object before it gets passed in. Since this object is generated by the backend, I think it's safe to assume it will always be an object.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
UI
Reviewed-by: Kersom <None>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
Update migration page as per latest name changes
See: #8355
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Container groups registry creds
SUMMARY
Fixes #10114
In VM-based installs, the user provides image pull creds to us, then we put them in an authfile and give it to podman via --authfile.
This is not so simple with ContainerGroups because we need to use the k8s API to apply a podspec to create containers in this paradigm.
Currently, the EE pod gets created, but it errors when pulling the custom EE image from the private repo.
This work modifies init() for the AWXReceptorJob class to create a k8s secret in the given namespace, then specify that secret name in the pod spec as an imagePullSecret.
Also, the imagePullPolicy was not being enforced when running JTs in EEs using container groups, because the imagePullPolicy never got set on the pod spec.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
API
AWX VERSION
devel
ADDITIONAL INFORMATION
Issues that this solves:
- Image pull secret now gets created in the cluster namespace specified by the user for that container group.
- imagePullSecret name gets set on the pod spec.
- If the pull secret already exists in the namespace, delete it, then create it (kube_api.replace_namespaced_secret did not work for this case...); see the sketch after this list.
- Enforce imagePullPolicy for EEs in container groups.
- Basic error handling.
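A rough sketch of that flow, assuming the kubernetes Python client; the registry, namespace, and secret names are all illustrative:

import base64
import json

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # hypothetical: AWX builds its client from the container group credential
core_v1 = client.CoreV1Api()
namespace = "awx-jobs"     # hypothetical container group namespace

# Build a base64-encoded .dockerconfigjson from the user's registry credential.
auth = base64.b64encode(b"user:password").decode()
dockerconfig = {"auths": {"registry.example.com": {"auth": auth}}}
payload = base64.b64encode(json.dumps(dockerconfig).encode()).decode()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="automation-pull-secret"),
    type="kubernetes.io/dockerconfigjson",
    data={".dockerconfigjson": payload},
)

# replace_namespaced_secret did not work for this case, so delete any
# existing copy first, then create the secret fresh.
try:
    core_v1.delete_namespaced_secret(name="automation-pull-secret", namespace=namespace)
except ApiException as e:
    if e.status != 404:
        raise
core_v1.create_namespaced_secret(namespace=namespace, body=secret)

# The pod spec then references the secret by name, e.g.:
#   spec.image_pull_secrets = [client.V1LocalObjectReference(name="automation-pull-secret")]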
Reviewed-by: Alan Rominger <arominge@redhat.com>
Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
Reviewed-by: Jeff Bradberry <None>
Reviewed-by: Shane McDonald <me@shanemcd.com>
Reviewed-by: Chris Meyers <None>
Updates PF
SUMMARY
This is a dependency bump for PF so that I can get an update that fixes a validation bug in the PF date picker.
E2E tests have been triggered.
ISSUE TYPE
dep bump
COMPONENT NAME
UI
AWX VERSION
ADDITIONAL INFORMATION
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
- Base64-encode secret values before creating the secret
- Construct a valid .dockerconfigjson
- Cancel jobs where the pull will obviously fail & error handling
- Check if the secret exists first, then attempt to replace it if it does (see the sketch below).
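A condensed sketch of that check-then-replace variant, again assuming the kubernetes Python client:

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
core_v1 = client.CoreV1Api()

def upsert_secret(secret, namespace):
    # Hypothetical helper: replace the secret if it already exists, create it otherwise.
    try:
        core_v1.read_namespaced_secret(name=secret.metadata.name, namespace=namespace)
    except ApiException as e:
        if e.status != 404:
            raise
        core_v1.create_namespaced_secret(namespace=namespace, body=secret)
    else:
        core_v1.replace_namespaced_secret(
            name=secret.metadata.name, namespace=namespace, body=secret
        )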
Document usage of workflow job template nodes with approvals
SUMMARY
I'm not sure if this procedure is mandatory, but it was the only way it worked for me.
It's already documented in one sentence that you can create all workflow template nodes first and then link them later. However, the Examples section was missing an explicit mention of this; it's not intuitive that creating the nodes and linking them at the same time won't work, and you run into #8777.
ISSUE TYPE
Docs Pull Request
AWX VERSION
awx: 19.0.0
Reviewed-by: Shane McDonald <me@shanemcd.com>
Reviewed-by: Maximilian Meister <None>
Reviewed-by: Alan Rominger <arominge@redhat.com>
tweak message about image pull option
SUMMARY
I'm not sure if it is OK to change the migration in this way.
This message in the UI didn't make sense to me. If you all prefer the existing message, that is fine...I just think it is weird and doesn't tell you what it does.
If I don't care, I can leave it on --------- in the UI, which is "do whatever the default is".
If I do care, it's weird to choose an option saying I don't choose an option.
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Jeff Bradberry <None>
Only initialize pendo if the key is available
SUMMARY
Don't bootstrap/init Pendo if the key is missing or an empty string.
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
Adding AWX_CONTAINER_GROUP_POD_LAUNCH_TIMEOUT in defaults.py
--- Adding a new global variable AWX_CONTAINER_GROUP_POD_LAUNCH_TIMEOUT in defaults.py
--- Passing this parameter in receptor_params in AWXReceptorJob class
SUMMARY
Related issue: ansible/receptor#315
Corresponding PR on the receptor side: ansible/receptor#316
I have added a global variable AWX_CONTAINER_GROUP_POD_LAUNCH_TIMEOUT (with a default value of 5 minutes).
This will be passed on to Receptor when a job is a container group task.
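A minimal sketch of the wiring, with receptor_params heavily simplified; the helper names are illustrative, not the real implementation:

# awx/settings/defaults.py
AWX_CONTAINER_GROUP_POD_LAUNCH_TIMEOUT = '5m'  # pod pending timeout passed to Receptor

# awx/main/tasks.py (heavily simplified)
from django.conf import settings

class AWXReceptorJob:
    @property
    def receptor_params(self):
        params = {
            'secret_kube_config': self.kube_config_yaml,  # assumed helper
            'secret_kube_pod': self.pod_definition_yaml,  # assumed helper
        }
        if self.is_container_group_task:  # assumed flag on the job wrapper
            params['pod_pending_timeout'] = settings.AWX_CONTAINER_GROUP_POD_LAUNCH_TIMEOUT
        return params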
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
API
AWX VERSION
awx: 19.1.0
ADDITIONAL INFORMATION
File name: awx/main/tasks.py
Class name: AWXReceptorJob
Function name: receptor_params(self)
Before the change, the value of receptor_params before returning was:
{'secret_kube_config': '---\n'
'apiVersion: v1\n'
'clusters:\n'
'- cluster:\n'
' insecure-skip-tls-verify: true\n'
' server: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
' name: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
'contexts:\n'
'- context:\n'
' cluster: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
' namespace: default\n'
' user: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
' name: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
'current-context: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
'kind: Config\n'
'preferences: {}\n'
'users:\n'
'- name: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
' user:\n'
' token: '
'sha256~otM0Z2vC8qfzm3eV0tfz-0V4DXKPqLC_Pp1gxXKuC-c\n',
'secret_kube_pod': '---\n'
'apiVersion: v1\n'
'kind: Pod\n'
'metadata:\n'
' labels:\n'
' ansible-awx: 04366fdb-65ec-4efc-bb01-685e9ca7e05b\n'
" ansible-awx-job-id: '21'\n"
' name: automation-job-21\n'
' namespace: default\n'
'spec:\n'
' containers:\n'
' - args:\n'
' - ansible-runner\n'
' - worker\n'
' - --private-data-dir=/runner\n'
' image: quay.io/ansible/awx-ee:0.2.0\n'
' name: worker\n'}
>>>
After the change, the value of receptor_params before returning is:
{'pod_pending_timeout': '5m',
'secret_kube_config': '---\n'
'apiVersion: v1\n'
'clusters:\n'
'- cluster:\n'
' insecure-skip-tls-verify: true\n'
' server: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
' name: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
'contexts:\n'
'- context:\n'
' cluster: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
' namespace: default\n'
' user: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
' name: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
'current-context: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
'kind: Config\n'
'preferences: {}\n'
'users:\n'
'- name: '
'https://api.tower.ocp4.testing.ansible.com:6443\n'
' user:\n'
' token: '
'sha256~otM0Z2vC8qfzm3eV0tfz-0V4DXKPqLC_Pp1gxXKuC-c\n',
'secret_kube_pod': '---\n'
'apiVersion: v1\n'
'kind: Pod\n'
'metadata:\n'
' labels:\n'
' ansible-awx: 04366fdb-65ec-4efc-bb01-685e9ca7e05b\n'
" ansible-awx-job-id: '21'\n"
' name: automation-job-21\n'
' namespace: default\n'
'spec:\n'
' containers:\n'
' - args:\n'
' - ansible-runner\n'
' - worker\n'
' - --private-data-dir=/runner\n'
' image: quay.io/ansible/awx-ee:0.2.0\n'
' name: worker\n'}
>>>
We can see that an extra parameter has been added: 'pod_pending_timeout': '5m'.
Additional changes:
--- Removed AWX_CONTAINER_GROUP_POD_LAUNCH_RETRIES from defaults.py as it was not being used anywhere
--- Removed AWX_CONTAINER_GROUP_POD_LAUNCH_RETRY_DELAY from defaults.py as it was not being used anywhere
Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
Reviewed-by: Amol Gautam <amol_gautam25@yahoo.co.in>
Reviewed-by: Seth Foster <None>
Adding host_metrics for customers who don't have automation analytics
SUMMARY
I am trying to accomplish the following (a model sketch follows this list):
- Create a new model that stores details about each hostname
- Create corresponding migrations for it
- Update the table (main_hostmetrics) after playbook execution
- Retrieve the data with a custom awx-manage command: awx-manage host_metrics --since <date> --until <date>
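A rough sketch of what such a model could look like; the field names are illustrative, not necessarily the real schema:

from django.db import models

class HostMetric(models.Model):
    # Hypothetical fields: one row per automated hostname.
    hostname = models.CharField(max_length=512, unique=True)
    first_automation = models.DateTimeField(auto_now_add=True)  # first time this host was automated
    last_automation = models.DateTimeField()                    # refreshed after each playbook run

    class Meta:
        app_label = 'main'
        db_table = 'main_hostmetrics'  # table name from the summary above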
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
API
AWX VERSION
awx: 18.0.0
ADDITIONAL INFORMATION
Reviewed-by: Ryan Petrello <ryan@ryanpetrello.com>
Reviewed-by: Amol Gautam <amol_gautam25@yahoo.co.in>
Reviewed-by: Chris Meyers <None>
Reviewed-by: Jeff Bradberry <None>
Reviewed-by: Bill Nottingham <None>
Reviewed-by: Ladislav Smola <lsmola@redhat.com>
Reviewed-by: Alan Rominger <arominge@redhat.com>
Use inventory and env private_data_dir subfolders
SUMMARY
This is another part of trying to get AWX to follow ansible-runner best practices.
See docs:
https://ansible-runner.readthedocs.io/en/stable/intro.html#runner-input-directory-hierarchy
So what runner & people expect is:
.
├── env
│ ├── envvars
│ ├── extravars
│ ├── passwords
│ ├── cmdline
│ ├── settings
│ └── ssh_key
├── inventory
│ └── hosts
└── project
├── test.yml
└── roles
└── testrole
├── defaults
├── handlers
├── meta
├── README.md
├── tasks
├── tests
└── vars
What we were producing was something more like
├── cp
├── env
│ ├── cmdline
│ ├── envvars
│ ├── extravars
│ ├── passwords
│ ├── settings
│ └── ssh_key
├── project
│ ├── ansible.cfg
│ ├── ansible_env.yml
│ ├── async_tasks.yml
<snip>
│ └── vault.yml
└── tmp3evnorsu
There was no inventory folder, and inventory was stored in tmp3evnorsu as a top-level file. Any credential files were also saved in that same structure.
With this change, it's more like:
├── cp
├── env
│ ├── cmdline
│ ├── envvars
│ ├── extravars
│ ├── passwords
│ └── settings
├── inventory
│ └── tmp94xjm0d7
└── project
├── ansible.cfg
├── ansible_env.yml
├── async_tasks.yml
<snip>
└── vault.yml
and any credential files (starting with tmp) are dumped into the env folder.
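A minimal sketch of the new placement, assuming tempfile-based names like those in the trees above:

import os
import tempfile

private_data_dir = tempfile.mkdtemp(prefix='awx_')

# Inventory gets its own subfolder, per the runner input hierarchy,
# instead of living as a randomized top-level file.
inventory_dir = os.path.join(private_data_dir, 'inventory')
os.makedirs(inventory_dir)
handle, path = tempfile.mkstemp(dir=inventory_dir)
with os.fdopen(handle, 'w') as f:
    f.write('[all]\nlocalhost ansible_connection=local\n')

# Credential files (tmp*) land under env/ alongside extravars, passwords, etc.
env_dir = os.path.join(private_data_dir, 'env')
os.makedirs(env_dir)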
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
API
AWX VERSION
14.1.0
ADDITIONAL INFORMATION
The motivation for this is that ansible-runner may require specific mounts for every file or directory in the private_data_dir. As such, we really don't want randomized filenames in the top-level directory. I don't know for sure this is the direction we are going to go, but this seems like a defensible change on its own, to better adhere to the practices someone familiar with ansible-runner would expect.
Reviewed-by: Alan Rominger <arominge@redhat.com>
Reviewed-by: Chris Meyers <None>
Updates LingUI Again
SUMMARY
This updates Lingui to bring in their latest release so that we don't have to run npm extract-strings every time we add or update a <Plural> or a string with a variable inside it.
ISSUE TYPE
Dependency upgrade
COMPONENT NAME
UI
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
close db and cache connection in new threads
It's not safe to share or re-use Django DB connections or Django cache connections across new threads or processes.
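A minimal sketch of the pattern, assuming Django's standard connection and cache handles; the wrapper function is hypothetical:

import threading

from django.core.cache import cache
from django.db import connections

def run_in_thread(work):
    def wrapper():
        try:
            work()
        finally:
            # Each thread gets its own DB connections; close them on exit so
            # they are not leaked or accidentally shared with other threads.
            connections.close_all()
            cache.close()
    t = threading.Thread(target=wrapper)
    t.start()
    return t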
Reviewed-by: Alan Rominger <arominge@redhat.com>
Reviewed-by: Shane McDonald <me@shanemcd.com>
Commits settings.DEFAULT_EXECUTION_ENVIRONMENTS updates into the database
SUMMARY
Fixes: ansible/awx-operator#256
cc: @shanemcd
The execution environment version cannot be updated in place by updating the AWX spec via awx-operator. For example, consider the scenario below:
spec:
tower_ee_images:
- image: registry.tatu.home/ansible/awx-ee:0.1.0
name: My Custom Execution Environment
The expectation would be that once you modify the AWX spec to the version below, the EE should reflect this change once the awx-operator finishes its reconciliation.
After:
spec:
tower_ee_images:
- image: registry.tatu.home/ansible/awx-ee:0.2.0
name: My Custom Execution Environment
The deployment got updated:
- args:
- receptor
- --config
- /etc/receptor.conf
image: registry.tatu.home/ansible/awx-ee:0.2.0
imagePullPolicy: IfNotPresent
name: awx-devel-ee-ee
Although the awx-operator updated the deployment YAML as expected, the change did not hit the database:
awx=# select * from main_executionenvironment;
 id |            created            |            modified           | description |                  image                  | managed_by_tower | created_by_id | credential_id | modified_by_id | organization_id |              name               | pull
----+-------------------------------+-------------------------------+-------------+-----------------------------------------+------------------+---------------+---------------+----------------+-----------------+---------------------------------+------
  1 | 2021-05-11 15:24:02.674302+00 | 2021-05-11 15:24:02.674327+00 |             | registry.tatu.home/ansible/awx-ee:0.1.0 | t                |             1 |               |              1 |                 | My Custom Execution Environment |
(1 row)
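A sketch of how the commit could work, following the update_or_create proof of concept shown under ADDITIONAL INFORMATION below; the function name is hypothetical:

from django.conf import settings
from awx.main.models import ExecutionEnvironment

def sync_default_execution_environments():
    # Hypothetical helper: upsert each entry from the setting so that image
    # changes made via the operator actually land in the database.
    for ee in settings.DEFAULT_EXECUTION_ENVIRONMENTS:
        ExecutionEnvironment.objects.update_or_create(
            name=ee['name'],
            defaults={'image': ee['image'], 'managed_by_tower': True},
        )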
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
API
AWX VERSION
devel
ADDITIONAL INFORMATION
Quick proof of concept
>>> settings.DEFAULT_EXECUTION_ENVIRONMENTS
[{'name': 'My Custom Execution Environment', 'image': 'registry.tatu.home/ansible/awx-ee:0.2.0'}]
>>> [a.name for a in ExecutionEnvironment.objects.all()]
['My Custom Execution Environment']
>>> [a.image for a in ExecutionEnvironment.objects.all()]
['registry.tatu.home/ansible/awx-ee:0.2.0']
*** updating to a new image but keeping the name
>>> _, created = ExecutionEnvironment.objects.update_or_create(name='My Custom Execution Environment', defaults={'image': 'registry.tatu.home/ansible/awx-ee:0.3.0', 'managed_by_tower': True})
>>> [a.name for a in ExecutionEnvironment.objects.all()]
['My Custom Execution Environment']
>>> [a.image for a in ExecutionEnvironment.objects.all()]
['registry.tatu.home/ansible/awx-ee:0.3.0']
*** adding a new name
>>> _, created = ExecutionEnvironment.objects.update_or_create(name='My Custom Execution Environment2', defaults={'image': 'registry.tatu.home/ansible/awx-ee:0.3.0', 'managed_by_tower': True})
>>> [a.name for a in ExecutionEnvironment.objects.all()]
['My Custom Execution Environment2', 'My Custom Execution Environment']
>>> [a.image for a in ExecutionEnvironment.objects.all()]
['registry.tatu.home/ansible/awx-ee:0.3.0', 'registry.tatu.home/ansible/awx-ee:0.3.0']
Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
Reviewed-by: Shane McDonald <me@shanemcd.com>
Add a check for the existence of pre-commit.sh
SUMMARY
so that you don't get stuck when working on branches that don't have this yet.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
API
AWX VERSION
Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
Have the project update playbook warn if role/collection syncing is disabled
In recent AWX, a galaxy credential (even a blank one for galaxy.ansible.com) is required to sync role/collection content. This is done so that server precedence can be properly set, and so that it can be configured to pull only from a private content host.
This does lead to bug reports where the credentials are not set, and users don't understand why their content is not syncing. This makes that more clear.
Reviewed-by: Alan Rominger <arominge@redhat.com>