Update code to pull subscriptions from
console.redhat.com instead of
subscription.rhsm.redhat.com.
Use a service account client ID and client secret
instead of username/password, which is being
deprecated in July 2025.
Additional changes:
- In the awx.awx.subscriptions module, use the new service
account params rather than the old basic auth params
- Update the awx.awx.license module to use subscription_id
instead of pool_id, because the new API identifies unique
subscriptions by subscription ID rather than by pool ID.
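Roughly, the new flow is a client-credentials token exchange followed by a
bearer-token request to console.redhat.com. The sketch below is illustrative
only: the token endpoint is the standard Red Hat SSO client-credentials
endpoint, but the subscriptions URL and helper names are assumptions, not the
exact code paths used by AWX.

```python
import requests

TOKEN_URL = "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token"
SUBS_URL = "https://console.redhat.com/api/rhsm/v2/products"  # hypothetical path

def get_access_token(client_id, client_secret):
    # Exchange the service account credentials for a short-lived bearer token.
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def list_subscriptions(client_id, client_secret):
    # Query console.redhat.com with the bearer token instead of basic auth.
    token = get_access_token(client_id, client_secret)
    resp = requests.get(SUBS_URL, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    # The new API keys unique subscriptions by subscription ID, not pool ID.
    return resp.json()
```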
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
In OCP/K8S, projects run in the task pod's ee container. The private_data_dir is not extracted to /runner; instead, the project update runs directly from the mounted-in private_data_dir, e.g. /tmp/awx_1_abcd.
When injecting a credential that uses extra vars, we pass the private_data_dir as the container_root, so that the correct command line argument is generated, e.g. "-e /tmp/awx_1_abcd/env/extra_var_file".
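A rough sketch of that path handling, with hypothetical helper and parameter
names (the real logic lives in AWX's credential injection code):

```python
import os

def extra_vars_path(private_data_dir, filename, on_k8s):
    # On OCP/K8s the ee container sees private_data_dir at its real path
    # (e.g. /tmp/awx_1_abcd), so that path is the container_root.
    # Elsewhere, ansible-runner extracts the dir to /runner in the container.
    container_root = private_data_dir if on_k8s else "/runner"
    return os.path.join(container_root, "env", filename)

# extra_vars_path("/tmp/awx_1_abcd", "extra_var_file", on_k8s=True)
# -> "/tmp/awx_1_abcd/env/extra_var_file", used as "-e <path>" on the command line
```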
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
* Fall back to the subscription credential for analytics
Fall back to SUBSCRIPTION_USERNAME/PASSWORD to upload analytics if REDHAT_USERNAME/PASSWORD are not set (see the sketch after this list)
* Improve error message
* Guard against request with no query or data
* Add test for _send_to_analytics
Focus on credentials
* Suppress sonarcloud warning about password
* Add test for analytics ship
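A minimal sketch of the credential fallback from the first item above, using
the setting names mentioned in this changeset (the error handling is
illustrative):

```python
def analytics_credentials(settings):
    # Prefer the Red Hat credentials; fall back to the subscription
    # credentials when the former are not configured.
    username = settings.REDHAT_USERNAME or settings.SUBSCRIPTION_USERNAME
    password = settings.REDHAT_PASSWORD or settings.SUBSCRIPTION_PASSWORD
    if not (username and password):
        raise ValueError(
            "Missing analytics credentials: set REDHAT_USERNAME/REDHAT_PASSWORD "
            "or SUBSCRIPTION_USERNAME/SUBSCRIPTION_PASSWORD"
        )
    return username, password
```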
fix memory and cpu settings to support k8s resource request format
* fix conversion of memory setting to bytes
This setting has not been set by default, and needed some fixing
up to be compatible with setting the memory the same way we set it
in the operator, as well as with other changes from last year that
assume ansible-runner returns memory in bytes.
This way we can start setting it from the operator and get a
more accurate reflection of how much memory is available to the
control pod in k8s.
On platforms where services are all sharing memory, we deduct a
penalty from the memory available. On k8s we don't need to do this
because the web, redis, and task containers each have memory
allocated to them.
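As a sketch of the compatibility problem, converting a k8s-style memory
quantity to bytes looks roughly like this (suffix handling is simplified and
the function name is hypothetical; the real parsing lives in AWX/ansible-runner):

```python
# k8s accepts binary suffixes (Ki, Mi, Gi, Ti) and decimal ones (k, M, G, T).
MEM_SUFFIXES = {
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40,
    "k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12,
}

def k8s_memory_to_bytes(value):
    value = str(value).strip()
    # Match longer suffixes first so "Gi" is not mistaken for "G".
    for suffix, factor in sorted(MEM_SUFFIXES.items(), key=lambda kv: -len(kv[0])):
        if value.endswith(suffix):
            return int(float(value[: -len(suffix)]) * factor)
    return int(value)  # already plain bytes

# k8s_memory_to_bytes("2Gi") -> 2147483648
```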
* Support CPU setting expressed in units used by k8s
This setting has not been set by default, and needed some fixing
up to be compatible with setting the CPU the same way we set the
resource requests/limits in the operator.
This way we can start setting it from the operator and get a
more accurate reflection of how much cpu is available to the
control pod in k8s.
Because cpu on k8s can be partial cores, migrate the cpu field to decimal.
We only need granularity down to 100m (equivalent to 0.1 cores), so only
store up to 1 decimal place.
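A sketch of parsing the k8s CPU units into a one-decimal-place Decimal
(function name and rounding choice are illustrative):

```python
from decimal import Decimal, ROUND_UP

def k8s_cpu_to_decimal(value):
    # Accept millicores ("500m") or plain core counts ("2", "1.5").
    value = str(value).strip()
    cores = Decimal(value[:-1]) / Decimal(1000) if value.endswith("m") else Decimal(value)
    # Keep at most one decimal place; round up so small requests don't become 0.
    return cores.quantize(Decimal("0.1"), rounding=ROUND_UP)

# k8s_cpu_to_decimal("500m") -> Decimal("0.5")
```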
fix analytics to deal with decimal cpu
Need to use DjangoJSONEncoder when there are Decimal fields in the data
passed to json.dumps.
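The fix amounts to passing an encoder that understands Decimal wherever the
analytics payload is serialized, e.g.:

```python
import json
from decimal import Decimal
from django.core.serializers.json import DjangoJSONEncoder

data = {"cpu": Decimal("1.5")}
# json.dumps(data) raises TypeError because Decimal is not JSON serializable.
json.dumps(data, cls=DjangoJSONEncoder)  # -> '{"cpu": "1.5"}'
```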
* Before, we would get the min and max pk of the set we are to gather.
This changeset removes that.
* Before, we would know the size of the set we are to gather
and would query 100,000 of those job event records at a time. That logic
is now gone.
* Now, for unpartitioned job events we gather 4 hours at a time by
created time.
* Now, for partitioned job events we gather 4 hours at a time by
modified time (see the sketch after this list).
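A sketch of the 4-hour windowing, with an assumed queryset filter (the real
collector code differs in the details):

```python
from datetime import timedelta

def four_hour_slices(since, until, delta=timedelta(hours=4)):
    # Yield (start, end) windows covering [since, until) in 4-hour steps.
    start = since
    while start < until:
        end = min(start + delta, until)
        yield start, end
        start = end

# Unpartitioned job events filter on created, partitioned ones on modified, e.g.:
# for start, end in four_hour_slices(since, until):
#     events = JobEvent.objects.filter(created__gte=start, created__lt=end)
```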
[devel] add clarifying comment re: AUTOMATION_ANALYTICS_LAST_GATHER being set regardless
(the comment in the changeset pretty much says it all)
Reviewed-by: Chris Meyers <None>
- `since` should not be after `until`
- neither `since` nor `until` should be in the future
- `since`, `AUTOMATION_ANALYTICS_LAST_GATHER`, and `AUTOMATION_ANALYTICS_LAST_ENTRIES[key]` should be truncated to 4 weeks prior to `until`
- an explicit `since` parameter should always take precedence over the settings values (see the sketch after this list)
- stop trying to ship csv slices when one breaks
- only update LAST_ENTRIES if all of the files in a time/pk slice succeed
- don't allow an explicit --until parameter to set the GATHER/ENTRIES values backwards
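A sketch of those checks as a single helper (hypothetical name; not the exact
gather() logic):

```python
from datetime import timedelta
from django.utils import timezone

MAX_WINDOW = timedelta(weeks=4)

def clamp_gather_window(since=None, until=None, last_gather=None):
    now = timezone.now()
    until = min(until or now, now)          # `until` must not be in the future
    if since is None:
        since = last_gather                 # explicit `since` wins over settings
    horizon = until - MAX_WINDOW
    if since is None or since < horizon:
        since = horizon                     # truncate to 4 weeks before `until`
    since = min(since, now)                 # `since` must not be in the future
    if since > until:
        raise ValueError("`since` should not be after `until`")
    return since, until
```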
Previously, invoking the command with neither the --ship nor the
--dry-run flag would result in effectively doing a dry run. With the
stricter checks now in place in analytics.core.gather, let's make sure
that we pass the 'dry-run' parameter to gather() in the no-flags
case.
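Sketch of the no-flags handling inside the management command's handle()
(the gather() call is simplified; only the dry-run intent is grounded in this
changeset):

```python
def handle(self, *args, **options):
    ship = options.get("ship", False)
    dry_run = options.get("dry_run", False)
    if not ship and not dry_run:
        # Neither --ship nor --dry-run: behave as an explicit dry run so the
        # stricter checks in analytics.core.gather still apply.
        dry_run = True
    gather(dry_run=dry_run)  # parameter name assumed from this commit message
```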
With the change to use pk-based interval slicing for the job events
table, we need analytics.gather to be the code that manages all of the
"expensive" collector slicing. While we are at it, let's ship each
chunked tarball file as we produce it.
Collect expensive collectors separately, and in a loop
where we make smaller intermediate dumps.
Don't return a table dump if there are no records, and
don't put that CSV in the manifest.
Fix up unit tests.
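A rough sketch of the gather-and-ship loop, with hypothetical names for the
slicing, packaging, and shipping helpers:

```python
def gather_and_ship(expensive_collectors, since, until, ship=True):
    all_succeeded = True
    for collector in expensive_collectors:
        for start, end in collector.slices(since, until):  # smaller intermediate dumps
            csv_path = collector.dump(start, end)
            if csv_path is None:
                # No records for this slice: no CSV, and nothing in the manifest.
                continue
            tarball = package(csv_path)
            if ship and not ship_tarball(tarball):
                # Stop trying to ship further slices once one breaks.
                all_succeeded = False
                break
        if not all_succeeded:
            break
    # Only update LAST_ENTRIES/LAST_GATHER when everything succeeded.
    return all_succeeded
```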