Compare commits

...

30 Commits

Author SHA1 Message Date
Marliana Lara
2a1dffd363 Fix edit constructed inventory hanging loading state (#14343) 2023-08-21 12:36:36 -04:00
digitalbadger-uk
8c7ab8fcf2 Added required epoch time field for Splunk HEC Event Receiver (#14246)
Signed-off-by: Iain <iain@digitalbadger.com>
2023-08-21 09:44:52 -03:00
Hao Liu
3de8455960 Fix missing trailing / in PUBLIC_PATH for UI
A missing trailing `/` caused the UI to load files from an incorrect static dir location.
2023-08-17 15:16:59 -04:00
Hao Liu
d832e75e99 Fix ui-next build step file path issue
Add full path for the mv command so that the command can be run from ui_next and from project root.

Additionally move the rename of file to src build step.
2023-08-17 15:16:59 -04:00
abwalczyk
a89e266feb Fixed task and web docs (#14350) 2023-08-17 12:22:51 -04:00
Hao Liu
8e1516eeb7 Update UI_NEXT build to set PRODUCT and PUBLIC_PATH
https://github.com/ansible/ansible-ui/pull/792 added configurable public path (which was changed to '/' in https://github.com/ansible/ansible-ui/pull/766/files#diff-2606df06d89b38ff979770f810c3c269083e7c0fbafb27aba7f9ea0297179828L128-R157)

This PR added the variable when building ui-next
2023-08-16 18:35:12 -04:00
Hao Liu
c7f2fdbe57 Rename ui_next index.html to index_awx.html during build process
Due to change made in https://github.com/ansible/ansible-ui/pull/766/files#diff-7ae45ad102eab3b6d7e7896acd08c427a9b25b346470d7bc6507b6481575d519R18 awx/ui_next/build/awx/index_awx.html was renamed to awx/ui_next/build/awx/index.html

This PR fixes the problem by renaming the file back.
2023-08-16 18:35:12 -04:00
delinea-sagar
c75757bf22 Update python-tss-sdk dependency (#14207)
Signed-off-by: delinea-sagar <sagar.wani@c.delinea.com>
2023-08-16 20:07:35 +00:00
Kevin Pavon
b8ec7c4072 Schedule rruleset fix related #13446 (#13611)
Signed-off-by: Kevin Pavon <7450065+KaraokeKev@users.noreply.github.com>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-08-16 16:10:31 -03:00
jbreitwe-rh
bb1c155bc9 Fixed typos (#14347) 2023-08-16 15:05:23 -04:00
Jeff Bradberry
4822dd79fc Revert "Improve performance for awx cli export (#13182)" 2023-08-15 15:55:10 -04:00
jainnikhil30
4cd90163fc make the default JOB_EVENT_BUFFER_SECONDS 1 second (#14335) 2023-08-12 07:49:34 +05:30
Alan Rominger
8dc6ceffee Fix programming error in facts retry merge (#14336) 2023-08-11 13:54:18 -04:00
Alan Rominger
2c7184f9d2 Add a retry to update host facts on deadlocks (#14325) 2023-08-11 11:13:56 -04:00
Martin Slemr
5cf93febaa HostMetricSummaryMonthly: Analytics export 2023-08-11 09:38:23 -04:00
Alan Rominger
284bd8377a Integrate scheduler into dispatcher main loop (#14067)
Dispatcher refactoring to get pg_notify publish payload
  as separate method

Refactor periodic module under dispatcher entirely
  Use real numbers for schedule reference time
  Run based on due_to_run method

Review comments about naming and code comments
2023-08-10 14:43:07 -04:00
Jeff Bradberry
14992cee17 Add in an async task to migrate the data over 2023-08-10 13:48:58 -04:00
Jeff Bradberry
6db663eacb Modify main/0185 to set aside the json fields that might be a problem
Rename them, then create a new clean field of the new jsonb type.
We'll use a task to do the data conversion.
2023-08-10 13:48:58 -04:00
Ivanilson Junior
87bb70bcc0 Remove extra quote from Skipped task status string (#14318)
Signed-off-by: Ivanilson Junior <ivanilsonaraujojr@gmail.com>
Co-authored-by: kialam <digitalanime@gmail.com>
2023-08-09 15:58:46 -07:00
Pablo Hess
c2d02841e8 Allow importing licenses with a missing "usage" attribute (#14326) 2023-08-09 16:41:14 -04:00
onefourfive
e5a6007bf1 fix broken link to upgrade docs. related #11313 (#14296)
Signed-off-by: onefourfive <>
Co-authored-by: onefourfive <unknown>
2023-08-09 15:06:44 -04:00
Alan Rominger
6f9ea1892b AAP-14538 Only process ansible_facts for successful jobs (#14313) 2023-08-04 17:10:14 -04:00
Sean Sullivan
abc56305cc Add Request time out option for collection (#14157)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-08-03 15:06:04 -03:00
kialam
9bb6786a58 Wait for new label IDs before setting label prompt values. (#14283) 2023-08-03 09:46:46 -04:00
Michael Abashian
aec9a9ca56 Fix rbac around credential access add button (#14290) 2023-08-03 09:18:21 -04:00
John Westcott IV
7e4cf859f5 Added PR check to ensure JIRA links are present (#13839) 2023-08-02 15:28:13 -04:00
mcen1
90c3d8a275 Update example service-account.yml for container group in documentation (#13479)
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Co-authored-by: Nana <35573203+masbahnana@users.noreply.github.com>
2023-08-02 15:27:18 -04:00
lucas-benedito
6d1c8de4ed Fix trial status and host limit with sub (#14237)
Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-08-02 10:27:20 -04:00
Seth Foster
601b62deef bump python-daemon package (#14301) 2023-08-01 01:39:17 +00:00
Seth Foster
131dd088cd fix linting (#14302) 2023-07-31 20:37:37 -04:00
61 changed files with 1095 additions and 429 deletions

View File

@@ -0,0 +1,35 @@
+---
+name: Check body for reference to jira
+on:
+  pull_request:
+    branches:
+      - release_**
+
+jobs:
+  pr-check:
+    if: github.repository_owner == 'ansible' && github.repository != 'awx'
+    name: Scan PR description for JIRA links
+    runs-on: ubuntu-latest
+    permissions:
+      packages: write
+      contents: read
+    steps:
+      - name: Check for JIRA lines
+        env:
+          PR_BODY: ${{ github.event.pull_request.body }}
+        run: |
+          echo "$PR_BODY" | grep "JIRA: None" > no_jira
+          echo "$PR_BODY" | grep "JIRA: https://.*[0-9]+" > jira
+          exit 0
+        # We exit 0 and set the shell to prevent the returns from the greps from failing this step
+        # See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference
+        shell: bash {0}
+      - name: Check for exactly one item
+        run: |
+          if [ $(cat no_jira jira | wc -l) != 1 ] ; then
+            echo "The PR body must contain exactly one of [ 'JIRA: None' or 'JIRA: <one or more links>' ]"
+            echo "We counted $(cat no_jira jira | wc -l)"
+            exit 255;
+          else
+            exit 0;
+          fi
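The two greps and the line-count check above can be exercised locally; a minimal sketch with a made-up PR body. Note the sketch uses `grep -E`: with the workflow's plain `grep` (BRE syntax), the `+` in `[0-9]+` is a literal character rather than a repetition operator.

```shell
# Reproduce the workflow's two-grep check against a sample PR body
# (the PR body text here is a made-up example).
PR_BODY='Fix the widget

JIRA: https://issues.example.com/browse/AAP-1234'

# "|| true" mirrors the workflow's "shell: bash {0}" trick:
# a non-matching grep must not fail the step
echo "$PR_BODY" | grep "JIRA: None" > no_jira || true
echo "$PR_BODY" | grep -E "JIRA: https://.*[0-9]+" > jira || true

count=$(cat no_jira jira | wc -l)
echo "$count"
rm -f no_jira jira
```

With exactly one matching line across both files, the check passes; zero or two lines would fail it.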

View File

@@ -4,6 +4,6 @@
 Early versions of AWX did not support seamless upgrades between major versions and required the use of a backup and restore tool to perform upgrades.
-Users who wish to upgrade modern AWX installations should follow the instructions at:
-https://github.com/ansible/awx/blob/devel/INSTALL.md#upgrading-from-previous-versions
+As of version 18.0, `awx-operator` is the preferred install/upgrade method. Users who wish to upgrade modern AWX installations should follow the instructions at:
+https://github.com/ansible/awx-operator/blob/devel/docs/upgrade/upgrading.md

View File

@@ -366,9 +366,9 @@ class BaseAccess(object):
             report_violation = lambda message: None
         else:
             report_violation = lambda message: logger.warning(message)
-        if validation_info.get('trial', False) is True or validation_info['instance_count'] == 10:  # basic 10 license
-            def report_violation(message):
+        if validation_info.get('trial', False) is True:
+            def report_violation(message):  # noqa
                 raise PermissionDenied(message)
         if check_expiration and validation_info.get('time_remaining', None) is None:

View File

@@ -613,3 +613,20 @@ def host_metric_table(since, full_path, until, **kwargs):
         since.isoformat(), until.isoformat(), since.isoformat(), until.isoformat()
     )
     return _copy_table(table='host_metric', query=host_metric_query, path=full_path)
+
+
+@register('host_metric_summary_monthly_table', '1.0', format='csv', description=_('HostMetricSummaryMonthly export, full sync'), expensive=trivial_slicing)
+def host_metric_summary_monthly_table(since, full_path, **kwargs):
+    query = '''
+    COPY (SELECT main_hostmetricsummarymonthly.id,
+                 main_hostmetricsummarymonthly.date,
+                 main_hostmetricsummarymonthly.license_capacity,
+                 main_hostmetricsummarymonthly.license_consumed,
+                 main_hostmetricsummarymonthly.hosts_added,
+                 main_hostmetricsummarymonthly.hosts_deleted,
+                 main_hostmetricsummarymonthly.indirectly_managed_hosts
+          FROM main_hostmetricsummarymonthly
+          ORDER BY main_hostmetricsummarymonthly.id ASC) TO STDOUT WITH CSV HEADER
+    '''
+    return _copy_table(table='host_metric_summary_monthly', query=query, path=full_path)
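The `COPY ... TO STDOUT WITH CSV HEADER` behavior used in this export can be approximated outside PostgreSQL; a minimal sketch using the stdlib `sqlite3` and `csv` modules (the table and columns are trimmed down for illustration):

```python
import csv
import io
import sqlite3

# Stand-in for COPY ... TO STDOUT WITH CSV HEADER, runnable without a
# PostgreSQL server: run the query, emit a header row, then the data rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE host_metric_summary_monthly (id INTEGER, hosts_added INTEGER)")
conn.execute("INSERT INTO host_metric_summary_monthly VALUES (1, 5), (2, 3)")

buf = io.StringIO()
writer = csv.writer(buf)
cur = conn.execute("SELECT id, hosts_added FROM host_metric_summary_monthly ORDER BY id ASC")
writer.writerow([d[0] for d in cur.description])  # header row, like WITH CSV HEADER
writer.writerows(cur.fetchall())
print(buf.getvalue())
```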

View File

@@ -1,7 +1,10 @@
 from .plugin import CredentialPlugin
 from django.utils.translation import gettext_lazy as _
-from thycotic.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
+try:
+    from delinea.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
+except ImportError:
+    from thycotic.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret

 tss_inputs = {
     'fields': [
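The fallback import above is a common pattern for a renamed package; a runnable sketch (in an environment without the SDK the `delinea` import raises `ImportError`, and a stub class stands in for the legacy `thycotic` fallback):

```python
try:
    # preferred, renamed SDK; raises ImportError when not installed
    from delinea.secrets.server import SecretServer
except ImportError:
    # stub standing in for thycotic.secrets.server.SecretServer in this sketch
    class SecretServer:
        def __init__(self, base_url, authorizer=None):
            self.base_url = base_url
            self.authorizer = authorizer

server = SecretServer("https://tss.example.com")
print(server.base_url)
```

Either way the rest of the module sees a single `SecretServer` name, so callers need no changes.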

View File

@@ -40,8 +40,12 @@ def get_task_queuename():
 class PubSub(object):
-    def __init__(self, conn):
+    def __init__(self, conn, select_timeout=None):
         self.conn = conn
+        if select_timeout is None:
+            self.select_timeout = 5
+        else:
+            self.select_timeout = select_timeout

     def listen(self, channel):
         with self.conn.cursor() as cur:
@@ -72,12 +76,12 @@ class PubSub(object):
             n = psycopg.connection.Notify(pgn.relname.decode(enc), pgn.extra.decode(enc), pgn.be_pid)
             yield n

-    def events(self, select_timeout=5, yield_timeouts=False):
+    def events(self, yield_timeouts=False):
         if not self.conn.autocommit:
             raise RuntimeError('Listening for events can only be done in autocommit mode')
         while True:
-            if select.select([self.conn], [], [], select_timeout) == NOT_READY:
+            if select.select([self.conn], [], [], self.select_timeout) == NOT_READY:
                 if yield_timeouts:
                     yield None
             else:
@@ -90,7 +94,7 @@ class PubSub(object):
 @contextmanager
-def pg_bus_conn(new_connection=False):
+def pg_bus_conn(new_connection=False, select_timeout=None):
     '''
     Any listeners probably want to establish a new database connection,
    separate from the Django connection used for queries, because that will prevent
@@ -115,7 +119,7 @@ def pg_bus_conn(new_connection=False):
            raise RuntimeError('Unexpectedly could not connect to postgres for pg_notify actions')
        conn = pg_connection.connection
-    pubsub = PubSub(conn)
+    pubsub = PubSub(conn, select_timeout=select_timeout)
     yield pubsub
     if new_connection:
         conn.close()
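The `select_timeout` moved into `PubSub` drives a plain `select.select` wait; a minimal sketch of that read loop, assuming (as in the diff above) `NOT_READY` is the empty three-tuple `select.select` returns on timeout. A socket pair stands in for the postgres connection:

```python
import select
import socket

# Empty three-tuple select.select returns when the timeout elapses
NOT_READY = ([], [], [])

def wait_for_event(sock, select_timeout):
    """Return True if sock becomes readable within select_timeout seconds."""
    return select.select([sock], [], [], select_timeout) != NOT_READY

a, b = socket.socketpair()
timed_out = not wait_for_event(a, select_timeout=0.05)  # nothing written yet
b.sendall(b"x")
got_event = wait_for_event(a, select_timeout=0.05)  # data is now pending
print(timed_out, got_event)
```

Storing the timeout on the connection object (rather than passing it to `events()`) is what later lets the dispatcher adjust it on every loop iteration.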

View File

@@ -40,6 +40,9 @@ class Control(object):
     def cancel(self, task_ids, *args, **kwargs):
         return self.control_with_reply('cancel', *args, extra_data={'task_ids': task_ids}, **kwargs)

+    def schedule(self, *args, **kwargs):
+        return self.control_with_reply('schedule', *args, **kwargs)
+
     @classmethod
     def generate_reply_queue_name(cls):
         return f"reply_to_{str(uuid.uuid4()).replace('-','_')}"
@@ -52,14 +55,14 @@ class Control(object):
         if not connection.get_autocommit():
             raise RuntimeError('Control-with-reply messages can only be done in autocommit mode')

-        with pg_bus_conn() as conn:
+        with pg_bus_conn(select_timeout=timeout) as conn:
             conn.listen(reply_queue)
             send_data = {'control': command, 'reply_to': reply_queue}
             if extra_data:
                 send_data.update(extra_data)
             conn.notify(self.queuename, json.dumps(send_data))

-            for reply in conn.events(select_timeout=timeout, yield_timeouts=True):
+            for reply in conn.events(yield_timeouts=True):
                 if reply is None:
                     logger.error(f'{self.service} did not reply within {timeout}s')
                     raise RuntimeError(f"{self.service} did not reply within {timeout}s")
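The control-with-reply shape above (unique reply queue, send a control message, treat timeout as error) can be sketched without postgres; here `queue.Queue` stands in for the pg_notify channel and the "dispatcher" replies inline, so the names besides `generate_reply_queue_name` are illustrative:

```python
import queue
import uuid

def generate_reply_queue_name():
    # matches Control.generate_reply_queue_name in the diff above
    return f"reply_to_{str(uuid.uuid4()).replace('-', '_')}"

def control_with_reply(request_bus, reply_bus, command, timeout=1.0):
    send_data = {'control': command, 'reply_to': generate_reply_queue_name()}
    request_bus.put(send_data)
    # pretend the dispatcher consumed the request and replied immediately
    reply_bus.put({'handled': request_bus.get()['control']})
    try:
        # stand-in for conn.events() bounded by select_timeout
        return reply_bus.get(timeout=timeout)
    except queue.Empty:
        raise RuntimeError(f'dispatcher did not reply within {timeout}s')

reply = control_with_reply(queue.Queue(), queue.Queue(), 'schedule')
print(reply)
```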

View File

@@ -1,57 +1,142 @@
 import logging
-import os
 import time
-from multiprocessing import Process
-
-from django.conf import settings
-from django.db import connections
-
-from schedule import Scheduler
-
-from django_guid import set_guid
-from django_guid.utils import generate_guid
-
-from awx.main.dispatch.worker import TaskWorker
-from awx.main.utils.db import set_connection_name
+import yaml
+from datetime import datetime

 logger = logging.getLogger('awx.main.dispatch.periodic')

-class Scheduler(Scheduler):
-    def run_continuously(self):
-        idle_seconds = max(1, min(self.jobs).period.total_seconds() / 2)
-
-        def run():
-            ppid = os.getppid()
-            logger.warning('periodic beat started')
-
-            set_connection_name('periodic')  # set application_name to distinguish from other dispatcher processes
-
-            while True:
-                if os.getppid() != ppid:
-                    # if the parent PID changes, this process has been orphaned
-                    # via e.g., segfault or sigkill, we should exit too
-                    pid = os.getpid()
-                    logger.warning(f'periodic beat exiting gracefully pid:{pid}')
-                    raise SystemExit()
-                try:
-                    for conn in connections.all():
-                        # If the database connection has a hiccup, re-establish a new
-                        # connection
-                        conn.close_if_unusable_or_obsolete()
-                    set_guid(generate_guid())
-                    self.run_pending()
-                except Exception:
-                    logger.exception('encountered an error while scheduling periodic tasks')
-                time.sleep(idle_seconds)
-
-        process = Process(target=run)
-        process.daemon = True
-        process.start()
-
-
-def run_continuously():
-    scheduler = Scheduler()
-    for task in settings.CELERYBEAT_SCHEDULE.values():
-        apply_async = TaskWorker.resolve_callable(task['task']).apply_async
-        total_seconds = task['schedule'].total_seconds()
-        scheduler.every(total_seconds).seconds.do(apply_async)
-    scheduler.run_continuously()
+class ScheduledTask:
+    """
+    Class representing schedules, very loosely modeled after python schedule library Job
+    the idea of this class is to:
+    - only deal in relative times (time since the scheduler global start)
+    - only deal in integer math for target runtimes, but float for current relative time
+
+    Missed schedule policy:
+    Invariant target times are maintained, meaning that if interval=10s offset=0
+    and it runs at t=7s, then it calls for next run in 3s.
+    However, if a complete interval has passed, that is counted as a missed run,
+    and missed runs are abandoned (no catch-up runs).
+    """
+
+    def __init__(self, name: str, data: dict):
+        # parameters need for schedule computation
+        self.interval = int(data['schedule'].total_seconds())
+        self.offset = 0  # offset relative to start time this schedule begins
+        self.index = 0  # number of periods of the schedule that has passed
+
+        # parameters that do not affect scheduling logic
+        self.last_run = None  # time of last run, only used for debug
+        self.completed_runs = 0  # number of times schedule is known to run
+        self.name = name
+        self.data = data  # used by caller to know what to run
+
+    @property
+    def next_run(self):
+        "Time until the next run with t=0 being the global_start of the scheduler class"
+        return (self.index + 1) * self.interval + self.offset
+
+    def due_to_run(self, relative_time):
+        return bool(self.next_run <= relative_time)
+
+    def expected_runs(self, relative_time):
+        return int((relative_time - self.offset) / self.interval)
+
+    def mark_run(self, relative_time):
+        self.last_run = relative_time
+        self.completed_runs += 1
+        new_index = self.expected_runs(relative_time)
+        if new_index > self.index + 1:
+            logger.warning(f'Missed {new_index - self.index - 1} schedules of {self.name}')
+        self.index = new_index
+
+    def missed_runs(self, relative_time):
+        "Number of times job was supposed to ran but failed to, only used for debug"
+        missed_ct = self.expected_runs(relative_time) - self.completed_runs
+        # if this is currently due to run do not count that as a missed run
+        if missed_ct and self.due_to_run(relative_time):
+            missed_ct -= 1
+        return missed_ct
+
+
+class Scheduler:
+    def __init__(self, schedule):
+        """
+        Expects schedule in the form of a dictionary like
+        {
+            'job1': {'schedule': timedelta(seconds=50), 'other': 'stuff'}
+        }
+        Only the schedule nearest-second value is used for scheduling,
+        the rest of the data is for use by the caller to know what to run.
+        """
+        self.jobs = [ScheduledTask(name, data) for name, data in schedule.items()]
+        min_interval = min(job.interval for job in self.jobs)
+        num_jobs = len(self.jobs)
+        # this is intentionally oppioniated against spammy schedules
+        # a core goal is to spread out the scheduled tasks (for worker management)
+        # and high-frequency schedules just do not work with that
+        if num_jobs > min_interval:
+            raise RuntimeError(f'Number of schedules ({num_jobs}) is more than the shortest schedule interval ({min_interval} seconds).')
+        # even space out jobs over the base interval
+        for i, job in enumerate(self.jobs):
+            job.offset = (i * min_interval) // num_jobs
+        # internally times are all referenced relative to startup time, add grace period
+        self.global_start = time.time() + 2.0
+
+    def get_and_mark_pending(self):
+        relative_time = time.time() - self.global_start
+        to_run = []
+        for job in self.jobs:
+            if job.due_to_run(relative_time):
+                to_run.append(job)
+                logger.debug(f'scheduler found {job.name} to run, {relative_time - job.next_run} seconds after target')
+                job.mark_run(relative_time)
+        return to_run
+
+    def time_until_next_run(self):
+        relative_time = time.time() - self.global_start
+        next_job = min(self.jobs, key=lambda j: j.next_run)
+        delta = next_job.next_run - relative_time
+        if delta <= 0.1:
+            # careful not to give 0 or negative values to the select timeout, which has unclear interpretation
+            logger.warning(f'Scheduler next run of {next_job.name} is {-delta} seconds in the past')
+            return 0.1
+        elif delta > 20.0:
+            logger.warning(f'Scheduler next run unexpectedly over 20 seconds in future: {delta}')
+            return 20.0
+        logger.debug(f'Scheduler next run is {next_job.name} in {delta} seconds')
+        return delta
+
+    def debug(self, *args, **kwargs):
+        data = dict()
+        data['title'] = 'Scheduler status'
+        now = datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S UTC')
+        start_time = datetime.fromtimestamp(self.global_start).strftime('%Y-%m-%d %H:%M:%S UTC')
+        relative_time = time.time() - self.global_start
+        data['started_time'] = start_time
+        data['current_time'] = now
+        data['current_time_relative'] = round(relative_time, 3)
+        data['total_schedules'] = len(self.jobs)
+        data['schedule_list'] = dict(
+            [
+                (
+                    job.name,
+                    dict(
+                        last_run_seconds_ago=round(relative_time - job.last_run, 3) if job.last_run else None,
+                        next_run_in_seconds=round(job.next_run - relative_time, 3),
+                        offset_in_seconds=job.offset,
+                        completed_runs=job.completed_runs,
+                        missed_runs=job.missed_runs(relative_time),
+                    ),
+                )
+                for job in sorted(self.jobs, key=lambda job: job.interval)
+            ]
+        )
+        return yaml.safe_dump(data, default_flow_style=False, sort_keys=False)
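The timing math in the `ScheduledTask` docstring above comes down to a few lines of integer arithmetic; a condensed sketch (class and variable names here are illustrative, not from the source):

```python
# Minimal re-implementation of the ScheduledTask timing policy:
# interval/offset in whole seconds, index counts elapsed periods,
# and missed periods are abandoned rather than replayed.
class TaskClock:
    def __init__(self, interval, offset=0):
        self.interval = interval
        self.offset = offset
        self.index = 0

    @property
    def next_run(self):
        return (self.index + 1) * self.interval + self.offset

    def due_to_run(self, relative_time):
        return self.next_run <= relative_time

    def mark_run(self, relative_time):
        # jump the index to however many full periods have elapsed,
        # so catch-up runs are never scheduled
        self.index = int((relative_time - self.offset) / self.interval)

clock = TaskClock(interval=10)
print(clock.next_run)   # 10
clock.mark_run(7)       # ran early at t=7s: index stays 0
print(clock.next_run)   # still 10, i.e. next run in 3s as the docstring says
clock.mark_run(27)      # two full periods elapsed: skip ahead
print(clock.next_run)   # 30
```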

View File

@@ -73,15 +73,15 @@ class task:
         return cls.apply_async(args, kwargs)

     @classmethod
-    def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
+    def get_async_body(cls, args=None, kwargs=None, uuid=None, **kw):
+        """
+        Get the python dict to become JSON data in the pg_notify message
+        This same message gets passed over the dispatcher IPC queue to workers
+        If a task is submitted to a multiprocessing pool, skipping pg_notify, this might be used directly
+        """
         task_id = uuid or str(uuid4())
         args = args or []
         kwargs = kwargs or {}
-        queue = queue or getattr(cls.queue, 'im_func', cls.queue)
-        if not queue:
-            msg = f'{cls.name}: Queue value required and may not be None'
-            logger.error(msg)
-            raise ValueError(msg)
         obj = {'uuid': task_id, 'args': args, 'kwargs': kwargs, 'task': cls.name, 'time_pub': time.time()}
         guid = get_guid()
         if guid:
@@ -89,6 +89,16 @@ class task:
         if bind_kwargs:
             obj['bind_kwargs'] = bind_kwargs
         obj.update(**kw)
+        return obj
+
+    @classmethod
+    def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
+        queue = queue or getattr(cls.queue, 'im_func', cls.queue)
+        if not queue:
+            msg = f'{cls.name}: Queue value required and may not be None'
+            logger.error(msg)
+            raise ValueError(msg)
+        obj = cls.get_async_body(args=args, kwargs=kwargs, uuid=uuid, **kw)
         if callable(queue):
             queue = queue()
         if not is_testing():
@@ -116,4 +126,5 @@ class task:
         setattr(fn, 'name', cls.name)
         setattr(fn, 'apply_async', cls.apply_async)
         setattr(fn, 'delay', cls.delay)
+        setattr(fn, 'get_async_body', cls.get_async_body)
         return fn
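The extracted `get_async_body` is essentially a dict builder; a standalone sketch of the message shape it produces (the task name used below is just an example value):

```python
import time
from uuid import uuid4

def get_async_body(task_name, args=None, kwargs=None, uuid=None, **kw):
    """Sketch of the message body the refactor extracts from apply_async."""
    obj = {
        'uuid': uuid or str(uuid4()),
        'args': args or [],
        'kwargs': kwargs or {},
        'task': task_name,
        'time_pub': time.time(),
    }
    obj.update(**kw)  # extra keys (e.g. guid, bind_kwargs) ride along
    return obj

body = get_async_body('awx.main.tasks.system.cluster_node_heartbeat')
print(sorted(body))
```

Splitting body construction from queue resolution is what lets the dispatcher's scheduler hand a body straight to a worker pool, bypassing pg_notify.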

View File

@@ -11,11 +11,13 @@ import psycopg
 import time
 from uuid import UUID
 from queue import Empty as QueueEmpty
+from datetime import timedelta

 from django import db
 from django.conf import settings

 from awx.main.dispatch.pool import WorkerPool
+from awx.main.dispatch.periodic import Scheduler
 from awx.main.dispatch import pg_bus_conn
 from awx.main.utils.common import log_excess_runtime
 from awx.main.utils.db import set_connection_name
@@ -64,10 +66,12 @@ class AWXConsumerBase(object):
     def control(self, body):
         logger.warning(f'Received control signal:\n{body}')
         control = body.get('control')
-        if control in ('status', 'running', 'cancel'):
+        if control in ('status', 'schedule', 'running', 'cancel'):
             reply_queue = body['reply_to']
             if control == 'status':
                 msg = '\n'.join([self.listening_on, self.pool.debug()])
+            if control == 'schedule':
+                msg = self.scheduler.debug()
             elif control == 'running':
                 msg = []
                 for worker in self.pool.workers:
@@ -93,16 +97,11 @@ class AWXConsumerBase(object):
         else:
             logger.error('unrecognized control message: {}'.format(control))

-    def process_task(self, body):
+    def dispatch_task(self, body):
+        """This will place the given body into a worker queue to run method decorated as a task"""
         if isinstance(body, dict):
             body['time_ack'] = time.time()

-        if 'control' in body:
-            try:
-                return self.control(body)
-            except Exception:
-                logger.exception(f"Exception handling control message: {body}")
-            return
         if len(self.pool):
             if "uuid" in body and body['uuid']:
                 try:
@@ -116,6 +115,16 @@ class AWXConsumerBase(object):
         self.pool.write(queue, body)
         self.total_messages += 1

+    def process_task(self, body):
+        """Routes the task details in body as either a control task or a task-task"""
+        if 'control' in body:
+            try:
+                return self.control(body)
+            except Exception:
+                logger.exception(f"Exception handling control message: {body}")
+            return
+        self.dispatch_task(body)
+
     @log_excess_runtime(logger)
     def record_statistics(self):
         if time.time() - self.last_stats > 1:  # buffer stat recording to once per second
@@ -150,9 +159,9 @@ class AWXConsumerRedis(AWXConsumerBase):

 class AWXConsumerPG(AWXConsumerBase):
-    def __init__(self, *args, **kwargs):
+    def __init__(self, *args, schedule=None, **kwargs):
         super().__init__(*args, **kwargs)
-        self.pg_max_wait = settings.DISPATCHER_DB_DOWNTOWN_TOLLERANCE
+        self.pg_max_wait = settings.DISPATCHER_DB_DOWNTIME_TOLERANCE
         # if no successful loops have ran since startup, then we should fail right away
         self.pg_is_down = True  # set so that we fail if we get database errors on startup
         init_time = time.time()
@@ -161,27 +170,53 @@ class AWXConsumerPG(AWXConsumerBase):
         self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
         self.last_metrics_gather = init_time
         self.listen_cumulative_time = 0.0
+        if schedule:
+            schedule = schedule.copy()
+        else:
+            schedule = {}
+        # add control tasks to be ran at regular schedules
+        # NOTE: if we run out of database connections, it is important to still run cleanup
+        # so that we scale down workers and free up connections
+        schedule['pool_cleanup'] = {'control': self.pool.cleanup, 'schedule': timedelta(seconds=60)}
+        # record subsystem metrics for the dispatcher
+        schedule['metrics_gather'] = {'control': self.record_metrics, 'schedule': timedelta(seconds=20)}
+        self.scheduler = Scheduler(schedule)
+
+    def record_metrics(self):
+        current_time = time.time()
+        self.pool.produce_subsystem_metrics(self.subsystem_metrics)
+        self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
+        self.subsystem_metrics.pipe_execute()
+        self.listen_cumulative_time = 0.0
+        self.last_metrics_gather = current_time

     def run_periodic_tasks(self):
-        self.record_statistics()  # maintains time buffer in method
-
-        current_time = time.time()
-        if current_time - self.last_cleanup > 60:  # same as cluster_node_heartbeat
-            # NOTE: if we run out of database connections, it is important to still run cleanup
-            # so that we scale down workers and free up connections
-            self.pool.cleanup()
-            self.last_cleanup = current_time
-
-        # record subsystem metrics for the dispatcher
-        if current_time - self.last_metrics_gather > 20:
-            try:
-                self.pool.produce_subsystem_metrics(self.subsystem_metrics)
-                self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
-                self.subsystem_metrics.pipe_execute()
-            except Exception:
-                logger.exception(f"encountered an error trying to store {self.name} metrics")
-            self.listen_cumulative_time = 0.0
-            self.last_metrics_gather = current_time
+        """
+        Run general periodic logic, and return maximum time in seconds before
+        the next requested run
+        This may be called more often than that when events are consumed
+        so this should be very efficient in that
+        """
+        try:
+            self.record_statistics()  # maintains time buffer in method
+        except Exception as exc:
+            logger.warning(f'Failed to save dispatcher statistics {exc}')
+
+        for job in self.scheduler.get_and_mark_pending():
+            if 'control' in job.data:
+                try:
+                    job.data['control']()
+                except Exception:
+                    logger.exception(f'Error running control task {job.data}')
+            elif 'task' in job.data:
+                body = self.worker.resolve_callable(job.data['task']).get_async_body()
+                # bypasses pg_notify for scheduled tasks
+                self.dispatch_task(body)
+
+        self.pg_is_down = False
+        self.listen_start = time.time()
+
+        return self.scheduler.time_until_next_run()

     def run(self, *args, **kwargs):
         super(AWXConsumerPG, self).run(*args, **kwargs)
@@ -197,14 +232,15 @@ class AWXConsumerPG(AWXConsumerBase):
                 if init is False:
                     self.worker.on_start()
                     init = True
-                self.listen_start = time.time()
+                # run_periodic_tasks run scheduled actions and gives time until next scheduled action
+                # this is saved to the conn (PubSub) object in order to modify read timeout in-loop
+                conn.select_timeout = self.run_periodic_tasks()
+                # this is the main operational loop for awx-manage run_dispatcher
                 for e in conn.events(yield_timeouts=True):
-                    self.listen_cumulative_time += time.time() - self.listen_start
+                    self.listen_cumulative_time += time.time() - self.listen_start  # for metrics
                     if e is not None:
                         self.process_task(json.loads(e.payload))
-                    self.run_periodic_tasks()
-                    self.pg_is_down = False
-                    self.listen_start = time.time()
+                    conn.select_timeout = self.run_periodic_tasks()
                     if self.should_stop:
                         return
             except psycopg.InterfaceError:
View File

@@ -3,15 +3,13 @@
 import logging

 import yaml

-from django.core.cache import cache as django_cache
+from django.conf import settings
 from django.core.management.base import BaseCommand
-from django.db import connection as django_connection

 from awx.main.dispatch import get_task_queuename
 from awx.main.dispatch.control import Control
 from awx.main.dispatch.pool import AutoscalePool
 from awx.main.dispatch.worker import AWXConsumerPG, TaskWorker
-from awx.main.dispatch import periodic

 logger = logging.getLogger('awx.main.dispatch')
@@ -21,6 +19,7 @@ class Command(BaseCommand):
     def add_arguments(self, parser):
         parser.add_argument('--status', dest='status', action='store_true', help='print the internal state of any running dispatchers')
+        parser.add_argument('--schedule', dest='schedule', action='store_true', help='print the current status of schedules being ran by dispatcher')
         parser.add_argument('--running', dest='running', action='store_true', help='print the UUIDs of any tasked managed by this dispatcher')
         parser.add_argument(
             '--reload',
@@ -42,6 +41,9 @@ class Command(BaseCommand):
         if options.get('status'):
             print(Control('dispatcher').status())
             return
+        if options.get('schedule'):
+            print(Control('dispatcher').schedule())
+            return
         if options.get('running'):
             print(Control('dispatcher').running())
             return
@@ -58,21 +60,11 @@ class Command(BaseCommand):
             print(Control('dispatcher').cancel(cancel_data))
             return

-        # It's important to close these because we're _about_ to fork, and we
-        # don't want the forked processes to inherit the open sockets
-        # for the DB and cache connections (that way lies race conditions)
-        django_connection.close()
-        django_cache.close()
-
-        # spawn a daemon thread to periodically enqueues scheduled tasks
-        # (like the node heartbeat)
-        periodic.run_continuously()
-
         consumer = None

         try:
             queues = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
-            consumer = AWXConsumerPG('dispatcher', TaskWorker(), queues, AutoscalePool(min_workers=4))
+            consumer = AWXConsumerPG('dispatcher', TaskWorker(), queues, AutoscalePool(min_workers=4), schedule=settings.CELERYBEAT_SCHEDULE)
             consumer.run()
         except KeyboardInterrupt:
             logger.debug('Terminating Task Dispatcher')

View File

@@ -1,4 +1,4 @@
-# Generated by Django 4.2 on 2023-06-09 19:51
+# Generated by Django 4.2.3 on 2023-08-02 13:18

 import awx.main.models.notifications
 from django.db import migrations, models
@@ -11,16 +11,6 @@ class Migration(migrations.Migration):
     ]

     operations = [
-        migrations.AlterField(
-            model_name='activitystream',
-            name='deleted_actor',
-            field=models.JSONField(null=True),
-        ),
-        migrations.AlterField(
-            model_name='activitystream',
-            name='setting',
-            field=models.JSONField(blank=True, default=dict),
-        ),
         migrations.AlterField(
             model_name='instancegroup',
             name='policy_instance_list',
@@ -28,31 +18,11 @@ class Migration(migrations.Migration):
                 blank=True, default=list, help_text='List of exact-match Instances that will always be automatically assigned to this group'
             ),
         ),
-        migrations.AlterField(
-            model_name='job',
-            name='survey_passwords',
-            field=models.JSONField(blank=True, default=dict, editable=False),
-        ),
-        migrations.AlterField(
-            model_name='joblaunchconfig',
-            name='char_prompts',
-            field=models.JSONField(blank=True, default=dict),
-        ),
-        migrations.AlterField(
-            model_name='joblaunchconfig',
-            name='survey_passwords',
-            field=models.JSONField(blank=True, default=dict, editable=False),
-        ),
         migrations.AlterField(
             model_name='jobtemplate',
             name='survey_spec',
             field=models.JSONField(blank=True, default=dict),
         ),
-        migrations.AlterField(
-            model_name='notification',
-            name='body',
-            field=models.JSONField(blank=True, default=dict),
-        ),
         migrations.AlterField(
             model_name='notificationtemplate',
             name='messages',
@@ -94,31 +64,6 @@ class Migration(migrations.Migration):
             name='survey_passwords',
             field=models.JSONField(blank=True, default=dict, editable=False),
         ),
-        migrations.AlterField(
-            model_name='unifiedjob',
-            name='job_env',
-            field=models.JSONField(blank=True, default=dict, editable=False),
-        ),
-        migrations.AlterField(
-            model_name='workflowjob',
-            name='char_prompts',
-            field=models.JSONField(blank=True, default=dict),
-        ),
-        migrations.AlterField(
-            model_name='workflowjob',
-            name='survey_passwords',
-            field=models.JSONField(blank=True, default=dict, editable=False),
-        ),
-        migrations.AlterField(
-            model_name='workflowjobnode',
-            name='char_prompts',
-            field=models.JSONField(blank=True, default=dict),
-        ),
-        migrations.AlterField(
-            model_name='workflowjobnode',
-            name='survey_passwords',
-            field=models.JSONField(blank=True, default=dict, editable=False),
-        ),
         migrations.AlterField(
             model_name='workflowjobtemplate',
             name='char_prompts',
@@ -139,4 +84,194 @@ class Migration(migrations.Migration):
             name='survey_passwords',
             field=models.JSONField(blank=True, default=dict, editable=False),
         ),
+        # These are potentially a problem. Move the existing fields
+        # aside while pretending like they've been deleted, then add
+        # in fresh empty fields. Make the old fields nullable where
+        # needed while we are at it, so that new rows don't hit
+        # IntegrityError. We'll do the data migration out-of-band
+        # using a task.
+        migrations.RunSQL(  # Already nullable
+            "ALTER TABLE main_activitystream RENAME deleted_actor TO deleted_actor_old;",
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='activitystream',
+                    name='deleted_actor',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='activitystream',
+            name='deleted_actor',
+            field=models.JSONField(null=True),
+        ),
+        migrations.RunSQL(
+            """
+            ALTER TABLE main_activitystream RENAME setting TO setting_old;
+            ALTER TABLE main_activitystream ALTER COLUMN setting_old DROP NOT NULL;
+            """,
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='activitystream',
+                    name='setting',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='activitystream',
+            name='setting',
+            field=models.JSONField(blank=True, default=dict),
+        ),
+        migrations.RunSQL(
+            """
+            ALTER TABLE main_job RENAME survey_passwords TO survey_passwords_old;
+            ALTER TABLE main_job ALTER COLUMN survey_passwords_old DROP NOT NULL;
+            """,
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='job',
+                    name='survey_passwords',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='job',
+            name='survey_passwords',
+            field=models.JSONField(blank=True, default=dict, editable=False),
+        ),
+        migrations.RunSQL(
+            """
+            ALTER TABLE main_joblaunchconfig RENAME char_prompts TO char_prompts_old;
+            ALTER TABLE main_joblaunchconfig ALTER COLUMN char_prompts_old DROP NOT NULL;
+            """,
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='joblaunchconfig',
+                    name='char_prompts',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='joblaunchconfig',
+            name='char_prompts',
+            field=models.JSONField(blank=True, default=dict),
+        ),
+        migrations.RunSQL(
+            """
+            ALTER TABLE main_joblaunchconfig RENAME survey_passwords TO survey_passwords_old;
+            ALTER TABLE main_joblaunchconfig ALTER COLUMN survey_passwords_old DROP NOT NULL;
+            """,
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='joblaunchconfig',
+                    name='survey_passwords',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='joblaunchconfig',
+            name='survey_passwords',
+            field=models.JSONField(blank=True, default=dict, editable=False),
+        ),
+        migrations.RunSQL(
+            """
+            ALTER TABLE main_notification RENAME body TO body_old;
+            ALTER TABLE main_notification ALTER COLUMN body_old DROP NOT NULL;
+            """,
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='notification',
+                    name='body',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='notification',
+            name='body',
+            field=models.JSONField(blank=True, default=dict),
+        ),
+        migrations.RunSQL(
+            """
+            ALTER TABLE main_unifiedjob RENAME job_env TO job_env_old;
+            ALTER TABLE main_unifiedjob ALTER COLUMN job_env_old DROP NOT NULL;
+            """,
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='unifiedjob',
+                    name='job_env',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='unifiedjob',
+            name='job_env',
+            field=models.JSONField(blank=True, default=dict, editable=False),
+        ),
+        migrations.RunSQL(
+            """
+            ALTER TABLE main_workflowjob RENAME char_prompts TO char_prompts_old;
+            ALTER TABLE main_workflowjob ALTER COLUMN char_prompts_old DROP NOT NULL;
+            """,
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='workflowjob',
+                    name='char_prompts',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='workflowjob',
+            name='char_prompts',
+            field=models.JSONField(blank=True, default=dict),
+        ),
+        migrations.RunSQL(
+            """
+            ALTER TABLE main_workflowjob RENAME survey_passwords TO survey_passwords_old;
+            ALTER TABLE main_workflowjob ALTER COLUMN survey_passwords_old DROP NOT NULL;
+            """,
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='workflowjob',
+                    name='survey_passwords',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='workflowjob',
+            name='survey_passwords',
+            field=models.JSONField(blank=True, default=dict, editable=False),
+        ),
+        migrations.RunSQL(
+            """
+            ALTER TABLE main_workflowjobnode RENAME char_prompts TO char_prompts_old;
+            ALTER TABLE main_workflowjobnode ALTER COLUMN char_prompts_old DROP NOT NULL;
+            """,
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='workflowjobnode',
+                    name='char_prompts',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='workflowjobnode',
+            name='char_prompts',
+            field=models.JSONField(blank=True, default=dict),
+        ),
+        migrations.RunSQL(
+            """
+            ALTER TABLE main_workflowjobnode RENAME survey_passwords TO survey_passwords_old;
+            ALTER TABLE main_workflowjobnode ALTER COLUMN survey_passwords_old DROP NOT NULL;
+            """,
+            state_operations=[
+                migrations.RemoveField(
+                    model_name='workflowjobnode',
+                    name='survey_passwords',
+                ),
+            ],
+        ),
+        migrations.AddField(
+            model_name='workflowjobnode',
+            name='survey_passwords',
+            field=models.JSONField(blank=True, default=dict, editable=False),
+        ),
     ]
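The migration above uses a rename-aside pattern: rename the populated column while telling Django's migration state it was removed, add a fresh empty column, and backfill the data later from a task. A minimal sketch of the same three steps using sqlite (hypothetical table and data, standing in for the Postgres DDL above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE main_job (id INTEGER PRIMARY KEY, survey_passwords TEXT NOT NULL)")
cur.execute(
    "INSERT INTO main_job (id, survey_passwords) VALUES (?, ?)",
    (1, '{"secret": "$encrypted$"}'),
)

# Step 1: move the populated column aside (the ORM state pretends it was deleted).
cur.execute("ALTER TABLE main_job RENAME COLUMN survey_passwords TO survey_passwords_old")
# Step 2: add a fresh, empty replacement column (nullable, so new rows insert cleanly).
cur.execute("ALTER TABLE main_job ADD COLUMN survey_passwords TEXT")
# Step 3 (done out-of-band by a task in the real change): backfill, then drop the old column.
cur.execute(
    "UPDATE main_job SET survey_passwords = survey_passwords_old "
    "WHERE survey_passwords_old IS NOT NULL"
)

row = cur.execute("SELECT survey_passwords FROM main_job WHERE id = 1").fetchone()
print(row[0])  # → {"secret": "$encrypted$"}
```

The point of the split is that the schema change is instant while the expensive data copy happens later, in batches, without holding a long lock during the migration.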

View File

@@ -3,6 +3,7 @@

 # Django
 from django.conf import settings  # noqa
+from django.db import connection
 from django.db.models.signals import pre_delete  # noqa

 # AWX
@@ -99,6 +100,58 @@ User.add_to_class('can_access_with_errors', check_user_access_with_errors)
 User.add_to_class('accessible_objects', user_accessible_objects)

+def convert_jsonfields():
+    if connection.vendor != 'postgresql':
+        return
+    # fmt: off
+    fields = [
+        ('main_activitystream', 'id', (
+            'deleted_actor',
+            'setting',
+        )),
+        ('main_job', 'unifiedjob_ptr_id', (
+            'survey_passwords',
+        )),
+        ('main_joblaunchconfig', 'id', (
+            'char_prompts',
+            'survey_passwords',
+        )),
+        ('main_notification', 'id', (
+            'body',
+        )),
+        ('main_unifiedjob', 'id', (
+            'job_env',
+        )),
+        ('main_workflowjob', 'unifiedjob_ptr_id', (
+            'char_prompts',
+            'survey_passwords',
+        )),
+        ('main_workflowjobnode', 'id', (
+            'char_prompts',
+            'survey_passwords',
+        )),
+    ]
+    # fmt: on
+    with connection.cursor() as cursor:
+        for table, pkfield, columns in fields:
+            # Do the renamed old columns still exist? If so, run the task.
+            old_columns = ','.join(f"'{column}_old'" for column in columns)
+            cursor.execute(
+                f"""
+                select count(1) from information_schema.columns
+                where
+                    table_name = %s and column_name in ({old_columns});
+                """,
+                (table,),
+            )
+            if cursor.fetchone()[0]:
+                from awx.main.tasks.system import migrate_jsonfield

+                migrate_jsonfield.apply_async([table, pkfield, columns])

 def cleanup_created_modified_by(sender, **kwargs):
     # work around a bug in django-polymorphic that doesn't properly
     # handle cascades for reverse foreign keys on the polymorphic base model

View File

@@ -29,8 +29,9 @@ class RunnerCallback:
         self.safe_env = {}
         self.event_ct = 0
         self.model = model
-        self.update_attempts = int(settings.DISPATCHER_DB_DOWNTOWN_TOLLERANCE / 5)
+        self.update_attempts = int(settings.DISPATCHER_DB_DOWNTIME_TOLERANCE / 5)
         self.wrapup_event_dispatched = False
+        self.artifacts_processed = False
         self.extra_update_fields = {}

     def update_model(self, pk, _attempt=0, **updates):
@@ -211,6 +212,9 @@ class RunnerCallback:
         if result_traceback:
             self.delay_update(result_traceback=result_traceback)

+    def artifacts_handler(self, artifact_dir):
+        self.artifacts_processed = True
+

 class RunnerCallbackForProjectUpdate(RunnerCallback):
     def __init__(self, *args, **kwargs):

View File

@@ -9,6 +9,7 @@ from django.conf import settings
 from django.db.models.query import QuerySet
 from django.utils.encoding import smart_str
 from django.utils.timezone import now
+from django.db import OperationalError

 # AWX
 from awx.main.utils.common import log_excess_runtime
@@ -57,6 +58,28 @@ def start_fact_cache(hosts, destination, log_data, timeout=None, inventory_id=No
     return None

+def raw_update_hosts(host_list):
+    Host.objects.bulk_update(host_list, ['ansible_facts', 'ansible_facts_modified'])
+
+
+def update_hosts(host_list, max_tries=5):
+    if not host_list:
+        return
+    for i in range(max_tries):
+        try:
+            raw_update_hosts(host_list)
+        except OperationalError as exc:
+            # Deadlocks can happen if this runs at the same time as another large query
+            # inventory updates and updating last_job_host_summary are candidates for conflict
+            # but these would resolve easily on a retry
+            if i + 1 < max_tries:
+                logger.info(f'OperationalError (suspected deadlock) saving host facts retry {i}, message: {exc}')
+                continue
+            else:
+                raise
+        break
+

 @log_excess_runtime(
     logger,
     debug_cutoff=0.01,
@@ -111,7 +134,6 @@ def finish_fact_cache(hosts, destination, facts_write_time, log_data, job_id=Non
                 system_tracking_logger.info('Facts cleared for inventory {} host {}'.format(smart_str(host.inventory.name), smart_str(host.name)))
                 log_data['cleared_ct'] += 1
         if len(hosts_to_update) > 100:
-            Host.objects.bulk_update(hosts_to_update, ['ansible_facts', 'ansible_facts_modified'])
+            update_hosts(hosts_to_update)
             hosts_to_update = []
-    if hosts_to_update:
-        Host.objects.bulk_update(hosts_to_update, ['ansible_facts', 'ansible_facts_modified'])
+    update_hosts(hosts_to_update)
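The retry loop introduced here is a standard bounded-retry pattern: attempt the write, swallow a suspected deadlock up to N times, and re-raise once retries are exhausted. A self-contained sketch of the same control flow (generic stand-ins, not the AWX code):

```python
class OperationalError(Exception):
    """Stand-in for django.db.OperationalError in this sketch."""

calls = {"n": 0}

def flaky_update():
    # Simulate two deadlocks before the write finally succeeds.
    calls["n"] += 1
    if calls["n"] <= 2:
        raise OperationalError("deadlock detected")
    return "saved"

def update_with_retries(op, max_tries=5):
    for i in range(max_tries):
        try:
            return op()  # success: stop retrying
        except OperationalError:
            if i + 1 >= max_tries:
                raise  # out of retries: surface the error to the caller

result = update_with_retries(flaky_update)
print(result)  # → saved (after two simulated deadlocks)
```

Deadlocks between concurrent bulk writes usually resolve on a fresh transaction, which is why a small fixed retry budget is enough; anything persistent still raises.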

View File

@@ -112,7 +112,7 @@ class BaseTask(object):

     def __init__(self):
         self.cleanup_paths = []
-        self.update_attempts = int(settings.DISPATCHER_DB_DOWNTOWN_TOLLERANCE / 5)
+        self.update_attempts = int(settings.DISPATCHER_DB_DOWNTIME_TOLERANCE / 5)
         self.runner_callback = self.callback_class(model=self.model)

     def update_model(self, pk, _attempt=0, **updates):
@@ -1094,7 +1094,7 @@ class RunJob(SourceControlMixin, BaseTask):
             # actual `run()` call; this _usually_ means something failed in
             # the pre_run_hook method
             return
-        if self.should_use_fact_cache():
+        if self.should_use_fact_cache() and self.runner_callback.artifacts_processed:
             job.log_lifecycle("finish_job_fact_cache")
             finish_fact_cache(
                 job.get_hosts_for_fact_cache(),

View File

@@ -464,6 +464,7 @@ class AWXReceptorJob:
             event_handler=self.task.runner_callback.event_handler,
             finished_callback=self.task.runner_callback.finished_callback,
             status_handler=self.task.runner_callback.status_handler,
+            artifacts_handler=self.task.runner_callback.artifacts_handler,
             **self.runner_params,
         )

View File

@@ -2,6 +2,7 @@
 from collections import namedtuple
 import functools
 import importlib
+import itertools
 import json
 import logging
 import os
@@ -14,7 +15,7 @@ from datetime import datetime

 # Django
 from django.conf import settings
-from django.db import transaction, DatabaseError, IntegrityError
+from django.db import connection, transaction, DatabaseError, IntegrityError
 from django.db.models.fields.related import ForeignKey
 from django.utils.timezone import now, timedelta
 from django.utils.encoding import smart_str
@@ -48,6 +49,7 @@ from awx.main.models import (
     SmartInventoryMembership,
     Job,
     HostMetric,
+    convert_jsonfields,
 )
 from awx.main.constants import ACTIVE_STATES
 from awx.main.dispatch.publish import task
@@ -86,6 +88,11 @@ def dispatch_startup():
     if settings.IS_K8S:
         write_receptor_config()

+    try:
+        convert_jsonfields()
+    except Exception:
+        logger.exception("Failed json field conversion, skipping.")
+
     startup_logger.debug("Syncing Schedules")
     for sch in Schedule.objects.all():
         try:
@@ -129,6 +136,52 @@ def inform_cluster_of_shutdown():
         logger.exception('Encountered problem with normal shutdown signal.')

+@task(queue=get_task_queuename)
+def migrate_jsonfield(table, pkfield, columns):
+    batchsize = 10000
+    with advisory_lock(f'json_migration_{table}', wait=False) as acquired:
+        if not acquired:
+            return
+        from django.db.migrations.executor import MigrationExecutor
+
+        # If Django is currently running migrations, wait until it is done.
+        while True:
+            executor = MigrationExecutor(connection)
+            if not executor.migration_plan(executor.loader.graph.leaf_nodes()):
+                break
+            time.sleep(120)
+
+        logger.warning(f"Migrating json fields for {table}: {', '.join(columns)}")
+
+        with connection.cursor() as cursor:
+            for i in itertools.count(0, batchsize):
+                # Are there even any rows in the table beyond this point?
+                cursor.execute(f"select count(1) from {table} where {pkfield} >= %s limit 1;", (i,))
+                if not cursor.fetchone()[0]:
+                    break
+
+                column_expr = ', '.join(f"{colname} = {colname}_old::jsonb" for colname in columns)
+                # If any of the old columns have non-null values, the data needs to be cast and copied over.
+                empty_expr = ' or '.join(f"{colname}_old is not null" for colname in columns)
+                cursor.execute(  # Only clobber the new fields if there is non-null data in the old ones.
+                    f"""
+                    update {table}
+                    set {column_expr}
+                    where {pkfield} >= %s and {pkfield} < %s
+                        and {empty_expr};
+                    """,
+                    (i, i + batchsize),
+                )
+                rows = cursor.rowcount
+                logger.debug(f"Batch {i} to {i + batchsize} copied on {table}, {rows} rows affected.")
+
+            column_expr = ', '.join(f"DROP COLUMN {column}_old" for column in columns)
+            cursor.execute(f"ALTER TABLE {table} {column_expr};")
+
+        logger.warning(f"Migration of {table} to jsonb is finished.")
+

 @task(queue=get_task_queuename)
 def apply_cluster_membership_policies():
     from awx.main.signals import disable_activity_stream

View File

@@ -3,6 +3,7 @@ import multiprocessing
 import random
 import signal
 import time
+import yaml

 from unittest import mock
 from django.utils.timezone import now as tz_now
@@ -13,6 +14,7 @@ from awx.main.dispatch import reaper
 from awx.main.dispatch.pool import StatefulPoolWorker, WorkerPool, AutoscalePool
 from awx.main.dispatch.publish import task
 from awx.main.dispatch.worker import BaseWorker, TaskWorker
+from awx.main.dispatch.periodic import Scheduler

 '''
@@ -439,3 +441,76 @@ class TestJobReaper(object):
         assert job.started > ref_time
         assert job.status == 'running'
         assert job.job_explanation == ''
+
+
+@pytest.mark.django_db
+class TestScheduler:
+    def test_too_many_schedules_freak_out(self):
+        with pytest.raises(RuntimeError):
+            Scheduler({'job1': {'schedule': datetime.timedelta(seconds=1)}, 'job2': {'schedule': datetime.timedelta(seconds=1)}})
+
+    def test_spread_out(self):
+        scheduler = Scheduler(
+            {
+                'job1': {'schedule': datetime.timedelta(seconds=16)},
+                'job2': {'schedule': datetime.timedelta(seconds=16)},
+                'job3': {'schedule': datetime.timedelta(seconds=16)},
+                'job4': {'schedule': datetime.timedelta(seconds=16)},
+            }
+        )
+        assert [job.offset for job in scheduler.jobs] == [0, 4, 8, 12]
+
+    def test_missed_schedule(self, mocker):
+        scheduler = Scheduler({'job1': {'schedule': datetime.timedelta(seconds=10)}})
+        assert scheduler.jobs[0].missed_runs(time.time() - scheduler.global_start) == 0
+        mocker.patch('awx.main.dispatch.periodic.time.time', return_value=scheduler.global_start + 50)
+        scheduler.get_and_mark_pending()
+        assert scheduler.jobs[0].missed_runs(50) > 1
+
+    def test_advance_schedule(self, mocker):
+        scheduler = Scheduler(
+            {
+                'job1': {'schedule': datetime.timedelta(seconds=30)},
+                'joba': {'schedule': datetime.timedelta(seconds=20)},
+                'jobb': {'schedule': datetime.timedelta(seconds=20)},
+            }
+        )
+        for job in scheduler.jobs:
+            # HACK: the offsets automatically added make this a hard test to write... so remove offsets
+            job.offset = 0.0
+        mocker.patch('awx.main.dispatch.periodic.time.time', return_value=scheduler.global_start + 29)
+        to_run = scheduler.get_and_mark_pending()
+        assert set(job.name for job in to_run) == set(['joba', 'jobb'])
+        mocker.patch('awx.main.dispatch.periodic.time.time', return_value=scheduler.global_start + 39)
+        to_run = scheduler.get_and_mark_pending()
+        assert len(to_run) == 1
+        assert to_run[0].name == 'job1'
+
+    @staticmethod
+    def get_job(scheduler, name):
+        for job in scheduler.jobs:
+            if job.name == name:
+                return job
+
+    def test_scheduler_debug(self, mocker):
+        scheduler = Scheduler(
+            {
+                'joba': {'schedule': datetime.timedelta(seconds=20)},
+                'jobb': {'schedule': datetime.timedelta(seconds=50)},
+                'jobc': {'schedule': datetime.timedelta(seconds=500)},
+                'jobd': {'schedule': datetime.timedelta(seconds=20)},
+            }
+        )
+        rel_time = 119.9  # slightly under the 6th 20-second bin, to avoid offset problems
+        current_time = scheduler.global_start + rel_time
+        mocker.patch('awx.main.dispatch.periodic.time.time', return_value=current_time - 1.0e-8)
+        self.get_job(scheduler, 'jobb').mark_run(rel_time)
+        self.get_job(scheduler, 'jobd').mark_run(rel_time - 20.0)
+        output = scheduler.debug()
+        data = yaml.safe_load(output)
+        assert data['schedule_list']['jobc']['last_run_seconds_ago'] is None
+        assert data['schedule_list']['joba']['missed_runs'] == 4
+        assert data['schedule_list']['jobd']['missed_runs'] == 3
+        assert data['schedule_list']['jobd']['completed_runs'] == 1
+        assert data['schedule_list']['jobb']['next_run_in_seconds'] > 25.0
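The `test_spread_out` case above pins down one property of the new scheduler: jobs sharing a period get offsets spread evenly across that period so they don't all fire in the same tick. A simplified stand-in for that offset assignment (not the actual `awx.main.dispatch.periodic.Scheduler` implementation):

```python
import datetime

def spread_offsets(schedule):
    """Assign each same-period job an evenly spaced start offset."""
    jobs = sorted(schedule.items())  # deterministic ordering by job name
    period = jobs[0][1]['schedule'].total_seconds()
    spacing = period / len(jobs)
    return {name: i * spacing for i, (name, _) in enumerate(jobs)}

offsets = spread_offsets({
    'job1': {'schedule': datetime.timedelta(seconds=16)},
    'job2': {'schedule': datetime.timedelta(seconds=16)},
    'job3': {'schedule': datetime.timedelta(seconds=16)},
    'job4': {'schedule': datetime.timedelta(seconds=16)},
})
print(sorted(offsets.values()))  # → [0.0, 4.0, 8.0, 12.0]
```

Four 16-second jobs land at offsets 0, 4, 8, and 12, matching the `[0, 4, 8, 12]` assertion in the test.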

View File

@@ -6,6 +6,7 @@ import json
 from awx.main.models import (
     Job,
     Instance,
+    Host,
     JobHostSummary,
     InventoryUpdate,
     InventorySource,
@@ -18,6 +19,9 @@ from awx.main.models import (
     ExecutionEnvironment,
 )
 from awx.main.tasks.system import cluster_node_heartbeat
+from awx.main.tasks.facts import update_hosts
+
+from django.db import OperationalError
 from django.test.utils import override_settings
@@ -112,6 +116,51 @@ def test_job_notification_host_data(inventory, machine_credential, project, job_
     }

+@pytest.mark.django_db
+class TestAnsibleFactsSave:
+    current_call = 0
+
+    def test_update_hosts_deleted_host(self, inventory):
+        hosts = [Host.objects.create(inventory=inventory, name=f'foo{i}') for i in range(3)]
+        for host in hosts:
+            host.ansible_facts = {'foo': 'bar'}
+        last_pk = hosts[-1].pk
+        assert inventory.hosts.count() == 3
+        Host.objects.get(pk=last_pk).delete()
+        assert inventory.hosts.count() == 2
+        update_hosts(hosts)
+        assert inventory.hosts.count() == 2
+        for host in inventory.hosts.all():
+            host.refresh_from_db()
+            assert host.ansible_facts == {'foo': 'bar'}
+
+    def test_update_hosts_forever_deadlock(self, inventory, mocker):
+        hosts = [Host.objects.create(inventory=inventory, name=f'foo{i}') for i in range(3)]
+        for host in hosts:
+            host.ansible_facts = {'foo': 'bar'}
+        db_mock = mocker.patch('awx.main.tasks.facts.Host.objects.bulk_update')
+        db_mock.side_effect = OperationalError('deadlock detected')
+        with pytest.raises(OperationalError):
+            update_hosts(hosts)
+
+    def fake_bulk_update(self, host_list):
+        if self.current_call > 2:
+            return Host.objects.bulk_update(host_list, ['ansible_facts', 'ansible_facts_modified'])
+        self.current_call += 1
+        raise OperationalError('deadlock detected')
+
+    def test_update_hosts_resolved_deadlock(self, inventory, mocker):
+        hosts = [Host.objects.create(inventory=inventory, name=f'foo{i}') for i in range(3)]
+        for host in hosts:
+            host.ansible_facts = {'foo': 'bar'}
+        self.current_call = 0
+        mocker.patch('awx.main.tasks.facts.raw_update_hosts', new=self.fake_bulk_update)
+        update_hosts(hosts)
+        for host in inventory.hosts.all():
+            host.refresh_from_db()
+            assert host.ansible_facts == {'foo': 'bar'}

 @pytest.mark.django_db
 class TestLaunchConfig:
     def test_null_creation_from_prompts(self):

View File

@@ -283,6 +283,7 @@ class LogstashFormatter(LogstashFormatterBase):
             message.update(self.get_debug_fields(record))

         if settings.LOG_AGGREGATOR_TYPE == 'splunk':
-            # splunk messages must have a top level "event" key
-            message = {'event': message}
+            # splunk messages must have a top level "event" key when using the /services/collector/event receiver.
+            # The event receiver wont scan an event for a timestamp field therefore a time field must also be supplied containing epoch timestamp
+            message = {'time': record.created, 'event': message}

         return self.serialize(message)
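The resulting wire shape for the Splunk HEC event receiver is a JSON object with a top-level epoch `time` and the log record nested under `event`. A small illustration of building that payload (hypothetical record contents; `logging.LogRecord.created` is an epoch float):

```python
import json
import time

record_created = time.time()  # stand-in for logging.LogRecord.created
message = {'msg': 'job finished', 'level': 'INFO'}

# Wrap for Splunk HEC: the event receiver does not scan the event body for a
# timestamp, so the epoch time is supplied as an explicit top-level field.
payload = {'time': record_created, 'event': message}
serialized = json.dumps(payload)

decoded = json.loads(serialized)
print(sorted(decoded))  # → ['event', 'time']
```

Without the `time` field, Splunk stamps events with ingest time rather than the time the record was emitted.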

View File

@@ -97,8 +97,6 @@ class SpecialInventoryHandler(logging.Handler):
self.event_handler(dispatch_data) self.event_handler(dispatch_data)
ColorHandler = logging.StreamHandler
if settings.COLOR_LOGS is True: if settings.COLOR_LOGS is True:
try: try:
from logutils.colorize import ColorizingStreamHandler from logutils.colorize import ColorizingStreamHandler
@@ -133,3 +131,5 @@ if settings.COLOR_LOGS is True:
except ImportError: except ImportError:
# logutils is only used for colored logs in the dev environment # logutils is only used for colored logs in the dev environment
pass pass
else:
ColorHandler = logging.StreamHandler

View File

@@ -175,7 +175,12 @@ class Licenser(object):
                 license.setdefault('pool_id', sub['pool']['id'])
                 license.setdefault('product_name', sub['pool']['productName'])
                 license.setdefault('valid_key', True)
-                license.setdefault('license_type', 'enterprise')
+                if sub['pool']['productId'].startswith('S'):
+                    license.setdefault('trial', True)
+                    license.setdefault('license_type', 'trial')
+                else:
+                    license.setdefault('trial', False)
+                    license.setdefault('license_type', 'enterprise')
                 license.setdefault('satellite', False)
                 # Use the nearest end date
                 endDate = parse_date(sub['endDate'])
@@ -287,7 +292,7 @@ class Licenser(object):
             license['productId'] = sub['product_id']
             license['quantity'] = int(sub['quantity'])
             license['support_level'] = sub['support_level']
-            license['usage'] = sub['usage']
+            license['usage'] = sub.get('usage')
             license['subscription_name'] = sub['name']
             license['subscriptionId'] = sub['subscription_id']
             license['accountNumber'] = sub['account_number']
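The branch added above keys trial detection off the subscription SKU prefix. Isolated as a pure function (an assumption drawn only from this diff: product IDs starting with 'S' denote trial subscriptions, everything else enterprise):

```python
def classify_subscription(product_id):
    """Map a subscription productId to trial/enterprise flags, per the change above."""
    if product_id.startswith('S'):
        return {'trial': True, 'license_type': 'trial'}
    return {'trial': False, 'license_type': 'enterprise'}

# Hypothetical SKUs, chosen only to exercise both branches.
trial = classify_subscription('SER0123')
enterprise = classify_subscription('MCT0123')
print(trial['license_type'], enterprise['license_type'])  # → trial enterprise
```

The companion change (`sub['usage']` to `sub.get('usage')`) is the usual defensive lookup: subscriptions without a `usage` key now yield `None` instead of raising `KeyError`.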

View File

@@ -210,7 +210,7 @@ JOB_EVENT_WORKERS = 4

 # The number of seconds to buffer callback receiver bulk
 # writes in memory before flushing via JobEvent.objects.bulk_create()
-JOB_EVENT_BUFFER_SECONDS = 0.1
+JOB_EVENT_BUFFER_SECONDS = 1

 # The interval at which callback receiver statistics should be
 # recorded
@@ -453,7 +453,7 @@ RECEPTOR_SERVICE_ADVERTISEMENT_PERIOD = 60  # https://github.com/ansible/recepto
 EXECUTION_NODE_REMEDIATION_CHECKS = 60 * 30  # once every 30 minutes check if an execution node errors have been resolved

 # Amount of time dispatcher will try to reconnect to database for jobs and consuming new work
-DISPATCHER_DB_DOWNTOWN_TOLLERANCE = 40
+DISPATCHER_DB_DOWNTIME_TOLERANCE = 40

 BROKER_URL = 'unix:///var/run/redis/redis.sock'

 CELERYBEAT_SCHEDULE = {

View File

@@ -28,8 +28,8 @@ SHELL_PLUS_PRINT_SQL = False

 # show colored logs in the dev environment
 # to disable this, set `COLOR_LOGS = False` in awx/settings/local_settings.py
-LOGGING['handlers']['console']['()'] = 'awx.main.utils.handlers.ColorHandler'  # noqa
 COLOR_LOGS = True
+LOGGING['handlers']['console']['()'] = 'awx.main.utils.handlers.ColorHandler'  # noqa

 ALLOWED_HOSTS = ['*']

View File

@@ -6,5 +6,20 @@ class ConstructedInventories extends InstanceGroupsMixin(Base) {
     super(http);
     this.baseUrl = 'api/v2/constructed_inventories/';
   }
+
+  async readConstructedInventoryOptions(id, method) {
+    const {
+      data: { actions },
+    } = await this.http.options(`${this.baseUrl}${id}/`);
+    if (actions[method]) {
+      return actions[method];
+    }
+    throw new Error(
+      `You have insufficient access to this Constructed Inventory.
+      Please contact your system administrator if there is an issue with your access.`
+    );
+  }
 }

 export default ConstructedInventories;

View File

@@ -0,0 +1,51 @@
+import ConstructedInventories from './ConstructedInventories';
+
+describe('ConstructedInventoriesAPI', () => {
+  const constructedInventoryId = 1;
+  const constructedInventoryMethod = 'PUT';
+  let ConstructedInventoriesAPI;
+  let mockHttp;
+
+  beforeEach(() => {
+    const optionsPromise = () =>
+      Promise.resolve({
+        data: {
+          actions: {
+            PUT: {},
+          },
+        },
+      });
+    mockHttp = {
+      options: jest.fn(optionsPromise),
+    };
+    ConstructedInventoriesAPI = new ConstructedInventories(mockHttp);
+  });
+
+  afterEach(() => {
+    jest.resetAllMocks();
+  });
+
+  test('readConstructedInventoryOptions calls options with the expected params', async () => {
+    await ConstructedInventoriesAPI.readConstructedInventoryOptions(
+      constructedInventoryId,
+      constructedInventoryMethod
+    );
+    expect(mockHttp.options).toHaveBeenCalledTimes(1);
+    expect(mockHttp.options).toHaveBeenCalledWith(
+      `api/v2/constructed_inventories/${constructedInventoryId}/`
+    );
+  });
+
+  test('readConstructedInventory should throw an error if action method is missing', async () => {
+    try {
+      await ConstructedInventoriesAPI.readConstructedInventoryOptions(
+        constructedInventoryId,
+        'POST'
+      );
+    } catch (error) {
+      expect(error.message).toContain(
+        'You have insufficient access to this Constructed Inventory.'
+      );
+    }
+  });
+});

View File

@@ -77,7 +77,7 @@ function PromptModalForm({
   }
   if (launchConfig.ask_labels_on_launch) {
-    const { labelIds } = createNewLabels(
+    const { labelIds } = await createNewLabels(
       values.labels,
       resource.organization
     );

View File

@@ -1,10 +1,10 @@
 import React, { useCallback, useEffect, useState } from 'react';
 import { useLocation } from 'react-router-dom';
 import { t } from '@lingui/macro';
-import { RolesAPI, TeamsAPI, UsersAPI, OrganizationsAPI } from 'api';
+import { RolesAPI, TeamsAPI, UsersAPI } from 'api';
 import { getQSConfig, parseQueryString } from 'util/qs';
 import useRequest, { useDeleteItems } from 'hooks/useRequest';
-import { useUserProfile, useConfig } from 'contexts/Config';
+import { useUserProfile } from 'contexts/Config';
 import AddResourceRole from '../AddRole/AddResourceRole';
 import AlertModal from '../AlertModal';
 import DataListToolbar from '../DataListToolbar';
@@ -25,8 +25,7 @@ const QS_CONFIG = getQSConfig('access', {
 });
 function ResourceAccessList({ apiModel, resource }) {
-  const { isSuperUser, isOrgAdmin } = useUserProfile();
-  const { me } = useConfig();
+  const { isSuperUser } = useUserProfile();
   const [submitError, setSubmitError] = useState(null);
   const [deletionRecord, setDeletionRecord] = useState(null);
   const [deletionRole, setDeletionRole] = useState(null);
@@ -34,42 +33,15 @@ function ResourceAccessList({ apiModel, resource }) {
   const [showDeleteModal, setShowDeleteModal] = useState(false);
   const location = useLocation();
-  const {
-    isLoading: isFetchingOrgAdmins,
-    error: errorFetchingOrgAdmins,
-    request: fetchOrgAdmins,
-    result: { isCredentialOrgAdmin },
-  } = useRequest(
-    useCallback(async () => {
-      if (
-        isSuperUser ||
-        resource.type !== 'credential' ||
-        !isOrgAdmin ||
-        !resource?.organization
-      ) {
-        return false;
-      }
-      const {
-        data: { count },
-      } = await OrganizationsAPI.readAdmins(resource.organization, {
-        id: me.id,
-      });
-      return { isCredentialOrgAdmin: !!count };
-    }, [me.id, isOrgAdmin, isSuperUser, resource.type, resource.organization]),
-    {
-      isCredentialOrgAdmin: false,
-    }
-  );
-  useEffect(() => {
-    fetchOrgAdmins();
-  }, [fetchOrgAdmins]);
   let canAddAdditionalControls = false;
   if (isSuperUser) {
     canAddAdditionalControls = true;
   }
-  if (resource.type === 'credential' && isOrgAdmin && isCredentialOrgAdmin) {
+  if (
+    resource.type === 'credential' &&
+    resource?.summary_fields?.user_capabilities?.edit &&
+    resource?.organization
+  ) {
     canAddAdditionalControls = true;
   }
   if (resource.type !== 'credential') {
@@ -195,8 +167,8 @@ function ResourceAccessList({ apiModel, resource }) {
   return (
     <>
       <PaginatedTable
-        error={contentError || errorFetchingOrgAdmins}
-        hasContentLoading={isLoading || isDeleteLoading || isFetchingOrgAdmins}
+        error={contentError}
+        hasContentLoading={isLoading || isDeleteLoading}
         items={accessRecords}
         itemCount={itemCount}
         pluralizedItemName={t`Roles`}

View File

@@ -463,7 +463,7 @@ describe('<ResourceAccessList />', () => {
     expect(wrapper.find('ToolbarAddButton').length).toEqual(1);
   });
-  test('should not show add button for non system admin & non org admin', async () => {
+  test('should not show add button for a user without edit permissions on the credential', async () => {
     useUserProfile.mockImplementation(() => {
       return {
         isSuperUser: false,
@@ -476,7 +476,21 @@
     let wrapper;
     await act(async () => {
       wrapper = mountWithContexts(
-        <ResourceAccessList resource={credential} apiModel={CredentialsAPI} />,
+        <ResourceAccessList
+          resource={{
+            ...credential,
+            summary_fields: {
+              ...credential.summary_fields,
+              user_capabilities: {
+                edit: false,
+                delete: false,
+                copy: false,
+                use: false,
+              },
+            },
+          }}
+          apiModel={CredentialsAPI}
+        />,
         { context: { router: { credentialHistory } } }
       );
     });

View File

@@ -47,7 +47,7 @@ export default function StatusLabel({ status, tooltipContent = '', children }) {
     unreachable: t`Unreachable`,
     running: t`Running`,
     pending: t`Pending`,
-    skipped: t`Skipped'`,
+    skipped: t`Skipped`,
     timedOut: t`Timed out`,
     waiting: t`Waiting`,
     disabled: t`Disabled`,

View File

@@ -8722,8 +8722,8 @@ msgid "Skipped"
 msgstr "Skipped"
 #: components/StatusLabel/StatusLabel.js:50
-msgid "Skipped'"
-msgstr "Skipped'"
+msgid "Skipped"
+msgstr "Skipped"
 #: components/NotificationList/NotificationList.js:200
 #: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141

View File

@@ -8190,8 +8190,8 @@ msgid "Skipped"
 msgstr "Omitido"
 #: components/StatusLabel/StatusLabel.js:50
-msgid "Skipped'"
-msgstr "Omitido'"
+msgid "Skipped"
+msgstr "Omitido"
 #: components/NotificationList/NotificationList.js:200
 #: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141

View File

@@ -8078,7 +8078,7 @@ msgid "Skipped"
 msgstr "Ignoré"
 #: components/StatusLabel/StatusLabel.js:50
-msgid "Skipped'"
+msgid "Skipped"
 msgstr "Ignoré"
 #: components/NotificationList/NotificationList.js:200

View File

@@ -8118,8 +8118,8 @@ msgid "Skipped"
 msgstr "スキップ済"
 #: components/StatusLabel/StatusLabel.js:50
-msgid "Skipped'"
-msgstr "スキップ済'"
+msgid "Skipped"
+msgstr "スキップ済"
 #: components/NotificationList/NotificationList.js:200
 #: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141

View File

@@ -8072,8 +8072,8 @@ msgid "Skipped"
 msgstr "건너뜀"
 #: components/StatusLabel/StatusLabel.js:50
-msgid "Skipped'"
-msgstr "건너뜀'"
+msgid "Skipped"
+msgstr "건너뜀"
 #: components/NotificationList/NotificationList.js:200
 #: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141

View File

@@ -8096,8 +8096,8 @@ msgid "Skipped"
 msgstr "Overgeslagen"
 #: components/StatusLabel/StatusLabel.js:50
-msgid "Skipped'"
-msgstr "Overgeslagen'"
+msgid "Skipped"
+msgstr "Overgeslagen"
 #: components/NotificationList/NotificationList.js:200
 #: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141

View File

@@ -8072,8 +8072,8 @@ msgid "Skipped"
 msgstr "跳过"
 #: components/StatusLabel/StatusLabel.js:50
-msgid "Skipped'"
-msgstr "跳过'"
+msgid "Skipped"
+msgstr "跳过"
 #: components/NotificationList/NotificationList.js:200
 #: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141

View File

@@ -8503,7 +8503,7 @@ msgid "Skipped"
 msgstr ""
 #: components/StatusLabel/StatusLabel.js:50
-msgid "Skipped'"
+msgid "Skipped"
 msgstr ""
 #: components/NotificationList/NotificationList.js:200

View File

@@ -1,14 +1,43 @@
-import React, { useState } from 'react';
+import React, { useCallback, useEffect, useState } from 'react';
 import { useHistory } from 'react-router-dom';
 import { Card, PageSection } from '@patternfly/react-core';
 import { ConstructedInventoriesAPI, InventoriesAPI } from 'api';
+import useRequest from 'hooks/useRequest';
 import { CardBody } from 'components/Card';
+import ContentError from 'components/ContentError';
+import ContentLoading from 'components/ContentLoading';
 import ConstructedInventoryForm from '../shared/ConstructedInventoryForm';
 function ConstructedInventoryAdd() {
   const history = useHistory();
   const [submitError, setSubmitError] = useState(null);
+  const {
+    isLoading: isLoadingOptions,
+    error: optionsError,
+    request: fetchOptions,
+    result: options,
+  } = useRequest(
+    useCallback(async () => {
+      const res = await ConstructedInventoriesAPI.readOptions();
+      const { data } = res;
+      return data.actions.POST;
+    }, []),
+    null
+  );
+  useEffect(() => {
+    fetchOptions();
+  }, [fetchOptions]);
+  if (isLoadingOptions || (!options && !optionsError)) {
+    return <ContentLoading />;
+  }
+  if (optionsError) {
+    return <ContentError error={optionsError} />;
+  }
   const handleCancel = () => {
     history.push('/inventories');
   };
@@ -48,6 +77,7 @@ function ConstructedInventoryAdd() {
         onCancel={handleCancel}
         onSubmit={handleSubmit}
         submitError={submitError}
+        options={options}
       />
     </CardBody>
   </Card>

View File

@@ -55,6 +55,7 @@ describe('<ConstructedInventoryAdd />', () => {
         context: { router: { history } },
       });
     });
+    await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
   });
   afterEach(() => {

View File

@@ -20,6 +20,27 @@ function ConstructedInventoryEdit({ inventory }) {
   const detailsUrl = `/inventories/constructed_inventory/${inventory.id}/details`;
   const constructedInventoryId = inventory.id;
+  const {
+    isLoading: isLoadingOptions,
+    error: optionsError,
+    request: fetchOptions,
+    result: options,
+  } = useRequest(
+    useCallback(
+      () =>
+        ConstructedInventoriesAPI.readConstructedInventoryOptions(
+          constructedInventoryId,
+          'PUT'
+        ),
+      [constructedInventoryId]
+    ),
+    null
+  );
+  useEffect(() => {
+    fetchOptions();
+  }, [fetchOptions]);
   const {
     result: { initialInstanceGroups, initialInputInventories },
     request: fetchedRelatedData,
@@ -44,6 +65,7 @@ function ConstructedInventoryEdit({ inventory }) {
       isLoading: true,
     }
   );
   useEffect(() => {
     fetchedRelatedData();
   }, [fetchedRelatedData]);
@@ -99,12 +121,12 @@ function ConstructedInventoryEdit({ inventory }) {
   const handleCancel = () => history.push(detailsUrl);
-  if (isLoading) {
-    return <ContentLoading />;
-  }
-  if (contentError) {
-    return <ContentError error={contentError} />;
-  }
+  if (contentError || optionsError) {
+    return <ContentError error={contentError || optionsError} />;
+  }
+  if (isLoading || isLoadingOptions || (!options && !optionsError)) {
+    return <ContentLoading />;
+  }
   return (
@@ -116,6 +138,7 @@ function ConstructedInventoryEdit({ inventory }) {
           constructedInventory={inventory}
           instanceGroups={initialInstanceGroups}
           inputInventories={initialInputInventories}
+          options={options}
         />
       </CardBody>
   );

View File

@@ -51,27 +51,22 @@ describe('<ConstructedInventoryEdit />', () => {
   };
   beforeEach(async () => {
-    ConstructedInventoriesAPI.readOptions.mockResolvedValue({
-      data: {
-        related: {},
-        actions: {
-          POST: {
-            limit: {
-              label: 'Limit',
-              help_text: '',
-            },
-            update_cache_timeout: {
-              label: 'Update cache timeout',
-              help_text: 'help',
-            },
-            verbosity: {
-              label: 'Verbosity',
-              help_text: '',
-            },
-          },
-        },
-      },
-    });
+    ConstructedInventoriesAPI.readConstructedInventoryOptions.mockResolvedValue(
+      {
+        limit: {
+          label: 'Limit',
+          help_text: '',
+        },
+        update_cache_timeout: {
+          label: 'Update cache timeout',
+          help_text: 'help',
+        },
+        verbosity: {
+          label: 'Verbosity',
+          help_text: '',
+        },
+      }
+    );
     InventoriesAPI.readInstanceGroups.mockResolvedValue({
       data: {
         results: associatedInstanceGroups,
@@ -169,6 +164,21 @@
     expect(wrapper.find('ContentError').length).toBe(1);
   });
+  test('should throw content error if user has insufficient options permissions', async () => {
+    expect(wrapper.find('ContentError').length).toBe(0);
+    ConstructedInventoriesAPI.readConstructedInventoryOptions.mockImplementationOnce(
+      () => Promise.reject(new Error())
+    );
+    await act(async () => {
+      wrapper = mountWithContexts(
+        <ConstructedInventoryEdit inventory={mockInv} />
+      );
+    });
+    await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
+    expect(wrapper.find('ContentError').length).toBe(1);
+  });
   test('unsuccessful form submission should show an error message', async () => {
     const error = {
       response: {

View File

@@ -1,14 +1,10 @@
-import React, { useCallback, useEffect } from 'react';
+import React, { useCallback } from 'react';
 import { Formik, useField, useFormikContext } from 'formik';
 import { func, shape } from 'prop-types';
 import { t } from '@lingui/macro';
-import { ConstructedInventoriesAPI } from 'api';
 import { minMaxValue, required } from 'util/validators';
-import useRequest from 'hooks/useRequest';
 import { Form, FormGroup } from '@patternfly/react-core';
 import { VariablesField } from 'components/CodeEditor';
-import ContentError from 'components/ContentError';
-import ContentLoading from 'components/ContentLoading';
 import FormActionGroup from 'components/FormActionGroup/FormActionGroup';
 import FormField, { FormSubmitError } from 'components/FormField';
 import { FormFullWidthLayout, FormColumnLayout } from 'components/FormLayout';
@@ -165,6 +161,7 @@ function ConstructedInventoryForm({
   onCancel,
   onSubmit,
   submitError,
+  options,
 }) {
   const initialValues = {
     kind: 'constructed',
@@ -179,32 +176,6 @@
     source_vars: constructedInventory?.source_vars || '---',
   };
-  const {
-    isLoading,
-    error,
-    request: fetchOptions,
-    result: options,
-  } = useRequest(
-    useCallback(async () => {
-      const res = await ConstructedInventoriesAPI.readOptions();
-      const { data } = res;
-      return data.actions.POST;
-    }, []),
-    null
-  );
-  useEffect(() => {
-    fetchOptions();
-  }, [fetchOptions]);
-  if (isLoading || (!options && !error)) {
-    return <ContentLoading />;
-  }
-  if (error) {
-    return <ContentError error={error} />;
-  }
   return (
     <Formik initialValues={initialValues} onSubmit={onSubmit}>
       {(formik) => (

View File

@@ -1,6 +1,5 @@
 import React from 'react';
 import { act } from 'react-dom/test-utils';
-import { ConstructedInventoriesAPI } from 'api';
 import {
   mountWithContexts,
   waitForElement,
@@ -19,38 +18,35 @@ const mockFormValues = {
   inputInventories: [{ id: 100, name: 'East' }],
 };
+const options = {
+  limit: {
+    label: 'Limit',
+    help_text: '',
+  },
+  update_cache_timeout: {
+    label: 'Update cache timeout',
+    help_text: 'help',
+  },
+  verbosity: {
+    label: 'Verbosity',
+    help_text: '',
+  },
+};
 describe('<ConstructedInventoryForm />', () => {
   let wrapper;
   const onSubmit = jest.fn();
   beforeEach(async () => {
-    ConstructedInventoriesAPI.readOptions.mockResolvedValue({
-      data: {
-        related: {},
-        actions: {
-          POST: {
-            limit: {
-              label: 'Limit',
-              help_text: '',
-            },
-            update_cache_timeout: {
-              label: 'Update cache timeout',
-              help_text: 'help',
-            },
-            verbosity: {
-              label: 'Verbosity',
-              help_text: '',
-            },
-          },
-        },
-      },
-    });
     await act(async () => {
       wrapper = mountWithContexts(
-        <ConstructedInventoryForm onCancel={() => {}} onSubmit={onSubmit} />
+        <ConstructedInventoryForm
+          onCancel={() => {}}
+          onSubmit={onSubmit}
+          options={options}
+        />
       );
     });
-    await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
   });
   afterEach(() => {
@@ -104,20 +100,4 @@ describe('<ConstructedInventoryForm />', () => {
       'The plugin parameter is required.'
     );
   });
-  test('should throw content error when option request fails', async () => {
-    let newWrapper;
-    ConstructedInventoriesAPI.readOptions.mockImplementationOnce(() =>
-      Promise.reject(new Error())
-    );
-    await act(async () => {
-      newWrapper = mountWithContexts(
-        <ConstructedInventoryForm onCancel={() => {}} onSubmit={() => {}} />
-      );
-    });
-    expect(newWrapper.find('ContentError').length).toBe(0);
-    newWrapper.update();
-    expect(newWrapper.find('ContentError').length).toBe(1);
-    jest.clearAllMocks();
-  });
 });

View File

@@ -5,10 +5,13 @@ UI_NEXT_DIR := $(patsubst %/,%,$(dir $(lastword $(MAKEFILE_LIST))))
 # NOTE: you will not be able to build within the docker-compose development environment if you use this option
 UI_NEXT_LOCAL ?=
-# Git repo and branch to the UI_NEXT repo
+## Git repo and branch to the UI_NEXT repo
 UI_NEXT_GIT_REPO ?= https://github.com/ansible/ansible-ui.git
 UI_NEXT_GIT_BRANCH ?= main
+## Product name to display on the UI used in UI_NEXT build process
+PRODUCT ?= AWX
 .PHONY: ui-next
 ## Default build target of ui-next Makefile, builds ui-next/build
 ui-next: ui-next/build
@@ -32,7 +35,8 @@ ui-next/src/build: $(UI_NEXT_DIR)/src/build/awx
 ## True target for ui-next/src/build. Build ui_next from source.
 $(UI_NEXT_DIR)/src/build/awx: $(UI_NEXT_DIR)/src $(UI_NEXT_DIR)/src/node_modules/webpack
 	@echo "=== Building ui_next ==="
-	@cd $(UI_NEXT_DIR)/src && npm run build:awx
+	@cd $(UI_NEXT_DIR)/src && PRODUCT="$(PRODUCT)" PUBLIC_PATH=/static/awx/ npm run build:awx
+	@mv $(UI_NEXT_DIR)/src/build/awx/index.html $(UI_NEXT_DIR)/src/build/awx/index_awx.html
 .PHONY: ui-next/src
 ## Clone or link src of UI_NEXT to ui-next/src, will re-clone/link/update if necessary.

View File

@@ -50,6 +50,11 @@ options:
       - If value not set, will try environment variable C(CONTROLLER_VERIFY_SSL) and then config files
     type: bool
     aliases: [ tower_verify_ssl ]
+  request_timeout:
+    description:
+      - Specify the timeout Ansible should use in requests to the controller host.
+      - Defaults to 10s, but this is handled by the shared module_utils code
+    type: float
   controller_config_file:
     description:
       - Path to the controller config file.

View File

@@ -68,6 +68,14 @@ options:
       why: Collection name change
       alternatives: 'CONTROLLER_VERIFY_SSL'
     aliases: [ validate_certs ]
+  request_timeout:
+    description:
+      - Specify the timeout Ansible should use in requests to the controller host.
+      - Defaults to 10 seconds
+      - This will not work with the export or import modules.
+    type: float
+    env:
+      - name: CONTROLLER_REQUEST_TIMEOUT
 notes:
   - If no I(config_file) is provided we will attempt to use the tower-cli library

View File

@@ -214,7 +214,7 @@ class LookupModule(LookupBase):
                 if not isinstance(rule[field_name], list):
                     rule[field_name] = rule[field_name].split(',')
                 for value in rule[field_name]:
-                    value = value.strip()
+                    value = value.strip().lower()
                     if value not in valid_list:
                         raise AnsibleError('In rule {0} {1} must only contain values in {2}'.format(rule_number, field_name, ', '.join(valid_list.keys())))
                     return_values.append(valid_list[value])
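The `.strip().lower()` change above is what makes weekday names case-insensitive before the lookup against the plugin's mapping. A minimal standalone sketch of that normalization (the `valid_list` mapping here is a hypothetical stand-in, not the plugin's actual table):

```python
# Stand-in for the plugin's weekday mapping (hypothetical values).
valid_list = {
    'monday': 'MO', 'tuesday': 'TU', 'wednesday': 'WE',
    'thursday': 'TH', 'friday': 'FR', 'saturday': 'SA', 'sunday': 'SU',
}

def normalize_weekdays(raw):
    """Split a comma-separated string, trim whitespace, and lower-case each
    entry so 'monday', ' Tuesday', and 'WEDNESDAY' all hit the same key."""
    values = raw if isinstance(raw, list) else raw.split(',')
    result = []
    for value in values:
        value = value.strip().lower()
        if value not in valid_list:
            raise ValueError(
                '{0!r} must be one of {1}'.format(value, ', '.join(valid_list))
            )
        result.append(valid_list[value])
    return result

print(normalize_weekdays('monday, Tuesday, WEDNESDAY'))  # ['MO', 'TU', 'WE']
```

Without the `.lower()`, only exact lowercase spellings would match, which is the bug the schedule rruleset fix addresses.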

View File

@@ -51,6 +51,7 @@ class ControllerModule(AnsibleModule):
         controller_username=dict(required=False, aliases=['tower_username'], fallback=(env_fallback, ['CONTROLLER_USERNAME', 'TOWER_USERNAME'])),
         controller_password=dict(no_log=True, aliases=['tower_password'], required=False, fallback=(env_fallback, ['CONTROLLER_PASSWORD', 'TOWER_PASSWORD'])),
         validate_certs=dict(type='bool', aliases=['tower_verify_ssl'], required=False, fallback=(env_fallback, ['CONTROLLER_VERIFY_SSL', 'TOWER_VERIFY_SSL'])),
+        request_timeout=dict(type='float', required=False, fallback=(env_fallback, ['CONTROLLER_REQUEST_TIMEOUT'])),
         controller_oauthtoken=dict(
             type='raw', no_log=True, aliases=['tower_oauthtoken'], required=False, fallback=(env_fallback, ['CONTROLLER_OAUTH_TOKEN', 'TOWER_OAUTH_TOKEN'])
         ),
@@ -63,12 +64,14 @@ class ControllerModule(AnsibleModule):
         'username': 'controller_username',
         'password': 'controller_password',
         'verify_ssl': 'validate_certs',
+        'request_timeout': 'request_timeout',
         'oauth_token': 'controller_oauthtoken',
     }
     host = '127.0.0.1'
     username = None
     password = None
     verify_ssl = True
+    request_timeout = 10
     oauth_token = None
     oauth_token_id = None
     authenticated = False
@@ -304,7 +307,7 @@ class ControllerAPIModule(ControllerModule):
         kwargs['supports_check_mode'] = True
         super().__init__(argument_spec=argument_spec, direct_params=direct_params, error_callback=error_callback, warn_callback=warn_callback, **kwargs)
-        self.session = Request(cookies=CookieJar(), validate_certs=self.verify_ssl)
+        self.session = Request(cookies=CookieJar(), timeout=self.request_timeout, validate_certs=self.verify_ssl)
         if 'update_secrets' in self.params:
             self.update_secrets = self.params.pop('update_secrets')
@@ -500,7 +503,14 @@ class ControllerAPIModule(ControllerModule):
         data = dumps(kwargs.get('data', {}))
         try:
-            response = self.session.open(method, url.geturl(), headers=headers, validate_certs=self.verify_ssl, follow_redirects=True, data=data)
+            response = self.session.open(
+                method, url.geturl(),
+                headers=headers,
+                timeout=self.request_timeout,
+                validate_certs=self.verify_ssl,
+                follow_redirects=True,
+                data=data
+            )
         except (SSLValidationError) as ssl_err:
             self.fail_json(msg="Could not establish a secure connection to your host ({1}): {0}.".format(url.netloc, ssl_err))
         except (ConnectionError) as con_err:
@@ -612,6 +622,7 @@ class ControllerAPIModule(ControllerModule):
             'POST',
             api_token_url,
             validate_certs=self.verify_ssl,
+            timeout=self.request_timeout,
             follow_redirects=True,
             force_basic_auth=True,
             url_username=self.username,
@@ -988,6 +999,7 @@ class ControllerAPIModule(ControllerModule):
             'DELETE',
             api_token_url,
             validate_certs=self.verify_ssl,
+            timeout=self.request_timeout,
             follow_redirects=True,
             force_basic_auth=True,
             url_username=self.username,
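The `request_timeout` plumbing above follows a common resolution order: explicit module parameter first, then the `CONTROLLER_REQUEST_TIMEOUT` environment fallback, then the class default of 10 seconds. A standalone sketch of that order (an illustration of the pattern, not the collection's actual code):

```python
import os

def resolve_request_timeout(params, env=os.environ, default=10.0):
    """Resolve a timeout the way the new module option is wired up:
    explicit request_timeout parameter, then the CONTROLLER_REQUEST_TIMEOUT
    environment variable, then the class default of 10 seconds."""
    value = params.get('request_timeout')
    if value is None:
        value = env.get('CONTROLLER_REQUEST_TIMEOUT')
    if value is None:
        return default
    return float(value)

print(resolve_request_timeout({'request_timeout': 0.001}))                   # 0.001
print(resolve_request_timeout({}, env={'CONTROLLER_REQUEST_TIMEOUT': '5'}))  # 5.0
print(resolve_request_timeout({}, env={}))                                   # 10.0
```

The integration test added below (a project sync with `request_timeout: .001`) exercises exactly the first branch: an absurdly small explicit value forces the request to fail with a "timed out" message.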

View File

@@ -356,3 +356,19 @@
     that:
       - results is success
       - "'DTSTART;TZID=UTC:20220430T103045 RRULE:FREQ=MONTHLY;BYMONTHDAY=12,13,14,15,16,17,18;BYDAY=SA;INTERVAL=1' == complex_rule"
+- name: mondays, Tuesdays, and WEDNESDAY with case-insensitivity
+  set_fact:
+    complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45', rules=rrules, timezone='UTC' ) }}"
+  ignore_errors: True
+  register: results
+  vars:
+    rrules:
+      - frequency: 'day'
+        interval: 1
+        byweekday: monday, Tuesday, WEDNESDAY
+- assert:
+    that:
+      - results is success
+      - "'DTSTART;TZID=UTC:20220430T103045 RRULE:FREQ=DAILY;BYDAY=MO,TU,WE;INTERVAL=1' == complex_rule"

View File

@@ -53,6 +53,23 @@
     that:
       - result is not changed
+- name: Create a git project and wait with short request timeout.
+  project:
+    name: "{{ project_name1 }}"
+    organization: Default
+    scm_type: git
+    scm_url: https://github.com/ansible/test-playbooks
+    wait: true
+    state: exists
+    request_timeout: .001
+  register: result
+  ignore_errors: true
+- assert:
+    that:
+      - result is failed
+      - "'timed out' in result.msg"
 - name: Delete a git project without credentials and wait
   project:
     name: "{{ project_name1 }}"

View File

@@ -162,7 +162,7 @@ class ApiV2(base.Base):
             export_key = 'create_approval_template'
             rel_option_endpoint = _page.related.get('create_approval_template')
-        rel_post_fields = self._cache.get_post_fields(rel_option_endpoint)
+        rel_post_fields = utils.get_post_fields(rel_option_endpoint, self._cache)
         if rel_post_fields is None:
             log.debug("%s is a read-only endpoint.", rel_endpoint)
             continue
@@ -202,7 +202,7 @@
         return utils.remove_encrypted(fields)
     def _export_list(self, endpoint):
-        post_fields = self._cache.get_post_fields(endpoint)
+        post_fields = utils.get_post_fields(endpoint, self._cache)
         if post_fields is None:
             return None
@@ -267,7 +267,7 @@
     def _import_list(self, endpoint, assets):
         log.debug("_import_list -- endpoint: %s, assets: %s", endpoint.endpoint, repr(assets))
-        post_fields = self._cache.get_post_fields(endpoint)
+        post_fields = utils.get_post_fields(endpoint, self._cache)
         changed = False

View File

@@ -495,7 +495,6 @@ class TentativePage(str):
 class PageCache(object):
     def __init__(self):
         self.options = {}
-        self.post_fields = {}
         self.pages_by_url = {}
         self.pages_by_natural_key = {}
@@ -517,29 +516,6 @@
         return self.options.setdefault(url, options)
-    def get_post_fields(self, page):
-        url = page.endpoint if isinstance(page, Page) else str(page)
-        key = get_registered_page(url)
-        if key in self.post_fields:
-            return self.post_fields[key]
-        options_page = self.get_options(page)
-        if options_page is None:
-            return None
-        if 'POST' not in options_page.r.headers.get('Allow', ''):
-            return None
-        if 'POST' in options_page.json['actions']:
-            post_fields = options_page.json['actions']['POST']
-        else:
-            log.warning("Insufficient privileges on %s, inferring POST fields from description.", options_page.endpoint)
-            post_fields = utils.parse_description(options_page.json['description'])
-        self.post_fields[key] = post_fields
-        return post_fields
     def set_page(self, page):
         log.debug("set_page: %s %s", type(page), page.endpoint)
         self.pages_by_url[page.endpoint] = page


@@ -31,3 +31,18 @@ def remove_encrypted(value):
     if isinstance(value, dict):
         return {k: remove_encrypted(v) for k, v in value.items()}
     return value
+
+
+def get_post_fields(page, cache):
+    options_page = cache.get_options(page)
+    if options_page is None:
+        return None
+    if 'POST' not in options_page.r.headers.get('Allow', ''):
+        return None
+    if 'POST' in options_page.json['actions']:
+        return options_page.json['actions']['POST']
+    else:
+        log.warning("Insufficient privileges on %s, inferring POST fields from description.", options_page.endpoint)
+        return parse_description(options_page.json['description'])
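The relocated helper's control flow can be sketched standalone; `FakeOptionsPage` and `FakeCache` below are hypothetical stand-ins for awxkit's page objects, and the description-parsing fallback is stubbed out:

```python
# Minimal sketch of the relocated get_post_fields helper. FakeOptionsPage and
# FakeCache are hypothetical stand-ins for awxkit's OPTIONS page and PageCache.
class FakeOptionsPage:
    def __init__(self, allow, actions):
        # .r mimics the HTTP response object carrying the Allow header
        self.r = type("Resp", (), {"headers": {"Allow": allow}})()
        self.json = {"actions": actions}


class FakeCache:
    def __init__(self, options_page):
        self._options = options_page

    def get_options(self, page):
        return self._options


def get_post_fields(page, cache):
    options_page = cache.get_options(page)
    if options_page is None:
        return None
    # Endpoints that do not allow POST expose no creatable fields.
    if 'POST' not in options_page.r.headers.get('Allow', ''):
        return None
    if 'POST' in options_page.json['actions']:
        return options_page.json['actions']['POST']
    # The real helper falls back to parsing the endpoint description here.
    return None


cache = FakeCache(FakeOptionsPage("GET, POST", {"POST": {"name": {}}}))
print(get_post_fields("/api/v2/organizations/", cache))  # {'name': {}}
```

Because the helper now takes the cache as an argument instead of living on `PageCache`, callers such as `_export_list` and `_import_list` pass `self._cache` explicitly, which is what the `ApiV2` hunk above changes.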


@@ -13,30 +13,35 @@
 apiVersion: v1
 kind: ServiceAccount
 metadata:
-  name: awx
+  name: containergroup-service-account
+  namespace: containergroup-namespace
 ---
+apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
-apiVersion: rbac.authorization.k8s.io/v1
 metadata:
-  name: pod-manager
+  name: role-containergroup-service-account
+  namespace: containergroup-namespace
 rules:
-- apiGroups: [""] # "" indicates the core API group
+- apiGroups: [""]
   resources: ["pods"]
   verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
 - apiGroups: [""]
-  resources: ["pods/exec"]
-  verbs: ["create"]
+  resources: ["pods/log"]
+  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
+- apiGroups: [""]
+  resources: ["pods/attach"]
+  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
 ---
 kind: RoleBinding
-apiVersion: rbac.authorization.k8s.io/v1beta1
+apiVersion: rbac.authorization.k8s.io/v1
 metadata:
-  name: awx-pod-manager
+  name: role-containergroup-service-account-binding
+  namespace: containergroup-namespace
 subjects:
 - kind: ServiceAccount
-  name: awx
+  name: containergroup-service-account
+  namespace: containergroup-namespace
 roleRef:
+  apiGroup: rbac.authorization.k8s.io
   kind: Role
-  name: pod-manager
-  apiGroup: rbac.authorization.k8s.io
+  name: role-containergroup-service-account


@@ -137,12 +137,12 @@ To retrieve your admin password

 To tail logs from the task containers

 ```bash
-kubectl logs -f deployment/awx -n awx -c awx-web
+kubectl logs -f deployment/awx-task -n awx -c awx-task
 ```

 To tail logs from the web containers

 ```bash
-kubectl logs -f deployment/awx -n awx -c awx-web
+kubectl logs -f deployment/awx-web -n awx -c awx-web
 ```

 NOTE: If there's multiple replica of the awx deployment you can use `stern` to tail logs from all replicas. For more information about `stern` check out https://github.com/wercker/stern.


@@ -1,19 +0,0 @@
Copyright (c) 2013-2019 Python Charmers Pty Ltd, Australia
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2013 Daniel Bader (http://dbader.org)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -40,12 +40,12 @@ psycopg
 psutil
 pygerduty
 pyparsing==2.4.6 # Upgrading to v3 of pyparsing introduce errors on smart host filtering: Expected 'or' term, found 'or' (at char 15), (line:1, col:16)
+python-daemon>3.0.0
 python-dsv-sdk
-python-tss-sdk==1.0.0
+python-tss-sdk==1.2.1
 python-ldap
 pyyaml>=6.0.1
 receptorctl==1.3.0
-schedule==0.6.0
 social-auth-core[openidconnect]==4.3.0 # see UPGRADE BLOCKERs
 social-auth-app-django==5.0.0 # see UPGRADE BLOCKERs
 sqlparse >= 0.4.4 # Required by django https://github.com/ansible/awx/security/dependabot/96
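As an illustration of how pins like `python-tss-sdk==1.2.1` and `python-daemon>3.0.0` above are read, here is a minimal stdlib sketch of a requirement-line parser (the regex and `parse_pin` helper are illustrative assumptions; real tooling uses pip and the `packaging` library, and this sketch ignores extras like `social-auth-core[openidconnect]`):

```python
import re

# Match "name==1.2.1" / "name>3.0.0" style pins. Illustrative only: extras,
# environment markers, and multi-clause specifiers are not handled.
PIN_RE = re.compile(r'^([A-Za-z0-9._-]+)\s*(==|>=|<=|>|<)\s*([\w.]+)')


def parse_pin(line):
    """Return (name, operator, version) for a pinned requirement, else None."""
    # Drop trailing "# ..." comments, as found in requirements.in above.
    stripped = line.split('#')[0].strip()
    m = PIN_RE.match(stripped)
    if not m:
        return None
    return m.groups()


print(parse_pin("python-tss-sdk==1.2.1"))  # ('python-tss-sdk', '==', '1.2.1')
print(parse_pin("python-daemon>3.0.0"))    # ('python-daemon', '>', '3.0.0')
```

The distinction matters for this change: `==1.2.1` locks an exact release, while `>3.0.0` allows any newer `python-daemon`, which pip-compile then resolves to the concrete `3.0.1` seen in the compiled requirements below.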


@@ -155,8 +155,6 @@ frozenlist==1.3.3
     # via
     #   aiohttp
     #   aiosignal
-future==0.18.3
-    # via django-radius
 gitdb==4.0.10
     # via gitpython
 gitpython==3.1.30
@@ -315,8 +313,10 @@ pyrad==2.4
     # via django-radius
 pyrsistent==0.19.2
     # via jsonschema
-python-daemon==2.3.2
-    # via ansible-runner
+python-daemon==3.0.1
+    # via
+    #   -r /awx_devel/requirements/requirements.in
+    #   ansible-runner
 python-dateutil==2.8.2
     # via
     #   adal
@@ -333,7 +333,7 @@ python-ldap==3.4.3
     #   django-auth-ldap
 python-string-utils==1.0.0
     # via openshift
-python-tss-sdk==1.0.0
+python-tss-sdk==1.2.1
     # via -r /awx_devel/requirements/requirements.in
 python3-openid==3.2.0
     # via social-auth-core
@@ -380,8 +380,6 @@ rsa==4.9
     #   python-jose
 s3transfer==0.6.0
     # via boto3
-schedule==0.6.0
-    # via -r /awx_devel/requirements/requirements.in
 semantic-version==2.10.0
     # via setuptools-rust
 service-identity==21.1.0
@@ -392,7 +390,6 @@ setuptools-scm[toml]==7.0.5
     # via -r /awx_devel/requirements/requirements.in
 six==1.16.0
     # via
-    #   ansible-runner
     #   automat
     #   azure-core
     #   django-pglocks