Compare commits

...

127 Commits

Author SHA1 Message Date
Hao Liu
b0565e9937 Switch to docker_compose_v2 in tools playbook (#14872)
Switch to docker_compose_v2

Fix
```
"Configuration error - kwargs_from_env() got an unexpected keyword argument 'ssl_version'"}
```
2024-02-13 13:05:33 -05:00
Hao Liu
44d85b589c Retries on vault unseal (#14873)
Sometimes we tried to unseal when vault was not ready yet
2024-02-13 13:05:23 -05:00
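
A minimal sketch of the retry idea against the Vault HTTP API (the address, key, and retry counts are placeholders, not the playbook's actual values):

```
import time

import requests

VAULT_ADDR = "http://localhost:8200"  # placeholder

for attempt in range(10):
    try:
        status = requests.get(f"{VAULT_ADDR}/v1/sys/seal-status", timeout=5).json()
        if status.get("sealed"):
            # submit one unseal key share; real setups repeat this per key share
            requests.put(f"{VAULT_ADDR}/v1/sys/unseal", json={"key": "UNSEAL_KEY_SHARE"}, timeout=5)
        break
    except requests.exceptions.RequestException:
        time.sleep(3)  # Vault is not ready yet; wait and retry
```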
Alan Rominger
46f816e7a4 Remove cython lib 2024-02-13 14:45:28 +00:00
Alan Rominger
54b32c10f0 WIP consider dropping cython dep 2024-02-13 14:45:28 +00:00
Alan Rominger
20202054cc Use lowercase password 2024-02-13 14:36:39 +00:00
Alan Rominger
e84e2962d0 Support DB configs where PASSWORD is not used 2024-02-13 14:36:39 +00:00
Chris Meyers
2259047527 Websockets now use rest_framework configed auth
* Always support cookies, session, and also allow rest_framework
  configured auth methods over the browser websocket.
* The node -> node websocket auth remains locked down and unchanged
2024-02-13 12:05:25 +00:00
Chris Meyers
f429ef6ca7 Allow connecting to websockets via api/websocket/
* Before, we just allowed websockets on <host>/websocket/. With this
  change, they can now come from <host>/api/websocket/
2024-02-13 12:02:44 +00:00
Hao Liu
4b637c1319 gitignore pyenv python-version file
pyenv local can be used to set a directory-specific Python version via a `.python-version` file in that directory

Signed-off-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-13 12:00:57 +00:00
Alan Rominger
4c41f6b018 Ability to use updater script to pin dev requirements (#14644)
* Add a dev option for updater script to pin CI reqs

* Avoid removing git links for dev requirements

* Add dev to primary options

* Fix up sanitize git switch
2024-02-12 11:57:59 -05:00
Jesse Wattenbarger
3ae72219b4 Change parsing of docker info in dev build
This is a non-functional change. The way os_info is populated with docker info
and grep 'Operating System' breaks on podman and likely in other places. This
makes it work on both podman and docker, and it will continue to return the
exact same strings everywhere else.
2024-02-12 16:40:48 +00:00
jessicamack
402c29dc52 Switch mailing list to forum (#14600)
* Switch mailing list to forum

* add link to community

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

* use correct channel name

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-12 15:51:31 +00:00
Alan Rominger
8eb4a9a2a0 Update location of logstash build context (#14676) 2024-02-12 15:49:29 +00:00
Jeff Bradberry
36f3b46726 Avoid using SmartFilter during migrations (#14786)
Our migrations that touch roles tend to bring in our real models via
migration_utils.set_current_apps_for_migrations, and that can have
some undesirable side-effects.
2024-02-12 15:48:58 +00:00
Bikouo Aubin
55c6a319dc Add new credential type to support Terraform backend configuration (#14828)
* Add new credential type to support configuration of Terraform Backend

* Fix unit tests
2024-02-12 15:47:24 +00:00
Dave
56b6a07f6e Remove json serialization for notify validation (#14847)
* Remove json serialization for notify validation

* Update serializers.py
2024-02-12 15:43:43 +00:00
Jake Jackson
519fd22bec Add ldap support to vault container in docker dev environment (#14777)
* add ldap_auth mount and configure it

* added in key engines, userpass auth method, still needs testing

* add policies and fix ldap_user

* start awx automation for vault demo and move ldap

* update docs with new flags/new credentials
2024-02-09 15:19:17 -05:00
TVo
2e5306ae8e Added LDAP support for HashiCorp Vault lookup credential (#14833)
* Added LDAP support for HashiCorp Vault lookup credential

* Added LDAP support for HashiCorp Vault lookup credential

* Replaced graphics and updated missing fields.

* Added LDAP support for HashiCorp Vault lookup credential

* Replaced graphics and updated missing fields.

* Incorporated review feedback from @thedoubl3j and @djyasin.
2024-02-09 10:17:14 -07:00
Cesar Francisco San Nicolas Martinez
068e6acbd5 Fix the way we are passing the awxkit base path to resources (#14862) 2024-02-09 15:31:52 +01:00
Seth Foster
f9a23a5645 Remove enable button for hop nodes (#14861)
Enabled/disabled does not apply to hop nodes,
since hop nodes don't run jobs.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-08 22:10:10 -05:00
Seth Foster
40150a2be8 Fix UI peers_from_control_nodes (#14858)
* Fix UI peers_from_control_nodes

Fixes bug where peers_from_control_node was
greyed out in UI.

Additional changes:
- Make edit instance button only show for instances
with managed = False
- Make remove instance button only show for instances
with managed = False
- InstanceList selectable only for instances with
managed = False

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-08 14:13:11 -05:00
TVo
b79aa5b1ed Removed erroneous line for basic auth 2024-02-08 12:45:47 -05:00
Jeff Bradberry
b3aeb962ce Fix the test_export_system_auditor collection test 2024-02-07 15:55:19 -05:00
Jeff Bradberry
2300b8fddf Fix linting problem 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
3a3284b5df rework with check if POST exists 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
2359004cc1 fix description extraction on export 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
694d7e98e7 leave $encrypted$ on export
remove $encrypted$ values on import when the object does not exist
2024-02-07 14:29:30 -05:00
Julen Landa Alustiza
8c9c02c975 awxkit: allow to modify api base url (#14835)
Signed-off-by: Julen Landa Alustiza <jlanda@redhat.com>
2024-02-07 12:26:42 +01:00
Chris Meyers
8a902debd5 Per-service metrics http server
* Organize metrics into their respective services
* Serve per-service metrics on a per-service http server
* Increase prometheus client usage over our custom metrics fields
2024-02-05 15:17:24 -05:00
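
A hedged sketch of that per-service pattern using prometheus_client; the service name, metric name, and port are illustrative, not AWX's actual values:

```
from prometheus_client import CollectorRegistry, Counter, start_http_server

# each service keeps its metrics in its own registry
dispatcher_registry = CollectorRegistry()
jobs_dispatched = Counter(
    "dispatcher_jobs_total",
    "Jobs handled by the dispatcher service",
    registry=dispatcher_registry,
)

# expose only this service's registry on its own HTTP port
start_http_server(8014, registry=dispatcher_registry)
```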
Seth Foster
6dcaa09dfb UI rename Endpoints to Listener Addresses
Listener Addresses is a better name to
emphasize these are routable addresses to
reach a listener service on the node.

Also removed expand toggle on the listener
addresses list items, as the expanded mode
had no additional information.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
21fd6af0f9 InstanceLink unique constraint source and target
Prevent creating InstanceLinks with duplicate
source and target pairings.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
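
A minimal Django sketch of the constraint described above (fields simplified, not the actual AWX model):

```
from django.db import models


class InstanceLink(models.Model):
    source = models.ForeignKey("Instance", on_delete=models.CASCADE, related_name="+")
    target = models.ForeignKey("ReceptorAddress", on_delete=models.CASCADE, related_name="+")

    class Meta:
        constraints = [
            # a given source/target pairing may only exist once
            models.UniqueConstraint(fields=["source", "target"], name="unique_source_target"),
        ]
```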
Seth Foster
eeae1d59d4 Disable health check button if managed
Also, update ui screen tests to expect
injecting "listener_port: null" if
listener_port is empty

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a252d0ae33 Fix UI lint by running npm prettier
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
48971411cc Protocol blank if no canonical address
Make protocol be blank on instance if there
is no canonical address for this instance.

It was defaulting to "tcp" before.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
083c05f12a Prevent duplicating instance links
In receptor address post-save method:
- Fixed detecting if address was missing
a link from control nodes
- Use InstanceLink create_or_update to prevent
adding duplicate InstanceLink source and target
peers

In instance serializer create_or_update,
delete receptor addresses first before doing
instance create or update. This ensures that we don't
trigger unnecessary post-save methods that might
attempt to manipulate receptor addresses that
will just be removed later.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
b558397b67 Remove redundant tests
test_listener_port
test_peers_from_control_nodes
test_peers_from_control_nodes_without_listener_port

are covered in the following tests:

test_no_op
test_creates_canonical_address
test_deletes_canonical_address
test_updates_canonical_address
test_canonical_address_validation_error

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
904c6001e9 If managed, cannot modify peers_from_control_nodes
Adds validation to prevent changing
peers_from_control_nodes if instance managed=True

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
818e11dfdc Test inspect_established_receptor_connections
Add functional test case for inspecting
established receptor connections.

InstanceLink starts in ADDING state, and should
move to ESTABLISHED state if the connection
is detected in the receptor status output.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
7fc13a0569 Write tests around the two special instance serializer fields
and all of the cases that they might be in.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
92c693f14e Break out peer validation into its own method 2024-02-02 10:37:41 -05:00
Jeff Bradberry
f2417f0ed2 Make the peer validation more compact
and reuse information.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
8f22188116 Use a select_related to build the peers queryset in the install bundle
Since the relationship is ReceptorAddress -> Instance,
prefetch_related isn't necessary.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
05502c0af8 Placeholder FIXMEs for things of concern 2024-02-02 10:37:41 -05:00
Jeff Bradberry
957ce59bf7 Use Counter to find duplicate peer relationships
Gives a bit more readability.
2024-02-02 10:37:41 -05:00
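
For illustration, the general shape of that check (the data here is a placeholder, not the serializer's real input):

```
from collections import Counter

# placeholder stand-in for the peer addresses submitted in a request
requested_peers = [{"instance_id": 1}, {"instance_id": 2}, {"instance_id": 1}]

# count how many times each instance id appears among the requested peers
peer_counts = Counter(p["instance_id"] for p in requested_peers)
has_duplicates = any(count > 1 for count in peer_counts.values())  # True here
```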
Jeff Bradberry
cc4cc37d46 'managed' is a read-only field on InstanceSerializer
so we don't need complex logic to compare an incoming to existing value.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
1e254c804c The listener port cannot be disabled when setting peers_from_control_nodes
If the port is explicitly set to null (causing any ReceptorAddress to
be deleted), then that's a validation error.

If the port is left off but a ReceptorAddress doesn't already exist,
we should not infer a port number and that is also a validation error.
2024-02-02 10:37:41 -05:00
Seth Foster
1b44bebed3 Peers_from_control_nodes requires listener port
Adds validation and a unit test to ensure:

- peers_from_control_nodes=True should fail if
listener_port is not set
- peers_from_control_nodes=False should be NOOP if
listener_port is not set

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a4cf55bdba Require receptor collection 2.0.3
This release supports ingress hop nodes, and
is backwards compatible with previous AWX
versions.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
c333d0e82f Prevent modifying peers on managed node
Add validation to prevent any managed node
from modifying "peers" through the API

Peering from these nodes should be handled
by setting peers_from_control_nodes only.

Managed nodes are control nodes and
ingress hop nodes.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
David O Neill
b093c89a84 Form hardening and node type exclusion
Disabled add/edit/remove for managed nodes
Tightened validation on Peers from control nodes
2024-02-02 10:37:41 -05:00
Jeff Bradberry
f98493aa61 Support wss as ws-listener in the Receptor config 2024-02-02 10:37:41 -05:00
Hao Liu
c36d2b0485 Update requirements.yml 2024-02-02 10:37:41 -05:00
Jeff Bradberry
8ddb604bf1 Template the listener protocol into the receptor install bundle (#14792)
Previously we were hard-coding tcp, now we need to also support ws/wss.
2024-02-02 10:37:41 -05:00
Seth Foster
cd9dd43be7 Make InstanceLink target non-nullable
InstanceLink target should not be null.

Should be safe to set to null=False, because we have
a custom RunPython method to explicitly set
target to a proper key.

Also, add new test to test_migrations
which ensures data integrity after migrating
the receptor address model changes.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
82323390a7 Reconstitute migration file
Do the things as a proper 3-step process -- add new stuff, copy the data, drop the old.
2024-02-02 10:37:41 -05:00
Seth Foster
4c5ac1d3da Add management command to remove address
Adds remove_receptor_address to delete a
receptor address from the database

Also, enforce that only 1 canonical address
can be added to an instance via
the add_receptor_address command.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
David O Neill
9c06370e33 InstanceAdd sends null for port_listener 2024-02-02 10:37:41 -05:00
David O Neill
449b95d1eb Fix remaining tests, remove unused code 2024-02-02 10:37:41 -05:00
David O Neill
1712540c8e Updates for receptor release to UI for protocol and is_managed 2024-02-02 10:37:41 -05:00
Seth Foster
7cf639d8eb Add migration to support InstanceLink changes
- Add forwards method to create a receptor address
for any existing Instance that has listener_port defined

- Add forwards method to modify each InstanceLink object
that changes target to the newly created receptor addresses

This migration was implemented as follows:

1. Add a target_new to InstanceLink which is a foreign key
to ReceptorAddress
2. create receptor addresses
3. link to these receptor addresses using the target_new field
4. rename target_new to target
5. drop listener_port and peers_from_control_nodes from Instance

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
dbfcc40d7c Only create receptor address if port is defined
If an Instance endpoint is patched with
{"peers_from_control_nodes": true}

but a listener_port is not defined on the instance
nor in the patch payload, do not create a
receptor address.

Only update or create a receptor address if listener_port
is set, either in the payload or already on the instance.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
73d2c92ae3 Fix condition for creating receptor_address
If listener_port is not explicitly defined don't create a receptor_address
2024-02-02 10:37:41 -05:00
Seth Foster
24a4242147 Add protocol to receptor address serializer
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
92ce85b688 Remove unused warnings import
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
9531f8377a Ensure register_peers target is ReceptorAddress
command has 3 "targets", all of which should be a
ReceptorAddress object

- peers, disconnect, exact

Add logic to make sure each entry in those lists
is a receptor address.

When creating InstanceLink objects, make sure
target is ReceptorAddress, not an Instance.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
15a16b3dd1 Update bootstrap_development.sh 2024-02-02 10:37:41 -05:00
Seth Foster
a37e7bf147 Add canonical=True when creating ReceptorAddress in tests
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
a2fcd2f97a Fix ui-lint error 2024-02-02 10:37:41 -05:00
Hao Liu
c394ffdd19 Fix lint error, remove unused import 2024-02-02 10:37:41 -05:00
Seth Foster
69102cf265 Remove receptor_address module from collection
After removing CRUD from receptor addresses, we need
to remove the module.

- remove receptor_address module
- Add listener_port to instance module
- Add peers_from_control_nodes to instance module

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a188798543 Make canonical field default to False
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
60108ebd10 Rename migration dependency
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
8c7c00451a Join across the InstanceLink.target to the underlying Instance
to avoid having to do a lookup for every instance in the list for the
mesh visualizer endpoint.
2024-02-02 10:37:41 -05:00
Seth Foster
7a1ed406da Remove CRUD for Receptor Addresses
Removes ability to directly create and delete
receptor addresses for a given node.

Instead, receptor addresses are created automatically
if listener_port is set on the Instance.

For example patching "hop" instance

with {"listener_port": 6667}

will create a canonical receptor address with port
6667.

Likewise, peers_from_control_nodes on the instance
sets the peers_from_control_nodes on the canonical
address (if listener port is also set).

protocol is a read-only field that simply reflects
the canonical address protocol.

Other Changes:

- rename k8s_routable to is_internal
- add protocol to ReceptorAddress
- remove peers_from_control_nodes and listener_port
from Instance model

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
César Francisco San Nicolás Martínez
f916ffe1e9 Adjust migration names and dependencies 2024-02-02 10:37:41 -05:00
César Francisco San Nicolás Martínez
901dbd697e Comment unused dependency 2024-02-02 10:37:41 -05:00
David O Neill
d8b4a9825e Cleanup
- Remove peer selection on add and edit instance
- Added canonical name and made column names and ordering the same on endpoints and
  peers
- Other cleanup items
- rename name to instance name
2024-02-02 10:37:41 -05:00
Hao Liu
6db66c5f81 Fix provision instance not respecting protocol 2024-02-02 10:37:41 -05:00
David O Neill
82ad7dcf40 Mesh UI support
- add endpoint
- delete endpoint (wip)
- associate
- disassociate
2024-02-02 10:37:41 -05:00
David O Neill
93500f9fea Fix lint trailing whitespace 2024-02-02 10:37:41 -05:00
Seth Foster
9ba70c151d Add canonical receptor address
Creates a non-deletable address that acts as
the "main" address for this instance.

All other addresses for that instance must
be non-canonical.

When listener_port on an instance is set, automatically
create a canonical receptor address where:
  - address is hostname of instance
  - port is listener_port
  - canonical is True

Additionally, protocol field is added to instance to
denote the receptor listener protocol to use (ws, tcp).

The receptor config listener information is derived from
the listener_port and protocol information. Having a
canonical address that mirrors the listener_port ensures that
an address exists that matches the receptor config information.

Other changes:
- Add managed field to receptor address.
If managed is True, no fields on this address can be edited
via the API.
If canonical is True, only the address field cannot be edited.

- Add managed field to instance. If managed is True, users
cannot set node_state to deprovisioning (i.e. cannot delete node)

This changes our mechanism for preventing users from deleting
the mesh ingress hop node.

- Field is_internal is now renamed to k8s_routable

- Add reverse_peers on instance which is a list of instance IDs
that peer to this instance (via an address)

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
David O Neill
46dc61253f UI Updates for receptor peering 2024-02-02 10:37:41 -05:00
Seth Foster
6cb2cd18b0 Fix proper indent to instance module
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5d1dd8ec41 Fix inconsistent tab width
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
9f69daf787 Add choices to module protocol field
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
16ece5de7e Remove unused variables and imports
Addresses flake8 and linting failures

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
ab0e9265c5 Add search fields to views
Make receptoraddress list views
searchable by "address"

Other changes:
- Add help text to source and target of the
InstanceLink model

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
04cbbbccfa Update awx_collection to support ReceptorAddress
- Add receptor_address module which allows
users to create addresses for instances

- Update awx_collection functional and integration
tests to support new peering design

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
d1cacf64de Add functional and unit tests
Updated existing tests to support the
ReceptorAddress model

- cannot peer to self
- cannot peer to node that is already peered to me
- cannot peer to node more than once (via 2+ addresses)
- cannot set is_internal True

Other changes:
Change post save signal to only call
schedule_write_receptor_config() when an actual change is detected.

Make functional tests more robust by
checking for specific validation error in the
response.

I.e. instead of just checking for a 400, check for a 400
and that the error message corresponds to the
validation we are testing for.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5385eb0fb3 Add validation when setting peers
- cannot peer to self
- cannot peer to instance that is already peered to self

Other changes:
- ReceptorAddress protocol field restricted to choices: tcp, ws, wss
- fix awx-manage list_instances when instance.last_seen is None
- InstanceLink make source and target unique together
- Add help text to the ReceptorAddress fields

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
7d7503279d Add API validation when creating ReceptorAddress
- websocket_path can only be set if protocol is ws
- is_internal must be False
- only 1 address per instance can have
peers_from_control_nodes set to True

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
d860d1d91b Temp change to aid dev
Temporarily switch to feature branch for receptor-collection in instance_install_bundle to make dev easier
2024-02-02 10:37:41 -05:00
Seth Foster
3a17c45b64 Register_peers support for receptor_addresses
register_peers has inputs:

source: source instance
peers: list of instances the source should peer to

InstanceLink "target" is now expected to be a ReceptorAddress

For each peer, we can just use the first receptor address. If
multiple receptor addresses exist, throw a command error.

Currently this command is only used on VM-deployments, where
there is only a single receptor address per instance, so this
should work fine.

Other changes:
drop listener_port field from Instance. Listener port is now just
"port" on ReceptorAddress

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
bca68bcdf1 Add install bundle support
group_vars all.yaml changes:
- peer entry has two fields, address and port
- receptor_port is inferred from the first
receptor_address entry that uses protocol tcp

other changes:
ActivityStream now records when receptor_addresses
are peered to

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
c32f234ebb Add peers_from_control_nodes to ReceptorAddress
- write_receptor_config peers to ReceptorAddress entries
that have peers_from_control_nodes enabled

- peers_from_control_nodes and listener_port removed from Instance model

- peers_from_control_nodes added to ReceptorAddress model

- ReceptorAddress is now unique by address and protocol combination

- Write receptor config task is dispatched upon ReceptorAddress creation
or deletion, and when control node is first created

- InstanceLinkSerializer adds a target_address field and has logic
to grab the instance hostname associated with the peered ReceptorAddress

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5cb3d3b078 Update receptor conf when address changes
Add post save and post delete hooks to
call write_receptor_config when
a receptor address is added / removed.

Add peers_from_control_nodes to
provision_instance

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
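
A hedged sketch of that signal wiring; the receiver name and the dispatch helper body are assumptions, while the model import path is the one shown elsewhere in this diff:

```
from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver

from awx.main.models import ReceptorAddress


def schedule_write_receptor_config():
    """Stand-in for the task dispatch referenced later in this changelog."""


@receiver(post_save, sender=ReceptorAddress)
@receiver(post_delete, sender=ReceptorAddress)
def on_receptor_address_change(sender, instance, **kwargs):
    # re-render the receptor config whenever an address is added or removed
    schedule_write_receptor_config()
```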
Seth Foster
5199cc5246 Add ReceptorAddress to root urls
- Add database constraints to make sure addresses
are unique
If port is defined:
address, port, protocol, websocket_path are unique together

if port is not defined:
address, protocol, websocket_path are unique together

- Allow deleting address via API
- Add ReceptorAddressAccess to determine permissions
- awx-manage add_receptor_address returns changed: True
if successful
2024-02-02 10:37:41 -05:00
Hao Liu
387e877485 Connect from controlplane node to mesh ingress 2024-02-02 10:37:41 -05:00
Seth Foster
d54c5934ff Add support for inbound hop nodes 2024-02-02 10:37:41 -05:00
Chris Meyers
2fa5116197 Project updates do not run against hosts
* The project update event has no host_id or host_name because they
  only run on localhost.
2024-02-01 14:45:16 -05:00
Chris Meyers
527755d986 Always get host from event data
* Regardless of if there is a host map or not, get the host that Ansible
  reports and assign it to the Events' host_name
2024-01-31 09:42:01 -05:00
Chris Meyers
f9c0b97c53 Avoid EDA dev env port conflict
* Not many, if any, folks use the notebook feature. It kind of goes in
  and out of popularity. We've used it in the past when we work on
  features that require visualization (i.e. network graphs, workflows).
  Might as well keep it around in case we use it again.
2024-01-30 11:17:30 -05:00
TVo
65655f84de Added section for using private image for default EE. (#14815)
* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Fixed build and image rendering issues.

* Replaced graphic with many many nodes!

* Updated images with latest

* Incorporated review inputs from @fosterseth.

* Incorporated more review feedback from @fosterseth

* Resized graphic

* Resized graphic again

* Reduced image sizes to scale better on different browsers.

* Added section for using private image for default EE.

* Deleted .md file for execution nodes due to migration to RTD.
2024-01-29 17:48:53 -07:00
Elijah DeLee
9aa3d5584a fix nginx append slash to respect proxy
This is already fixed in awx-operator.
See a534c856db/roles/installer/templates/configmaps/config.yaml.j2 (L215)
This just makes it so a development environment can also work correctly
behind a proxy

Fixes the problem where a
GET to https://$PROXY/something/awx/v2/me
was rewritten to https://$AWX/something/awx/v2/me/ (which doesn't exist)

instead the path is now correctly rewritten as https://$PROXY/something/awx/v2/me/
2024-01-29 15:30:16 -05:00
TVo
266e31d71a Added hop node information and beefed up exec nodes section. (#14787)
* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Fixed build and image rendering issues.

* Replaced graphic with many many nodes!

* Updated images with latest

* Incorporated review inputs from @fosterseth.

* Incorporated more review feedback from @fosterseth

* Resized graphic

* Resized graphic again

* Reduced image sizes to scale better on different browsers.
2024-01-27 14:23:32 -07:00
Alan Rominger
a1bbe75aed Adopt new rules from black upgrade (#14809) 2024-01-26 12:54:44 -05:00
Sandra McCann
695f1cf892 Replaced old tower docs link with new AWX docs link (#14801) 2024-01-25 17:23:48 -07:00
Chris Meyers
0ab103d8c4 Get that new AWX DAB hotness 2024-01-25 15:45:18 -05:00
Chris Meyers
9ac1c0f6c2 Specify docker network when multiple networks
* We introduced multiple networks to our docker env. The code this replaces
  would return both networks' ip addresses concatenated, i.e.
  '192.168.2.1192.168.2.3'.
2024-01-25 14:47:02 -05:00
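
The same idea expressed with the Docker SDK for Python, as a sketch (the network name mirrors the one in the workflow diff further down and is an assumption here):

```
import docker  # Docker SDK for Python

client = docker.from_env()
container = client.containers.get("tools_awx_1")
networks = container.attrs["NetworkSettings"]["Networks"]

# pick one named network instead of concatenating every network's IP
awx_ip = networks["_sources_awx"]["IPAddress"]
print(awx_ip)
```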
Lila Yasin
2e168d8177 Add userpass and LDAP support for HashiCorp vault credential_plugin (#14654)
* Add username and password to handle_auth and update exception message

Revise naming of ldap username and password

* Add url for LDAP and userpass to method_auth

* Add information regarding LDAP and username and password to credential plugins documentation

Revise ldap_auth to userpass_auth and revised exception to better reflect functionality

* Revise method_auth to ensure certs can be used with username and ensure namespace functionality is not hindered
2024-01-25 09:50:13 -05:00
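
A minimal sketch of the two Vault login flows the plugin gained, using the standard Vault HTTP API (the host, mount paths, and credentials are placeholders):

```
import requests

VAULT_ADDR = "https://vault.example.com"  # placeholder


def vault_login(method: str, username: str, password: str) -> str:
    """method is 'userpass' or 'ldap'; both endpoints return a client token."""
    url = f"{VAULT_ADDR}/v1/auth/{method}/login/{username}"
    resp = requests.post(url, json={"password": password}, timeout=10)
    resp.raise_for_status()
    return resp.json()["auth"]["client_token"]
```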
Kristof Wevers
d4f7bfef18 feat: Add retries to requests sessions
Every so often we get connection timed out errors towards our HCP Vault
endpoint. This is usually when a larger number of jobs is running
simultaneously. Considering requests for other jobs do still succeed this
is probably load related and adding a retry should help in making this a
bit more robust.
2024-01-24 15:45:54 -05:00
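
The general retry pattern being described, as a minimal sketch (retry counts and status codes are illustrative, not the plugin's actual values):

```
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=5, backoff_factor=0.5, status_forcelist=[502, 503, 504])

# retry transient failures for every HTTPS request made through this session
session.mount("https://", HTTPAdapter(max_retries=retries))
```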
dependabot[bot]
985a8d499d Bump jinja2 from 3.1.2 to 3.1.3 in /docs/docsite
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.2 to 3.1.3.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.2...3.1.3)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-24 15:36:19 -05:00
Chris Meyers
e3b52f0169 Join the service-mesh docker network
* Put the awx node(s) on a service-mesh docker network so they can be
  proxied to. Also put all the other containers on an explicit awx
  network, otherwise they cannot talk to each other. We could be
  more surgical about which containers we put on awx, but I just added all
  of them.
2024-01-24 10:34:44 -05:00
jessicamack
f69f600cff Refer to the ansible repo for django-ansible-base requirement (#14793)
* update to the proper repo

* refer to devel
2024-01-22 10:29:47 -05:00
TVo
74cd23be5c Fixed/updated URL for “Passing Variables on the Command Line" link. (#14763) 2024-01-19 11:41:17 -07:00
jessicamack
209747d88e Update for django-ansible-base split (#14783)
* update paths and names

* temp to get tests passing

* fix typo
2024-01-19 12:30:32 -05:00
Alan Rominger
d91da39f81 New setting for pg_notify listener DB settings, add keepalive (#14755) 2024-01-17 13:44:04 -05:00
Michael Tipton
5cd029df96 Add secure flag option for userLoggedIn cookie if SESSION_COOKIE_SECU… (#14762)
Add secure flag option for userLoggedIn cookie if SESSION_COOKIE_SECURE is set to True
2024-01-17 09:36:06 -05:00
Michael Abashian
5a93a519f6 Fix linting error in SubscriptionUsageChart 2024-01-16 14:12:55 -05:00
jessicamack
5f5cd960d5 Add django-ansible-base settings (#14768)
add ansible base settings
2024-01-16 15:55:59 +00:00
Jeff Bradberry
42701f32fe Build the source distribution bundle to also upload to PyPI
This has been broken since 20.0.1.
2024-01-11 15:49:07 -05:00
Hao Liu
30d4df788f Update dependency django-ansible-base (#14752) 2024-01-10 11:05:57 -05:00
Cameron McLaughlin
1bcd71a8ac Update execution environment documentation link (fixes #14690) (#14741)
Update execution environment documentation link (fixes #14690)

Signed-off-by: Cameron McLaughlin <auatr7@protonmail.com>
2024-01-05 15:47:55 -07:00
Patrick Uiterwijk
43be90f051 Add support for Bitbucket Data Center webhooks (#14674)
Add support for receiving webhooks from Bitbucket Data Center, and add support for posting build statuses back

Note that this is very explicitly only for Bitbucket Data Center.
The entire webhook format and API is entirely different for Bitbucket Cloud.
2024-01-05 09:34:29 -05:00
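
For illustration, how a sender could compute the HMAC signature that the receiver code below verifies; the sha256=<hexdigest> header format is an assumption here, not confirmed by this diff:

```
import hmac
from hashlib import sha256


def sign_payload(webhook_key: bytes, body: bytes) -> str:
    """Compute an HMAC-SHA256 signature header value for a webhook body."""
    digest = hmac.new(webhook_key, msg=body, digestmod=sha256).hexdigest()
    return f"sha256={digest}"
```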
TVo
bb1922cdbb Update conf.py to 2024 (#14743)
* Update conf.py to 2023

* Update awxkit/awxkit/cli/docs/source/conf.py

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-01-04 12:38:21 -07:00
Martin Slemr
403f545071 Fix port conflicts when running other Ansible dev environments (#14701)
AAP: Docker port conflicts
2024-01-04 09:10:55 -05:00
Nenodema
a06a2a883c Adding "address" property 2024-01-03 15:08:18 -05:00
174 changed files with 2878 additions and 1344 deletions

View File

@@ -71,7 +71,7 @@ runs:
id: data
shell: bash
run: |
AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
AWX_IP=$(docker inspect -f '{{.NetworkSettings.Networks._sources_awx.IPAddress}}' tools_awx_1)
ADMIN_TOKEN=$(docker exec -i tools_awx_1 awx-manage create_oauth2_token --user admin)
echo "ip=$AWX_IP" >> $GITHUB_OUTPUT
echo "admin_token=$ADMIN_TOKEN" >> $GITHUB_OUTPUT

View File

@@ -66,7 +66,7 @@ jobs:
- name: Build awxkit and upload to pypi
run: |
git reset --hard
cd awxkit && python3 setup.py bdist_wheel
cd awxkit && python3 setup.py sdist bdist_wheel
twine upload \
-r ${{ env.pypi_repo }} \
-u ${{ secrets.PYPI_USERNAME }} \

.gitignore vendored
View File

@@ -169,3 +169,6 @@ awx/ui_next/build
# Docs build stuff
docs/docsite/build/
_readthedocs/
# Pyenv
.python-version

View File

@@ -538,7 +538,8 @@ docker-compose: awx/projects docker-compose-sources
ansible-galaxy install --ignore-certs -r tools/docker-compose/ansible/requirements.yml;
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS);
-e vault_tls=$(VAULT_TLS) \
-e enable_ldap=$(LDAP);
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
docker-compose-credential-plugins: awx/projects docker-compose-sources

View File

@@ -7,7 +7,7 @@ AWX provides a web-based user interface, REST API, and task engine built on top
To install AWX, please view the [Install guide](./INSTALL.md).
To learn more about using AWX, and Tower, view the [Tower docs site](http://docs.ansible.com/ansible-tower/index.html).
To learn more about using AWX, view the [AWX docs site](https://ansible.readthedocs.io/projects/awx/en/latest/).
The AWX Project Frequently Asked Questions can be found [here](https://www.ansible.com/awx-project-faq).

View File

@@ -30,8 +30,8 @@ from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import StaticHTMLRenderer
from rest_framework.negotiation import DefaultContentNegotiation
from ansible_base.filters.rest_framework.field_lookup_backend import FieldLookupBackend
from ansible_base.utils.models import get_all_field_names
from ansible_base.rest_filters.rest_framework.field_lookup_backend import FieldLookupBackend
from ansible_base.lib.utils.models import get_all_field_names
# AWX
from awx.main.models import UnifiedJob, UnifiedJobTemplate, User, Role, Credential, WorkflowJobTemplateNode, WorkflowApprovalTemplate
@@ -91,7 +91,7 @@ class LoggedLoginView(auth_views.LoginView):
ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
if request.user.is_authenticated:
logger.info(smart_str(u"User {} logged in from {}".format(self.request.user.username, request.META.get('REMOTE_ADDR', None))))
ret.set_cookie('userLoggedIn', 'true')
ret.set_cookie('userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False))
ret.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return ret
@@ -107,7 +107,7 @@ class LoggedLogoutView(auth_views.LogoutView):
original_user = getattr(request, 'user', None)
ret = super(LoggedLogoutView, self).dispatch(request, *args, **kwargs)
current_user = getattr(request, 'user', None)
ret.set_cookie('userLoggedIn', 'false')
ret.set_cookie('userLoggedIn', 'false', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False))
if (not current_user or not getattr(current_user, 'pk', True)) and current_user != original_user:
logger.info("User {} logged out.".format(original_user.username))
return ret

View File

@@ -6,7 +6,7 @@ import copy
import json
import logging
import re
from collections import OrderedDict
from collections import Counter, OrderedDict
from datetime import timedelta
from uuid import uuid4
@@ -43,7 +43,7 @@ from rest_framework.utils.serializer_helpers import ReturnList
# Django-Polymorphic
from polymorphic.models import PolymorphicModel
from ansible_base.utils.models import get_type_for_model
from ansible_base.lib.utils.models import get_type_for_model
# AWX
from awx.main.access import get_user_capabilities
@@ -82,6 +82,7 @@ from awx.main.models import (
Project,
ProjectUpdate,
ProjectUpdateEvent,
ReceptorAddress,
RefreshToken,
Role,
Schedule,
@@ -636,7 +637,7 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
exclusions = self.get_validation_exclusions(self.instance)
obj = self.instance or self.Meta.model()
for k, v in attrs.items():
if k not in exclusions:
if k not in exclusions and k != 'canonical_address_port':
setattr(obj, k, v)
obj.full_clean(exclude=exclusions)
# full_clean may modify values on the instance; copy those changes
@@ -5176,16 +5177,21 @@ class NotificationTemplateSerializer(BaseSerializer):
body = messages[event].get('body', {})
if body:
try:
rendered_body = (
sandbox.ImmutableSandboxedEnvironment(undefined=DescriptiveUndefined).from_string(body).render(JobNotificationMixin.context_stub())
)
potential_body = json.loads(rendered_body)
if not isinstance(potential_body, dict):
error_list.append(
_("Webhook body for '{}' should be a json dictionary. Found type '{}'.".format(event, type(potential_body).__name__))
)
except json.JSONDecodeError as exc:
error_list.append(_("Webhook body for '{}' is not a valid json dictionary ({}).".format(event, exc)))
sandbox.ImmutableSandboxedEnvironment(undefined=DescriptiveUndefined).from_string(body).render(JobNotificationMixin.context_stub())
# https://github.com/ansible/awx/issues/14410
# When rendering something such as "{{ job.id }}"
# the return type is not a dict, unlike "{{ job_metadata }}" which is a dict
# potential_body = json.loads(rendered_body)
# if not isinstance(potential_body, dict):
# error_list.append(
# _("Webhook body for '{}' should be a json dictionary. Found type '{}'.".format(event, type(potential_body).__name__))
# )
except Exception as exc:
error_list.append(_("Webhook body for '{}' is not valid. The following gave an error ({}).".format(event, exc)))
if error_list:
raise serializers.ValidationError(error_list)
@@ -5458,17 +5464,25 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
class InstanceLinkSerializer(BaseSerializer):
class Meta:
model = InstanceLink
fields = ('id', 'url', 'related', 'source', 'target', 'link_state')
fields = ('id', 'related', 'source', 'target', 'target_full_address', 'link_state')
source = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
target = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
target = serializers.SerializerMethodField()
target_full_address = serializers.SerializerMethodField()
def get_related(self, obj):
res = super(InstanceLinkSerializer, self).get_related(obj)
res['source_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.source.id})
res['target_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.target.id})
res['target_address'] = self.reverse('api:receptor_address_detail', kwargs={'pk': obj.target.id})
return res
def get_target(self, obj):
return obj.target.instance.hostname
def get_target_full_address(self, obj):
return obj.target.get_full_address()
class InstanceNodeSerializer(BaseSerializer):
class Meta:
@@ -5476,6 +5490,29 @@ class InstanceNodeSerializer(BaseSerializer):
fields = ('id', 'hostname', 'node_type', 'node_state', 'enabled')
class ReceptorAddressSerializer(BaseSerializer):
full_address = serializers.SerializerMethodField()
class Meta:
model = ReceptorAddress
fields = (
'id',
'url',
'address',
'port',
'protocol',
'websocket_path',
'is_internal',
'canonical',
'instance',
'peers_from_control_nodes',
'full_address',
)
def get_full_address(self, obj):
return obj.get_full_address()
class InstanceSerializer(BaseSerializer):
show_capabilities = ['edit']
@@ -5484,11 +5521,17 @@ class InstanceSerializer(BaseSerializer):
jobs_running = serializers.IntegerField(help_text=_('Count of jobs in the running or waiting state that are targeted for this instance'), read_only=True)
jobs_total = serializers.IntegerField(help_text=_('Count of all jobs that target this instance'), read_only=True)
health_check_pending = serializers.SerializerMethodField()
peers = serializers.SlugRelatedField(many=True, required=False, slug_field="hostname", queryset=Instance.objects.all())
peers = serializers.PrimaryKeyRelatedField(
help_text=_('Primary keys of receptor addresses to peer to.'), many=True, required=False, queryset=ReceptorAddress.objects.all()
)
reverse_peers = serializers.SerializerMethodField()
listener_port = serializers.IntegerField(source='canonical_address_port', required=False, allow_null=True)
peers_from_control_nodes = serializers.BooleanField(source='canonical_address_peers_from_control_nodes', required=False)
protocol = serializers.SerializerMethodField()
class Meta:
model = Instance
read_only_fields = ('ip_address', 'uuid', 'version')
read_only_fields = ('ip_address', 'uuid', 'version', 'managed', 'reverse_peers')
fields = (
'id',
'hostname',
@@ -5519,10 +5562,13 @@ class InstanceSerializer(BaseSerializer):
'managed_by_policy',
'node_type',
'node_state',
'managed',
'ip_address',
'listener_port',
'peers',
'reverse_peers',
'listener_port',
'peers_from_control_nodes',
'protocol',
)
extra_kwargs = {
'node_type': {'initial': Instance.Types.EXECUTION, 'default': Instance.Types.EXECUTION},
@@ -5544,16 +5590,54 @@ class InstanceSerializer(BaseSerializer):
def get_related(self, obj):
res = super(InstanceSerializer, self).get_related(obj)
res['receptor_addresses'] = self.reverse('api:instance_receptor_addresses_list', kwargs={'pk': obj.pk})
res['jobs'] = self.reverse('api:instance_unified_jobs_list', kwargs={'pk': obj.pk})
res['peers'] = self.reverse('api:instance_peers_list', kwargs={"pk": obj.pk})
res['instance_groups'] = self.reverse('api:instance_instance_groups_list', kwargs={'pk': obj.pk})
if obj.node_type in [Instance.Types.EXECUTION, Instance.Types.HOP]:
res['install_bundle'] = self.reverse('api:instance_install_bundle', kwargs={'pk': obj.pk})
res['peers'] = self.reverse('api:instance_peers_list', kwargs={"pk": obj.pk})
if self.context['request'].user.is_superuser or self.context['request'].user.is_system_auditor:
if obj.node_type == 'execution':
res['health_check'] = self.reverse('api:instance_health_check', kwargs={'pk': obj.pk})
return res
def create_or_update(self, validated_data, obj=None, create=True):
# create a managed receptor address if listener port is defined
port = validated_data.pop('listener_port', -1)
peers_from_control_nodes = validated_data.pop('peers_from_control_nodes', -1)
# delete the receptor address if the port is explicitly set to None
if obj and port == None:
obj.receptor_addresses.filter(address=obj.hostname).delete()
if create:
instance = super(InstanceSerializer, self).create(validated_data)
else:
instance = super(InstanceSerializer, self).update(obj, validated_data)
instance.refresh_from_db() # instance canonical address lookup is deferred, so needs to be reloaded
# only create or update if port is defined in validated_data or already exists in the
# canonical address
# this prevents creating a receptor address if peers_from_control_nodes is in
# validated_data but a port is not set
if (port != None and port != -1) or instance.canonical_address_port:
kwargs = {}
if port != -1:
kwargs['port'] = port
if peers_from_control_nodes != -1:
kwargs['peers_from_control_nodes'] = peers_from_control_nodes
if kwargs:
kwargs['canonical'] = True
instance.receptor_addresses.update_or_create(address=instance.hostname, defaults=kwargs)
return instance
def create(self, validated_data):
return self.create_or_update(validated_data, create=True)
def update(self, obj, validated_data):
return self.create_or_update(validated_data, obj, create=False)
def get_summary_fields(self, obj):
summary = super().get_summary_fields(obj)
@@ -5563,6 +5647,16 @@ class InstanceSerializer(BaseSerializer):
return summary
def get_reverse_peers(self, obj):
return Instance.objects.prefetch_related('peers').filter(peers__in=obj.receptor_addresses.all()).values_list('id', flat=True)
def get_protocol(self, obj):
# note: don't create a different query for receptor addresses, as this is prefetched on the View for optimization
for addr in obj.receptor_addresses.all():
if addr.canonical:
return addr.protocol
return ""
def get_consumed_capacity(self, obj):
return obj.consumed_capacity
@@ -5576,47 +5670,20 @@ class InstanceSerializer(BaseSerializer):
return obj.health_check_pending
def validate(self, attrs):
def get_field_from_model_or_attrs(fd):
return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
def check_peers_changed():
'''
return True if
- 'peers' in attrs
- instance peers matches peers in attrs
'''
return self.instance and 'peers' in attrs and set(self.instance.peers.all()) != set(attrs['peers'])
# Oddly, using 'source' on a DRF field populates attrs with the source name, so we should rename it back
if 'canonical_address_port' in attrs:
attrs['listener_port'] = attrs.pop('canonical_address_port')
if 'canonical_address_peers_from_control_nodes' in attrs:
attrs['peers_from_control_nodes'] = attrs.pop('canonical_address_peers_from_control_nodes')
if not self.instance and not settings.IS_K8S:
raise serializers.ValidationError(_("Can only create instances on Kubernetes or OpenShift."))
node_type = get_field_from_model_or_attrs("node_type")
peers_from_control_nodes = get_field_from_model_or_attrs("peers_from_control_nodes")
listener_port = get_field_from_model_or_attrs("listener_port")
peers = attrs.get('peers', [])
if peers_from_control_nodes and node_type not in (Instance.Types.EXECUTION, Instance.Types.HOP):
raise serializers.ValidationError(_("peers_from_control_nodes can only be enabled for execution or hop nodes."))
if node_type in [Instance.Types.CONTROL, Instance.Types.HYBRID]:
if check_peers_changed():
raise serializers.ValidationError(
_("Setting peers manually for control nodes is not allowed. Enable peers_from_control_nodes on the hop and execution nodes instead.")
)
if not listener_port and peers_from_control_nodes:
raise serializers.ValidationError(_("Field listener_port must be a valid integer when peers_from_control_nodes is enabled."))
if not listener_port and self.instance and self.instance.peers_from.exists():
raise serializers.ValidationError(_("Field listener_port must be a valid integer when other nodes peer to it."))
for peer in peers:
if peer.listener_port is None:
raise serializers.ValidationError(_("Field listener_port must be set on peer ") + peer.hostname + ".")
if not settings.IS_K8S:
if check_peers_changed():
raise serializers.ValidationError(_("Cannot change peers."))
# cannot enable peers_from_control_nodes if listener_port is not set
if attrs.get('peers_from_control_nodes'):
port = attrs.get('listener_port', -1) # -1 denotes missing, None denotes explicit null
if (port is None) or (port == -1 and self.instance and self.instance.canonical_address is None):
raise serializers.ValidationError(_("Cannot enable peers_from_control_nodes if listener_port is not set."))
return super().validate(attrs)
@@ -5636,8 +5703,8 @@ class InstanceSerializer(BaseSerializer):
raise serializers.ValidationError(_("Can only change the state on Kubernetes or OpenShift."))
if value != Instance.States.DEPROVISIONING:
raise serializers.ValidationError(_("Can only change instances to the 'deprovisioning' state."))
if self.instance.node_type not in (Instance.Types.EXECUTION, Instance.Types.HOP):
raise serializers.ValidationError(_("Can only deprovision execution or hop nodes."))
if self.instance.managed:
raise serializers.ValidationError(_("Cannot deprovision managed nodes."))
else:
if value and value != Instance.States.INSTALLED:
raise serializers.ValidationError(_("Can only create instances in the 'installed' state."))
@@ -5656,18 +5723,48 @@ class InstanceSerializer(BaseSerializer):
def validate_listener_port(self, value):
"""
Cannot change listener port, unless going from none to integer, and vice versa
If instance is managed, cannot change listener port at all
"""
if value and self.instance and self.instance.listener_port and self.instance.listener_port != value:
raise serializers.ValidationError(_("Cannot change listener port."))
if self.instance:
canonical_address_port = self.instance.canonical_address_port
if value and canonical_address_port and canonical_address_port != value:
raise serializers.ValidationError(_("Cannot change listener port."))
if self.instance.managed and value != canonical_address_port:
raise serializers.ValidationError(_("Cannot change listener port for managed nodes."))
return value
def validate_peers(self, value):
# cannot peer to an instance more than once
peers_instances = Counter(p.instance_id for p in value)
if any(count > 1 for count in peers_instances.values()):
raise serializers.ValidationError(_("Cannot peer to the same instance more than once."))
if self.instance:
instance_addresses = set(self.instance.receptor_addresses.all())
setting_peers = set(value)
peers_changed = set(self.instance.peers.all()) != setting_peers
if not settings.IS_K8S and peers_changed:
raise serializers.ValidationError(_("Cannot change peers."))
if self.instance.managed and peers_changed:
raise serializers.ValidationError(_("Setting peers manually for managed nodes is not allowed."))
# cannot peer to self
if instance_addresses & setting_peers:
raise serializers.ValidationError(_("Instance cannot peer to its own address."))
# cannot peer to an instance that is already peered to this instance
if instance_addresses:
for p in setting_peers:
if set(p.instance.peers.all()) & instance_addresses:
raise serializers.ValidationError(_(f"Instance {p.instance.hostname} is already peered to this instance."))
return value
def validate_peers_from_control_nodes(self, value):
"""
Can only enable for K8S based deployments
"""
if value and not settings.IS_K8S:
raise serializers.ValidationError(_("Can only be enabled on Kubernetes or Openshift."))
if self.instance and self.instance.managed and self.instance.canonical_address_peers_from_control_nodes != value:
raise serializers.ValidationError(_("Cannot change peers_from_control_nodes for managed nodes."))
return value

View File

@@ -17,19 +17,18 @@ custom_worksign_public_keyfile: receptor/work_public_key.pem
custom_tls_certfile: receptor/tls/receptor.crt
custom_tls_keyfile: receptor/tls/receptor.key
custom_ca_certfile: receptor/tls/ca/mesh-CA.crt
receptor_protocol: 'tcp'
{% if instance.listener_port %}
{% if listener_port %}
receptor_protocol: {{ listener_protocol }}
receptor_listener: true
receptor_port: {{ instance.listener_port }}
receptor_port: {{ listener_port }}
{% else %}
receptor_listener: false
{% endif %}
{% if peers %}
receptor_peers:
{% for peer in peers %}
- host: {{ peer.host }}
port: {{ peer.port }}
protocol: tcp
- address: {{ peer.address }}
protocol: {{ peer.protocol }}
{% endfor %}
{% endif %}
{% verbatim %}

View File

@@ -1,4 +1,4 @@
---
collections:
- name: ansible.receptor
version: 2.0.2
version: 2.0.3

View File

@@ -10,6 +10,7 @@ from awx.api.views import (
InstanceInstanceGroupsList,
InstanceHealthCheck,
InstancePeersList,
InstanceReceptorAddressesList,
)
from awx.api.views.instance_install_bundle import InstanceInstallBundle
@@ -21,6 +22,7 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/instance_groups/$', InstanceInstanceGroupsList.as_view(), name='instance_instance_groups_list'),
re_path(r'^(?P<pk>[0-9]+)/health_check/$', InstanceHealthCheck.as_view(), name='instance_health_check'),
re_path(r'^(?P<pk>[0-9]+)/peers/$', InstancePeersList.as_view(), name='instance_peers_list'),
re_path(r'^(?P<pk>[0-9]+)/receptor_addresses/$', InstanceReceptorAddressesList.as_view(), name='instance_receptor_addresses_list'),
re_path(r'^(?P<pk>[0-9]+)/install_bundle/$', InstanceInstallBundle.as_view(), name='instance_install_bundle'),
]

View File

@@ -0,0 +1,17 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
from awx.api.views import (
ReceptorAddressesList,
ReceptorAddressDetail,
)
urls = [
re_path(r'^$', ReceptorAddressesList.as_view(), name='receptor_addresses_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ReceptorAddressDetail.as_view(), name='receptor_address_detail'),
]
__all__ = ['urls']

View File

@@ -85,6 +85,7 @@ from .oauth2_root import urls as oauth2_root_urls
from .workflow_approval_template import urls as workflow_approval_template_urls
from .workflow_approval import urls as workflow_approval_urls
from .analytics import urls as analytics_urls
from .receptor_address import urls as receptor_address_urls
v2_urls = [
re_path(r'^$', ApiV2RootView.as_view(), name='api_v2_root_view'),
@@ -155,6 +156,7 @@ v2_urls = [
re_path(r'^bulk/host_create/$', BulkHostCreateView.as_view(), name='bulk_host_create'),
re_path(r'^bulk/host_delete/$', BulkHostDeleteView.as_view(), name='bulk_host_delete'),
re_path(r'^bulk/job_launch/$', BulkJobLaunchView.as_view(), name='bulk_job_launch'),
re_path(r'^receptor_addresses/', include(receptor_address_urls)),
]

View File

@@ -1,10 +1,11 @@
from django.urls import re_path
from awx.api.views.webhooks import WebhookKeyView, GithubWebhookReceiver, GitlabWebhookReceiver
from awx.api.views.webhooks import WebhookKeyView, GithubWebhookReceiver, GitlabWebhookReceiver, BitbucketDcWebhookReceiver
urlpatterns = [
re_path(r'^webhook_key/$', WebhookKeyView.as_view(), name='webhook_key'),
re_path(r'^github/$', GithubWebhookReceiver.as_view(), name='webhook_receiver_github'),
re_path(r'^gitlab/$', GitlabWebhookReceiver.as_view(), name='webhook_receiver_gitlab'),
re_path(r'^bitbucket_dc/$', BitbucketDcWebhookReceiver.as_view(), name='webhook_receiver_bitbucket_dc'),
]

View File

@@ -337,12 +337,20 @@ class InstanceList(ListCreateAPIView):
search_fields = ('hostname',)
ordering = ('id',)
def get_queryset(self):
qs = super().get_queryset().prefetch_related('receptor_addresses')
return qs
class InstanceDetail(RetrieveUpdateAPIView):
name = _("Instance Detail")
model = models.Instance
serializer_class = serializers.InstanceSerializer
def get_queryset(self):
qs = super().get_queryset().prefetch_related('receptor_addresses')
return qs
def update_raw_data(self, data):
# these fields are only valid on creation of an instance, so they unwanted on detail view
data.pop('node_type', None)
@@ -375,13 +383,37 @@ class InstanceUnifiedJobsList(SubListAPIView):
class InstancePeersList(SubListAPIView):
name = _("Instance Peers")
name = _("Peers")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
parent_model = models.Instance
model = models.Instance
serializer_class = serializers.InstanceSerializer
parent_access = 'read'
search_fields = {'hostname'}
relationship = 'peers'
search_fields = ('address',)
class InstanceReceptorAddressesList(SubListAPIView):
name = _("Receptor Addresses")
model = models.ReceptorAddress
parent_key = 'instance'
parent_model = models.Instance
serializer_class = serializers.ReceptorAddressSerializer
search_fields = ('address',)
class ReceptorAddressesList(ListAPIView):
name = _("Receptor Addresses")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
search_fields = ('address',)
class ReceptorAddressDetail(RetrieveAPIView):
name = _("Receptor Address Detail")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
parent_model = models.Instance
relationship = 'receptor_addresses'
class InstanceInstanceGroupsList(InstanceGroupMembershipMixin, SubListCreateAttachDetachAPIView):

View File

@@ -124,10 +124,19 @@ def generate_inventory_yml(instance_obj):
def generate_group_vars_all_yml(instance_obj):
# get peers
peers = []
for instance in instance_obj.peers.all():
peers.append(dict(host=instance.hostname, port=instance.listener_port))
all_yaml = render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj, peers=peers))
for addr in instance_obj.peers.select_related('instance'):
peers.append(dict(address=addr.get_full_address(), protocol=addr.protocol))
context = dict(instance=instance_obj, peers=peers)
canonical_addr = instance_obj.canonical_address
if canonical_addr:
context['listener_port'] = canonical_addr.port
protocol = canonical_addr.protocol if canonical_addr.protocol != 'wss' else 'ws'
context['listener_protocol'] = protocol
all_yaml = render_to_string("instance_install_bundle/group_vars/all.yml", context=context)
# convert consecutive newlines with a single newline
return re.sub(r'\n+', '\n', all_yaml)
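To make the new template context concrete, this is roughly the shape of the data handed to `group_vars/all.yml` for a node whose canonical address listens on TCP port 27199. The hostnames and port are made up, and `get_full_address()` may format address strings differently:

```
# Illustrative context only; keys mirror generate_group_vars_all_yml() above,
# values are hypothetical.
context = {
    'instance': '<Instance: node1.example.org>',  # the Instance object in the real code
    'peers': [
        {'address': 'hop1.example.org:27199', 'protocol': 'tcp'},
    ],
    'listener_port': 27199,
    'listener_protocol': 'tcp',  # 'wss' is rendered as 'ws'
}
print(context['peers'][0]['address'])
```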

View File

@@ -17,7 +17,7 @@ class MeshVisualizer(APIView):
def get(self, request, format=None):
data = {
'nodes': InstanceNodeSerializer(Instance.objects.all(), many=True).data,
'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target', 'source'), many=True).data,
'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target__instance', 'source'), many=True).data,
}
return Response(data)

View File

@@ -84,6 +84,7 @@ class ApiVersionRootView(APIView):
data['ping'] = reverse('api:api_v2_ping_view', request=request)
data['instances'] = reverse('api:instance_list', request=request)
data['instance_groups'] = reverse('api:instance_group_list', request=request)
data['receptor_addresses'] = reverse('api:receptor_addresses_list', request=request)
data['config'] = reverse('api:api_v2_config_view', request=request)
data['settings'] = reverse('api:setting_category_list', request=request)
data['me'] = reverse('api:user_me_list', request=request)

View File

@@ -1,4 +1,4 @@
from hashlib import sha1
from hashlib import sha1, sha256
import hmac
import logging
import urllib.parse
@@ -99,14 +99,31 @@ class WebhookReceiverBase(APIView):
def get_signature(self):
raise NotImplementedError
def must_check_signature(self):
return True
def is_ignored_request(self):
return False
def check_signature(self, obj):
if not obj.webhook_key:
raise PermissionDenied
if not self.must_check_signature():
logger.debug("skipping signature validation")
return
mac = hmac.new(force_bytes(obj.webhook_key), msg=force_bytes(self.request.body), digestmod=sha1)
logger.debug("header signature: %s", self.get_signature())
hash_alg, expected_digest = self.get_signature()
if hash_alg == 'sha1':
mac = hmac.new(force_bytes(obj.webhook_key), msg=force_bytes(self.request.body), digestmod=sha1)
elif hash_alg == 'sha256':
mac = hmac.new(force_bytes(obj.webhook_key), msg=force_bytes(self.request.body), digestmod=sha256)
else:
logger.debug("Unsupported signature type, supported: sha1, sha256, received: {}".format(hash_alg))
raise PermissionDenied
logger.debug("header signature: %s", expected_digest)
logger.debug("calculated signature: %s", force_bytes(mac.hexdigest()))
if not hmac.compare_digest(force_bytes(mac.hexdigest()), self.get_signature()):
if not hmac.compare_digest(force_bytes(mac.hexdigest()), expected_digest):
raise PermissionDenied
@csrf_exempt
@@ -118,6 +135,10 @@ class WebhookReceiverBase(APIView):
obj = self.get_object()
self.check_signature(obj)
if self.is_ignored_request():
# This was an ignored request type (e.g. ping), don't act on it
return Response({'message': _("Webhook ignored")}, status=status.HTTP_200_OK)
event_type = self.get_event_type()
event_guid = self.get_event_guid()
event_ref = self.get_event_ref()
@@ -186,7 +207,7 @@ class GithubWebhookReceiver(WebhookReceiverBase):
if hash_alg != 'sha1':
logger.debug("Unsupported signature type, expected: sha1, received: {}".format(hash_alg))
raise PermissionDenied
return force_bytes(signature)
return hash_alg, force_bytes(signature)
class GitlabWebhookReceiver(WebhookReceiverBase):
@@ -214,15 +235,73 @@ class GitlabWebhookReceiver(WebhookReceiverBase):
return "{}://{}/api/v4/projects/{}/statuses/{}".format(parsed.scheme, parsed.netloc, project['id'], self.get_event_ref())
def get_signature(self):
return force_bytes(self.request.META.get('HTTP_X_GITLAB_TOKEN') or '')
def check_signature(self, obj):
if not obj.webhook_key:
raise PermissionDenied
token_from_request = force_bytes(self.request.META.get('HTTP_X_GITLAB_TOKEN') or '')
# GitLab only returns the secret token, not an hmac hash. Use
# the hmac `compare_digest` helper function to prevent timing
# analysis by attackers.
if not hmac.compare_digest(force_bytes(obj.webhook_key), self.get_signature()):
if not hmac.compare_digest(force_bytes(obj.webhook_key), token_from_request):
raise PermissionDenied
class BitbucketDcWebhookReceiver(WebhookReceiverBase):
service = 'bitbucket_dc'
ref_keys = {
'repo:refs_changed': 'changes.0.toHash',
'mirror:repo_synchronized': 'changes.0.toHash',
'pr:opened': 'pullRequest.toRef.latestCommit',
'pr:from_ref_updated': 'pullRequest.toRef.latestCommit',
'pr:modified': 'pullRequest.toRef.latestCommit',
}
def get_event_type(self):
return self.request.META.get('HTTP_X_EVENT_KEY')
def get_event_guid(self):
return self.request.META.get('HTTP_X_REQUEST_ID')
def get_event_status_api(self):
# https://<bitbucket-base-url>/rest/build-status/1.0/commits/<commit-hash>
if self.get_event_type() not in self.ref_keys.keys():
return
if self.get_event_ref() is None:
return
any_url = None
if 'actor' in self.request.data:
any_url = self.request.data['actor'].get('links', {}).get('self')
if any_url is None and 'repository' in self.request.data:
any_url = self.request.data['repository'].get('links', {}).get('self')
if any_url is None:
return
any_url = any_url[0].get('href')
if any_url is None:
return
parsed = urllib.parse.urlparse(any_url)
return "{}://{}/rest/build-status/1.0/commits/{}".format(parsed.scheme, parsed.netloc, self.get_event_ref())
def is_ignored_request(self):
return self.get_event_type() not in [
'repo:refs_changed',
'mirror:repo_synchronized',
'pr:opened',
'pr:from_ref_updated',
'pr:modified',
]
def must_check_signature(self):
# Bitbucket does not sign ping requests...
return self.get_event_type() != 'diagnostics:ping'
def get_signature(self):
header_sig = self.request.META.get('HTTP_X_HUB_SIGNATURE')
if not header_sig:
logger.debug("Expected signature missing from header key HTTP_X_HUB_SIGNATURE")
raise PermissionDenied
hash_alg, signature = header_sig.split('=')
return hash_alg, force_bytes(signature)
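As a sanity check on the refactored signature handling, here is a minimal sketch of how a sender (or a test) could produce the `X-Hub-Signature` value that `BitbucketDcWebhookReceiver.get_signature()` splits and `check_signature()` verifies. The key and body below are placeholders:

```
import hmac
from hashlib import sha256

def sign_payload(webhook_key: bytes, body: bytes) -> str:
    # HMAC over the raw request body with the shared webhook key, hex-encoded
    # and prefixed with the algorithm so the receiver can split on '='.
    digest = hmac.new(webhook_key, msg=body, digestmod=sha256).hexdigest()
    return f"sha256={digest}"

# Example header value for a test request
print(sign_payload(b"my-webhook-key", b'{"eventKey": "repo:refs_changed"}'))
```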

View File

@@ -7,7 +7,7 @@ import json
# Django
from django.db import models
from ansible_base.utils.models import prevent_search
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.main.models.base import CreatedModifiedModel

View File

@@ -20,7 +20,7 @@ from rest_framework.exceptions import ParseError, PermissionDenied
# Django OAuth Toolkit
from awx.main.models.oauth import OAuth2Application, OAuth2AccessToken
from ansible_base.utils.validation import to_python_boolean
from ansible_base.lib.utils.validation import to_python_boolean
# AWX
from awx.main.utils import (
@@ -57,6 +57,7 @@ from awx.main.models import (
Project,
ProjectUpdate,
ProjectUpdateEvent,
ReceptorAddress,
Role,
Schedule,
SystemJob,
@@ -2430,6 +2431,29 @@ class InventoryUpdateEventAccess(BaseAccess):
return False
class ReceptorAddressAccess(BaseAccess):
"""
I can see receptor address records whenever I can access the instance
"""
model = ReceptorAddress
def filtered_queryset(self):
return self.model.objects.filter(Q(instance__in=Instance.accessible_pk_qs(self.user, 'read_role')))
@check_superuser
def can_add(self, data):
return False
@check_superuser
def can_change(self, obj, data):
return False
@check_superuser
def can_delete(self, obj):
return False
class SystemJobEventAccess(BaseAccess):
"""
I can only see and manage System Job events if I'm a super user

View File

@@ -2,7 +2,7 @@
import logging
# AWX
from awx.main.analytics.subsystem_metrics import Metrics
from awx.main.analytics.subsystem_metrics import DispatcherMetrics, CallbackReceiverMetrics
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_task_queuename
@@ -11,4 +11,5 @@ logger = logging.getLogger('awx.main.scheduler')
@task(queue=get_task_queuename)
def send_subsystem_metrics():
Metrics().send_metrics()
DispatcherMetrics().send_metrics()
CallbackReceiverMetrics().send_metrics()

View File

@@ -1,10 +1,15 @@
import itertools
import redis
import json
import time
import logging
import prometheus_client
from prometheus_client.core import GaugeMetricFamily, HistogramMetricFamily
from prometheus_client.registry import CollectorRegistry
from django.conf import settings
from django.apps import apps
from django.http import HttpRequest
from rest_framework.request import Request
from awx.main.consumers import emit_channel_notification
from awx.main.utils import is_testing
@@ -13,6 +18,30 @@ root_key = settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
logger = logging.getLogger('awx.main.analytics')
class MetricsNamespace:
def __init__(self, namespace):
self._namespace = namespace
class MetricsServerSettings(MetricsNamespace):
def port(self):
return settings.METRICS_SUBSYSTEM_CONFIG['server'][self._namespace]['port']
class MetricsServer(MetricsServerSettings):
def __init__(self, namespace, registry):
MetricsNamespace.__init__(self, namespace)
self._registry = registry
def start(self):
try:
# TODO: addr for ipv6 ?
prometheus_client.start_http_server(self.port(), addr='localhost', registry=self._registry)
except Exception:
logger.error(f"MetricsServer failed to start for service '{self._namespace}.")
raise
class BaseM:
def __init__(self, field, help_text):
self.field = field
@@ -148,76 +177,40 @@ class HistogramM(BaseM):
return output_text
class Metrics:
def __init__(self, auto_pipe_execute=False, instance_name=None):
class Metrics(MetricsNamespace):
# metric name, help_text
METRICSLIST = []
_METRICSLIST = [
FloatM('subsystem_metrics_pipe_execute_seconds', 'Time spent saving metrics to redis'),
IntM('subsystem_metrics_pipe_execute_calls', 'Number of calls to pipe_execute'),
FloatM('subsystem_metrics_send_metrics_seconds', 'Time spent sending metrics to other nodes'),
]
def __init__(self, namespace, auto_pipe_execute=False, instance_name=None, metrics_have_changed=True, **kwargs):
MetricsNamespace.__init__(self, namespace)
self.pipe = redis.Redis.from_url(settings.BROKER_URL).pipeline()
self.conn = redis.Redis.from_url(settings.BROKER_URL)
self.last_pipe_execute = time.time()
# track if metrics have been modified since last saved to redis
# start with True so that we get an initial save to redis
self.metrics_have_changed = True
self.metrics_have_changed = metrics_have_changed
self.pipe_execute_interval = settings.SUBSYSTEM_METRICS_INTERVAL_SAVE_TO_REDIS
self.send_metrics_interval = settings.SUBSYSTEM_METRICS_INTERVAL_SEND_METRICS
# auto pipe execute will commit transaction of metric data to redis
# at a regular interval (pipe_execute_interval). If set to False,
# the calling function should call .pipe_execute() explicitly
self.auto_pipe_execute = auto_pipe_execute
Instance = apps.get_model('main', 'Instance')
if instance_name:
self.instance_name = instance_name
elif is_testing():
self.instance_name = "awx_testing"
else:
self.instance_name = Instance.objects.my_hostname()
self.instance_name = settings.CLUSTER_HOST_ID # Same as Instance.objects.my_hostname() BUT we do not need to import Instance
# metric name, help_text
METRICSLIST = [
SetIntM('callback_receiver_events_queue_size_redis', 'Current number of events in redis queue'),
IntM('callback_receiver_events_popped_redis', 'Number of events popped from redis'),
IntM('callback_receiver_events_in_memory', 'Current number of events in memory (in transfer from redis to db)'),
IntM('callback_receiver_batch_events_errors', 'Number of times batch insertion failed'),
FloatM('callback_receiver_events_insert_db_seconds', 'Total time spent saving events to database'),
IntM('callback_receiver_events_insert_db', 'Number of events batch inserted into database'),
IntM('callback_receiver_events_broadcast', 'Number of events broadcast to other control plane nodes'),
HistogramM(
'callback_receiver_batch_events_insert_db', 'Number of events batch inserted into database', settings.SUBSYSTEM_METRICS_BATCH_INSERT_BUCKETS
),
SetFloatM('callback_receiver_event_processing_avg_seconds', 'Average processing time per event per callback receiver batch'),
FloatM('subsystem_metrics_pipe_execute_seconds', 'Time spent saving metrics to redis'),
IntM('subsystem_metrics_pipe_execute_calls', 'Number of calls to pipe_execute'),
FloatM('subsystem_metrics_send_metrics_seconds', 'Time spent sending metrics to other nodes'),
SetFloatM('task_manager_get_tasks_seconds', 'Time spent in loading tasks from db'),
SetFloatM('task_manager_start_task_seconds', 'Time spent starting task'),
SetFloatM('task_manager_process_running_tasks_seconds', 'Time spent processing running tasks'),
SetFloatM('task_manager_process_pending_tasks_seconds', 'Time spent processing pending tasks'),
SetFloatM('task_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('task_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('task_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('task_manager_tasks_started', 'Number of tasks started'),
SetIntM('task_manager_running_processed', 'Number of running tasks processed'),
SetIntM('task_manager_pending_processed', 'Number of pending tasks processed'),
SetIntM('task_manager_tasks_blocked', 'Number of tasks blocked from running'),
SetFloatM('task_manager_commit_seconds', 'Time spent in db transaction, including on_commit calls'),
SetFloatM('dependency_manager_get_tasks_seconds', 'Time spent loading pending tasks from db'),
SetFloatM('dependency_manager_generate_dependencies_seconds', 'Time spent generating dependencies for pending tasks'),
SetFloatM('dependency_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('dependency_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('dependency_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('dependency_manager_pending_processed', 'Number of pending tasks processed'),
SetFloatM('workflow_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('workflow_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
]
# turn metric list into dictionary with the metric name as a key
self.METRICS = {}
for m in METRICSLIST:
for m in itertools.chain(self.METRICSLIST, self._METRICSLIST):
self.METRICS[m.field] = m
# track last time metrics were sent to other nodes
@@ -230,7 +223,7 @@ class Metrics:
m.reset_value(self.conn)
self.metrics_have_changed = True
self.conn.delete(root_key + "_lock")
for m in self.conn.scan_iter(root_key + '_instance_*'):
for m in self.conn.scan_iter(root_key + '-' + self._namespace + '_instance_*'):
self.conn.delete(m)
def inc(self, field, value):
@@ -297,7 +290,7 @@ class Metrics:
def send_metrics(self):
# more than one thread could be calling this at the same time, so should
# acquire redis lock before sending metrics
lock = self.conn.lock(root_key + '_lock')
lock = self.conn.lock(root_key + '-' + self._namespace + '_lock')
if not lock.acquire(blocking=False):
return
try:
@@ -307,9 +300,10 @@ class Metrics:
payload = {
'instance': self.instance_name,
'metrics': serialized_metrics,
'metrics_namespace': self._namespace,
}
# store the serialized data locally as well, so that load_other_metrics will read it
self.conn.set(root_key + '_instance_' + self.instance_name, serialized_metrics)
self.conn.set(root_key + '-' + self._namespace + '_instance_' + self.instance_name, serialized_metrics)
emit_channel_notification("metrics", payload)
self.previous_send_metrics.set(current_time)
@@ -331,14 +325,14 @@ class Metrics:
instances_filter = request.query_params.getlist("node")
# get a sorted list of instance names
instance_names = [self.instance_name]
for m in self.conn.scan_iter(root_key + '_instance_*'):
for m in self.conn.scan_iter(root_key + '-' + self._namespace + '_instance_*'):
instance_names.append(m.decode('UTF-8').split('_instance_')[1])
instance_names.sort()
# load data, including data from this local instance
instance_data = {}
for instance in instance_names:
if len(instances_filter) == 0 or instance in instances_filter:
instance_data_from_redis = self.conn.get(root_key + '_instance_' + instance)
instance_data_from_redis = self.conn.get(root_key + '-' + self._namespace + '_instance_' + instance)
# data from other instances may not be available. That is OK.
if instance_data_from_redis:
instance_data[instance] = json.loads(instance_data_from_redis.decode('UTF-8'))
@@ -357,6 +351,120 @@ class Metrics:
return output_text
class DispatcherMetrics(Metrics):
METRICSLIST = [
SetFloatM('task_manager_get_tasks_seconds', 'Time spent in loading tasks from db'),
SetFloatM('task_manager_start_task_seconds', 'Time spent starting task'),
SetFloatM('task_manager_process_running_tasks_seconds', 'Time spent processing running tasks'),
SetFloatM('task_manager_process_pending_tasks_seconds', 'Time spent processing pending tasks'),
SetFloatM('task_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('task_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('task_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('task_manager_tasks_started', 'Number of tasks started'),
SetIntM('task_manager_running_processed', 'Number of running tasks processed'),
SetIntM('task_manager_pending_processed', 'Number of pending tasks processed'),
SetIntM('task_manager_tasks_blocked', 'Number of tasks blocked from running'),
SetFloatM('task_manager_commit_seconds', 'Time spent in db transaction, including on_commit calls'),
SetFloatM('dependency_manager_get_tasks_seconds', 'Time spent loading pending tasks from db'),
SetFloatM('dependency_manager_generate_dependencies_seconds', 'Time spent generating dependencies for pending tasks'),
SetFloatM('dependency_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('dependency_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('dependency_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('dependency_manager_pending_processed', 'Number of pending tasks processed'),
SetFloatM('workflow_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('workflow_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
]
def __init__(self, *args, **kwargs):
super().__init__(settings.METRICS_SERVICE_DISPATCHER, *args, **kwargs)
class CallbackReceiverMetrics(Metrics):
METRICSLIST = [
SetIntM('callback_receiver_events_queue_size_redis', 'Current number of events in redis queue'),
IntM('callback_receiver_events_popped_redis', 'Number of events popped from redis'),
IntM('callback_receiver_events_in_memory', 'Current number of events in memory (in transfer from redis to db)'),
IntM('callback_receiver_batch_events_errors', 'Number of times batch insertion failed'),
FloatM('callback_receiver_events_insert_db_seconds', 'Total time spent saving events to database'),
IntM('callback_receiver_events_insert_db', 'Number of events batch inserted into database'),
IntM('callback_receiver_events_broadcast', 'Number of events broadcast to other control plane nodes'),
HistogramM(
'callback_receiver_batch_events_insert_db', 'Number of events batch inserted into database', settings.SUBSYSTEM_METRICS_BATCH_INSERT_BUCKETS
),
SetFloatM('callback_receiver_event_processing_avg_seconds', 'Average processing time per event per callback receiver batch'),
]
def __init__(self, *args, **kwargs):
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, *args, **kwargs)
def metrics(request):
m = Metrics()
return m.generate_metrics(request)
output_text = ''
for m in [DispatcherMetrics(), CallbackReceiverMetrics()]:
output_text += m.generate_metrics(request)
return output_text
class CustomToPrometheusMetricsCollector(prometheus_client.registry.Collector):
"""
Takes the metric data from redis -> our custom metric fields -> prometheus
library metric fields.
The plan is to get rid of the use of redis, our custom metric fields, and
to switch fully to the prometheus library. At that point, this translation
code will be deleted.
"""
def __init__(self, metrics_obj, *args, **kwargs):
super().__init__(*args, **kwargs)
self._metrics = metrics_obj
def collect(self):
my_hostname = settings.CLUSTER_HOST_ID
instance_data = self._metrics.load_other_metrics(Request(HttpRequest()))
if not instance_data:
logger.debug(f"No metric data not found in redis for metric namespace '{self._metrics._namespace}'")
return None
host_metrics = instance_data.get(my_hostname)
for _, metric in self._metrics.METRICS.items():
entry = host_metrics.get(metric.field)
if not entry:
logger.debug(f"{self._metrics._namespace} metric '{metric.field}' not found in redis data payload {json.dumps(instance_data, indent=2)}")
continue
if isinstance(metric, HistogramM):
buckets = list(zip(metric.buckets, entry['counts']))
buckets = [[str(i[0]), str(i[1])] for i in buckets]
yield HistogramMetricFamily(metric.field, metric.help_text, buckets=buckets, sum_value=entry['sum'])
else:
yield GaugeMetricFamily(metric.field, metric.help_text, value=entry)
class CallbackReceiverMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
registry.register(CustomToPrometheusMetricsCollector(CallbackReceiverMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, registry)
class DispatcherMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
registry.register(CustomToPrometheusMetricsCollector(DispatcherMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_DISPATCHER, registry)
class WebsocketsMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
# registry.register()
super().__init__(settings.METRICS_SERVICE_WEBSOCKETS, registry)
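The net effect of the namespace changes is that every Redis key is now scoped per service. A minimal sketch of the resulting key layout, assuming the prefix resolves to `awx_metrics` and the service namespaces to strings like `dispatcher` and `callback_receiver` (both come from settings, not hard-coded values):

```
# Illustrative only; 'awx_metrics' stands in for
# settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX, and the namespace strings for
# settings.METRICS_SERVICE_DISPATCHER / METRICS_SERVICE_CALLBACK_RECEIVER.
root_key = 'awx_metrics'

def instance_key(namespace: str, instance_name: str) -> str:
    return f"{root_key}-{namespace}_instance_{instance_name}"

def lock_key(namespace: str) -> str:
    return f"{root_key}-{namespace}_lock"

print(instance_key('dispatcher', 'awx-1'))   # awx_metrics-dispatcher_instance_awx-1
print(lock_key('callback_receiver'))         # awx_metrics-callback_receiver_lock
```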

View File

@@ -106,7 +106,7 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
if group == "metrics":
message = json.loads(message['text'])
conn = redis.Redis.from_url(settings.BROKER_URL)
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "_instance_" + message['instance'], message['metrics'])
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "-" + message['metrics_namespace'] + "_instance_" + message['instance'], message['metrics'])
else:
await self.channel_layer.group_send(group, message)

View File

@@ -58,7 +58,7 @@ aim_inputs = {
'id': 'object_property',
'label': _('Object Property'),
'type': 'string',
'help_text': _('The property of the object to return. Default: Content Ex: Username, Address, etc.'),
'help_text': _('The property of the object to return. Available properties: Username, Password and Address.'),
},
{
'id': 'reason',
@@ -111,8 +111,12 @@ def aim_backend(**kwargs):
object_property = 'Content'
elif object_property.lower() == 'username':
object_property = 'UserName'
elif object_property.lower() == 'password':
object_property = 'Content'
elif object_property.lower() == 'address':
object_property = 'Address'
elif object_property not in res:
raise KeyError('Property {} not found in object'.format(object_property))
raise KeyError('Property {} not found in object, available properties: Username, Password and Address'.format(object_property))
else:
object_property = object_property.capitalize()

View File

@@ -87,6 +87,20 @@ base_inputs = {
' see https://www.vaultproject.io/docs/auth/kubernetes#configuration'
),
},
{
'id': 'username',
'label': _('Username'),
'type': 'string',
'secret': False,
'help_text': _('Username for user authentication.'),
},
{
'id': 'password',
'label': _('Password'),
'type': 'string',
'secret': True,
'help_text': _('Password for user authentication.'),
},
{
'id': 'default_auth_path',
'label': _('Path to Auth'),
@@ -185,9 +199,10 @@ hashi_ssh_inputs['required'].extend(['public_key', 'role'])
def handle_auth(**kwargs):
token = None
if kwargs.get('token'):
token = kwargs['token']
elif kwargs.get('username') and kwargs.get('password'):
token = method_auth(**kwargs, auth_param=userpass_auth(**kwargs))
elif kwargs.get('role_id') and kwargs.get('secret_id'):
token = method_auth(**kwargs, auth_param=approle_auth(**kwargs))
elif kwargs.get('kubernetes_role'):
@@ -195,11 +210,14 @@ def handle_auth(**kwargs):
elif kwargs.get('client_cert_public') and kwargs.get('client_cert_private'):
token = method_auth(**kwargs, auth_param=client_cert_auth(**kwargs))
else:
raise Exception('Either a token or AppRole, Kubernetes, or TLS authentication parameters must be set')
raise Exception('Token, Username/Password, AppRole, Kubernetes, or TLS authentication parameters must be set')
return token
def userpass_auth(**kwargs):
return {'username': kwargs['username'], 'password': kwargs['password']}
def approle_auth(**kwargs):
return {'role_id': kwargs['role_id'], 'secret_id': kwargs['secret_id']}
@@ -227,11 +245,14 @@ def method_auth(**kwargs):
cacert = kwargs.get('cacert', None)
sess = requests.Session()
sess.mount(url, requests.adapters.HTTPAdapter(max_retries=5))
# Namespace support
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
request_url = '/'.join([url, 'auth', auth_path, 'login']).rstrip('/')
if kwargs['auth_param'].get('username'):
request_url = request_url + '/' + (kwargs['username'])
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
# TLS client certificate support
@@ -263,6 +284,7 @@ def kv_backend(**kwargs):
}
sess = requests.Session()
sess.mount(url, requests.adapters.HTTPAdapter(max_retries=5))
sess.headers['Authorization'] = 'Bearer {}'.format(token)
# Compatibility header for older installs of Hashicorp Vault
sess.headers['X-Vault-Token'] = token
@@ -333,6 +355,7 @@ def ssh_backend(**kwargs):
request_kwargs['json']['valid_principals'] = kwargs['valid_principals']
sess = requests.Session()
sess.mount(url, requests.adapters.HTTPAdapter(max_retries=5))
sess.headers['Authorization'] = 'Bearer {}'.format(token)
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
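Outside of AWX, the userpass flow added here corresponds to a single Vault API call. A minimal sketch, assuming `vault_url` already includes the API version prefix (e.g. `https://vault.example.org/v1`) and the auth method is mounted at `userpass`; both are assumptions of this sketch, not taken from the plugin:

```
import requests

def vault_userpass_login(vault_url, username, password, auth_path='userpass', namespace=None):
    # vault_url is assumed to include the API prefix, e.g. https://vault.example.org/v1
    sess = requests.Session()
    if namespace:
        sess.headers['X-Vault-Namespace'] = namespace
    login_url = '/'.join([vault_url, 'auth', auth_path, 'login', username])
    resp = sess.post(login_url, json={'username': username, 'password': password})
    resp.raise_for_status()
    # Vault returns the client token under auth.client_token
    return resp.json()['auth']['client_token']
```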

View File

@@ -93,6 +93,26 @@ class PubSub(object):
self.conn.close()
def create_listener_connection():
conf = settings.DATABASES['default'].copy()
conf['OPTIONS'] = conf.get('OPTIONS', {}).copy()
# Modify the application name to distinguish from other connections the process might use
conf['OPTIONS']['application_name'] = get_application_name(settings.CLUSTER_HOST_ID, function='listener')
# Apply overrides specifically for the listener connection
for k, v in settings.LISTENER_DATABASES.get('default', {}).items():
conf[k] = v
for k, v in settings.LISTENER_DATABASES.get('default', {}).get('OPTIONS', {}).items():
conf['OPTIONS'][k] = v
# Allow password-less authentication
if 'PASSWORD' in conf:
conf['OPTIONS']['password'] = conf.pop('PASSWORD')
connection_data = f"dbname={conf['NAME']} host={conf['HOST']} user={conf['USER']} port={conf['PORT']}"
return psycopg.connect(connection_data, autocommit=True, **conf['OPTIONS'])
@contextmanager
def pg_bus_conn(new_connection=False, select_timeout=None):
'''
@@ -106,12 +126,7 @@ def pg_bus_conn(new_connection=False, select_timeout=None):
'''
if new_connection:
conf = settings.DATABASES['default'].copy()
conf['OPTIONS'] = conf.get('OPTIONS', {}).copy()
# Modify the application name to distinguish from other connections the process might use
conf['OPTIONS']['application_name'] = get_application_name(settings.CLUSTER_HOST_ID, function='listener')
connection_data = f"dbname={conf['NAME']} host={conf['HOST']} user={conf['USER']} password={conf['PASSWORD']} port={conf['PORT']}"
conn = psycopg.connect(connection_data, autocommit=True, **conf['OPTIONS'])
conn = create_listener_connection()
else:
if pg_connection.connection is None:
pg_connection.connect()
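Because `create_listener_connection()` now layers `LISTENER_DATABASES` on top of `DATABASES['default']`, listener-specific tuning can live entirely in settings. A hypothetical override; the option values are placeholders, and any libpq/psycopg connection option would merge in the same way:

```
# Hypothetical settings snippet; top-level keys merge over DATABASES['default']
# and OPTIONS merge over its OPTIONS, exactly as the loops above do.
LISTENER_DATABASES = {
    'default': {
        'OPTIONS': {
            'keepalives': 1,
            'keepalives_idle': 5,
            'keepalives_interval': 5,
        },
    }
}
```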

View File

@@ -168,7 +168,7 @@ class AWXConsumerPG(AWXConsumerBase):
init_time = time.time()
self.pg_down_time = init_time - self.pg_max_wait # allow no grace period
self.last_cleanup = init_time
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.subsystem_metrics = s_metrics.DispatcherMetrics(auto_pipe_execute=False)
self.last_metrics_gather = init_time
self.listen_cumulative_time = 0.0
if schedule:
@@ -214,7 +214,10 @@ class AWXConsumerPG(AWXConsumerBase):
# bypasses pg_notify for scheduled tasks
self.dispatch_task(body)
self.pg_is_down = False
if self.pg_is_down:
logger.info('Dispatcher listener connection established')
self.pg_is_down = False
self.listen_start = time.time()
return self.scheduler.time_until_next_run()

View File

@@ -72,7 +72,7 @@ class CallbackBrokerWorker(BaseWorker):
def __init__(self):
self.buff = {}
self.redis = redis.Redis.from_url(settings.BROKER_URL)
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.subsystem_metrics = s_metrics.CallbackReceiverMetrics(auto_pipe_execute=False)
self.queue_pop = 0
self.queue_name = settings.CALLBACK_QUEUE
self.prof = AWXProfiler("CallbackBrokerWorker")

View File

@@ -5,6 +5,7 @@
import copy
import json
import re
import sys
import urllib.parse
from jinja2 import sandbox, StrictUndefined
@@ -406,11 +407,13 @@ class SmartFilterField(models.TextField):
# https://docs.python.org/2/library/stdtypes.html#truth-value-testing
if not value:
return None
value = urllib.parse.unquote(value)
try:
SmartFilter().query_from_string(value)
except RuntimeError as e:
raise models.base.ValidationError(e)
# avoid doing too much during migrations
if 'migrate' not in sys.argv:
value = urllib.parse.unquote(value)
try:
SmartFilter().query_from_string(value)
except RuntimeError as e:
raise models.base.ValidationError(e)
return super(SmartFilterField, self).get_prep_value(value)

View File

@@ -0,0 +1,53 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved
from django.core.management.base import BaseCommand
from awx.main.models import Instance, ReceptorAddress
def add_address(**kwargs):
try:
instance = Instance.objects.get(hostname=kwargs.pop('instance'))
kwargs['instance'] = instance
if kwargs.get('canonical') and instance.receptor_addresses.filter(canonical=True).exclude(address=kwargs['address']).exists():
print(f"Instance {instance.hostname} already has a canonical address, skipping")
return False
# if ReceptorAddress already exists with address, just update
# otherwise, create new ReceptorAddress
addr, _ = ReceptorAddress.objects.update_or_create(address=kwargs.pop('address'), defaults=kwargs)
print(f"Successfully added receptor address {addr.get_full_address()}")
return True
except Exception as e:
print(f"Error adding receptor address: {e}")
return False
class Command(BaseCommand):
"""
Internal controller command.
Register receptor address to an already-registered instance.
"""
help = "Add receptor address to an instance."
def add_arguments(self, parser):
parser.add_argument('--instance', dest='instance', required=True, type=str, help="Instance hostname this address is added to")
parser.add_argument('--address', dest='address', required=True, type=str, help="Receptor address")
parser.add_argument('--port', dest='port', type=int, help="Receptor listener port")
parser.add_argument('--websocket_path', dest='websocket_path', type=str, default="", help="Path for websockets")
parser.add_argument('--is_internal', action='store_true', help="If true, address only resolvable within the Kubernetes cluster")
parser.add_argument('--protocol', type=str, default='tcp', choices=['tcp', 'ws', 'wss'], help="Protocol to use for the Receptor listener")
parser.add_argument('--canonical', action='store_true', help="If true, address is the canonical address for the instance")
parser.add_argument('--peers_from_control_nodes', action='store_true', help="If true, control nodes will peer to this address")
def handle(self, **options):
address_options = {
k: options[k]
for k in ('instance', 'address', 'port', 'websocket_path', 'is_internal', 'protocol', 'peers_from_control_nodes', 'canonical')
if options[k]
}
changed = add_address(**address_options)
if changed:
print("(changed: True)")

View File

@@ -55,7 +55,7 @@ class Command(BaseCommand):
capacity = f' capacity={x.capacity}' if x.node_type != 'hop' else ''
version = f" version={x.version or '?'}" if x.node_type != 'hop' else ''
heartbeat = f' heartbeat="{x.last_seen:%Y-%m-%d %H:%M:%S}"' if x.capacity or x.node_type == 'hop' else ''
heartbeat = f' heartbeat="{x.last_seen:%Y-%m-%d %H:%M:%S}"' if x.last_seen else ''
print(f'\t{color}{x.hostname}{capacity} node_type={x.node_type}{version}{heartbeat}{end_color}')
print()

View File

@@ -25,20 +25,17 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--hostname', dest='hostname', type=str, help="Hostname used during provisioning")
parser.add_argument('--listener_port', dest='listener_port', type=int, help="Receptor listener port")
parser.add_argument('--node_type', type=str, default='hybrid', choices=['control', 'execution', 'hop', 'hybrid'], help="Instance Node type")
parser.add_argument('--uuid', type=str, help="Instance UUID")
def _register_hostname(self, hostname, node_type, uuid, listener_port):
def _register_hostname(self, hostname, node_type, uuid):
if not hostname:
if not settings.AWX_AUTO_DEPROVISION_INSTANCES:
raise CommandError('Registering with values from settings only intended for use in K8s installs')
from awx.main.management.commands.register_queue import RegisterQueue
(changed, instance) = Instance.objects.register(
ip_address=os.environ.get('MY_POD_IP'), listener_port=listener_port, node_type='control', node_uuid=settings.SYSTEM_UUID
)
(changed, instance) = Instance.objects.register(ip_address=os.environ.get('MY_POD_IP'), node_type='control', node_uuid=settings.SYSTEM_UUID)
RegisterQueue(settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, 100, 0, [], is_container_group=False).register()
RegisterQueue(
settings.DEFAULT_EXECUTION_QUEUE_NAME,
@@ -51,16 +48,17 @@ class Command(BaseCommand):
max_concurrent_jobs=settings.DEFAULT_EXECUTION_QUEUE_MAX_CONCURRENT_JOBS,
).register()
else:
(changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, node_uuid=uuid, listener_port=listener_port)
(changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, node_uuid=uuid)
if changed:
print("Successfully registered instance {}".format(hostname))
else:
print("Instance already registered {}".format(instance.hostname))
self.changed = changed
@transaction.atomic
def handle(self, **options):
self.changed = False
self._register_hostname(options.get('hostname'), options.get('node_type'), options.get('uuid'), options.get('listener_port'))
self._register_hostname(options.get('hostname'), options.get('node_type'), options.get('uuid'))
if self.changed:
print("(changed: True)")

View File

@@ -1,9 +1,7 @@
import warnings
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
from awx.main.models import Instance, InstanceLink
from awx.main.models import Instance, InstanceLink, ReceptorAddress
class Command(BaseCommand):
@@ -28,7 +26,9 @@ class Command(BaseCommand):
def handle(self, **options):
# provides a mapping of hostname to Instance objects
nodes = Instance.objects.in_bulk(field_name='hostname')
nodes = Instance.objects.all().in_bulk(field_name='hostname')
# provides a mapping of address to ReceptorAddress objects
addresses = ReceptorAddress.objects.all().in_bulk(field_name='address')
if options['source'] not in nodes:
raise CommandError(f"Host {options['source']} is not a registered instance.")
@@ -39,6 +39,14 @@ class Command(BaseCommand):
if options['exact'] is not None and options['disconnect']:
raise CommandError("The option --disconnect may not be used with --exact.")
# make sure each target has a receptor address
peers = options['peers'] or []
disconnect = options['disconnect'] or []
exact = options['exact'] or []
for peer in peers + disconnect + exact:
if peer not in addresses:
raise CommandError(f"Peer {peer} does not have a receptor address.")
# No 1-cycles
for collection in ('peers', 'disconnect', 'exact'):
if options[collection] is not None and options['source'] in options[collection]:
@@ -47,9 +55,12 @@ class Command(BaseCommand):
# No 2-cycles
if options['peers'] or options['exact'] is not None:
peers = set(options['peers'] or options['exact'])
incoming = set(InstanceLink.objects.filter(target=nodes[options['source']]).values_list('source__hostname', flat=True))
if options['source'] in addresses:
incoming = set(InstanceLink.objects.filter(target=addresses[options['source']]).values_list('source__hostname', flat=True))
else:
incoming = set()
if peers & incoming:
warnings.warn(f"Source node {options['source']} should not link to nodes already peering to it: {peers & incoming}.")
raise CommandError(f"Source node {options['source']} should not link to nodes already peering to it: {peers & incoming}.")
if options['peers']:
missing_peers = set(options['peers']) - set(nodes)
@@ -60,7 +71,7 @@ class Command(BaseCommand):
results = 0
for target in options['peers']:
_, created = InstanceLink.objects.update_or_create(
source=nodes[options['source']], target=nodes[target], defaults={'link_state': InstanceLink.States.ESTABLISHED}
source=nodes[options['source']], target=addresses[target], defaults={'link_state': InstanceLink.States.ESTABLISHED}
)
if created:
results += 1
@@ -70,9 +81,9 @@ class Command(BaseCommand):
if options['disconnect']:
results = 0
for target in options['disconnect']:
if target not in nodes: # Be permissive, the node might have already been de-registered.
if target not in addresses: # Be permissive, the node might have already been de-registered.
continue
n, _ = InstanceLink.objects.filter(source=nodes[options['source']], target=nodes[target]).delete()
n, _ = InstanceLink.objects.filter(source=nodes[options['source']], target=addresses[target]).delete()
results += n
print(f"{results} peer links removed from the database.")
@@ -81,11 +92,11 @@ class Command(BaseCommand):
additions = 0
with transaction.atomic():
peers = set(options['exact'])
links = set(InstanceLink.objects.filter(source=nodes[options['source']]).values_list('target__hostname', flat=True))
removals, _ = InstanceLink.objects.filter(source=nodes[options['source']], target__hostname__in=links - peers).delete()
links = set(InstanceLink.objects.filter(source=nodes[options['source']]).values_list('target__address', flat=True))
removals, _ = InstanceLink.objects.filter(source=nodes[options['source']], target__instance__hostname__in=links - peers).delete()
for target in peers - links:
_, created = InstanceLink.objects.update_or_create(
source=nodes[options['source']], target=nodes[target], defaults={'link_state': InstanceLink.States.ESTABLISHED}
source=nodes[options['source']], target=addresses[target], defaults={'link_state': InstanceLink.States.ESTABLISHED}
)
if created:
additions += 1

View File

@@ -0,0 +1,26 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved
from django.core.management.base import BaseCommand
from awx.main.models import ReceptorAddress
class Command(BaseCommand):
"""
Internal controller command.
Delete a receptor address.
"""
help = "Add receptor address to an instance."
def add_arguments(self, parser):
parser.add_argument('--address', dest='address', type=str, help="Receptor address to remove")
def handle(self, **options):
deleted = ReceptorAddress.objects.filter(address=options['address']).delete()
if deleted[0]:
print(f"Successfully removed {options['address']}")
print("(changed: True)")
else:
print(f"Did not remove {options['address']}, not found")

View File

@@ -3,6 +3,7 @@
from django.conf import settings
from django.core.management.base import BaseCommand
from awx.main.analytics.subsystem_metrics import CallbackReceiverMetricsServer
from awx.main.dispatch.control import Control
from awx.main.dispatch.worker import AWXConsumerRedis, CallbackBrokerWorker
@@ -25,6 +26,9 @@ class Command(BaseCommand):
print(Control('callback_receiver').status())
return
consumer = None
CallbackReceiverMetricsServer().start()
try:
consumer = AWXConsumerRedis(
'callback_receiver',

View File

@@ -10,6 +10,7 @@ from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.control import Control
from awx.main.dispatch.pool import AutoscalePool
from awx.main.dispatch.worker import AWXConsumerPG, TaskWorker
from awx.main.analytics.subsystem_metrics import DispatcherMetricsServer
logger = logging.getLogger('awx.main.dispatch')
@@ -62,6 +63,8 @@ class Command(BaseCommand):
consumer = None
DispatcherMetricsServer().start()
try:
queues = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
consumer = AWXConsumerPG('dispatcher', TaskWorker(), queues, AutoscalePool(min_workers=4), schedule=settings.CELERYBEAT_SCHEDULE)

View File

@@ -16,6 +16,7 @@ from awx.main.analytics.broadcast_websocket import (
RelayWebsocketStatsManager,
safe_name,
)
from awx.main.analytics.subsystem_metrics import WebsocketsMetricsServer
from awx.main.wsrelay import WebSocketRelayManager
@@ -91,6 +92,8 @@ class Command(BaseCommand):
return host_stats
def handle(self, *arg, **options):
WebsocketsMetricsServer().start()
# it's necessary to delay this import in case
# database migrations are still running
from awx.main.models.ha import Instance

View File

@@ -115,7 +115,14 @@ class InstanceManager(models.Manager):
return node[0]
raise RuntimeError("No instance found with the current cluster host id")
def register(self, node_uuid=None, hostname=None, ip_address="", listener_port=None, node_type='hybrid', defaults=None):
def register(
self,
node_uuid=None,
hostname=None,
ip_address="",
node_type='hybrid',
defaults=None,
):
if not hostname:
hostname = settings.CLUSTER_HOST_ID
@@ -161,9 +168,6 @@ class InstanceManager(models.Manager):
if instance.node_type != node_type:
instance.node_type = node_type
update_fields.append('node_type')
if instance.listener_port != listener_port:
instance.listener_port = listener_port
update_fields.append('listener_port')
if update_fields:
instance.save(update_fields=update_fields)
return (True, instance)
@@ -174,11 +178,13 @@ class InstanceManager(models.Manager):
create_defaults = {
'node_state': Instance.States.INSTALLED,
'capacity': 0,
'managed': True,
}
if defaults is not None:
create_defaults.update(defaults)
uuid_option = {'uuid': node_uuid if node_uuid is not None else uuid.uuid4()}
if node_type == 'execution' and 'version' not in create_defaults:
create_defaults['version'] = RECEPTOR_PENDING
instance = self.create(hostname=hostname, ip_address=ip_address, listener_port=listener_port, node_type=node_type, **create_defaults, **uuid_option)
instance = self.create(hostname=hostname, ip_address=ip_address, node_type=node_type, **create_defaults, **uuid_option)
return (True, instance)

View File

@@ -0,0 +1,52 @@
# Generated by Django 4.2.6 on 2023-11-16 21:00
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0187_hop_nodes'),
]
operations = [
migrations.AlterField(
model_name='job',
name='webhook_service',
field=models.CharField(
blank=True,
choices=[('github', 'GitHub'), ('gitlab', 'GitLab'), ('bitbucket_dc', 'BitBucket DataCenter')],
help_text='Service that webhook requests will be accepted from',
max_length=16,
),
),
migrations.AlterField(
model_name='jobtemplate',
name='webhook_service',
field=models.CharField(
blank=True,
choices=[('github', 'GitHub'), ('gitlab', 'GitLab'), ('bitbucket_dc', 'BitBucket DataCenter')],
help_text='Service that webhook requests will be accepted from',
max_length=16,
),
),
migrations.AlterField(
model_name='workflowjob',
name='webhook_service',
field=models.CharField(
blank=True,
choices=[('github', 'GitHub'), ('gitlab', 'GitLab'), ('bitbucket_dc', 'BitBucket DataCenter')],
help_text='Service that webhook requests will be accepted from',
max_length=16,
),
),
migrations.AlterField(
model_name='workflowjobtemplate',
name='webhook_service',
field=models.CharField(
blank=True,
choices=[('github', 'GitHub'), ('gitlab', 'GitLab'), ('bitbucket_dc', 'BitBucket DataCenter')],
help_text='Service that webhook requests will be accepted from',
max_length=16,
),
),
]

View File

@@ -0,0 +1,150 @@
# Generated by Django 4.2.6 on 2024-01-19 19:24
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
def create_receptor_addresses(apps, schema_editor):
"""
If listener_port was defined on an instance, create a receptor address for it
"""
Instance = apps.get_model('main', 'Instance')
ReceptorAddress = apps.get_model('main', 'ReceptorAddress')
for instance in Instance.objects.exclude(listener_port=None):
ReceptorAddress.objects.create(
instance=instance,
address=instance.hostname,
port=instance.listener_port,
peers_from_control_nodes=instance.peers_from_control_nodes,
protocol='tcp',
is_internal=False,
canonical=True,
)
def link_to_receptor_addresses(apps, schema_editor):
"""
Modify each InstanceLink to point to the newly created
ReceptorAddresses, using the new target field
"""
InstanceLink = apps.get_model('main', 'InstanceLink')
for link in InstanceLink.objects.all():
link.target = link.target_old.receptor_addresses.get()
link.save()
class Migration(migrations.Migration):
dependencies = [
('main', '0188_add_bitbucket_dc_webhook'),
]
operations = [
migrations.CreateModel(
name='ReceptorAddress',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('address', models.CharField(help_text='Routable address for this instance.', max_length=255)),
(
'port',
models.IntegerField(
default=27199,
help_text='Port for the address.',
validators=[django.core.validators.MinValueValidator(0), django.core.validators.MaxValueValidator(65535)],
),
),
('websocket_path', models.CharField(blank=True, default='', help_text='Websocket path.', max_length=255)),
(
'protocol',
models.CharField(
choices=[('tcp', 'TCP'), ('ws', 'WS'), ('wss', 'WSS')],
default='tcp',
help_text="Protocol to use for the Receptor listener, 'tcp', 'wss', or 'ws'.",
max_length=10,
),
),
('is_internal', models.BooleanField(default=False, help_text='If True, only routable within the Kubernetes cluster.')),
('canonical', models.BooleanField(default=False, help_text='If True, this address is the canonical address for the instance.')),
(
'peers_from_control_nodes',
models.BooleanField(default=False, help_text='If True, control plane cluster nodes should automatically peer to it.'),
),
],
),
migrations.RemoveConstraint(
model_name='instancelink',
name='source_and_target_can_not_be_equal',
),
migrations.RenameField(
model_name='instancelink',
old_name='target',
new_name='target_old',
),
migrations.AlterUniqueTogether(
name='instancelink',
unique_together=set(),
),
migrations.AddField(
model_name='instance',
name='managed',
field=models.BooleanField(default=False, editable=False, help_text='If True, this instance is managed by the control plane.'),
),
migrations.AlterField(
model_name='instancelink',
name='source',
field=models.ForeignKey(help_text='The source instance of this peer link.', on_delete=django.db.models.deletion.CASCADE, to='main.instance'),
),
migrations.AddField(
model_name='receptoraddress',
name='instance',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='receptor_addresses', to='main.instance'),
),
migrations.AddField(
model_name='activitystream',
name='receptor_address',
field=models.ManyToManyField(blank=True, to='main.receptoraddress'),
),
migrations.AddConstraint(
model_name='receptoraddress',
constraint=models.UniqueConstraint(fields=('address',), name='unique_receptor_address', violation_error_message='Receptor address must be unique.'),
),
migrations.AddField(
model_name='instancelink',
name='target',
field=models.ForeignKey(
help_text='The target receptor address of this peer link.', null=True, on_delete=django.db.models.deletion.CASCADE, to='main.receptoraddress'
),
),
migrations.RunPython(create_receptor_addresses),
migrations.RunPython(link_to_receptor_addresses),
migrations.RemoveField(
model_name='instance',
name='peers_from_control_nodes',
),
migrations.RemoveField(
model_name='instance',
name='listener_port',
),
migrations.RemoveField(
model_name='instancelink',
name='target_old',
),
migrations.AlterField(
model_name='instance',
name='peers',
field=models.ManyToManyField(related_name='peers_from', through='main.InstanceLink', to='main.receptoraddress'),
),
migrations.AlterField(
model_name='instancelink',
name='target',
field=models.ForeignKey(
help_text='The target receptor address of this peer link.', on_delete=django.db.models.deletion.CASCADE, to='main.receptoraddress'
),
),
migrations.AddConstraint(
model_name='instancelink',
constraint=models.UniqueConstraint(
fields=('source', 'target'), name='unique_source_target', violation_error_message='Field source and target must be unique together.'
),
),
]

View File

@@ -76,7 +76,7 @@ class azure_rm(PluginFileInjector):
user_filters = []
old_filterables = [
('resource_groups', 'resource_group'),
('tags', 'tags')
('tags', 'tags'),
# locations / location would be an entry
# but this would conflict with source_regions
]

View File

@@ -6,7 +6,7 @@ from django.conf import settings # noqa
from django.db import connection
from django.db.models.signals import pre_delete # noqa
from ansible_base.utils.models import prevent_search
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.main.models.base import BaseModel, PrimordialModel, accepts_json, CLOUD_INVENTORY_SOURCES, VERBOSITY_CHOICES # noqa
@@ -14,6 +14,7 @@ from awx.main.models.unified_jobs import UnifiedJob, UnifiedJobTemplate, StdoutM
from awx.main.models.organization import Organization, Profile, Team, UserSessionMembership # noqa
from awx.main.models.credential import Credential, CredentialType, CredentialInputSource, ManagedCredentialType, build_safe_env # noqa
from awx.main.models.projects import Project, ProjectUpdate # noqa
from awx.main.models.receptor_address import ReceptorAddress # noqa
from awx.main.models.inventory import ( # noqa
CustomInventoryScript,
Group,

View File

@@ -77,6 +77,7 @@ class ActivityStream(models.Model):
notification_template = models.ManyToManyField("NotificationTemplate", blank=True)
notification = models.ManyToManyField("Notification", blank=True)
label = models.ManyToManyField("Label", blank=True)
receptor_address = models.ManyToManyField("ReceptorAddress", blank=True)
role = models.ManyToManyField("Role", blank=True)
instance = models.ManyToManyField("Instance", blank=True)
instance_group = models.ManyToManyField("InstanceGroup", blank=True)

View File

@@ -12,7 +12,7 @@ from django.utils.text import Truncator
from django.utils.translation import gettext_lazy as _
from django.core.exceptions import ValidationError
from ansible_base.utils.models import prevent_search
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.api.versioning import reverse

View File

@@ -953,6 +953,25 @@ ManagedCredentialType(
},
)
ManagedCredentialType(
namespace='bitbucket_dc_token',
kind='token',
name=gettext_noop('Bitbucket Data Center HTTP Access Token'),
managed=True,
inputs={
'fields': [
{
'id': 'token',
'label': gettext_noop('Token'),
'type': 'string',
'secret': True,
'help_text': gettext_noop('This token needs to come from your user settings in Bitbucket'),
}
],
'required': ['token'],
},
)
ManagedCredentialType(
namespace='insights',
kind='insights',
@@ -1197,6 +1216,26 @@ ManagedCredentialType(
},
)
ManagedCredentialType(
namespace='terraform',
kind='cloud',
name=gettext_noop('Terraform backend configuration'),
managed=True,
inputs={
'fields': [
{
'id': 'configuration',
'label': gettext_noop('Backend configuration'),
'type': 'string',
'secret': True,
'multiline': True,
'help_text': gettext_noop('Terraform backend config as Hashicorp configuration language.'),
},
],
'required': ['configuration'],
},
)
class CredentialInputSource(PrimordialModel):
class Meta:

View File

@@ -122,3 +122,11 @@ def kubernetes_bearer_token(cred, env, private_data_dir):
env['K8S_AUTH_SSL_CA_CERT'] = to_container_path(path, private_data_dir)
else:
env['K8S_AUTH_VERIFY_SSL'] = 'False'
def terraform(cred, env, private_data_dir):
handle, path = tempfile.mkstemp(dir=os.path.join(private_data_dir, 'env'))
with os.fdopen(handle, 'w') as f:
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
f.write(cred.get_input('configuration'))
env['TF_BACKEND_CONFIG_FILE'] = to_container_path(path, private_data_dir)

View File

@@ -124,8 +124,6 @@ class BasePlaybookEvent(CreatedModifiedModel):
'parent_uuid',
'start_line',
'end_line',
'host_id',
'host_name',
'verbosity',
]
WRAPUP_EVENT = 'playbook_on_stats'
@@ -473,7 +471,7 @@ class JobEvent(BasePlaybookEvent):
An event/message logged from the callback when running a job.
"""
VALID_KEYS = BasePlaybookEvent.VALID_KEYS + ['job_id', 'workflow_job_id', 'job_created']
VALID_KEYS = BasePlaybookEvent.VALID_KEYS + ['job_id', 'workflow_job_id', 'job_created', 'host_id', 'host_name']
JOB_REFERENCE = 'job_id'
objects = DeferJobCreatedManager()

View File

@@ -5,7 +5,7 @@ from decimal import Decimal
import logging
import os
from django.core.validators import MinValueValidator, MaxValueValidator
from django.core.validators import MinValueValidator
from django.db import models, connection
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
@@ -17,7 +17,7 @@ from django.db.models import Sum, Q
import redis
from solo.models import SingletonModel
from ansible_base.utils.models import prevent_search
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx import __version__ as awx_application_version
@@ -34,6 +34,7 @@ from awx.main.models.rbac import (
from awx.main.models.unified_jobs import UnifiedJob
from awx.main.utils.common import get_corrected_cpu, get_cpu_effective_capacity, get_corrected_memory, get_mem_effective_capacity
from awx.main.models.mixins import RelatedJobsMixin, ResourceMixin
from awx.main.models.receptor_address import ReceptorAddress
# ansible-runner
from ansible_runner.utils.capacity import get_cpu_count, get_mem_in_bytes
@@ -64,8 +65,19 @@ class HasPolicyEditsMixin(HasEditsMixin):
class InstanceLink(BaseModel):
source = models.ForeignKey('Instance', on_delete=models.CASCADE, related_name='+')
target = models.ForeignKey('Instance', on_delete=models.CASCADE, related_name='reverse_peers')
class Meta:
ordering = ("id",)
# add constraint for source and target to be unique together
constraints = [
models.UniqueConstraint(
fields=["source", "target"],
name="unique_source_target",
violation_error_message=_("Field source and target must be unique together."),
)
]
source = models.ForeignKey('Instance', on_delete=models.CASCADE, help_text=_("The source instance of this peer link."))
target = models.ForeignKey('ReceptorAddress', on_delete=models.CASCADE, help_text=_("The target receptor address of this peer link."))
class States(models.TextChoices):
ADDING = 'adding', _('Adding')
@@ -76,11 +88,6 @@ class InstanceLink(BaseModel):
choices=States.choices, default=States.ADDING, max_length=16, help_text=_("Indicates the current life cycle stage of this peer link.")
)
class Meta:
unique_together = ('source', 'target')
ordering = ("id",)
constraints = [models.CheckConstraint(check=~models.Q(source=models.F('target')), name='source_and_target_can_not_be_equal')]
class Instance(HasPolicyEditsMixin, BaseModel):
"""A model representing an AWX instance running against this database."""
@@ -110,6 +117,7 @@ class Instance(HasPolicyEditsMixin, BaseModel):
default="",
max_length=50,
)
# Auto-fields, implementation is different from BaseModel
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
@@ -185,16 +193,9 @@ class Instance(HasPolicyEditsMixin, BaseModel):
node_state = models.CharField(
choices=States.choices, default=States.READY, max_length=16, help_text=_("Indicates the current life cycle stage of this instance.")
)
listener_port = models.PositiveIntegerField(
blank=True,
null=True,
default=None,
validators=[MinValueValidator(1024), MaxValueValidator(65535)],
help_text=_("Port that Receptor will listen for incoming connections on."),
)
peers = models.ManyToManyField('self', symmetrical=False, through=InstanceLink, through_fields=('source', 'target'), related_name='peers_from')
peers_from_control_nodes = models.BooleanField(default=False, help_text=_("If True, control plane cluster nodes should automatically peer to it."))
managed = models.BooleanField(help_text=_("If True, this instance is managed by the control plane."), default=False, editable=False)
peers = models.ManyToManyField('ReceptorAddress', through=InstanceLink, through_fields=('source', 'target'), related_name='peers_from')
POLICY_FIELDS = frozenset(('managed_by_policy', 'hostname', 'capacity_adjustment'))
@@ -241,6 +242,26 @@ class Instance(HasPolicyEditsMixin, BaseModel):
return True
return self.health_check_started > self.last_health_check
@property
def canonical_address(self):
return self.receptor_addresses.filter(canonical=True).first()
@property
def canonical_address_port(self):
# note: don't create a different query for receptor addresses, as this is prefetched on the View for optimization
for addr in self.receptor_addresses.all():
if addr.canonical:
return addr.port
return None
@property
def canonical_address_peers_from_control_nodes(self):
# note: don't create a different query for receptor addresses, as this is prefetched on the View for optimization
for addr in self.receptor_addresses.all():
if addr.canonical:
return addr.peers_from_control_nodes
return False
def get_cleanup_task_kwargs(self, **kwargs):
"""
Produce options to use for the command: ansible-runner worker cleanup
@@ -501,6 +522,35 @@ def schedule_write_receptor_config(broadcast=True):
write_receptor_config() # just run locally
@receiver(post_save, sender=ReceptorAddress)
def receptor_address_saved(sender, instance, **kwargs):
from awx.main.signals import disable_activity_stream
address = instance
control_instances = set(Instance.objects.filter(node_type__in=[Instance.Types.CONTROL, Instance.Types.HYBRID]))
if address.peers_from_control_nodes:
# if control_instances is not a subset of current peers of address, then
# that means we need to add some InstanceLinks
if not control_instances <= set(address.peers_from.all()):
with disable_activity_stream():
for control_instance in control_instances:
InstanceLink.objects.update_or_create(source=control_instance, target=address)
schedule_write_receptor_config()
else:
if address.peers_from.exists():
with disable_activity_stream():
address.peers_from.remove(*control_instances)
schedule_write_receptor_config()
@receiver(post_delete, sender=ReceptorAddress)
def receptor_address_deleted(sender, instance, **kwargs):
address = instance
if address.peers_from_control_nodes:
schedule_write_receptor_config()
@receiver(post_save, sender=Instance)
def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
'''
@@ -511,11 +561,14 @@ def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
2. a node changes its value of peers_from_control_nodes
3. a new control node comes online and has instances to peer to
'''
from awx.main.signals import disable_activity_stream
if created and settings.IS_K8S and instance.node_type in [Instance.Types.CONTROL, Instance.Types.HYBRID]:
inst = Instance.objects.filter(peers_from_control_nodes=True)
if set(instance.peers.all()) != set(inst):
instance.peers.set(inst)
schedule_write_receptor_config(broadcast=False)
peers_addresses = ReceptorAddress.objects.filter(peers_from_control_nodes=True)
if peers_addresses.exists():
with disable_activity_stream():
instance.peers.add(*peers_addresses)
schedule_write_receptor_config(broadcast=False)
if settings.IS_K8S and instance.node_type in [Instance.Types.HOP, Instance.Types.EXECUTION]:
if instance.node_state == Instance.States.DEPROVISIONING:
@@ -524,16 +577,6 @@ def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
# wait for jobs on the node to complete, then delete the
# node and kick off write_receptor_config
connection.on_commit(lambda: remove_deprovisioned_node.apply_async([instance.hostname]))
else:
control_instances = set(Instance.objects.filter(node_type__in=[Instance.Types.CONTROL, Instance.Types.HYBRID]))
if instance.peers_from_control_nodes:
if (control_instances & set(instance.peers_from.all())) != set(control_instances):
instance.peers_from.add(*control_instances)
schedule_write_receptor_config() # keep method separate to make pytest mocking easier
else:
if set(control_instances) & set(instance.peers_from.all()):
instance.peers_from.remove(*control_instances)
schedule_write_receptor_config()
if created or instance.has_policy_changes():
schedule_policy_task()
@@ -548,8 +591,6 @@ def on_instance_group_deleted(sender, instance, using, **kwargs):
@receiver(post_delete, sender=Instance)
def on_instance_deleted(sender, instance, using, **kwargs):
schedule_policy_task()
if settings.IS_K8S and instance.node_type in (Instance.Types.EXECUTION, Instance.Types.HOP) and instance.peers_from_control_nodes:
schedule_write_receptor_config()
class UnifiedJobTemplateInstanceGroupMembership(models.Model):
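
The `receptor_address_saved` and `on_instance_saved` handlers above reduce to a set comparison: every control or hybrid node should hold an InstanceLink to each ReceptorAddress with `peers_from_control_nodes` enabled, and links to control nodes are dropped when it is disabled. A framework-free sketch of that decision (names and structures are illustrative, not the real models):

```
# Illustrative only: the subset checks behind the signal handlers above.
def links_to_add(control_nodes, current_peers, peers_from_control_nodes):
    """Control nodes that still need an InstanceLink to this address."""
    if peers_from_control_nodes:
        return set(control_nodes) - set(current_peers)
    return set()

def links_to_remove(control_nodes, current_peers, peers_from_control_nodes):
    """Control nodes whose InstanceLink to this address should be dropped."""
    if not peers_from_control_nodes:
        return set(control_nodes) & set(current_peers)
    return set()

control = {'control-1', 'control-2'}
print(links_to_add(control, {'control-1'}, True))      # {'control-2'}
print(links_to_remove(control, {'control-1'}, False))  # {'control-1'}
```

Whenever links are added or removed this way, the handlers also schedule `write_receptor_config()` so the on-disk receptor configuration is regenerated.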

View File

@@ -25,7 +25,7 @@ from django.db.models import Q
# REST Framework
from rest_framework.exceptions import ParseError
from ansible_base.utils.models import prevent_search
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.api.versioning import reverse

View File

@@ -20,7 +20,7 @@ from django.core.exceptions import FieldDoesNotExist
# REST Framework
from rest_framework.exceptions import ParseError
from ansible_base.utils.models import prevent_search
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.api.versioning import reverse

View File

@@ -16,7 +16,7 @@ from django.db.models.query import QuerySet
from django.utils.crypto import get_random_string
from django.utils.translation import gettext_lazy as _
from ansible_base.utils.models import prevent_search
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.main.models.rbac import Role, RoleAncestorEntry
@@ -527,7 +527,6 @@ class CustomVirtualEnvMixin(models.Model):
class RelatedJobsMixin(object):
"""
This method is intended to be overwritten.
Called by get_active_jobs()
@@ -562,6 +561,7 @@ class WebhookTemplateMixin(models.Model):
SERVICES = [
('github', "GitHub"),
('gitlab', "GitLab"),
('bitbucket_dc', "BitBucket DataCenter"),
]
webhook_service = models.CharField(max_length=16, choices=SERVICES, blank=True, help_text=_('Service that webhook requests will be accepted from'))
@@ -622,6 +622,7 @@ class WebhookMixin(models.Model):
service_header = {
'github': ('Authorization', 'token {}'),
'gitlab': ('PRIVATE-TOKEN', '{}'),
'bitbucket_dc': ('Authorization', 'Bearer {}'),
}
service_statuses = {
'github': {
@@ -639,6 +640,14 @@ class WebhookMixin(models.Model):
'error': 'failed', # GitLab doesn't have an 'error' status distinct from 'failed' :(
'canceled': 'canceled',
},
'bitbucket_dc': {
'pending': 'INPROGRESS', # Bitbucket DC doesn't have any other statuses distinct from INPROGRESS, SUCCESSFUL, FAILED :(
'running': 'INPROGRESS',
'successful': 'SUCCESSFUL',
'failed': 'FAILED',
'error': 'FAILED',
'canceled': 'FAILED',
},
}
statuses = service_statuses[self.webhook_service]
@@ -647,11 +656,18 @@ class WebhookMixin(models.Model):
return
try:
license_type = get_licenser().validate().get('license_type')
data = {
'state': statuses[status],
'context': 'ansible/awx' if license_type == 'open' else 'ansible/tower',
'target_url': self.get_ui_url(),
}
if self.webhook_service == 'bitbucket_dc':
data = {
'state': statuses[status],
'key': 'ansible/awx' if license_type == 'open' else 'ansible/tower',
'url': self.get_ui_url(),
}
else:
data = {
'state': statuses[status],
'context': 'ansible/awx' if license_type == 'open' else 'ansible/tower',
'target_url': self.get_ui_url(),
}
k, v = service_header[self.webhook_service]
headers = {k: v.format(self.webhook_credential.get_input('token')), 'Content-Type': 'application/json'}
response = requests.post(status_api, data=json.dumps(data), headers=headers, timeout=30)
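
The branch added above exists because Bitbucket Data Center's build-status payload uses `key`/`url` where GitHub and GitLab use `context`/`target_url`. A small sketch of the resulting payloads (values are illustrative; the real code also maps AWX job statuses onto each service's status vocabulary, as in the `service_statuses` table above):

```
import json

# Illustrative payload builder mirroring the branch above.
def build_status_payload(service, state, name, url):
    if service == 'bitbucket_dc':
        return {'state': state, 'key': name, 'url': url}
    return {'state': state, 'context': name, 'target_url': url}

print(json.dumps(build_status_payload('bitbucket_dc', 'SUCCESSFUL', 'ansible/awx', 'https://awx.example.com/#/jobs/42')))
print(json.dumps(build_status_payload('github', 'success', 'ansible/awx', 'https://awx.example.com/#/jobs/42')))
```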

View File

@@ -15,7 +15,7 @@ from django.utils.encoding import smart_str, force_str
from jinja2 import sandbox, ChainableUndefined
from jinja2.exceptions import TemplateSyntaxError, UndefinedError, SecurityError
from ansible_base.utils.models import prevent_search
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.api.versioning import reverse

View File

@@ -0,0 +1,67 @@
from django.db import models
from django.core.validators import MinValueValidator, MaxValueValidator
from django.utils.translation import gettext_lazy as _
from awx.api.versioning import reverse
class Protocols(models.TextChoices):
TCP = 'tcp', 'TCP'
WS = 'ws', 'WS'
WSS = 'wss', 'WSS'
class ReceptorAddress(models.Model):
class Meta:
app_label = 'main'
constraints = [
models.UniqueConstraint(
fields=["address"],
name="unique_receptor_address",
violation_error_message=_("Receptor address must be unique."),
)
]
address = models.CharField(help_text=_("Routable address for this instance."), max_length=255)
port = models.IntegerField(help_text=_("Port for the address."), default=27199, validators=[MinValueValidator(0), MaxValueValidator(65535)])
websocket_path = models.CharField(help_text=_("Websocket path."), max_length=255, default="", blank=True)
protocol = models.CharField(
help_text=_("Protocol to use for the Receptor listener, 'tcp', 'wss', or 'ws'."), max_length=10, default=Protocols.TCP, choices=Protocols.choices
)
is_internal = models.BooleanField(help_text=_("If True, only routable within the Kubernetes cluster."), default=False)
canonical = models.BooleanField(help_text=_("If True, this address is the canonical address for the instance."), default=False)
peers_from_control_nodes = models.BooleanField(help_text=_("If True, control plane cluster nodes should automatically peer to it."), default=False)
instance = models.ForeignKey(
'Instance',
related_name='receptor_addresses',
on_delete=models.CASCADE,
null=False,
)
def __str__(self):
return self.get_full_address()
def get_full_address(self):
scheme = ""
path = ""
port = ""
if self.protocol == "ws":
scheme = "wss://"
if self.protocol == "ws" and self.websocket_path:
path = f"/{self.websocket_path}"
if self.port:
port = f":{self.port}"
return f"{scheme}{self.address}{port}{path}"
def get_peer_type(self):
if self.protocol == 'tcp':
return 'tcp-peer'
elif self.protocol in ['ws', 'wss']:
return 'ws-peer'
else:
return None
def get_absolute_url(self, request=None):
return reverse('api:receptor_address_detail', kwargs={'pk': self.pk}, request=request)

View File

@@ -30,7 +30,7 @@ from rest_framework.exceptions import ParseError
# Django-Polymorphic
from polymorphic.models import PolymorphicModel
from ansible_base.utils.models import prevent_search, get_type_for_model
from ansible_base.lib.utils.models import prevent_search, get_type_for_model
# AWX
from awx.main.models.base import CommonModelNameNotUnique, PasswordFieldsModel, NotificationFieldsModel

View File

@@ -23,7 +23,7 @@ from crum import get_current_user
from jinja2 import sandbox
from jinja2.exceptions import TemplateSyntaxError, UndefinedError, SecurityError
from ansible_base.utils.models import prevent_search
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.api.versioning import reverse

View File

@@ -4,9 +4,10 @@ import logging
from django.conf import settings
from django.urls import re_path
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from ansible_base.lib.channels.middleware import DrfAuthMiddlewareStack
from . import consumers
@@ -27,12 +28,13 @@ class AWXProtocolTypeRouter(ProtocolTypeRouter):
websocket_urlpatterns = [
re_path(r'api/websocket/$', consumers.EventConsumer.as_asgi()),
re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),
re_path(r'websocket/relay/$', consumers.RelayConsumer.as_asgi()),
]
application = AWXProtocolTypeRouter(
{
'websocket': AuthMiddlewareStack(URLRouter(websocket_urlpatterns)),
'websocket': DrfAuthMiddlewareStack(URLRouter(websocket_urlpatterns)),
}
)

View File

@@ -17,7 +17,7 @@ from django.utils.timezone import now as tz_now
from django.conf import settings
from django.contrib.contenttypes.models import ContentType
from ansible_base.utils.models import get_type_for_model
from ansible_base.lib.utils.models import get_type_for_model
# AWX
from awx.main.dispatch.reaper import reap_job
@@ -68,7 +68,7 @@ class TaskBase:
# initialize each metric to 0 and force metric_has_changed to true. This
# ensures each task manager metric will be overridden when pipe_execute
# is called later.
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.subsystem_metrics = s_metrics.DispatcherMetrics(auto_pipe_execute=False)
self.start_time = time.time()
# We want to avoid calling settings in loops, so cache these settings at init time
@@ -105,7 +105,7 @@ class TaskBase:
try:
# increment task_manager_schedule_calls regardless if the other
# metrics are recorded
s_metrics.Metrics(auto_pipe_execute=True).inc(f"{self.prefix}__schedule_calls", 1)
s_metrics.DispatcherMetrics(auto_pipe_execute=True).inc(f"{self.prefix}__schedule_calls", 1)
# Only record metrics if the last time recording was more
# than SUBSYSTEM_METRICS_TASK_MANAGER_RECORD_INTERVAL ago.
# Prevents a short-duration task manager that runs directly after a

View File

@@ -95,17 +95,17 @@ class RunnerCallback:
if self.parent_workflow_job_id:
event_data['workflow_job_id'] = self.parent_workflow_job_id
event_data['job_created'] = self.job_created
if self.host_map:
host = event_data.get('event_data', {}).get('host', '').strip()
if host:
event_data['host_name'] = host
if host in self.host_map:
event_data['host_id'] = self.host_map[host]
else:
event_data['host_name'] = ''
event_data['host_id'] = ''
if event_data.get('event') == 'playbook_on_stats':
event_data['host_map'] = self.host_map
host = event_data.get('event_data', {}).get('host', '').strip()
if host:
event_data['host_name'] = host
if host in self.host_map:
event_data['host_id'] = self.host_map[host]
else:
event_data['host_name'] = ''
event_data['host_id'] = ''
if event_data.get('event') == 'playbook_on_stats':
event_data['host_map'] = self.host_map
if isinstance(self, RunnerCallbackForProjectUpdate):
# need a better way to have this check.

View File

@@ -27,7 +27,7 @@ from awx.main.utils.common import (
)
from awx.main.constants import MAX_ISOLATED_PATH_COLON_DELIMITER
from awx.main.tasks.signals import signal_state, signal_callback, SignalExit
from awx.main.models import Instance, InstanceLink, UnifiedJob
from awx.main.models import Instance, InstanceLink, UnifiedJob, ReceptorAddress
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.publish import task
from awx.main.utils.pglock import advisory_lock
@@ -676,36 +676,44 @@ RECEPTOR_CONFIG_STARTER = (
)
def should_update_config(instances):
def should_update_config(new_config):
'''
checks that the list of instances matches the list of
tcp-peers in the config
'''
current_config = read_receptor_config() # this gets receptor conf lock
current_peers = []
for config_entry in current_config:
for key, value in config_entry.items():
if key.endswith('-peer'):
current_peers.append(value['address'])
intended_peers = [f"{i.hostname}:{i.listener_port}" for i in instances]
logger.debug(f"Peers current {current_peers} intended {intended_peers}")
if set(current_peers) == set(intended_peers):
return False # config file is already up to date
return True
current_config = read_receptor_config() # this gets receptor conf lock
for config_entry in current_config:
if config_entry not in new_config:
logger.warning(f"{config_entry} should not be in receptor config. Updating.")
return True
for config_entry in new_config:
if config_entry not in current_config:
logger.warning(f"{config_entry} missing from receptor config. Updating.")
return True
return False
def generate_config_data():
# returns two values
# receptor config - based on current database peers
# should_update - If True, receptor_config differs from the receptor conf file on disk
instances = Instance.objects.filter(node_type__in=(Instance.Types.EXECUTION, Instance.Types.HOP), peers_from_control_nodes=True)
addresses = ReceptorAddress.objects.filter(peers_from_control_nodes=True)
receptor_config = list(RECEPTOR_CONFIG_STARTER)
for instance in instances:
peer = {'tcp-peer': {'address': f'{instance.hostname}:{instance.listener_port}', 'tls': 'tlsclient'}}
receptor_config.append(peer)
should_update = should_update_config(instances)
for address in addresses:
if address.get_peer_type():
peer = {
f'{address.get_peer_type()}': {
'address': f'{address.get_full_address()}',
'tls': 'tlsclient',
}
}
receptor_config.append(peer)
else:
logger.warning(f"Receptor address {address} has unsupported peer type, skipping.")
should_update = should_update_config(receptor_config)
return receptor_config, should_update
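
`should_update_config()` now compares whole config entries in both directions instead of only tcp-peer address strings, so ws-peer entries (and any other difference) also trigger a rewrite. A standalone sketch of the comparison, with sample entries shaped like the ones `generate_config_data()` emits:

```
# Illustrative data; real entries come from ReceptorAddress records.
on_disk = [
    {'tcp-peer': {'address': 'hop1.example.org:27199', 'tls': 'tlsclient'}},
]
generated = [
    {'tcp-peer': {'address': 'hop1.example.org:27199', 'tls': 'tlsclient'}},
    {'ws-peer': {'address': 'wss://hop2.example.org:443/path', 'tls': 'tlsclient'}},
]

def config_needs_update(current_config, new_config):
    # stale entry on disk, or entry missing from disk -> rewrite receptor.conf
    return any(entry not in new_config for entry in current_config) or any(
        entry not in current_config for entry in new_config
    )

print(config_needs_update(on_disk, generated))  # True (ws-peer missing on disk)
print(config_needs_update(on_disk, on_disk))    # False
```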
@@ -747,14 +755,13 @@ def write_receptor_config():
with lock:
with open(__RECEPTOR_CONF, 'w') as file:
yaml.dump(receptor_config, file, default_flow_style=False)
reload_receptor()
@task(queue=get_task_queuename)
def remove_deprovisioned_node(hostname):
InstanceLink.objects.filter(source__hostname=hostname).update(link_state=InstanceLink.States.REMOVING)
InstanceLink.objects.filter(target__hostname=hostname).update(link_state=InstanceLink.States.REMOVING)
InstanceLink.objects.filter(target__instance__hostname=hostname).update(link_state=InstanceLink.States.REMOVING)
node_jobs = UnifiedJob.objects.filter(
execution_node=hostname,

View File

@@ -62,7 +62,7 @@ from awx.main.tasks.receptor import get_receptor_ctl, worker_info, worker_cleanu
from awx.main.consumers import emit_channel_notification
from awx.main import analytics
from awx.conf import settings_registry
from awx.main.analytics.subsystem_metrics import Metrics
from awx.main.analytics.subsystem_metrics import DispatcherMetrics
from rest_framework.exceptions import PermissionDenied
@@ -113,7 +113,7 @@ def dispatch_startup():
cluster_node_heartbeat()
reaper.startup_reaping()
reaper.reap_waiting(grace_period=0)
m = Metrics()
m = DispatcherMetrics()
m.reset_values()
@@ -495,7 +495,7 @@ def inspect_established_receptor_connections(mesh_status):
update_links = []
for link in all_links:
if link.link_state != InstanceLink.States.REMOVING:
if link.target.hostname in active_receptor_conns.get(link.source.hostname, {}):
if link.target.instance.hostname in active_receptor_conns.get(link.source.hostname, {}):
if link.link_state is not InstanceLink.States.ESTABLISHED:
link.link_state = InstanceLink.States.ESTABLISHED
update_links.append(link)

View File

@@ -1,19 +1,16 @@
import pytest
import yaml
import itertools
from unittest import mock
from django.db.utils import IntegrityError
from awx.api.versioning import reverse
from awx.main.models import Instance
from awx.main.models import Instance, ReceptorAddress
from awx.api.views.instance_install_bundle import generate_group_vars_all_yml
def has_peer(group_vars, peer):
peers = group_vars.get('receptor_peers', [])
for p in peers:
if f"{p['host']}:{p['port']}" == peer:
if p['address'] == peer:
return True
return False
@@ -24,119 +21,314 @@ class TestPeers:
def configure_settings(self, settings):
settings.IS_K8S = True
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_prevent_peering_to_self(self, node_type):
@pytest.mark.parametrize('node_type', ['hop', 'execution'])
def test_peering_to_self(self, node_type, admin_user, patch):
"""
cannot peer to self
"""
control_instance = Instance.objects.create(hostname='abc', node_type=node_type)
with pytest.raises(IntegrityError):
control_instance.peers.add(control_instance)
instance = Instance.objects.create(hostname='abc', node_type=node_type)
addr = ReceptorAddress.objects.create(instance=instance, address='abc', canonical=True)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': instance.pk}),
data={"hostname": "abc", "node_type": node_type, "peers": [addr.id]},
user=admin_user,
expect=400,
)
assert 'Instance cannot peer to its own address.' in str(resp.data)
@pytest.mark.parametrize('node_type', ['control', 'hybrid', 'hop', 'execution'])
def test_creating_node(self, node_type, admin_user, post):
"""
can only add hop and execution nodes via API
"""
post(
resp = post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "node_type": node_type},
user=admin_user,
expect=400 if node_type in ['control', 'hybrid'] else 201,
)
if resp.status_code == 400:
assert 'Can only create execution or hop nodes.' in str(resp.data)
def test_changing_node_type(self, admin_user, patch):
"""
cannot change node type
"""
hop = Instance.objects.create(hostname='abc', node_type="hop")
patch(
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_type": "execution"},
user=admin_user,
expect=400,
)
assert 'Cannot change node type.' in str(resp.data)
@pytest.mark.parametrize('node_type', ['hop', 'execution'])
def test_listener_port_null(self, node_type, admin_user, post):
"""
listener_port can be None
"""
post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "node_type": node_type, "listener_port": None},
@pytest.mark.parametrize(
'payload_port, payload_peers_from, initial_port, initial_peers_from',
[
(-1, -1, None, None),
(-1, -1, 27199, False),
(-1, -1, 27199, True),
(None, -1, None, None),
(None, False, None, None),
(-1, False, None, None),
(27199, True, 27199, True),
(27199, False, 27199, False),
(27199, -1, 27199, True),
(27199, -1, 27199, False),
(-1, True, 27199, True),
(-1, False, 27199, False),
],
)
def test_no_op(self, payload_port, payload_peers_from, initial_port, initial_peers_from, admin_user, patch):
node = Instance.objects.create(hostname='abc', node_type='hop')
if initial_port is not None:
ReceptorAddress.objects.create(address=node.hostname, port=initial_port, canonical=True, peers_from_control_nodes=initial_peers_from, instance=node)
assert ReceptorAddress.objects.filter(instance=node).count() == 1
else:
assert ReceptorAddress.objects.filter(instance=node).count() == 0
data = {'enabled': True} # Just to have something to post.
if payload_port != -1:
data['listener_port'] = payload_port
if payload_peers_from != -1:
data['peers_from_control_nodes'] = payload_peers_from
patch(
url=reverse('api:instance_detail', kwargs={'pk': node.pk}),
data=data,
user=admin_user,
expect=201,
expect=200,
)
@pytest.mark.parametrize('node_type, allowed', [('control', False), ('hybrid', False), ('hop', True), ('execution', True)])
def test_peers_from_control_nodes_allowed(self, node_type, allowed, post, admin_user):
"""
only hop and execution nodes can have peers_from_control_nodes set to True
"""
post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "peers_from_control_nodes": True, "node_type": node_type, "listener_port": 6789},
assert ReceptorAddress.objects.filter(instance=node).count() == (0 if initial_port is None else 1)
if initial_port is not None:
ra = ReceptorAddress.objects.get(instance=node, canonical=True)
assert ra.port == initial_port
assert ra.peers_from_control_nodes == initial_peers_from
@pytest.mark.parametrize(
'payload_port, payload_peers_from',
[
(27199, True),
(27199, False),
(27199, -1),
],
)
def test_creates_canonical_address(self, payload_port, payload_peers_from, admin_user, patch):
node = Instance.objects.create(hostname='abc', node_type='hop')
assert ReceptorAddress.objects.filter(instance=node).count() == 0
data = {'enabled': True} # Just to have something to post.
if payload_port != -1:
data['listener_port'] = payload_port
if payload_peers_from != -1:
data['peers_from_control_nodes'] = payload_peers_from
patch(
url=reverse('api:instance_detail', kwargs={'pk': node.pk}),
data=data,
user=admin_user,
expect=201 if allowed else 400,
expect=200,
)
def test_listener_port_is_required(self, admin_user, post):
"""
if adding instance to peers list, that instance must have listener_port set
"""
Instance.objects.create(hostname='abc', node_type="hop", listener_port=None)
post(
url=reverse('api:instance_list'),
data={"hostname": "ex", "peers_from_control_nodes": False, "node_type": "execution", "listener_port": None, "peers": ["abc"]},
assert ReceptorAddress.objects.filter(instance=node).count() == 1
ra = ReceptorAddress.objects.get(instance=node, canonical=True)
assert ra.port == payload_port
assert ra.peers_from_control_nodes == (payload_peers_from if payload_peers_from != -1 else False)
@pytest.mark.parametrize(
'payload_port, payload_peers_from, initial_port, initial_peers_from',
[
(None, False, 27199, True),
(None, -1, 27199, True),
(None, False, 27199, False),
(None, -1, 27199, False),
],
)
def test_deletes_canonical_address(self, payload_port, payload_peers_from, initial_port, initial_peers_from, admin_user, patch):
node = Instance.objects.create(hostname='abc', node_type='hop')
ReceptorAddress.objects.create(address=node.hostname, port=initial_port, canonical=True, peers_from_control_nodes=initial_peers_from, instance=node)
assert ReceptorAddress.objects.filter(instance=node).count() == 1
data = {'enabled': True} # Just to have something to post.
if payload_port != -1:
data['listener_port'] = payload_port
if payload_peers_from != -1:
data['peers_from_control_nodes'] = payload_peers_from
patch(
url=reverse('api:instance_detail', kwargs={'pk': node.pk}),
data=data,
user=admin_user,
expect=200,
)
assert ReceptorAddress.objects.filter(instance=node).count() == 0
@pytest.mark.parametrize(
'payload_port, payload_peers_from, initial_port, initial_peers_from',
[
(27199, True, 27199, False),
(27199, False, 27199, True),
(-1, True, 27199, False),
(-1, False, 27199, True),
],
)
def test_updates_canonical_address(self, payload_port, payload_peers_from, initial_port, initial_peers_from, admin_user, patch):
node = Instance.objects.create(hostname='abc', node_type='hop')
ReceptorAddress.objects.create(address=node.hostname, port=initial_port, canonical=True, peers_from_control_nodes=initial_peers_from, instance=node)
assert ReceptorAddress.objects.filter(instance=node).count() == 1
data = {'enabled': True} # Just to have something to post.
if payload_port != -1:
data['listener_port'] = payload_port
if payload_peers_from != -1:
data['peers_from_control_nodes'] = payload_peers_from
patch(
url=reverse('api:instance_detail', kwargs={'pk': node.pk}),
data=data,
user=admin_user,
expect=200,
)
assert ReceptorAddress.objects.filter(instance=node).count() == 1
ra = ReceptorAddress.objects.get(instance=node, canonical=True)
assert ra.port == initial_port # At the present time, changing ports is not allowed
assert ra.peers_from_control_nodes == payload_peers_from
@pytest.mark.parametrize(
'payload_port, payload_peers_from, initial_port, initial_peers_from, error_msg',
[
(-1, True, None, None, "Cannot enable peers_from_control_nodes"),
(None, True, None, None, "Cannot enable peers_from_control_nodes"),
(None, True, 21799, True, "Cannot enable peers_from_control_nodes"),
(None, True, 21799, False, "Cannot enable peers_from_control_nodes"),
(21800, -1, 21799, True, "Cannot change listener port"),
(21800, True, 21799, True, "Cannot change listener port"),
(21800, False, 21799, True, "Cannot change listener port"),
(21800, -1, 21799, False, "Cannot change listener port"),
(21800, True, 21799, False, "Cannot change listener port"),
(21800, False, 21799, False, "Cannot change listener port"),
],
)
def test_canonical_address_validation_error(self, payload_port, payload_peers_from, initial_port, initial_peers_from, error_msg, admin_user, patch):
node = Instance.objects.create(hostname='abc', node_type='hop')
if initial_port is not None:
ReceptorAddress.objects.create(address=node.hostname, port=initial_port, canonical=True, peers_from_control_nodes=initial_peers_from, instance=node)
assert ReceptorAddress.objects.filter(instance=node).count() == 1
else:
assert ReceptorAddress.objects.filter(instance=node).count() == 0
data = {'enabled': True} # Just to have something to post.
if payload_port != -1:
data['listener_port'] = payload_port
if payload_peers_from != -1:
data['peers_from_control_nodes'] = payload_peers_from
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': node.pk}),
data=data,
user=admin_user,
expect=400,
)
def test_peers_from_control_nodes_listener_port_enabled(self, admin_user, post):
assert error_msg in str(resp.data)
def test_changing_managed_listener_port(self, admin_user, patch):
"""
if peers_from_control_nodes is True, listener_port must be an integer
Assert that all other combinations are allowed
if instance is managed, cannot change listener port at all
"""
for index, item in enumerate(itertools.product(['hop', 'execution'], [True, False], [None, 6789])):
node_type, peers_from, listener_port = item
# only disallowed case is when peers_from is True and listener port is None
disallowed = peers_from and not listener_port
post(
url=reverse('api:instance_list'),
data={"hostname": f"abc{index}", "peers_from_control_nodes": peers_from, "node_type": node_type, "listener_port": listener_port},
user=admin_user,
expect=400 if disallowed else 201,
)
hop = Instance.objects.create(hostname='abc', node_type="hop", managed=True)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"listener_port": 5678},
user=admin_user,
expect=400, # cannot set port
)
assert 'Cannot change listener port for managed nodes.' in str(resp.data)
ReceptorAddress.objects.create(instance=hop, address='hop', port=27199, canonical=True)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"listener_port": None},
user=admin_user,
expect=400, # cannot unset port
)
assert 'Cannot change listener port for managed nodes.' in str(resp.data)
def test_bidirectional_peering(self, admin_user, patch):
"""
cannot peer to a node that is already peered to it
if A -> B, then disallow B -> A
"""
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop1addr = ReceptorAddress.objects.create(instance=hop1, address='hop1', canonical=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', canonical=True)
hop1.peers.add(hop2addr)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop2.pk}),
data={"peers": [hop1addr.id]},
user=admin_user,
expect=400,
)
assert 'Instance hop1 is already peered to this instance.' in str(resp.data)
def test_multiple_peers_same_instance(self, admin_user, patch):
"""
cannot peer to more than one address of the same instance
"""
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop1addr1 = ReceptorAddress.objects.create(instance=hop1, address='hop1', canonical=True)
hop1addr2 = ReceptorAddress.objects.create(instance=hop1, address='hop1alternate')
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop2.pk}),
data={"peers": [hop1addr1.id, hop1addr2.id]},
user=admin_user,
expect=400,
)
assert 'Cannot peer to the same instance more than once.' in str(resp.data)
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_disallow_modifying_peers_control_nodes(self, node_type, admin_user, patch):
def test_changing_peers_control_nodes(self, node_type, admin_user, patch):
"""
for control nodes, peers field should not be
modified directly via patch.
"""
control = Instance.objects.create(hostname='abc', node_type=node_type)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', peers_from_control_nodes=True, listener_port=6789)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', peers_from_control_nodes=False, listener_port=6789)
assert [hop1] == list(control.peers.all()) # only hop1 should be peered
patch(
control = Instance.objects.create(hostname='abc', node_type=node_type, managed=True)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop1addr = ReceptorAddress.objects.create(instance=hop1, address='hop1', peers_from_control_nodes=True, canonical=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', canonical=True)
assert [hop1addr] == list(control.peers.all()) # only hop1addr should be peered
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": ["hop2"]},
data={"peers": [hop2addr.id]},
user=admin_user,
expect=400, # cannot add peers directly
expect=400, # cannot add peers manually
)
assert 'Setting peers manually for managed nodes is not allowed.' in str(resp.data)
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": ["hop1"]},
data={"peers": [hop1addr.id]},
user=admin_user,
expect=200, # patching with current peers list should be okay
)
patch(
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": []},
user=admin_user,
expect=400, # cannot remove peers directly
)
assert 'Setting peers manually for managed nodes is not allowed.' in str(resp.data)
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={},
@@ -148,23 +340,25 @@ class TestPeers:
url=reverse('api:instance_detail', kwargs={'pk': hop2.pk}),
data={"peers_from_control_nodes": True},
user=admin_user,
expect=200, # patching without data should be fine too
expect=200,
)
assert {hop1, hop2} == set(control.peers.all()) # hop1 and hop2 should now be peered from control node
assert {hop1addr, hop2addr} == set(control.peers.all()) # hop1 and hop2 should now be peered from control node
def test_disallow_changing_hostname(self, admin_user, patch):
def test_changing_hostname(self, admin_user, patch):
"""
cannot change hostname
"""
hop = Instance.objects.create(hostname='hop', node_type='hop')
patch(
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"hostname": "hop2"},
user=admin_user,
expect=400,
)
def test_disallow_changing_node_state(self, admin_user, patch):
assert 'Cannot change hostname.' in str(resp.data)
def test_changing_node_state(self, admin_user, patch):
"""
only allow setting to deprovisioning
"""
@@ -175,12 +369,54 @@ class TestPeers:
user=admin_user,
expect=200,
)
patch(
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_state": "ready"},
user=admin_user,
expect=400,
)
assert "Can only change instances to the 'deprovisioning' state." in str(resp.data)
def test_changing_managed_node_state(self, admin_user, patch):
"""
cannot change node state of managed node
"""
hop = Instance.objects.create(hostname='hop', node_type='hop', managed=True)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_state": "deprovisioning"},
user=admin_user,
expect=400,
)
assert 'Cannot deprovision managed nodes.' in str(resp.data)
def test_changing_managed_peers_from_control_nodes(self, admin_user, patch):
"""
cannot change peers_from_control_nodes of managed node
"""
hop = Instance.objects.create(hostname='hop', node_type='hop', managed=True)
ReceptorAddress.objects.create(instance=hop, address='hop', peers_from_control_nodes=True, canonical=True)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"peers_from_control_nodes": False},
user=admin_user,
expect=400,
)
assert 'Cannot change peers_from_control_nodes for managed nodes.' in str(resp.data)
hop.peers_from_control_nodes = False
hop.save()
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"peers_from_control_nodes": False},
user=admin_user,
expect=400,
)
assert 'Cannot change peers_from_control_nodes for managed nodes.' in str(resp.data)
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_control_node_automatically_peers(self, node_type):
@@ -191,9 +427,10 @@ class TestPeers:
peer to hop should be removed if hop is deleted
"""
hop = Instance.objects.create(hostname='hop', node_type='hop', peers_from_control_nodes=True, listener_port=6789)
hop = Instance.objects.create(hostname='hop', node_type='hop')
hopaddr = ReceptorAddress.objects.create(instance=hop, address='hop', peers_from_control_nodes=True, canonical=True)
control = Instance.objects.create(hostname='abc', node_type=node_type)
assert hop in control.peers.all()
assert hopaddr in control.peers.all()
hop.delete()
assert not control.peers.exists()
@@ -203,26 +440,50 @@ class TestPeers:
if a new node comes online, other peer relationships should
remain intact
"""
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
hop1.peers.add(hop2)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', canonical=True)
hop1.peers.add(hop2addr)
# a control node is added
Instance.objects.create(hostname='control', node_type=node_type, listener_port=None)
Instance.objects.create(hostname='control', node_type=node_type)
assert hop1.peers.exists()
def test_group_vars(self, get, admin_user):
def test_reverse_peers(self, admin_user, get):
"""
if hop1 peers to hop2, hop1 should
be in hop2's reverse_peers list
"""
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', canonical=True)
hop1.peers.add(hop2addr)
resp = get(
url=reverse('api:instance_detail', kwargs={'pk': hop2.pk}),
user=admin_user,
expect=200,
)
assert hop1.pk in resp.data['reverse_peers']
def test_group_vars(self):
"""
control > hop1 > hop2 < execution
"""
control = Instance.objects.create(hostname='control', node_type='control', listener_port=None)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
execution = Instance.objects.create(hostname='execution', node_type='execution', listener_port=6789)
control = Instance.objects.create(hostname='control', node_type='control')
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
ReceptorAddress.objects.create(instance=hop1, address='hop1', peers_from_control_nodes=True, port=6789, canonical=True)
execution.peers.add(hop2)
hop1.peers.add(hop2)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', peers_from_control_nodes=False, port=6789, canonical=True)
execution = Instance.objects.create(hostname='execution', node_type='execution')
ReceptorAddress.objects.create(instance=execution, address='execution', peers_from_control_nodes=False, port=6789, canonical=True)
execution.peers.add(hop2addr)
hop1.peers.add(hop2addr)
control_vars = yaml.safe_load(generate_group_vars_all_yml(control))
hop1_vars = yaml.safe_load(generate_group_vars_all_yml(hop1))
@@ -265,13 +526,15 @@ class TestPeers:
control = Instance.objects.create(hostname='control1', node_type='control')
write_method.assert_not_called()
# new hop node with peers_from_control_nodes False (no)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
# new address with peers_from_control_nodes False (no)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop1addr = ReceptorAddress.objects.create(instance=hop1, address='hop1', peers_from_control_nodes=False, canonical=True)
hop1.delete()
write_method.assert_not_called()
# new hop node with peers_from_control_nodes True (yes)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
# new address with peers_from_control_nodes True (yes)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop1addr = ReceptorAddress.objects.create(instance=hop1, address='hop1', peers_from_control_nodes=True, canonical=True)
write_method.assert_called()
write_method.reset_mock()
@@ -280,20 +543,21 @@ class TestPeers:
write_method.assert_called()
write_method.reset_mock()
# new hop node with peers_from_control_nodes False and peered to another hop node (no)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
hop2.peers.add(hop1)
# new address with peers_from_control_nodes False and peered to another hop node (no)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
ReceptorAddress.objects.create(instance=hop2, address='hop2', peers_from_control_nodes=False, canonical=True)
hop2.peers.add(hop1addr)
hop2.delete()
write_method.assert_not_called()
# changing peers_from_control_nodes to False (yes)
hop1.peers_from_control_nodes = False
hop1.save()
hop1addr.peers_from_control_nodes = False
hop1addr.save()
write_method.assert_called()
write_method.reset_mock()
# deleting hop node that has peers_from_control_nodes to False (no)
hop1.delete()
# deleting address that has peers_from_control_nodes to False (no)
hop1.delete() # cascade deletes to hop1addr
write_method.assert_not_called()
# deleting control nodes (no)
@@ -315,8 +579,8 @@ class TestPeers:
# not peered, so config file should not be updated
for i in range(3):
Instance.objects.create(hostname=f"exNo-{i}", node_type='execution', listener_port=6789, peers_from_control_nodes=False)
inst = Instance.objects.create(hostname=f"exNo-{i}", node_type='execution')
ReceptorAddress.objects.create(instance=inst, address=f"exNo-{i}", port=6789, peers_from_control_nodes=False, canonical=True)
_, should_update = generate_config_data()
assert not should_update
@@ -324,11 +588,13 @@ class TestPeers:
expected_peers = []
for i in range(3):
expected_peers.append(f"hop-{i}:6789")
Instance.objects.create(hostname=f"hop-{i}", node_type='hop', listener_port=6789, peers_from_control_nodes=True)
inst = Instance.objects.create(hostname=f"hop-{i}", node_type='hop')
ReceptorAddress.objects.create(instance=inst, address=f"hop-{i}", port=6789, peers_from_control_nodes=True, canonical=True)
for i in range(3):
expected_peers.append(f"exYes-{i}:6789")
Instance.objects.create(hostname=f"exYes-{i}", node_type='execution', listener_port=6789, peers_from_control_nodes=True)
inst = Instance.objects.create(hostname=f"exYes-{i}", node_type='execution')
ReceptorAddress.objects.create(instance=inst, address=f"exYes-{i}", port=6789, peers_from_control_nodes=True, canonical=True)
new_config, should_update = generate_config_data()
assert should_update
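
Throughout the parametrized tests above, `-1` is a sentinel meaning "omit this field from the PATCH payload entirely", which is distinct from sending an explicit null. A small helper mirroring how those payloads are assembled:

```
# Mirrors the payload-building pattern used by the parametrized tests above.
def build_patch_payload(listener_port=-1, peers_from_control_nodes=-1):
    data = {'enabled': True}  # the tests always send at least one field
    if listener_port != -1:
        data['listener_port'] = listener_port
    if peers_from_control_nodes != -1:
        data['peers_from_control_nodes'] = peers_from_control_nodes
    return data

print(build_patch_payload())                    # field left out entirely
print(build_patch_payload(listener_port=None))  # explicit null deletes the canonical address
print(build_patch_payload(27199, True))         # creates/updates the canonical address
```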

View File

@@ -2,7 +2,7 @@ from unittest import mock
import pytest
import json
from ansible_base.utils.models import get_type_for_model
from ansible_base.lib.utils.models import get_type_for_model
from awx.api.versioning import reverse
from awx.main.models.jobs import JobTemplate, Job

View File

@@ -81,6 +81,7 @@ def test_default_cred_types():
'aws_secretsmanager_credential',
'azure_kv',
'azure_rm',
'bitbucket_dc_token',
'centrify_vault_kv',
'conjur',
'controller',
@@ -100,6 +101,7 @@ def test_default_cred_types():
'satellite6',
'scm',
'ssh',
'terraform',
'thycotic_dsv',
'thycotic_tss',
'vault',

View File

@@ -60,6 +60,13 @@ def test_hashivault_client_cert_auth_no_role():
assert res == expected_res
def test_hashivault_userpass_auth():
kwargs = {'username': 'the_username', 'password': 'the_password'}
expected_res = {'username': 'the_username', 'password': 'the_password'}
res = hashivault.userpass_auth(**kwargs)
assert res == expected_res
def test_hashivault_handle_auth_token():
kwargs = {
'token': 'the_token',

View File

@@ -0,0 +1,30 @@
import pytest
from awx.main.models import Instance, ReceptorAddress, InstanceLink
from awx.main.tasks.system import inspect_established_receptor_connections
@pytest.mark.django_db
class TestLinkState:
@pytest.fixture(autouse=True)
def configure_settings(self, settings):
settings.IS_K8S = True
def test_inspect_established_receptor_connections(self):
'''
Change link state from ADDING to ESTABLISHED
if the receptor status KnownConnectionCosts field
has an entry for the source and target node.
'''
hop1 = Instance.objects.create(hostname='hop1')
hop2 = Instance.objects.create(hostname='hop2')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', port=5678)
InstanceLink.objects.create(source=hop1, target=hop2addr, link_state=InstanceLink.States.ADDING)
# calling with empty KnownConnectionCosts should not change the link state
inspect_established_receptor_connections({"KnownConnectionCosts": {}})
assert InstanceLink.objects.get(source=hop1, target=hop2addr).link_state == InstanceLink.States.ADDING
mesh_state = {"KnownConnectionCosts": {"hop1": {"hop2": 1}}}
inspect_established_receptor_connections(mesh_state)
assert InstanceLink.objects.get(source=hop1, target=hop2addr).link_state == InstanceLink.States.ESTABLISHED
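
The test above encodes the rule now used by `inspect_established_receptor_connections()`: a link leaves the ADDING state once receptor's `KnownConnectionCosts` reports a cost between the link's source hostname and its target address's instance. A simplified, model-free sketch of that transition:

```
# Simplified stand-in for the InstanceLink rows; each tuple is
# (source hostname, target instance hostname, current link state).
def establish_links(links, mesh_status):
    known = mesh_status.get('KnownConnectionCosts', {})
    updated = []
    for source, target, state in links:
        if state != 'removing' and target in known.get(source, {}):
            updated.append((source, target, 'established'))
        else:
            updated.append((source, target, state))
    return updated

links = [('hop1', 'hop2', 'adding')]
print(establish_links(links, {'KnownConnectionCosts': {}}))
print(establish_links(links, {'KnownConnectionCosts': {'hop1': {'hop2': 1}}}))
```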

View File

@@ -42,3 +42,29 @@ class TestMigrationSmoke:
final_state = migrator.apply_tested_migration(final_migration)
Instance = final_state.apps.get_model('main', 'Instance')
assert Instance.objects.filter(hostname='foobar').count() == 1
def test_receptor_address(self, migrator):
old_state = migrator.apply_initial_migration(('main', '0188_add_bitbucket_dc_webhook'))
Instance = old_state.apps.get_model('main', 'Instance')
for i in range(3):
Instance.objects.create(hostname=f'foobar{i}', node_type='hop')
foo = Instance.objects.create(hostname='foo', node_type='execution', listener_port=1234)
bar = Instance.objects.create(hostname='bar', node_type='execution', listener_port=None)
bar.peers.add(foo)
new_state = migrator.apply_tested_migration(
('main', '0189_inbound_hop_nodes'),
)
Instance = new_state.apps.get_model('main', 'Instance')
ReceptorAddress = new_state.apps.get_model('main', 'ReceptorAddress')
# We can now test how our migration worked, new field is there:
assert ReceptorAddress.objects.filter(address='foo', port=1234).count() == 1
assert not ReceptorAddress.objects.filter(address='bar').exists()
bar = Instance.objects.get(hostname='bar')
fooaddr = ReceptorAddress.objects.get(address='foo')
bar_peers = bar.peers.all()
assert len(bar_peers) == 1
assert fooaddr in bar_peers
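
Based on the assertions above, the `0189_inbound_hop_nodes` migration appears to create a ReceptorAddress for every instance that had a `listener_port` and to repoint instance peers at those addresses, while instances without a port (like `bar`) get no address. A purely illustrative reconstruction of that data move, not the migration's actual code:

```
# Illustrative only -- plain dicts standing in for the migrated models.
def migrate(listener_ports, old_peers):
    # one ReceptorAddress-like record per instance that had a listener_port
    addresses = {host: {'address': host, 'port': port}
                 for host, port in listener_ports.items() if port is not None}
    # peers now point at addresses instead of instances
    new_peers = {src: [addresses[dst] for dst in targets if dst in addresses]
                 for src, targets in old_peers.items()}
    return addresses, new_peers

addresses, peers = migrate({'foo': 1234, 'bar': None}, {'bar': ['foo']})
print(addresses)  # {'foo': {'address': 'foo', 'port': 1234}}
print(peers)      # {'bar': [{'address': 'foo', 'port': 1234}]}
```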

View File

@@ -5,7 +5,7 @@ import pytest
# Django
from rest_framework.exceptions import PermissionDenied
from ansible_base.filters.rest_framework.field_lookup_backend import FieldLookupBackend
from ansible_base.rest_filters.rest_framework.field_lookup_backend import FieldLookupBackend
from awx.main.models import (
AdHocCommand,

View File

@@ -0,0 +1,32 @@
from awx.main.models import ReceptorAddress
import pytest
@pytest.mark.parametrize(
'address, protocol, port, websocket_path, expected',
[
('foo', 'tcp', 27199, '', 'foo:27199'),
('bar', 'ws', 6789, '', 'wss://bar:6789'),
('mal', 'ws', 6789, 'path', 'wss://mal:6789/path'),
('example.com', 'ws', 443, 'path', 'wss://example.com:443/path'),
],
)
def test_get_full_address(address, protocol, port, websocket_path, expected):
receptor_address = ReceptorAddress(address=address, protocol=protocol, port=port, websocket_path=websocket_path)
assert receptor_address.get_full_address() == expected
@pytest.mark.parametrize(
'protocol, expected',
[
('tcp', 'tcp-peer'),
('ws', 'ws-peer'),
('wss', 'ws-peer'),
('foo', None),
],
)
def test_get_peer_type(protocol, expected):
receptor_address = ReceptorAddress(protocol=protocol)
assert receptor_address.get_peer_type() == expected

View File

@@ -1085,6 +1085,27 @@ class TestJobCredentials(TestJobExecution):
assert open(env['ANSIBLE_NET_SSH_KEYFILE'], 'r').read() == self.EXAMPLE_PRIVATE_KEY
assert safe_env['ANSIBLE_NET_PASSWORD'] == HIDDEN_PASSWORD
def test_terraform_cloud_credentials(self, job, private_data_dir, mock_me):
terraform = CredentialType.defaults['terraform']()
hcl_config = '''
backend "s3" {
bucket = "s3_sample_bucket"
key = "/tf_state/"
region = "us-east-1"
}
'''
credential = Credential(pk=1, credential_type=terraform, inputs={'configuration': hcl_config})
credential.inputs['configuration'] = encrypt_field(credential, 'configuration')
job.credentials.add(credential)
env = {}
safe_env = {}
credential.credential_type.inject_credential(credential, env, safe_env, [], private_data_dir)
local_path = to_host_path(env['TF_BACKEND_CONFIG_FILE'], private_data_dir)
config = open(local_path, 'r').read()
assert config == hcl_config
def test_custom_environment_injectors_with_jinja_syntax_error(self, private_data_dir, mock_me):
some_cloud = CredentialType(
kind='cloud',

View File

@@ -12,7 +12,7 @@ from unittest import mock
from rest_framework.exceptions import ParseError
from ansible_base.utils.models import get_type_for_model
from ansible_base.lib.utils.models import get_type_for_model
from awx.main.utils import common
from awx.api.validators import HostnameRegexValidator

View File

@@ -69,7 +69,7 @@ class mockHost:
@mock.patch('awx.main.utils.filters.get_model', return_value=mockHost())
class TestSmartFilterQueryFromString:
@mock.patch(
'ansible_base.filters.rest_framework.field_lookup_backend.get_fields_from_path', lambda model, path: ([model], path)
'ansible_base.rest_filters.rest_framework.field_lookup_backend.get_fields_from_path', lambda model, path: ([model], path)
) # disable field filtering, because a__b isn't a real Host field
@pytest.mark.parametrize(
"filter_string,q_expected",

View File

@@ -161,7 +161,7 @@ class SmartFilter(object):
else:
# detect loops and restrict access to sensitive fields
# this import is intentional here to avoid a circular import
from ansible_base.filters.rest_framework.field_lookup_backend import FieldLookupBackend
from ansible_base.rest_filters.rest_framework.field_lookup_backend import FieldLookupBackend
FieldLookupBackend().get_field_from_lookup(Host, k)
kwargs[k] = v

View File

@@ -20,7 +20,6 @@ from awx.main.analytics.broadcast_websocket import (
RelayWebsocketStats,
RelayWebsocketStatsManager,
)
import awx.main.analytics.subsystem_metrics as s_metrics
logger = logging.getLogger('awx.main.wsrelay')
@@ -54,7 +53,6 @@ class WebsocketRelayConnection:
self.protocol = protocol
self.verify_ssl = verify_ssl
self.channel_layer = None
self.subsystem_metrics = s_metrics.Metrics(instance_name=name)
self.producers = dict()
self.connected = False

View File

@@ -37,6 +37,18 @@ DATABASES = {
}
}
# Special database overrides for dispatcher connections listening to pg_notify
LISTENER_DATABASES = {
'default': {
'OPTIONS': {
'keepalives': 1,
'keepalives_idle': 5,
'keepalives_interval': 5,
'keepalives_count': 5,
},
}
}
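
`LISTENER_DATABASES` layers TCP keepalive options onto the long-lived connection the dispatcher holds open for pg_notify, so a dead connection is detected by keepalive probes instead of hanging indefinitely. How AWX applies the override is internal to the dispatcher; the sketch below only shows one plausible way such `OPTIONS` could be merged over the base `DATABASES` entry (the helper itself is an assumption, not AWX code):

```
import copy

# Hypothetical merge helper -- an assumption for illustration only.
def listener_connection_settings(databases, listener_databases, alias='default'):
    merged = copy.deepcopy(databases[alias])
    override = listener_databases.get(alias, {})
    merged.setdefault('OPTIONS', {}).update(override.get('OPTIONS', {}))
    return merged

DATABASES = {'default': {'ENGINE': 'django.db.backends.postgresql', 'NAME': 'awx', 'OPTIONS': {}}}
LISTENER_DATABASES = {'default': {'OPTIONS': {'keepalives': 1, 'keepalives_idle': 5,
                                              'keepalives_interval': 5, 'keepalives_count': 5}}}
print(listener_connection_settings(DATABASES, LISTENER_DATABASES)['OPTIONS'])
```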
# Whether or not the deployment is a K8S-based deployment
# In K8S-based deployments, instances have zero capacity - all playbook
# automation is intended to flow through defined Container Groups that
@@ -340,7 +352,7 @@ INSTALLED_APPS = [
'awx.ui',
'awx.sso',
'solo',
'ansible_base',
'ansible_base.rest_filters',
]
INTERNAL_IPS = ('127.0.0.1',)
@@ -1064,10 +1076,40 @@ HOST_METRIC_SUMMARY_TASK_LAST_TS = None
HOST_METRIC_SUMMARY_TASK_INTERVAL = 7 # days
# TODO: cmeyers, replace with register pattern
# The register pattern is particularly nice for this because we need
# to know the process to start the thread that will be the server.
# The registration location should be the same location as we would
# call MetricsServer.start()
# Note: if we don't get to this TODO, then at least create constants
# for the services strings below.
# TODO: cmeyers, break this out into a separate django app so other
# projects can take advantage.
METRICS_SERVICE_CALLBACK_RECEIVER = 'callback_receiver'
METRICS_SERVICE_DISPATCHER = 'dispatcher'
METRICS_SERVICE_WEBSOCKETS = 'websockets'
METRICS_SUBSYSTEM_CONFIG = {
'server': {
METRICS_SERVICE_CALLBACK_RECEIVER: {
'port': 8014,
},
METRICS_SERVICE_DISPATCHER: {
'port': 8015,
},
METRICS_SERVICE_WEBSOCKETS: {
'port': 8016,
},
}
}
# django-ansible-base
ANSIBLE_BASE_FEATURES = {'AUTHENTICATION': False, 'SWAGGER': False, 'FILTERING': True}
ANSIBLE_BASE_TEAM_MODEL = 'main.Team'
ANSIBLE_BASE_ORGANIZATION_MODEL = 'main.Organization'
from ansible_base import settings # noqa: E402
from ansible_base.lib import dynamic_config # noqa: E402
settings_file = os.path.join(os.path.dirname(settings.__file__), 'dynamic_settings.py')
settings_file = os.path.join(os.path.dirname(dynamic_config.__file__), 'dynamic_settings.py')
include(settings_file)

View File

@@ -21,7 +21,7 @@ from split_settings.tools import optional, include
from .defaults import * # NOQA
# awx-manage shell_plus --notebook
NOTEBOOK_ARGUMENTS = ['--NotebookApp.token=', '--ip', '0.0.0.0', '--port', '8888', '--allow-root', '--no-browser']
NOTEBOOK_ARGUMENTS = ['--NotebookApp.token=', '--ip', '0.0.0.0', '--port', '9888', '--allow-root', '--no-browser']
# print SQL queries in shell_plus
SHELL_PLUS_PRINT_SQL = False

View File

@@ -68,7 +68,6 @@ class LDAPSettings(BaseLDAPSettings):
class LDAPBackend(BaseLDAPBackend):
"""
Custom LDAP backend for AWX.
"""

View File

@@ -38,7 +38,7 @@ class CompleteView(BaseRedirectView):
response = super(CompleteView, self).dispatch(request, *args, **kwargs)
if self.request.user and self.request.user.is_authenticated:
logger.info(smart_str(u"User {} logged in".format(self.request.user.username)))
response.set_cookie('userLoggedIn', 'true')
response.set_cookie('userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False))
response.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return response

View File

@@ -29,6 +29,7 @@ import Notifications from './models/Notifications';
import Organizations from './models/Organizations';
import ProjectUpdates from './models/ProjectUpdates';
import Projects from './models/Projects';
import ReceptorAddresses from './models/Receptor';
import Roles from './models/Roles';
import Root from './models/Root';
import Schedules from './models/Schedules';
@@ -79,6 +80,7 @@ const NotificationsAPI = new Notifications();
const OrganizationsAPI = new Organizations();
const ProjectUpdatesAPI = new ProjectUpdates();
const ProjectsAPI = new Projects();
const ReceptorAPI = new ReceptorAddresses();
const RolesAPI = new Roles();
const RootAPI = new Root();
const SchedulesAPI = new Schedules();
@@ -130,6 +132,7 @@ export {
OrganizationsAPI,
ProjectUpdatesAPI,
ProjectsAPI,
ReceptorAPI,
RolesAPI,
RootAPI,
SchedulesAPI,

View File

@@ -8,6 +8,7 @@ class Instances extends Base {
this.readHealthCheckDetail = this.readHealthCheckDetail.bind(this);
this.healthCheck = this.healthCheck.bind(this);
this.readInstanceGroup = this.readInstanceGroup.bind(this);
this.readReceptorAddresses = this.readReceptorAddresses.bind(this);
this.deprovisionInstance = this.deprovisionInstance.bind(this);
}
@@ -27,6 +28,17 @@ class Instances extends Base {
return this.http.get(`${this.baseUrl}${instanceId}/instance_groups/`);
}
readReceptorAddresses(instanceId) {
return this.http.get(`${this.baseUrl}${instanceId}/receptor_addresses/`);
}
updateReceptorAddresses(instanceId, data) {
return this.http.post(
`${this.baseUrl}${instanceId}/receptor_addresses/`,
data
);
}
deprovisionInstance(instanceId) {
return this.http.patch(`${this.baseUrl}${instanceId}/`, {
node_state: 'deprovisioning',

View File

@@ -0,0 +1,14 @@
import Base from '../Base';
class ReceptorAddresses extends Base {
constructor(http) {
super(http);
this.baseUrl = 'api/v2/receptor_addresses/';
}
updateReceptorAddresses(instanceId, data) {
return this.http.post(`${this.baseUrl}`, data);
}
}
export default ReceptorAddresses;

View File

@@ -12,6 +12,7 @@ import { SettingsAPI } from 'api';
import ContentLoading from 'components/ContentLoading';
import InstanceDetail from './InstanceDetail';
import InstancePeerList from './InstancePeers';
import InstanceListenerAddressList from './InstanceListenerAddressList';
function Instance({ setBreadcrumb }) {
const { me } = useConfig();
@@ -54,7 +55,12 @@ function Instance({ setBreadcrumb }) {
}, [request]);
if (isK8s) {
tabsArray.push({ name: t`Peers`, link: `${match.url}/peers`, id: 1 });
tabsArray.push({
name: t`Listener Addresses`,
link: `${match.url}/listener_addresses`,
id: 1,
});
tabsArray.push({ name: t`Peers`, link: `${match.url}/peers`, id: 2 });
}
if (isLoading) {
return <ContentLoading />;
@@ -72,6 +78,14 @@ function Instance({ setBreadcrumb }) {
<Route path="/instances/:id/details" key="details">
<InstanceDetail isK8s={isK8s} setBreadcrumb={setBreadcrumb} />
</Route>
{isK8s && (
<Route
path="/instances/:id/listener_addresses"
key="listener_addresses"
>
<InstanceListenerAddressList setBreadcrumb={setBreadcrumb} />
</Route>
)}
{isK8s && (
<Route path="/instances/:id/peers" key="peers">
<InstancePeerList setBreadcrumb={setBreadcrumb} />

View File

@@ -9,6 +9,10 @@ function InstanceAdd() {
const [formError, setFormError] = useState();
const handleSubmit = async (values) => {
try {
if (values.listener_port === undefined) {
values.listener_port = null;
}
const {
data: { id },
} = await InstancesAPI.create(values);

View File

@@ -36,6 +36,7 @@ describe('<InstanceAdd />', () => {
});
});
expect(InstancesAPI.create).toHaveBeenCalledWith({
listener_port: null, // injected if listener_port is not set
node_type: 'hop',
});
expect(history.location.pathname).toBe('/instances/13/details');

View File

@@ -183,6 +183,7 @@ function InstanceDetail({ setBreadcrumb, isK8s }) {
}
const isHopNode = instance.node_type === 'hop';
const isExecutionNode = instance.node_type === 'execution';
const isManaged = instance.managed;
return (
<>
@@ -208,33 +209,31 @@ function InstanceDetail({ setBreadcrumb, isK8s }) {
<Detail label={t`Node Type`} value={instance.node_type} />
<Detail label={t`Host`} value={instance.ip_address} />
<Detail label={t`Listener Port`} value={instance.listener_port} />
{!isManaged && instance.related?.install_bundle && (
<Detail
label={t`Install Bundle`}
value={
<Tooltip content={t`Click to download bundle`}>
<Button
component="a"
isSmall
href={`${instance.related?.install_bundle}`}
target="_blank"
variant="secondary"
dataCy="install-bundle-download-button"
rel="noopener noreferrer"
>
<DownloadIcon />
</Button>
</Tooltip>
}
/>
)}
{(isExecutionNode || isHopNode) && (
<>
{instance.related?.install_bundle && (
<Detail
label={t`Install Bundle`}
value={
<Tooltip content={t`Click to download bundle`}>
<Button
component="a"
isSmall
href={`${instance.related?.install_bundle}`}
target="_blank"
variant="secondary"
dataCy="install-bundle-download-button"
rel="noopener noreferrer"
>
<DownloadIcon />
</Button>
</Tooltip>
}
/>
)}
<Detail
label={t`Peers from control nodes`}
value={instance.peers_from_control_nodes ? t`On` : t`Off`}
/>
</>
<Detail
label={t`Peers from control nodes`}
value={instance.peers_from_control_nodes ? t`On` : t`Off`}
/>
)}
{!isHopNode && (
<>
@@ -294,7 +293,9 @@ function InstanceDetail({ setBreadcrumb, isK8s }) {
value={instance.capacity_adjustment}
onChange={handleChangeValue}
isDisabled={
!config?.me?.is_superuser || !instance.enabled
!config?.me?.is_superuser ||
!instance.enabled ||
!isManaged
}
data-cy="slider"
/>
@@ -338,31 +339,31 @@ function InstanceDetail({ setBreadcrumb, isK8s }) {
)}
</DetailList>
<CardActionsRow>
{config?.me?.is_superuser && isK8s && (isExecutionNode || isHopNode) && (
<Button
ouiaId="instance-detail-edit-button"
aria-label={t`edit`}
component={Link}
to={`/instances/${id}/edit`}
>
{t`Edit`}
</Button>
)}
{config?.me?.is_superuser &&
isK8s &&
(isExecutionNode || isHopNode) && (
{config?.me?.is_superuser && isK8s && !isManaged && (
<>
<Button
ouiaId="instance-detail-edit-button"
aria-label={t`edit`}
component={Link}
to={`/instances/${id}/edit`}
>
{t`Edit`}
</Button>
<RemoveInstanceButton
dataCy="remove-instance-button"
itemsToRemove={[instance]}
isK8s={isK8s}
onRemove={removeInstances}
/>
)}
</>
)}
{isExecutionNode && (
<Tooltip content={t`Run a health check on the instance`}>
<Button
isDisabled={
!config?.me?.is_superuser || instance.health_check_pending
!config?.me?.is_superuser ||
instance.health_check_pending ||
instance.managed
}
variant="primary"
ouiaId="health-check-button"
@@ -376,12 +377,14 @@ function InstanceDetail({ setBreadcrumb, isK8s }) {
</Button>
</Tooltip>
)}
<InstanceToggle
css="display: inline-flex;"
fetchInstances={fetchDetails}
instance={instance}
dataCy="enable-instance"
/>
{!isHopNode && (
<InstanceToggle
css="display: inline-flex;"
fetchInstances={fetchDetails}
instance={instance}
dataCy="enable-instance"
/>
)}
</CardActionsRow>
{error && (

View File

@@ -48,6 +48,7 @@ describe('<InstanceDetail/>', () => {
cpu_capacity: 32,
mem_capacity: 38,
enabled: true,
managed: false,
managed_by_policy: true,
node_type: 'execution',
node_state: 'ready',

View File

@@ -114,7 +114,8 @@ function InstanceListItem({
);
const isHopNode = instance.node_type === 'hop';
const isExecutionNode = instance.node_type === 'execution';
const isManaged = instance.managed;
return (
<>
<Tr
@@ -138,7 +139,7 @@ function InstanceListItem({
rowIndex,
isSelected,
onSelect,
disable: !(isExecutionNode || isHopNode),
disable: isManaged,
}}
dataLabel={t`Selected`}
/>

View File

@@ -0,0 +1,154 @@
import React, { useCallback, useEffect } from 'react';
import { t } from '@lingui/macro';
import { CardBody } from 'components/Card';
import PaginatedTable, {
getSearchableKeys,
HeaderCell,
HeaderRow,
// ToolbarAddButton,
} from 'components/PaginatedTable';
import useToast from 'hooks/useToast';
import { getQSConfig } from 'util/qs';
import { useParams } from 'react-router-dom';
import useRequest from 'hooks/useRequest';
import DataListToolbar from 'components/DataListToolbar';
import { InstancesAPI, ReceptorAPI } from 'api';
import useSelected from 'hooks/useSelected';
import InstanceListenerAddressListItem from './InstanceListenerAddressListItem';
const QS_CONFIG = getQSConfig('peer', {
page: 1,
page_size: 20,
order_by: 'pk',
});
function InstanceListenerAddressList({ setBreadcrumb }) {
const { id } = useParams();
const { Toast, toastProps } = useToast();
const {
isLoading,
error: contentError,
request: fetchListenerAddresses,
result: {
instance,
listenerAddresses,
count,
relatedSearchableKeys,
searchableKeys,
},
} = useRequest(
useCallback(async () => {
const [
{ data: detail },
{
data: { results },
},
actions,
] = await Promise.all([
InstancesAPI.readDetail(id),
ReceptorAPI.read(),
InstancesAPI.readOptions(),
]);
const listenerAddress_list = [];
for (let q = 0; q < results.length; q++) {
const receptor = results[q];
if (receptor.managed === true) continue;
if (id.toString() === receptor.instance.toString()) {
receptor.name = detail.hostname;
listenerAddress_list.push(receptor);
}
}
return {
instance: detail,
listenerAddresses: listenerAddress_list,
count: listenerAddress_list.length,
relatedSearchableKeys: (actions?.data?.related_search_fields || []).map(
(val) => val.slice(0, -8)
),
searchableKeys: getSearchableKeys(actions.data.actions?.GET),
};
}, [id]),
{
instance: {},
listenerAddresses: [],
count: 0,
relatedSearchableKeys: [],
searchableKeys: [],
}
);
useEffect(() => {
fetchListenerAddresses();
}, [fetchListenerAddresses]);
useEffect(() => {
if (instance) {
setBreadcrumb(instance);
}
}, [instance, setBreadcrumb]);
const { selected, isAllSelected, handleSelect, clearSelected, selectAll } =
useSelected(listenerAddresses);
return (
<CardBody>
<PaginatedTable
contentError={contentError}
hasContentLoading={isLoading}
items={listenerAddresses}
itemCount={count}
pluralizedItemName={t`Listener Addresses`}
qsConfig={QS_CONFIG}
onRowClick={handleSelect}
clearSelected={clearSelected}
toolbarSearchableKeys={searchableKeys}
toolbarRelatedSearchableKeys={relatedSearchableKeys}
toolbarSearchColumns={[
{
name: t`Name`,
key: 'hostname__icontains',
isDefault: true,
},
]}
toolbarSortColumns={[
{
name: t`Name`,
key: 'hostname',
},
]}
headerRow={
<HeaderRow qsConfig={QS_CONFIG}>
<HeaderCell sortKey="address">{t`Address`}</HeaderCell>
<HeaderCell sortKey="port">{t`Port`}</HeaderCell>
<HeaderCell sortKey="protocol">{t`Protocol`}</HeaderCell>
<HeaderCell sortKey="canonical">{t`Canonical`}</HeaderCell>
</HeaderRow>
}
renderToolbar={(props) => (
<DataListToolbar
{...props}
isAllSelected={isAllSelected}
onSelectAll={selectAll}
qsConfig={QS_CONFIG}
additionalControls={[]}
/>
)}
renderRow={(listenerAddress, index) => (
<InstanceListenerAddressListItem
isSelected={selected.some((row) => row.id === listenerAddress.id)}
onSelect={() => handleSelect(listenerAddress)}
key={listenerAddress.id}
peerListenerAddress={listenerAddress}
rowIndex={index}
/>
)}
/>
<Toast {...toastProps} />
</CardBody>
);
}
export default InstanceListenerAddressList;

View File

@@ -0,0 +1,46 @@
import React from 'react';
import { t } from '@lingui/macro';
import 'styled-components/macro';
import { Tr, Td } from '@patternfly/react-table';
function InstanceListenerAddressListItem({
peerListenerAddress,
isSelected,
onSelect,
rowIndex,
}) {
const labelId = `check-action-${peerListenerAddress.id}`;
return (
<Tr
id={`peerListenerAddress-row-${peerListenerAddress.id}`}
ouiaId={`peerListenerAddress-row-${peerListenerAddress.id}`}
>
<Td
select={{
rowIndex,
isSelected,
onSelect,
}}
dataLabel={t`Selected`}
/>
<Td id={labelId} dataLabel={t`Address`}>
{peerListenerAddress.address}
</Td>
<Td id={labelId} dataLabel={t`Port`}>
{peerListenerAddress.port}
</Td>
<Td id={labelId} dataLabel={t`Protocol`}>
{peerListenerAddress.protocol}
</Td>
<Td id={labelId} dataLabel={t`Canonical`}>
{peerListenerAddress.canonical.toString()}
</Td>
</Tr>
);
}
export default InstanceListenerAddressListItem;

View File

@@ -0,0 +1 @@
export { default } from './InstanceListenerAddressList';

View File

@@ -16,7 +16,7 @@ import { getQSConfig, parseQueryString, mergeParams } from 'util/qs';
import { useLocation, useParams } from 'react-router-dom';
import useRequest, { useDismissableError } from 'hooks/useRequest';
import DataListToolbar from 'components/DataListToolbar';
import { InstancesAPI } from 'api';
import { InstancesAPI, ReceptorAPI } from 'api';
import useExpanded from 'hooks/useExpanded';
import useSelected from 'hooks/useSelected';
import InstancePeerListItem from './InstancePeerListItem';
@@ -24,7 +24,7 @@ import InstancePeerListItem from './InstancePeerListItem';
const QS_CONFIG = getQSConfig('peer', {
page: 1,
page_size: 20,
order_by: 'hostname',
order_by: 'pk',
});
function InstancePeerList({ setBreadcrumb }) {
@@ -47,18 +47,35 @@ function InstancePeerList({ setBreadcrumb }) {
const [
{ data: detail },
{
data: { results, count: itemNumber },
data: { results },
},
actions,
instances,
] = await Promise.all([
InstancesAPI.readDetail(id),
InstancesAPI.readPeers(id, params),
InstancesAPI.readOptions(),
InstancesAPI.read(),
]);
const address_list = [];
for (let q = 0; q < results.length; q++) {
const receptor = results[q];
if (receptor.managed === true) continue;
const host = instances.data.results.filter(
(obj) => obj.id === receptor.instance
)[0];
const copy = receptor;
copy.hostname = host.hostname;
copy.node_type = host.node_type;
address_list.push(copy);
}
return {
instance: detail,
peers: results,
count: itemNumber,
peers: address_list,
count: address_list.length,
relatedSearchableKeys: (actions?.data?.related_search_fields || []).map(
(val) => val.slice(0, -8)
),
@@ -90,15 +107,73 @@ function InstancePeerList({ setBreadcrumb }) {
useSelected(peers);
const fetchInstancesToAssociate = useCallback(
(params) =>
InstancesAPI.read(
async (params) => {
const address_list = [];
const instances = await InstancesAPI.read(
mergeParams(params, {
...{ not__id: id },
...{ not__node_type: ['control', 'hybrid'] },
...{ not__hostname: instance.peers },
})
),
[id, instance]
);
const receptors = (await ReceptorAPI.read()).data.results;
// get instance ids of the current peered receptor ids
const already_peered_instance_ids = [];
for (let h = 0; h < instance.peers.length; h++) {
const matched = receptors.filter((obj) => obj.id === instance.peers[h]);
matched.forEach((element) => {
already_peered_instance_ids.push(element.instance);
});
}
for (let q = 0; q < receptors.length; q++) {
const receptor = receptors[q];
if (already_peered_instance_ids.includes(receptor.instance)) {
// ignore reverse peers
continue;
}
if (instance.peers.includes(receptor.id)) {
// no links to existing links
continue;
}
if (instance.id === receptor.instance) {
// no links to thy self
continue;
}
if (instance.managed) {
// no managed nodes
continue;
}
const host = instances.data.results.filter(
(obj) => obj.id === receptor.instance
)[0];
if (host === undefined) {
// no hosts
continue;
}
if (receptor.is_internal) {
continue;
}
const copy = receptor;
copy.hostname = host.hostname;
copy.node_type = host.node_type;
copy.canonical = copy.canonical.toString();
address_list.push(copy);
}
instances.data.results = address_list;
return instances;
},
[instance]
);
const {
@@ -108,17 +183,15 @@ function InstancePeerList({ setBreadcrumb }) {
} = useRequest(
useCallback(
async (instancesPeerToAssociate) => {
const selected_hostname = instancesPeerToAssociate.map(
(obj) => obj.hostname
);
const new_peers = [
...new Set([...instance.peers, ...selected_hostname]),
];
const selected_peers = instancesPeerToAssociate.map((obj) => obj.id);
const new_peers = [...new Set([...instance.peers, ...selected_peers])];
await InstancesAPI.update(instance.id, { peers: new_peers });
fetchPeers();
addToast({
id: instancesPeerToAssociate,
title: t`${selected_hostname} added as a peer. Please be sure to run the install bundle for ${instance.hostname} again in order to see changes take effect.`,
title: t`Peers updated on ${instance.hostname}. Please be sure to run the install bundle for ${instance.hostname} again in order to see changes take effect.`,
variant: AlertVariant.success,
hasTimeout: true,
});
@@ -133,17 +206,18 @@ function InstancePeerList({ setBreadcrumb }) {
error: disassociateError,
} = useRequest(
useCallback(async () => {
const new_peers = [];
const selected_hostname = selected.map((obj) => obj.hostname);
for (let i = 0; i < instance.peers.length; i++) {
if (!selected_hostname.includes(instance.peers[i])) {
new_peers.push(instance.peers[i]);
}
let new_peers = instance.peers;
const selected_ids = selected.map((obj) => obj.id);
for (let i = 0; i < selected_ids.length; i++) {
new_peers = new_peers.filter((s_id) => s_id !== selected_ids[i]);
}
await InstancesAPI.update(instance.id, { peers: new_peers });
fetchPeers();
addToast({
title: t`${selected_hostname} removed. Please be sure to run the install bundle for ${instance.hostname} again in order to see changes take effect.`,
title: t`Peer removed. Please be sure to run the install bundle for ${instance.hostname} again in order to see changes take effect.`,
variant: AlertVariant.success,
hasTimeout: true,
});
@@ -190,9 +264,11 @@ function InstancePeerList({ setBreadcrumb }) {
<HeaderCell
tooltip={t`Cannot run health check on hop nodes.`}
sortKey="hostname"
>{t`Name`}</HeaderCell>
<HeaderCell sortKey="errors">{t`Status`}</HeaderCell>
>{t`Instance Name`}</HeaderCell>
<HeaderCell sortKey="address">{t`Address`}</HeaderCell>
<HeaderCell sortKey="port">{t`Port`}</HeaderCell>
<HeaderCell sortKey="node_type">{t`Node Type`}</HeaderCell>
<HeaderCell sortKey="canonical">{t`Canonical`}</HeaderCell>
</HeaderRow>
}
renderToolbar={(props) => (
@@ -218,7 +294,7 @@ function InstancePeerList({ setBreadcrumb }) {
key="disassociate"
onDisassociate={handlePeersDiassociate}
itemsToDisassociate={selected}
modalTitle={t`Remove instance from peers?`}
modalTitle={t`Remove peers?`}
/>
),
]}
@@ -243,12 +319,15 @@ function InstancePeerList({ setBreadcrumb }) {
isModalOpen={isModalOpen}
onAssociate={handlePeerAssociate}
onClose={() => setIsModalOpen(false)}
title={t`Select Instances`}
title={t`Select Peer Addresses`}
optionsRequest={readInstancesOptions}
displayKey="hostname"
columns={[
{ key: 'hostname', name: t`Name` },
{ key: 'address', name: t`Address` },
{ key: 'port', name: t`Port` },
{ key: 'node_type', name: t`Node Type` },
{ key: 'protocol', name: t`Protocol` },
]}
/>
)}

View File

@@ -2,10 +2,8 @@ import React from 'react';
import { Link } from 'react-router-dom';
import { t } from '@lingui/macro';
import 'styled-components/macro';
import { Tooltip } from '@patternfly/react-core';
import { Tr, Td, ExpandableRowContent } from '@patternfly/react-table';
import { formatDateString } from 'util/dates';
import StatusLabel from 'components/StatusLabel';
import { Detail, DetailList } from 'components/DetailList';
function InstancePeerListItem({
@@ -43,29 +41,26 @@ function InstancePeerListItem({
}}
dataLabel={t`Selected`}
/>
<Td id={labelId} dataLabel={t`Name`}>
<Link to={`/instances/${peerInstance.id}/details`}>
<Link to={`/instances/${peerInstance.instance}/details`}>
<b>{peerInstance.hostname}</b>
</Link>
</Td>
<Td dataLabel={t`Status`}>
<Tooltip
content={
<div>
{t`Last Health Check`}
&nbsp;
{formatDateString(
peerInstance.last_health_check ?? peerInstance.last_seen
)}
</div>
}
>
<StatusLabel status={peerInstance.node_state} />
</Tooltip>
<Td id={labelId} dataLabel={t`Address`}>
{peerInstance.address}
</Td>
<Td id={labelId} dataLabel={t`Port`}>
{peerInstance.port}
</Td>
<Td dataLabel={t`Node Type`}>{peerInstance.node_type}</Td>
<Td id={labelId} dataLabel={t`Canonical`}>
{peerInstance.canonical.toString()}
</Td>
</Tr>
{!isHopNode && (
<Tr

View File

@@ -25,6 +25,7 @@ function Instances() {
[`/instances/${instance.id}`]: `${instance.hostname}`,
[`/instances/${instance.id}/details`]: t`Details`,
[`/instances/${instance.id}/peers`]: t`Peers`,
[`/instances/${instance.id}/listener_addresses`]: t`Listener Addresses`,
[`/instances/${instance.id}/edit`]: t`Edit Instance`,
});
}, []);

View File

@@ -1,6 +1,6 @@
import React, { useCallback } from 'react';
import React from 'react';
import { t } from '@lingui/macro';
import { Formik, useField, useFormikContext } from 'formik';
import { Formik, useField } from 'formik';
import { Form, FormGroup, CardBody } from '@patternfly/react-core';
import { FormColumnLayout } from 'components/FormLayout';
import FormField, {
@@ -9,7 +9,6 @@ import FormField, {
} from 'components/FormField';
import FormActionGroup from 'components/FormActionGroup';
import AnsibleSelect from 'components/AnsibleSelect';
import { PeersLookup } from 'components/Lookup';
import { required } from 'util/validators';
const INSTANCE_TYPES = [
@@ -23,16 +22,6 @@ function InstanceFormFields({ isEdit }) {
validate: required(t`Set a value for this field`),
});
const { setFieldValue } = useFormikContext();
const [peersField, peersMeta, peersHelpers] = useField('peers');
const handlePeersUpdate = useCallback(
(value) => {
setFieldValue('peers', value);
},
[setFieldValue]
);
return (
<>
<FormField
@@ -91,20 +80,6 @@ function InstanceFormFields({ isEdit }) {
isDisabled={isEdit}
/>
</FormGroup>
<PeersLookup
helperTextInvalid={peersMeta.error}
isValid={!peersMeta.touched || !peersMeta.error}
onBlur={() => peersHelpers.setTouched()}
onChange={handlePeersUpdate}
value={peersField.value}
tooltip={t`Select the Peers Instances.`}
fieldName="peers"
formLabel={t`Peers`}
multiple
typePeers
id="peers"
isRequired
/>
<FormGroup fieldId="instance-option-checkboxes" label={t`Options`}>
<CheckboxField
id="enabled"

View File

@@ -1,6 +1,5 @@
import React, { useCallback, useEffect, useState } from 'react';
import styled from 'styled-components';
import { t } from '@lingui/macro';
import {
Card,
@@ -47,7 +46,7 @@ function SubscriptionUsageChart() {
const [periodSelection, setPeriodSelection] = useState('year');
const userProfile = useUserProfile();
const calculateDateRange = () => {
const calculateDateRange = useCallback(() => {
const today = new Date();
let date = '';
switch (periodSelection) {
@@ -77,7 +76,7 @@ function SubscriptionUsageChart() {
break;
}
return date;
};
}, [periodSelection]);
const {
isLoading,
@@ -89,7 +88,7 @@ function SubscriptionUsageChart() {
calculateDateRange()
);
return data.data.results;
}, [periodSelection]),
}, [calculateDateRange]),
[]
);

View File

@@ -112,6 +112,12 @@ function WebhookSubForm({ templateType }) {
label: t`GitLab`,
isDisabled: false,
},
{
value: 'bitbucket_dc',
key: 'bitbucket_dc',
label: t`Bitbucket Data Center`,
isDisabled: false,
},
];
if (error || webhookKeyError) {

View File

@@ -67,6 +67,8 @@ options:
description:
- List of peers to connect outbound to. Only configurable for hop and execution nodes.
- To remove all current peers, set value to an empty list, [].
- Each item is either the ID or the address of a receptor address.
- If an item is an address, it must be unique across all receptor addresses.
required: False
type: list
elements: str
@@ -83,12 +85,24 @@ EXAMPLES = '''
awx.awx.instance:
hostname: my-instance.prod.example.com
capacity_adjustment: 0.4
listener_port: 31337
- name: Deprovision the instance
awx.awx.instance:
hostname: my-instance.prod.example.com
node_state: deprovisioning
- name: Create execution node
awx.awx.instance:
hostname: execution.example.com
node_type: execution
peers:
- 12
- route.to.hop.example.com
- name: Remove peers
awx.awx.instance:
hostname: execution.example.com
peers: []
'''
from ..module_utils.controller_api import ControllerAPIModule
@@ -124,6 +138,17 @@ def main():
# Attempt to look up an existing item based on the provided data
existing_item = module.get_one('instances', name_or_id=hostname)
# peer item can be an id or address
# if address, get the id
peers_ids = []
if peers:
for p in peers:
if not p.isdigit():
p_id = module.get_one('receptor_addresses', allow_none=False, data={'address': p})
peers_ids.append(p_id['id'])
else:
peers_ids.append(p)
# Create the data that gets sent for create and update
new_fields = {'hostname': hostname}
if capacity_adjustment is not None:
@@ -139,7 +164,7 @@ def main():
if listener_port is not None:
new_fields['listener_port'] = listener_port
if peers is not None:
new_fields['peers'] = peers
new_fields['peers'] = peers_ids
if peers_from_control_nodes is not None:
new_fields['peers_from_control_nodes'] = peers_from_control_nodes
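
The peer handling added above accepts either a receptor address ID or an address string and resolves everything to IDs before updating the instance. A minimal standalone sketch of that resolution rule, assuming a hypothetical `lookup_receptor_address` callable in place of the module's `get_one('receptor_addresses', ...)` lookup:

```python
def resolve_peer_ids(peers, lookup_receptor_address):
    """Normalize a mixed list of receptor address IDs and address strings to IDs.

    ``lookup_receptor_address`` is a hypothetical callable that returns the
    receptor_addresses record for a given address, e.g. a thin wrapper around
    GET /api/v2/receptor_addresses/?address=<address>.
    """
    peer_ids = []
    for peer in peers:
        if str(peer).isdigit():
            # Already an ID; pass it through unchanged.
            peer_ids.append(peer)
        else:
            # An address string; look up the matching receptor address ID.
            peer_ids.append(lookup_receptor_address(peer)['id'])
    return peer_ids


# Usage with a stubbed lookup (values are illustrative only):
stub = lambda address: {'id': 12, 'address': address}
print(resolve_peer_ids(['42', 'route.to.hop.example.com'], stub))  # ['42', 12]
```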

View File

@@ -54,7 +54,7 @@ extra_endpoints = {
}
# Global module parameters we can ignore
ignore_parameters = ['state', 'new_name', 'update_secrets', 'copy_from']
ignore_parameters = ['state', 'new_name', 'update_secrets', 'copy_from', 'is_internal']
# Some modules take additional parameters that do not appear in the API
# Add the module name as the key with the value being the list of params to ignore
@@ -248,7 +248,7 @@ def test_completeness(collection_import, request, admin_user, job_template, exec
singular_endpoint = '{0}'.format(endpoint)
if singular_endpoint.endswith('ies'):
singular_endpoint = singular_endpoint[:-3]
if singular_endpoint != 'settings' and singular_endpoint.endswith('s'):
elif singular_endpoint != 'settings' and singular_endpoint.endswith('s'):
singular_endpoint = singular_endpoint[:-1]
module_name = '{0}'.format(singular_endpoint)

View File

@@ -134,21 +134,17 @@ def test_export_simple(
@pytest.mark.django_db
def test_export_system_auditor(run_module, schedule, system_auditor): # noqa: F811
def test_export_system_auditor(run_module, organization, system_auditor): # noqa: F811
"""
This test illustrates the deficiency of export when run as a non-root user (i.e. system auditor).
The OPTIONS endpoint does NOT return POST for a system auditor. This is bad for the export code
because it relies on crawling the OPTIONS POST response to determine the fields to export.
This test illustrates that export of resources can now happen
when run as a non-root user (i.e. system auditor). The OPTIONS
endpoint does NOT return POST for a system auditor, but we now
make a best-effort attempt to parse the description string, which
will often list the fields.
"""
result = run_module('export', dict(all=True), system_auditor)
assert result.get('failed', False), result.get('msg', result)
assert not result.get('failed', False), result.get('msg', result)
assert 'msg' not in result
assert 'assets' in result
assert 'Failed to export assets substring not found' in result['msg'], (
'If you found this error then you have probably fixed a feature! The export code attempts to ascertain the POST fields from the `description` field,'
' but both the API side and the client inference code are lacking.'
)
# r = result['assets']['schedules'][0]
# assert r['natural_key']['name'] == 'test-sched'
# assert 'rrule' not in r, 'If you found this error then you have probably fixed a feature! We WANT rrule to be found in the export schedule payload.'
find_by(result['assets'], 'organizations', 'name', 'Default')
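
The updated docstring above describes a best-effort fallback: when OPTIONS does not expose POST fields to a system auditor, the exporter tries to infer them from the endpoint's description text. A rough sketch of that idea, using a made-up description format and a hypothetical helper name (the actual awxkit parsing may differ):

```python
import re


def fields_from_description(description):
    """Best-effort guess at POST field names from an OPTIONS description string.

    Hypothetical helper: assumes field names appear as backtick-quoted
    identifiers in the description, e.g. "`name`, `rrule`, `unified_job_template`".
    """
    return sorted(set(re.findall(r'`([a-z_]+)`', description)))


# Illustrative description text (not taken from the real API):
desc = 'Create a schedule by providing `name`, `rrule` and `unified_job_template`.'
print(fields_from_description(desc))  # ['name', 'rrule', 'unified_job_template']
```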

View File

@@ -13,39 +13,32 @@ def test_peers_adding_and_removing(run_module, admin_user):
with override_settings(IS_K8S=True):
result = run_module(
'instance',
{'hostname': 'hopnode1', 'node_type': 'hop', 'peers_from_control_nodes': True, 'node_state': 'installed', 'listener_port': 27199},
{'hostname': 'hopnode', 'node_type': 'hop', 'node_state': 'installed', 'listener_port': 6789},
admin_user,
)
assert result['changed']
hop_node_1 = Instance.objects.get(pk=result.get('id'))
hop_node = Instance.objects.get(pk=result.get('id'))
assert hop_node_1.peers_from_control_nodes is True
assert hop_node_1.node_type == 'hop'
assert hop_node.node_type == 'hop'
address = hop_node.receptor_addresses.get(pk=result.get('id'))
assert address.port == 6789
result = run_module(
'instance',
{'hostname': 'hopnode2', 'node_type': 'hop', 'peers_from_control_nodes': True, 'node_state': 'installed', 'listener_port': 27199},
admin_user,
)
assert result['changed']
hop_node_2 = Instance.objects.get(pk=result.get('id'))
result = run_module(
'instance',
{'hostname': 'executionnode', 'node_type': 'execution', 'node_state': 'installed', 'listener_port': 27199, 'peers': ['hopnode1', 'hopnode2']},
{'hostname': 'executionnode', 'node_type': 'execution', 'node_state': 'installed', 'peers': ['hopnode']},
admin_user,
)
assert result['changed']
execution_node = Instance.objects.get(pk=result.get('id'))
assert set(execution_node.peers.all()) == {hop_node_1, hop_node_2}
assert set(execution_node.peers.all()) == {address}
result = run_module(
'instance',
{'hostname': 'executionnode', 'node_type': 'execution', 'node_state': 'installed', 'listener_port': 27199, 'peers': []},
{'hostname': 'executionnode', 'node_type': 'execution', 'node_state': 'installed', 'peers': []},
admin_user,
)

View File

@@ -42,7 +42,6 @@
node_type: execution
node_state: installed
capacity_adjustment: 0.4
listener_port: 31337
register: result
- assert:
@@ -74,11 +73,9 @@
- block:
- name: Create hop node 1
awx.awx.instance:
hostname: hopnode1
hostname: "{{ hostname1 }}"
node_type: hop
node_state: installed
listener_port: 27199
peers_from_control_nodes: True
register: result
- assert:
@@ -87,11 +84,9 @@
- name: Create hop node 2
awx.awx.instance:
hostname: hopnode2
hostname: "{{ hostname2 }}"
node_type: hop
node_state: installed
listener_port: 27199
peers_from_control_nodes: True
register: result
- assert:
@@ -100,13 +95,12 @@
- name: Create execution node
awx.awx.instance:
hostname: executionnode
hostname: "{{ hostname3 }}"
node_type: execution
node_state: installed
listener_port: 27199
peers:
- "hopnode1"
- "hopnode2"
- "{{ hostname1 }}"
- "{{ hostname2 }}"
register: result
- assert:
@@ -115,10 +109,9 @@
- name: Remove execution node peers
awx.awx.instance:
hostname: executionnode
hostname: "{{ hostname3 }}"
node_type: execution
node_state: installed
listener_port: 27199
peers: []
register: result
@@ -126,4 +119,15 @@
that:
- result is changed
always:
- name: Deprovision the instances
awx.awx.instance:
hostname: "{{ item }}"
node_state: deprovisioning
with_items:
- "{{ hostname1 }}"
- "{{ hostname2 }}"
- "{{ hostname3 }}"
when: IS_K8S

Some files were not shown because too many files have changed in this diff