
Conversation

Contributor

@slagle slagle commented Dec 11, 2025

Go was formatting the servicesOverride slice without commas between the
items, which caused the value to be interpreted as a YAML list containing
a single string. Marshaling the value to JSON first produces proper JSON
syntax, so the resulting YAML value is correctly formatted with commas.

This was breaking the download-cache service, which relies on a correct
edpm_services_override value when download-cache and servicesOverride
are both used on a Deployment.

Jira: OSPRH-21737
Signed-off-by: James Slagle jslagle@redhat.com
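
For context, a minimal standalone sketch of the formatting difference described above (the service names are hypothetical; the real data comes from deployment.Spec.ServicesOverride):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical service names, for illustration only.
	services := []string{"download-cache", "install-os"}

	// Go's default %s formatting joins slice elements with spaces and no
	// commas: [download-cache install-os]. Parsed as YAML, that flow
	// sequence contains a single string item, "download-cache install-os".
	fmt.Printf("%s\n", services)

	// json.Marshal emits valid JSON, which is also valid YAML:
	// ["download-cache","install-os"] -- a two-item list, as intended.
	out, err := json.Marshal(services)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}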

@openshift-ci openshift-ci bot requested review from dprince and fultonj December 11, 2025 22:47
Contributor

openshift-ci bot commented Dec 11, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: slagle

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Contributor

@bshephar bshephar left a comment


Hey man,

I couldn't open the Jira to see the full context of the error, so my suggestion is about an implementation detail rather than about the fix for the reported problem.


if len(deployment.Spec.ServicesOverride) > 0 {
-	a.ExtraVars["edpm_services_override"] = json.RawMessage([]byte(fmt.Sprintf("\"%s\"", deployment.Spec.ServicesOverride)))
+	extraVarsJSON, _ := json.Marshal(deployment.Spec.ServicesOverride)

I feel like this err should probably be checked, no?

I understand that this is complicated by the fact that the function doesn't return anything, and changing its signature isn't nice. It is only used in one place though, so it's probably not the end of the world to update the signature to:

func (a *EEJob) FormatAEEExtraVars(
	aeeSpec *dataplanev1.AnsibleEESpec,
	service *dataplanev1.OpenStackDataPlaneService,
	deployment *dataplanev1.OpenStackDataPlaneDeployment,
	nodeSet client.Object,
) error {
}

This also necessitates changing the signature of BuildAeeJobSpec to return an error so it can be propagated back up. But I think it would be nicer if the error were captured and returned rather than silently dropped.
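
For illustration, a minimal sketch of how that might look (the signature and field names are taken from the snippets above; the rest of the function body and the error message are assumptions):

func (a *EEJob) FormatAEEExtraVars(
	aeeSpec *dataplanev1.AnsibleEESpec,
	service *dataplanev1.OpenStackDataPlaneService,
	deployment *dataplanev1.OpenStackDataPlaneDeployment,
	nodeSet client.Object,
) error {
	// ... existing extra-vars handling ...
	if len(deployment.Spec.ServicesOverride) > 0 {
		extraVarsJSON, err := json.Marshal(deployment.Spec.ServicesOverride)
		if err != nil {
			// Propagate instead of silently dropping the error; the caller
			// (BuildAeeJobSpec) would return it in turn.
			return fmt.Errorf("marshaling ServicesOverride: %w", err)
		}
		a.ExtraVars["edpm_services_override"] = json.RawMessage(extraVarsJSON)
	}
	return nil
}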

Contributor

openshift-ci bot commented Jan 14, 2026

@bshephar: changing LGTM is restricted to collaborators

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@slagle slagle force-pushed the edpm_services_override branch from 7c6ab83 to 591a484 on January 23, 2026 16:29
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/24dbbf32511c44aea579605a3a25aaf4

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 15m 55s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 25m 41s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 58m 18s
❌ openstack-operator-tempest-multinode FAILURE in 1h 46m 49s

Contributor

openshift-ci bot commented Jan 23, 2026

@slagle: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/openstack-operator-build-deploy-kuttl | 591a484 | link | true | /test openstack-operator-build-deploy-kuttl
ci/prow/openstack-operator-build-deploy-kuttl-4-18 | 591a484 | link | true | /test openstack-operator-build-deploy-kuttl-4-18

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@bshephar
Contributor

tempest:

    Response - Headers: {'date': 'Fri, 23 Jan 2026 17:49:28 GMT', 'server': 'Apache', 'content-length': '5208', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-cf806330-3492-4fb2-aead-86252312d718', 'x-compute-request-id': 'req-cf806330-3492-4fb2-aead-86252312d718', 'content-type': 'application/json', 'set-cookie': '0dc6017b143850df8350099417b4ec9f=1f2c4fd094e984d52c1697c4b8e2eb60; path=/; HttpOnly; Secure; SameSite=None', 'connection': 'close', 'status': '200', 'content-location': 'https://nova-public-openstack.apps-crc.testing/v2.1/servers/be39178f-6acd-4d89-a305-c0615c64ae83'}
        Body: b'{"server": {"id": "be39178f-6acd-4d89-a305-c0615c64ae83", "name": "tempest-TestServerMultinode-server-1839518245", "status": "ERROR", "tenant_id": "8a3b0ad9def04481981291cc3d72c6ed", "user_id": "0e2378e2154a41d08faef827d2cff04d", "metadata": {}, "hostId": "", "image": {"id": "9e4b0326-d2f4-48ff-a207-cb31e6195540", "links": [{"rel": "bookmark", "href": "https://nova-public-openstack.apps-crc.testing/images/9e4b0326-d2f4-48ff-a207-cb31e6195540"}]}, "flavor": {"id": "c9d0330e-b70a-452a-9525-187067a4f9ea", "links": [{"rel": "bookmark", "href": "https://nova-public-openstack.apps-crc.testing/flavors/c9d0330e-b70a-452a-9525-187067a4f9ea"}]}, "created": "2026-01-23T17:49:16Z", "updated": "2026-01-23T17:49:27Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-public-openstack.apps-crc.testing/v2.1/servers/be39178f-6acd-4d89-a305-c0615c64ae83"}, {"rel": "bookmark", "href": "https://nova-public-openstack.apps-crc.testing/servers/be39178f-6acd-4d89-a305-c0615c64ae83"}], "OS-DCF:diskConfig": "MANUAL", "fault": {"code": 500, "created": "2026-01-23T17:49:27Z", "message": "Binding failed for port b53df125-1158-4eed-9e45-ca3e6f2490e9, please check neutron logs for more information.", "details": "Traceback (most recent call last):\\n  File \\"/usr/lib/python3.9/site-packages/nova/compute/manager.py\\", line 2611, in _build_and_run_instance\\n    self.driver.spawn(context, instance, image_meta,\\n  File \\"/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py\\", line 4407, in spawn\\n    xml = self._get_guest_xml(context, instance, network_info,\\n  File \\"/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py\\", line 7538, in _get_guest_xml\\n    network_info_str = str(network_info)\\n  File \\"/usr/lib/python3.9/site-packages/nova/network/model.py\\", line 620, in __str__\\n    return self._sync_wrapper(fn, *args, **kwargs)\\n  File \\"/usr/lib/python3.9/site-packages/nova/network/model.py\\", line 603, in _sync_wrapper\\n    self.wait()\\n  File \\"/usr/lib/python3.9/site-packages/nova/network/model.py\\", line 635, in wait\\n    self[:] = self._gt.wait()\\n  File \\"/usr/lib/python3.9/site-packages/eventlet/greenthread.py\\", line 181, in wait\\n    return self._exit_event.wait()\\n  File \\"/usr/lib/python3.9/site-packages/eventlet/event.py\\", line 125, in wait\\n    result = hub.switch()\\n  File \\"/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py\\", line 313, in switch\\n    return self.greenlet.switch()\\n  File \\"/usr/lib/python3.9/site-packages/eventlet/greenthread.py\\", line 221, in main\\n    result = function(*args, **kwargs)\\n  File \\"/usr/lib/python3.9/site-packages/nova/utils.py\\", line 654, in context_wrapper\\n    return func(*args, **kwargs)\\n  File \\"/usr/lib/python3.9/site-packages/nova/compute/manager.py\\", line 1982, in _allocate_network_async\\n    raise e\\n  File \\"/usr/lib/python3.9/site-packages/nova/compute/manager.py\\", line 1960, in _allocate_network_async\\n    nwinfo = self.network_api.allocate_for_instance(\\n  File \\"/usr/lib/python3.9/site-packages/nova/network/neutron.py\\", line 1229, in allocate_for_instance\\n    created_port_ids = self._update_ports_for_instance(\\n  File \\"/usr/lib/python3.9/site-packages/nova/network/neutron.py\\", line 1371, in _update_ports_for_instance\\n    vif.destroy()\\n  File \\"/usr/lib/python3.9/site-packages/oslo_utils/excutils.py\\", line 227, in __exit__\\n    self.force_reraise()\\n  File 
\\"/usr/lib/python3.9/site-packages/oslo_utils/excutils.py\\", line 200, in force_reraise\\n    raise self.value\\n  File \\"/usr/lib/python3.9/site-packages/nova/network/neutron.py\\", line 1340, in _update_ports_for_instance\\n    updated_port = self._update_port(\\n  File \\"/usr/lib/python3.9/site-packages/nova/network/neutron.py\\", line 585, in _update_port\\n    _ensure_no_port_binding_failure(port)\\n  File \\"/usr/lib/python3.9/site-packages/nova/network/neutron.py\\", line 294, in _ensure_no_port_binding_failure\\n    raise exception.PortBindingFailed(port_id=port[\'id\'])\\nnova.exc _log_request_full /usr/lib/python3.9/site-packages/tempest/lib/common/rest_client.py:484

Unrelated, by the looks of it.
