Conversation

@cgwalters
Collaborator

Now that we're doing a "from scratch" build we no longer
have the mtime issue, so we can change our build system
to do everything in a single step.

Assisted-by: OpenCode (Opus 4.5)

@github-actions bot added the area/documentation label Jan 8, 2026
@bootc-bot bot requested a review from jeckersb January 8, 2026 22:57
Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request is an excellent refactoring of the build system, unifying the standard and sealed image build processes into a single, multi-stage Docker build. This simplifies the build logic, improves maintainability, and likely enhances build caching. The changes are well-structured, breaking down the complex sealing process into smaller, dedicated scripts and Dockerfile stages. The removal of outdated files and the consolidation of logic into a single, more powerful Dockerfile is a significant cleanup. My review includes a few suggestions to improve the robustness of some of the new shell scripts to better handle cases where file globs might match more than one file.

Comment on lines 25 to 29
kver=$(cd /usr/lib/modules && echo *)
if [ -z "$kver" ] || [ "$kver" = "*" ]; then
echo "Error: No kernel found" >&2
exit 1
fi
Contributor


high

The command kver=$(cd /usr/lib/modules && echo *) is not robust if multiple kernel version directories exist under /usr/lib/modules. In such a case, kver would contain a space-separated list of versions, which would break subsequent commands. It's safer to ensure exactly one kernel directory is found and fail otherwise.

Suggested change
kver=$(cd /usr/lib/modules && echo *)
if [ -z "$kver" ] || [ "$kver" = "*" ]; then
echo "Error: No kernel found" >&2
exit 1
fi
kver_paths=(/usr/lib/modules/*)
if [ "${#kver_paths[@]}" -ne 1 ]; then
echo "Error: Expected 1 kernel version directory, but found ${#kver_paths[@]}" >&2
ls -l /usr/lib/modules/
exit 1
fi
kver=$(basename "${kver_paths[0]}")
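The pitfall the suggestion guards against is easy to demonstrate; a minimal sketch (the directory names here are invented stand-ins for /usr/lib/modules contents):

```shell
# Demo of why `echo *` is fragile when more than one kernel is installed.
moddir=$(mktemp -d)
mkdir "$moddir/6.12.0-174.x86_64" "$moddir/6.13.1-100.x86_64"

# Naive approach: both directory names land in one space-separated string.
kver=$(cd "$moddir" && echo *)
echo "naive: '$kver'"

# Array approach: the mismatch is detectable before anything breaks.
kver_paths=("$moddir"/*)
if [ "${#kver_paths[@]}" -ne 1 ]; then
    echo "refusing to guess: found ${#kver_paths[@]} kernel directories" >&2
fi

rm -rf "$moddir"
```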

Comment on lines 24 to 28
kver=$(cd "${target}/usr/lib/modules" && echo *)
if [ -z "$kver" ] || [ "$kver" = "*" ]; then
echo "Error: No kernel found" >&2
exit 1
fi
Contributor


high

The command kver=$(cd "${target}/usr/lib/modules" && echo *) is not robust if multiple kernel version directories exist. In such a case, kver would contain a space-separated list of versions, which would break subsequent commands. It's safer to ensure exactly one kernel directory is found and fail otherwise.

Suggested change
kver=$(cd "${target}/usr/lib/modules" && echo *)
if [ -z "$kver" ] || [ "$kver" = "*" ]; then
echo "Error: No kernel found" >&2
exit 1
fi
kver_paths=("${target}/usr/lib/modules"/*)
if [ "${#kver_paths[@]}" -ne 1 ]; then
echo "Error: Expected 1 kernel version directory, but found ${#kver_paths[@]}" >&2
ls -l "${target}/usr/lib/modules/"
exit 1
fi
kver=$(basename "${kver_paths[0]}")

Comment on lines +99 to +113
sdboot_unsigned=$(ls ./usr/lib/systemd/boot/efi/systemd-boot*.efi)
sdboot_bn=$(basename ${sdboot_unsigned})
Contributor


medium

The use of ls with a glob to find the systemd-boot EFI binary can be fragile. If the glob matches more than one file, the sdboot_unsigned variable will contain a multi-line string, which could cause sbsign to behave unexpectedly. It's safer to ensure exactly one file is found.

sdboot_unsigned_files=(./usr/lib/systemd/boot/efi/systemd-boot*.efi)
if [ ${#sdboot_unsigned_files[@]} -ne 1 ]; then
    echo "Error: Expected 1 systemd-boot EFI file, but found ${#sdboot_unsigned_files[@]}" >&2
    ls -l ./usr/lib/systemd/boot/efi/
    exit 1
fi
sdboot_unsigned="${sdboot_unsigned_files[0]}"
sdboot_bn=$(basename "${sdboot_unsigned}")
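Since this "expect exactly one glob match" pattern recurs across the scripts, one option is a small shared helper; the sketch below is our own illustration (the `glob_one` name is not from this PR):

```shell
# Hypothetical helper: expand a glob and fail unless exactly one
# existing path matches.
glob_one() {
    local matches=("$@")
    # With default bash globbing an unmatched pattern is passed through
    # literally, so also verify the single "match" actually exists.
    if [ "${#matches[@]}" -ne 1 ] || [ ! -e "${matches[0]}" ]; then
        echo "Error: expected exactly 1 match, got: $*" >&2
        return 1
    fi
    printf '%s\n' "${matches[0]}"
}

# Usage (path illustrative):
#   sdboot_unsigned=$(glob_one ./usr/lib/systemd/boot/efi/systemd-boot*.efi)
```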

Comment on lines +21 to +22
sdboot=$(ls /usr/lib/systemd/boot/efi/systemd-boot*.efi)
sdboot_bn=$(basename "${sdboot}")
Contributor


medium

The use of ls with a glob to find the systemd-boot EFI binary can be fragile. If the glob matches more than one file, the sdboot variable will contain a multi-line string. This would cause the install command on line 24 to fail because it would be interpreted as multiple destination arguments. It's safer to ensure exactly one file is found.

Suggested change
sdboot=$(ls /usr/lib/systemd/boot/efi/systemd-boot*.efi)
sdboot_bn=$(basename "${sdboot}")
sdboot_files=(/usr/lib/systemd/boot/efi/systemd-boot*.efi)
if [ "${#sdboot_files[@]}" -ne 1 ]; then
echo "Error: Expected 1 systemd-boot EFI file, but found ${#sdboot_files[@]}" >&2
ls -l /usr/lib/systemd/boot/efi/
exit 1
fi
sdboot="${sdboot_files[0]}"
sdboot_bn=$(basename "${sdboot}")

@cgwalters force-pushed the sealed-scratch-rebase branch from 0ba5f48 to 4c7882c on January 8, 2026 23:57
@cgwalters
Collaborator Author

error: Installing to disk: Setting up composefs boot: Setting up UKI boot: Writing 6.12.0-174.el10.x86_64.efi to ESP: The UKI has the wrong composefs= parameter (is 'sha512:1c88433c334194ecbf03356ba12fdcf7bb16c4ba075ff72d44666237136c162b35cc0bcfe923c8ace1fe27d724c18c6dc386d644d98ec5a890e63bab469b21b2', should be sha512:5c8b606415d239698de34b714935a684b38a763eddb573161213e98ed2ad91b69e10f236ded502af078744ce94ddbe353368be3dda5a182c631edd8e1ec0add0)

OK this was working for me at one point but then broke and it took me quite a while to figure out why: containers/storage#743 (comment)
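The failure above is a digest mismatch between the composefs= kernel argument baked into the UKI and the digest computed at install time. Pulling that parameter out of a command line for comparison can be sketched as follows (the cmdline string and digest are invented for illustration):

```shell
# Extract the composefs= value from a kernel command line.
# The cmdline and digest below are made up.
cmdline='root=UUID=1234 rw enforcing=0 composefs=sha512:deadbeef'

# One karg per line, then strip the composefs= prefix.
digest=$(printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n 's/^composefs=//p')
echo "UKI digest: $digest"
```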

@cgwalters
Collaborator Author

OK this will depend on more composefs-rs work so we need to get back on git main, which is #1791

@jeckersb
Collaborator

> OK this will depend on more composefs-rs work so we need to get back on git main, which is #1791

Merged now, so I think this is unblocked?

@cgwalters
Collaborator Author

I'm working on containers/composefs-rs#209 because I was hitting unexpected diffs - will flesh this out more

@cgwalters force-pushed the sealed-scratch-rebase branch 3 times, most recently from 182d8fe to 35cbf82 on January 14, 2026 16:42
@cgwalters
Collaborator Author

CI / test-integration (fedora-44, false, composefs-sealeduki-sdboot) (pull_request): Successful in 34m

🎉 finally!!!

@cgwalters
Collaborator Author

This requires containers/composefs-rs#209

Though it'd also be cleaner if rebased on top of the buildsys changes from #1912

@cgwalters force-pushed the sealed-scratch-rebase branch 2 times, most recently from e35a329 to 182f6c2 on January 16, 2026 14:42
@cgwalters
Collaborator Author

                     content: error: boot data installation failed: installing component EFI: No such file or directory (os error 2)

OK, it took me and the agent quite a while to figure out that this is caused by a combination of coreos/bootupd#995 plus us not having #1816.

Effectively we had skew between the base image and the yum repos, which is really https://gitlab.com/redhat/centos-stream/containers/bootc/-/issues/1174

@cgwalters force-pushed the sealed-scratch-rebase branch from 182f6c2 to 75b5312 on January 16, 2026 19:40
@cgwalters force-pushed the sealed-scratch-rebase branch 2 times, most recently from e0cb1da to 98e07a8 on January 19, 2026 18:58
@Johan-Liebert1 previously approved these changes Jan 20, 2026
Collaborator

@Johan-Liebert1 left a comment


A lot here, but looks good. Didn't try to run the tests though

@cgwalters force-pushed the sealed-scratch-rebase branch 3 times, most recently from ae9ad31 to ac11acd on January 21, 2026 13:19
@cgwalters marked this pull request as ready for review January 21, 2026 14:29
@cgwalters enabled auto-merge (rebase) January 21, 2026 15:18
@cgwalters requested review from Johan-Liebert1 and removed request for Johan-Liebert1 January 21, 2026 16:09
@cgwalters assigned jeckersb and cgwalters and unassigned cgwalters Jan 21, 2026

# Build the kernel command line
# enforcing=0: https://github.com/bootc-dev/bootc/issues/1826
# TODO: pick up kargs from /usr/lib/bootc/kargs.d
Collaborator


We should have the machinery now to container inspect and generate the right kargs here, but that can be handled in a followup.

Collaborator


Although on second thought maybe we just punt on this and fix it once and for all with forthcoming container ukify command that will do that automatically.

Collaborator Author


Yeah, still a ton more to do here but let's save that one for a followup!
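The kargs.d handling deferred in this thread could be approximated in shell. bootc's kargs.d files are TOML carrying a `kargs = [...]` list; the sed-based parsing below is only a rough sketch of the idea (it assumes single-line entries), not the real bootc implementation:

```shell
# Rough sketch: gather kernel arguments from kargs.d-style TOML files.
# Real bootc parses these properly; this is illustrative only.
collect_kargs() {
    local dir="$1" f args out=""
    for f in "$dir"/*.toml; do
        [ -e "$f" ] || continue
        # Pull the bracketed list, then drop quotes and commas.
        args=$(sed -n 's/^kargs *= *\[\(.*\)\]/\1/p' "$f" | tr -d '",')
        out="$out $args"
    done
    printf '%s\n' "${out# }"
}

# e.g. collect_kargs /usr/lib/bootc/kargs.d
```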

@cgwalters force-pushed the sealed-scratch-rebase branch from ac11acd to 9b05026 on January 21, 2026 18:53
@jeckersb
Collaborator

Hm kind of a weird failure on c10s...

---- test_container_write_derive stdout ----
Error: Importing

Caused by:
    0: Parsing layer blob sha256:740ce35691c2200ec8dfa0e8980cfb5c66f1611f7b58877ce5b92cc28a63fc84
    1: error: ostree-tar: Failed to handle file: ostree-tar: Failed to import file: Writing content object: Unexpected EOF with 18/18 bytes remaining
    2: Processing tar
    3: Failed to commit tar: ExitStatus(unix_wait_status(256))

We'll see if that is a fluke or not...

@jeckersb previously approved these changes Jan 21, 2026
Collaborator

@jeckersb left a comment


LGTM, hopefully the CI thing is just some weird flake...

@cgwalters
Collaborator Author

I think it was a flake, can't see how it's related. That said it looks like there was another bug in that the GHA state is "stuck" in that the job finished but didn't report it, so the UI won't let me rerun.

Now that we're doing a "from scratch" build we no longer
have the mtime issue, so we can change our build system
to do everything in a single step.

Assisted-by: OpenCode (Opus 4.5)
Signed-off-by: Colin Walters <walters@verbum.org>
Add support for bind-mounting an extra source directory into container
builds, primarily for developing against a local composefs-rs checkout.

Usage:
  BOOTC_extra_src=$HOME/src/composefs-rs just build

The directory is mounted at /run/extra-src inside the container. When
using this, also patch Cargo.toml to use path dependencies pointing to
/run/extra-src/crates/....

Signed-off-by: Colin Walters <walters@verbum.org>
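As a rough idea of how the Justfile might wire this up; the flags and image tag below are our assumptions for illustration, not the PR's actual implementation:

```shell
# Hypothetical wiring for BOOTC_extra_src: bind-mount a local checkout
# into the build at /run/extra-src. The :ro,Z options and the tag
# localhost/bootc:dev are illustrative.
build_with_extra_src() {
    local src="$1"
    podman build \
        -v "${src}:/run/extra-src:ro,Z" \
        -t localhost/bootc:dev .
}

# e.g. build_with_extra_src "$HOME/src/composefs-rs"
```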

Assisted-by: OpenCode (Opus 4.5)
Signed-off-by: Colin Walters <walters@verbum.org>
The composefs-rs PR 209 has been merged to main. This updates
bootc to use the containers/composefs-rs repository at the
merge commit.

Key API changes:
- Directory::default() -> Directory::new(Stat::uninitialized())
- read_filesystem() no longer takes stat_root parameter
- New read_container_root() for OCI containers (propagates /usr metadata to root)
- stat_root CLI flag renamed to no_propagate_usr_to_root with inverted logic

See containers/composefs-rs#209

Signed-off-by: Colin Walters <walters@verbum.org>
We changed how composefs digests are computed to ensure that the
filesystem mounted via --mount=type=image and the install-time view
(OCI tar layer processing from containers-storage) match.

There were various problems, such as differing metadata for `/`.

Signed-off-by: Colin Walters <walters@verbum.org>
Add comprehensive documentation for building sealed bootc images,
focusing on the core concepts and the key command:
`bootc container compute-composefs-digest`.

Key additions:
- Document how sealed images work (UKI + composefs digest + Secure Boot)
- Explain the build workflow abstractly without distribution-specific details
- Document the compute-composefs-digest command and its options
- Add section on generating/signing UKIs with ukify
- Document developer testing commands (just variant=composefs-sealeduki-sdboot)
- Add validation tooling documentation

This provides the foundation for distribution-specific documentation
to build upon with concrete Containerfile examples.

Assisted-by: OpenCode (Claude Sonnet 4)
Signed-off-by: Colin Walters <walters@verbum.org>
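The workflow this documentation describes could be sketched as a function. The arguments to `bootc container compute-composefs-digest` and all paths below are illustrative, and `ukify` is systemd's UKI builder; consult each tool's --help for the real interfaces:

```shell
# Hedged sketch of the sealed-image flow: compute the composefs digest
# of an image, then bake it into a UKI's kernel command line so it can
# be verified at boot. Argument shapes and paths are assumptions.
seal_image() {
    local image="$1" kver="$2" digest
    digest=$(bootc container compute-composefs-digest "$image") || return 1
    ukify build \
        --linux="/usr/lib/modules/${kver}/vmlinuz" \
        --initrd="/usr/lib/modules/${kver}/initramfs.img" \
        --cmdline="composefs=${digest} enforcing=0" \
        --output="/boot/efi/EFI/Linux/bootc.efi"
}
```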
Justfile changes:
- Organize targets into groups (core, testing, docs, debugging, maintenance)
- Add `list-variants` target to show available build variants
- Simplify comments to be concise single-line descriptions
- Move composefs targets (build-sealed, test-composefs) into core group

CONTRIBUTING.md changes:
- Reference `just --list` and `just list-variants` instead of duplicating
- Remove tables that duplicate Justfile information
- Fix broken link to cli.rs

The Justfile is now self-documenting via `just --list` (grouped targets)
and `just list-variants` (build configuration options).

Assisted-by: OpenCode (Claude Sonnet 4)
Signed-off-by: Colin Walters <walters@verbum.org>
This is useful when debugging issues with stale cached layers,
such as package version skew between base images and repos.

Signed-off-by: Colin Walters <walters@verbum.org>
@jeckersb
Collaborator

> I think it was a flake, can't see how it's related. That said it looks like there was another bug in that the GHA state is "stuck" in that the job finished but didn't report it, so the UI won't let me rerun.

I had already restarted the failed job, but now it just says it's queued. Maybe an outage or some general backlog and it will eventually get around to running?

@jeckersb
Collaborator

> I think it was a flake, can't see how it's related. That said it looks like there was another bug in that the GHA state is "stuck" in that the job finished but didn't report it, so the UI won't let me rerun.

> I had already restarted the failed job, but now it just says it's queued. Maybe an outage or some general backlog and it will eventually get around to running?

Yeah, it eventually transitioned into a "Startup failure" state, and the annotation has this message:

Error
An unexpected error has occurred and we've been automatically notified. Errors are sometimes temporary, so please try again.

If the problem persists, please check whether the Actions service is operating normally at https://githubstatus.com/. If not, please try again once the outage has been resolved.

Should you need to contact Support, please visit https://support.github.com/contact and include request ID: B712:16F17F:4198E92:5B25070:697145D4

I shall blindly try it again now...

This is a nicer way to check for the kernel version.

Signed-off-by: Colin Walters <walters@verbum.org>
@cgwalters
Collaborator Author

Just pushed another small commit to hopefully unblock again

@cgwalters merged commit c68e2b4 into bootc-dev:main Jan 22, 2026
30 of 36 checks passed
@jeckersb
Collaborator

🥳
