OSTree is best summarized in a single sentence as "git for operating system binaries". At its core is a userspace content-addressed filesystem, and layered on top of that is an administrative layer designed to atomically parallel-install multiple bootable Unix-like operating systems.
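The analogy is concrete: the ostree command-line tool follows a git-like init/commit/log workflow. A minimal sketch, assuming an illustrative repository path, rootfs path, and branch name:

```bash
# Create a repository, commit a filesystem tree into a branch, and
# inspect its history (repo path, rootfs path, and branch name are
# illustrative).
mkdir -p repo
ostree --repo=repo init --mode=archive
ostree --repo=repo commit -s "Initial commit" \
    --branch=exampleos/x86_64/standard /path/to/rootfs
ostree --repo=repo log exampleos/x86_64/standard
```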
While it takes over some of the roles of traditional "package managers" like dpkg and rpm, it is not a package system, nor is it a tool for managing full disk images. Instead, OSTree sits between those levels, offering a blend of the advantages (and disadvantages) of both.
Because OSTree is designed for deploying core operating systems, a comparison with traditional "package managers" such as dpkg and rpm is illustrative. Packages are traditionally composed of partial filesystem trees with metadata and scripts attached, and these are dynamically assembled on the client machine, after a process of dependency resolution.
In contrast, OSTree only supports recording and deploying complete (bootable) filesystem trees. It has no built-in knowledge of how a given filesystem tree was generated, the origin of individual files, dependencies, or descriptions of individual components.
The OSTree core emphasizes replicating read-only trees via HTTP. It is designed for the model where a build server assembles one or more trees, and these are replicated to clients, which can choose between fully assembled (and hopefully tested) trees.
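In practice, replication is a matter of configuring a remote and pulling a branch; a sketch, assuming a hypothetical build server URL and branch name:

```bash
# Subscribe to a build server and replicate a prebuilt tree over HTTP
# (URL and branch are illustrative; GPG verification is assumed to be
# configured separately).
ostree --repo=/ostree/repo remote add exampleos https://builds.example.com/repo
ostree --repo=/ostree/repo pull exampleos exampleos/x86_64/standard
```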
However, it is entirely possible to use OSTree underneath a package system. For example, when installing a package, rather than mutating the currently running filesystem, the package manager could assemble a new filesystem tree that includes the new package, record it in the local OSTree repository, and then set it up for the next boot. To support this model, OSTree provides an (introspectable) C shared library.
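A hedged sketch of that flow (branch name and paths are illustrative): check out the current tree as copies, overlay the package's files, then commit and deploy the result.

```bash
# Assemble a new tree offline instead of mutating the running system.
# --force-copy checks out real copies rather than hardlinks, so editing
# the tree cannot corrupt repository objects.
ostree --repo=/ostree/repo checkout --force-copy \
    exampleos/x86_64/standard /var/tmp/new-root
# ... unpack the new package's files into /var/tmp/new-root ...
ostree --repo=/ostree/repo commit -s "Add package" \
    --branch=exampleos/x86_64/standard /var/tmp/new-root
# Stage the new tree as the default for the next boot.
ostree admin deploy exampleos/x86_64/standard
```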
OSTree shares some similarity with "dumb" replication and stateless deployments, such as the model common in "cloud" deployments where nodes are booted from an (effectively) read-only disk, and user data is kept on separate volumes. The advantage of "dumb" replication, shared by both OSTree and the cloud model, is that it's reliable and predictable.
But unlike many default image-based deployments, OSTree supports a persistent, writable /etc that is preserved across upgrades.
Because OSTree operates at the Unix filesystem layer, it works on top of any filesystem or block storage layout; it's possible to replicate a given filesystem tree from an OSTree repository into both a BTRFS disk and an XFS-on-LVM deployment. Note: OSTree will transparently take advantage of some BTRFS features if deployed on it.
Another deeply fundamental difference from both package managers and image-based replication is that OSTree is designed to parallel-install multiple versions of multiple independent operating systems. OSTree relies on a new toplevel ostree directory; it can in fact parallel-install inside an existing OS or distribution occupying the physical / root.
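A sketch of what that setup looks like, assuming a root shell and an illustrative OS name:

```bash
# Create the toplevel /ostree layout (repository plus deployment
# directories), then initialize a named OS within it.
ostree admin init-fs /
ostree admin os-init exampleos
```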
On each client machine, there is an OSTree repository stored in /ostree/repo, and a set of "deployments" stored in /ostree/deploy/OSNAME/CHECKSUM.
Each deployment is primarily composed of a set of hardlinks into the repository. This means each version is deduplicated; an upgrade process only costs disk space proportional to the new files, plus some constant overhead.
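Concretely, the layout looks roughly like this, with OSNAME and CHECKSUM standing in for a real OS name and commit checksum:

```
/ostree/repo/                    # shared content-addressed object store
/ostree/deploy/OSNAME/           # per-OS deployment area
/ostree/deploy/OSNAME/CHECKSUM/  # one deployment: hardlinks into the repo
```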
The model OSTree emphasizes is that the OS read-only content is kept in the classic Unix /usr; it comes with code to create a Linux read-only bind mount to prevent inadvertent corruption. There is exactly one writable directory, /var, shared between all deployments of a given OS. The OSTree core code does not touch content in this directory; it is up to each operating system how to manage and upgrade that state.
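One way to observe the protection on a deployed system (a sketch, assuming util-linux's findmnt is available):

```bash
# On an OSTree-managed host, /usr appears as its own mount; the "ro"
# flag in the output reflects the protective read-only bind mount.
findmnt -no OPTIONS /usr
```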
Finally, each deployment has its own writable copy of the configuration store /etc. On upgrade, OSTree will perform a basic 3-way diff and apply any local changes to the new copy, while leaving the old untouched.
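At the command level, the upgrade and the resulting configuration state can be inspected with the admin commands; a sketch assuming an OSTree-managed host:

```bash
# Pull and deploy the latest commit for the booted ref; local /etc
# edits are merged into the new deployment's /etc via the 3-way diff.
ostree admin upgrade
# Show how the current /etc differs from the tree's default configuration.
ostree admin config-diff
```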