Posts for: #containers

Incus 0.5.1 Release: Bug Fixes and Compatibility Updates for CentOS, AlmaLinux and Rocky Linux VMs

Incus 0.5.1 has been released. This release includes important bugfixes and a minor feature addition that caters to those running CentOS, AlmaLinux and Rocky Linux virtual machines.

One of the highlights of this release is the alternative way to get the VM agent. In the previous version, there was a single share named config that included both the instance-specific agent configuration and the incus-agent binary. However, this approach was wasteful and required a copy of the large incus-agent for every VM. With Incus 0.5.1, a separate share was introduced just for the binaries to avoid copying them for every VM. This change reduces resource usage on the host system.

Another important fix in this release is the handling of stopped instances during evacuation. In Incus 0.5, a bug caused stopped instances to be relocated to other systems during evacuation, even if they were configured to remain where they were. This has been corrected in Incus 0.5.1: instances configured with the stop, force-stop, or stateful-stop evacuation action now remain on their current server.

There are also some database performance fixes in this release. Improvements in Incus 0.5 unintentionally caused nested database transactions when fetching network information details for a large number of instances. This issue became visible when using an Incus cluster that serves DNS zones and has its metrics scraped by Prometheus. The fix removes the nested transactions and optimizes database access during common API interactions.

Here is the complete changelog for Incus 0.5.1:

  • Translated using Weblate (German)
  • Translated using Weblate (Dutch)
  • incus/action: Fix resume
  • Translated using Weblate (Japanese)
  • Translated using Weblate (Japanese)
  • Translated using Weblate (Japanese)
  • doc: Remove net_prio
  • incusd/cgroup: Fully remove net_prio
  • incusd/warningtype: Remove net_prio
  • incusd/cgroup: Look for full cgroup controllers list at the root
  • incusd/dns: Serialize DNS queries
  • incusd/network: Optimize UsedByInstanceDevices
  • incusd/backups: Simplify missing backup errors
  • tests: Update for current backup errors
  • incusd/cluster: Optimize ConnectIfInstanceIsRemote
  • incusd/instance/qemu/agent-loader: Fix to work with busybox
  • doc/installing.md: add a gentoo-wiki link under Gentoo section
  • Translated using Weblate (French)
  • Translated using Weblate (Dutch)
  • incusd/device/disk: Better cleanup cloud-init ISO
  • incusd/instance/qemu/qmp: Add Eject command
  • incusd/instance/qemu/qmp: Handle eject requests
  • api: agent_config_drive
  • doc/devices/disk: Add agent:config drive
  • incusd/device/disk: Add agent config drive
  • incusd/project: Add support for agent config drive
  • incusd/instance/qemu/agent-loader: Handle agent drive
  • incusd/db/warningtype: gofmt
  • incusd/loki: Sort lifecycle context keys
  • incusd/instance/qemu/agent-loader: Don’t hardcode paths
  • incusd/cluster: Fix evacuation of stopped instances

For more information, you can refer to the Incus documentation.

Linux Containers: Introducing Incus 0.5

The Incus team has announced the release of Incus 0.5, the first release of 2024. This release brings several improvements to the Incus CLI, new virtual machine features, additional options for handling cluster evacuations and host shutdowns, and various bugfixes and performance improvements.

Highlights of the release include:

Ansible, Terraform/OpenTofu, and Packer
Incus now has support for Ansible, Terraform/OpenTofu, and Packer. This means that users can now find a connection plugin for Incus in Ansible, an official provider for Terraform and OpenTofu, and a Packer plugin for Incus.

Linux distribution packages
Additional packages for Incus are now available for Arch Linux, Debian (testing/unstable), Ubuntu (noble), and Void Linux. Detailed installation instructions can be found in the Incus documentation.

Translations
The Incus team has spent time cleaning up translations and setting up Weblate for Incus. This makes it easier than ever for users to log into Weblate and translate the Incus CLI into their language.

New features
Some of the new features introduced in Incus 0.5 include:

  • New incus file create command: This command allows users to create empty files, symlinks, and directories without transferring an existing local directory tree.
  • New incus snapshot show command: This command allows users to view the configuration data included in an Incus instance snapshot.
  • More shell completion options: Incus is transitioning to a more dynamic way of handling shell completion, and users can now retrieve initial shell completion profiles for Bash, Fish, PowerShell, and Zsh.
  • Support for multiple VM agent binaries: Incus now supports providing multiple agent binaries to virtual machines, which is useful for handling multiple operating systems and architectures.
  • Support for virtio-blk as a disk io.bus: After adding NVMe support in Incus 0.2, Incus now offers virtio-blk as a disk I/O bus option in virtual machines.
  • Support for USB network device pass-through in VMs: Incus now detects when the parent network device of a virtual machine is connected over the USB bus and converts it into a USB device pass-through.
  • New cluster evacuation options: Two new cluster evacuation options, force-stop and stateful-stop, have been added to Incus. These options can be selected on a per-instance basis and provide different ways to handle the evacuation of instances in a cluster.
  • Ability to configure the host instance shutdown action: Users can now configure the action to be taken when the host instance shuts down. The options include stop, force-stop, and stateful-stop.
  • Ability to start instances as part of creation: Instances can now be started as part of the creation request, saving an API call and making it easier for users scripting the Incus API.
  • Configurable Loki instance name: Incus now allows users to provide a cluster name to be used as the Loki event source instance, making it easier to filter events from multiple clusters using the same Loki instance.
  • Extended HEAD support on files: The HEAD method on the Incus instance file API now returns the file size, allowing for the display of file sizes in addition to names and types.
  • Use of /run/incus for runtime data: Incus now stores runtime data in /run/incus, keeping /var/log/incus only for actual log files.
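As a rough sketch, the new CLI additions above might be exercised like this (the instance name and paths are placeholders, and the exact flags may differ slightly; check `incus file create --help` for the definitive syntax):

```shell
# Create files inside an instance without pushing a local directory tree
incus file create c1/root/app.conf                   # empty file
incus file create --type=directory c1/root/data      # directory
incus file create --type=symlink c1/root/conf-link /root/app.conf

# Inspect the configuration data stored in a snapshot
incus snapshot create c1 snap0
incus snapshot show c1 snap0
```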

For the complete list of changes in Incus 0.5, refer to the changelog.

To try Incus for yourself, visit the Incus documentation for installation instructions and more information.

Linux Containers Release Incus 0.4

The Incus team has announced the release of Incus 0.4, the latest version of their system container and virtual machine manager. This release is particularly significant as it marks the last release of Incus to feature changes coming from LXD, as Incus has now become fully independent.

Incus 0.4 introduces several exciting new features, including a built-in keep-alive mode in the client tool, improvements to certificate/trust store management, new OVN configuration keys, and the ability to directly create CephFS filesystems. Additionally, Incus 0.4 brings significant improvements to both OpenFGA and OVN handling, setting the infrastructure in place for upcoming new features.

One of the standout features of Incus 0.4 is the new keep-alive support in the CLI client. Users can set a keepalive configuration key on a remote in ~/.config/incus/config.yml, defining how long to keep a background connection with the Incus server. This feature significantly reduces latency and provides up to a 30% performance improvement for use cases that involve a lot of incus commands, such as Ansible.
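A minimal sketch of what this could look like in ~/.config/incus/config.yml (the remote name and address are placeholders, and treating the keepalive value as a duration in seconds is an assumption; consult the Incus client documentation for the exact semantics):

```yaml
remotes:
  my-cluster:
    addr: https://incus.example.com:8443
    protocol: incus
    # How long to keep the background connection alive (seconds, assumed)
    keepalive: 30
```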

Another notable addition in Incus 0.4 is the description field for certificate entries. This brings certificate entries in line with other Incus objects and enhances the overall user experience.

The incus config trust list command has also been reworked in this release to show more useful columns by default, including the description column. These columns are now configurable, providing users with more control over their configurations.

In terms of infrastructure improvements, Incus 0.4 introduces OVN SSL keys as server configuration. This allows users to specify SSL certificates and keys to access OVN, taking precedence over any keys found in /etc/ovn/.

Additionally, CephFS filesystems can now be directly created in Incus. Users can set the cephfs.create_missing config key to true and specify the OSD pool to consume, allowing Incus to create a new CephFS filesystem.
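As a hedged sketch, creating a storage pool backed by a brand-new CephFS filesystem might look like the following (the pool, filesystem, and OSD pool names are placeholders, and the meta/data pool key names are assumptions; verify against the Incus storage documentation):

```shell
# Have Incus create the CephFS filesystem "myfs" if it does not exist yet
incus storage create shared cephfs source=myfs \
    cephfs.create_missing=true \
    cephfs.meta_pool=myfs_meta \
    cephfs.data_pool=myfs_data
```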

Users of LXD are also advised that access to the community image server (images: remote) will be phased out over a period of around 5 months. It is recommended that LXD users running non-Ubuntu images start planning their migration to Incus.

For more details on this release, including the complete changelog, documentation, and available packages, please visit the Incus website.

Kubernetes v1.29: Introducing Mandala

Kubernetes has announced the release of version 1.29, named Mandala (The Universe). This release introduces new stable, beta, and alpha features, continuing the tradition of delivering top-notch releases. The v1.29 release includes 49 enhancements, with 11 graduating to Stable, 19 entering Beta, and 19 entering Alpha.

Some of the stable improvements in v1.29 include:

  • ReadWriteOncePod PersistentVolume access mode, which restricts volume access to a single pod across the entire cluster, guaranteeing that only one pod at a time can read from and write to the volume.
  • Node volume expansion Secret support for CSI drivers, which allows secrets to be sent as part of the node expansion process.
  • KMS v2 encryption at rest, which provides improvements in performance, key rotation, health check & status, and observability for encrypting persisted API data.
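For example, a PersistentVolumeClaim requesting the now-stable single-pod access mode could look like this (the claim name, storage class, and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim
spec:
  accessModes:
    - ReadWriteOncePod   # only one pod in the whole cluster may use this volume
  resources:
    requests:
      storage: 1Gi
```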

Beta improvements in v1.29 include:

  • QueueingHint feature for optimizing the efficiency of requeueing in the scheduler.
  • Separation of node lifecycle from taint management, allowing for more granular control over taint-based pod eviction.
  • Clean up for legacy Secret-based ServiceAccount tokens, marking them as invalid if they have not been used for a long time.

Alpha features in v1.29 include:

  • Defining Pod affinity or anti-affinity using matchLabelKeys, improving calculation accuracy during rolling updates.
  • nftables backend for kube-proxy, providing a new backend based on nftables for packet filtering and processing.
  • APIs to manage IP address ranges for Services, allowing for dynamic allocation and resizing of IP ranges.
  • Support for image pull per runtime class in containerd/kubelet/CRI, enabling the pulling of different images based on the runtime class specified.
  • In-place updates for Pod resources for Windows Pods, allowing for changes to the desired resource requests and limits without restarting the Pod.
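As a sketch of the matchLabelKeys alpha feature, a Pod affinity term might be scoped to the current rollout like this (label names and topology key are illustrative, and the feature requires the corresponding alpha feature gate to be enabled):

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        matchLabelKeys:
          - pod-template-hash   # scopes co-scheduling to pods from the same ReplicaSet
        topologyKey: kubernetes.io/hostname
```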

The release also includes the graduation of 11 enhancements to Stable, the deprecation of in-tree integrations with cloud providers, the removal of the v1beta2 flow control API group, the deprecation of the status.nodeInfo.kubeProxyVersion field for Node objects, and the removal of legacy Linux package repositories.

Kubernetes v1.29 is available for download on GitHub, and users can get started with Kubernetes using interactive tutorials or by running local clusters using minikube. The release team, consisting of dedicated community volunteers, has worked hard to deliver this release, with contributions from 888 companies and 1422 individuals during the 14-week release cycle.

For more details about the v1.29 release, including the full list of enhancements and graduations, users can refer to the release notes.

Portainer: Embracing GitOps for a Streamlined Workflow

Portainer has published an article titled “GitOps - The Path Forward” that explores the concept of GitOps and how it can be implemented using the Portainer platform. The article begins by discussing the importance of adhering to compliance standards like GDPR and the need for secure cloud environments. GitOps is presented as a recommended operational framework for implementing infrastructure and development methodologies that ensure compliance and effective infrastructure management.

The article goes on to explain the fundamental concepts of GitOps, including automation, version control, continuous integration/continuous delivery, auditing, compliance, version rollback, and collaboration. It highlights the requirements for implementing GitOps, such as Infrastructure as Code (IaC), pull request reviews, CI/CD pipelines, automation, version control, auditability, rollback and forward capabilities, and collaboration.

The article then focuses on how Portainer facilitates the implementation of GitOps. It mentions that Portainer offers a suite of tools designed specifically for GitOps, including RBAC, automation, and visibility. It highlights the role-based access control (RBAC) feature of Portainer, which provides precise access control to Kubernetes platforms and container runtime environments. Portainer also integrates with authentication providers like LDAP and Microsoft AD. The article further explains how Portainer enables GitOps automation by connecting with Git repositories and allowing for automated application deployment to Kubernetes clusters and container environments. It also mentions how Portainer provides updates and monitoring solutions for GitOps operations through container logs, authentication logs, and event lists.

In conclusion, the article emphasizes that GitOps is a contemporary methodology for managing infrastructure and applications, and leveraging GitOps strategies like auditing, rollback, and roll forward can enhance operational agility, reliability, and compliance. The article highlights the benefits of using the Portainer platform for implementing GitOps, including RBAC, automation, and monitoring capabilities.

K0s Releases Version v1.28.2+k0s.0

k0s has released version v1.28.2+k0s.0. This all-inclusive Kubernetes distribution is designed for building Kubernetes clusters and comes packaged as a single binary for easy use. It can be used in various environments, including cloud, IoT gateways, Edge, and Bare metal deployments, thanks to its simple design, flexible deployment options, and modest system requirements.

The latest release, 1.28.2, includes several updates and improvements. Some of the highlights include:

  • Kubernetes 1.28.2: The release builds with Kubernetes 1.28.2, and all the Kubernetes components are updated to the same version.
  • Enhanced autopilot: The autopilot now allows the cluster to follow a specific update channel on an update server, making it easier to stay up-to-date with patch updates.
  • SBOM generation: The release now generates a full signed SBOM (Software Bill of Materials) for each release, providing greater transparency and security.
  • Extended OS testing matrix: The OS testing matrix now covers 22 OS and version combinations, including Alpine, CentOS, Debian, Fedora, Fedora CoreOS, Flatcar, Oracle, RHEL, Rocky, and Ubuntu.
  • Updated component versions: Various components have been updated, including ContainerD, RunC, Etcd, Kine, Konnectivity, Kube-router, Calico, and CoreDNS.
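As a hedged sketch, pointing autopilot at an update channel might look roughly like the following (the API version, field names, and URL should be checked against the k0s autopilot documentation, and the channel name is a placeholder):

```yaml
apiVersion: autopilot.k0sproject.io/v1beta2
kind: UpdateConfig
metadata:
  name: autopilot
spec:
  channel: stable-1.28                        # update channel to follow (assumed name)
  updateServer: https://updates.k0sproject.io # update server to poll (assumed URL)
```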

For a detailed list of changes, you can refer to the release notes. This release also includes contributions from new contributors who made their first contribution to the project.

Overall, this release of k0s brings important updates and improvements, making it a reliable choice for building Kubernetes clusters in various environments.