
Add Helm charts for nydus-snapshotter for convenient K8s deployment #643


Open
wants to merge 4 commits into main

Conversation

changweige
Member

@changweige changweige commented Jun 5, 2025

Address #612

Currently nydus-snapshotter has no convenient way to be deployed to a production Kubernetes cluster: it only releases binary artifacts, without Kubernetes in mind. So this PR packs nydus-snapshotter together with the necessary nydus node components and their configurations into a Helm chart. This makes installing and deploying nydus-snapshotter convenient, even for users who only intend to give nydus a quick trial to evaluate whether it suits their business. It follows standard K8s practice, so users don't have to scan tons of documents; everything is packed as a whole.

The first challenge to solve is that K8s applications/components (deployed here as a DaemonSet) can be killed or rolled out for an upgrade, and that process must not affect any running nydusd, the user-space FUSE service. So we can't let nydus-snapshotter directly spawn a nydusd that runs in the same Linux namespaces and cgroups as nydus-snapshotter itself. By design, nydus-snapshotter is a privileged container sharing the host PID namespace, which means it can nsenter the host's mount namespace and delegate nydusd creation to systemd via the systemd-run utility: nydus-snapshotter enters the host mount namespace and calls systemd-run to start nydusd as a transient service. As a result, deleting the nydus-snapshotter DaemonSet or Helm release, rolling out the DaemonSet, or killing nydus-snapshotter won't affect running nydusd instances, so users' workloads continue uninterrupted across those operations.
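The delegation described above could look roughly like the following. This is a hypothetical sketch: the unit name, binary path, config path, and nydusd flags are illustrative assumptions, not values taken from this PR.

```shell
# Hypothetical sketch of how a privileged container sharing the host PID
# namespace can hand nydusd off to the host's systemd. Paths, the unit name,
# and nydusd's flags are illustrative, not taken from this PR.
NYDUSD_BIN=/usr/local/bin/nydusd
NYDUSD_CONFIG=/etc/nydus/nydusd-config.json
UNIT=nydusd-example

# 'nsenter -t 1 -m' enters the mount namespace of host PID 1; 'systemd-run'
# then starts nydusd as a transient unit owned by the host systemd, so it
# keeps running even if the snapshotter DaemonSet is killed or rolled out.
CMD="nsenter -t 1 -m -- systemd-run --unit=$UNIT --collect \
$NYDUSD_BIN fuse --config $NYDUSD_CONFIG"

# Printed rather than executed here, since running it requires root and a
# host systemd.
echo "$CMD"
```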

As for the details of the deployment architecture:
DaemonSet:

  • nydus-snapshotter
  • nydus node components: nydusd, nydus-image, nydusctl

ConfigMap:

  • nydus-snapshotter config.toml

Besides, nydus-snapshotter will modify containerd's config.toml to integrate itself with containerd, and restart containerd if necessary.
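The containerd side of that integration amounts to registering the snapshotter as a proxy plugin over its GRPC socket. A hypothetical excerpt of the resulting containerd config.toml (the socket path matches the snapshotter's default `address` quoted later in this thread):

```toml
# Hypothetical excerpt of containerd's config.toml after setup; the address
# must match the snapshotter's GRPC server socket.
[proxy_plugins]
  [proxy_plugins.nydus]
    type = "snapshot"
    address = "/run/containerd-nydus/containerd-nydus-grpc.sock"
```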

To try it out, just go to the root of the code project and run

helm install nydus-snapshotter charts


codecov bot commented Jun 5, 2025

Codecov Report

Attention: Patch coverage is 0% with 92 lines in your changes missing coverage. Please review.

Project coverage is 21.28%. Comparing base (6542a89) to head (834e7f9).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
pkg/manager/daemon_adaptor.go 0.00% 90 Missing ⚠️
pkg/manager/manager.go 0.00% 1 Missing ⚠️
snapshot/snapshot.go 0.00% 1 Missing ⚠️
Additional details and impacted files


@@            Coverage Diff             @@
##             main     #643      +/-   ##
==========================================
- Coverage   21.39%   21.28%   -0.12%     
==========================================
  Files         122      122              
  Lines       13614    13687      +73     
==========================================
  Hits         2913     2913              
- Misses      10380    10453      +73     
  Partials      321      321              
Files with missing lines Coverage Δ
config/config.go 31.69% <ø> (ø)
pkg/manager/manager.go 0.00% <0.00%> (ø)
snapshot/snapshot.go 0.00% <0.00%> (ø)
pkg/manager/daemon_adaptor.go 0.00% <0.00%> (ø)

@changweige changweige requested review from imeoer and sctb512 June 5, 2025 09:39
@lukasmrtvy

lukasmrtvy commented Jun 8, 2025

@changweige can we make the nydus-snapshotter-config optional? or make it available as a string in values.yaml

for example ( values.yaml ):

config: |
    version = 1
    # Snapshotter's own home directory where it stores and creates necessary resources
    root = "/var/lib/containerd/io.containerd.snapshotter.v1.nydus"
    # The snapshotter's GRPC server socket, containerd will connect to plugin on this socket
    address = "/run/containerd-nydus/containerd-nydus-grpc.sock"
    # The nydus daemon mode can be one of the following options: multiple, dedicated, shared, or none. 
    # If `daemon_mode` option is not specified, the default value is multiple.
    daemon_mode = "dedicated"
    ...

or

config: ""
configFrom: "external-configmap"
configFromKey: "myconfig.toml"

Thanks

@changweige
Member Author

changweige commented Jun 9, 2025

@lukasmrtvy

@changweige can we make the nydus-snapshotter-config optional? or make it available as a string in values.yaml

for example ( values.yaml ):

config: |
    version = 1
    # Snapshotter's own home directory where it stores and creates necessary resources
    root = "/var/lib/containerd/io.containerd.snapshotter.v1.nydus"
    # The snapshotter's GRPC server socket, containerd will connect to plugin on this socket
    address = "/run/containerd-nydus/containerd-nydus-grpc.sock"
    # The nydus daemon mode can be one of the following options: multiple, dedicated, shared, or none. 
    # If `daemon_mode` option is not specified, the default value is multiple.
    daemon_mode = "dedicated"
    ...

or

config: ""
configFrom: "external-configmap"
configFromKey: "myconfig.toml"

Thanks

Looks reasonable for users who have their own nydus-snapshotter config. I can improve the PR: still keep the ConfigMap template as part of the Helm chart, but add a value to values.yaml that takes a custom nydus-snapshotter config. If `config` in `values.yaml` is not set, the default nydus-snapshotter config will be mounted to each K8s node. In this way, inexperienced users can quickly have a sane nydus system running in the K8s cluster.
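The fallback could be sketched in the chart's ConfigMap template roughly like this. This is hypothetical: the template name, the `files/config.toml` location, and the values key are assumptions, and the chart's actual layout may differ.

```yaml
# Hypothetical ConfigMap template sketch: render .Values.config when the user
# sets it, otherwise fall back to a default config.toml shipped in the chart.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nydus-snapshotter-config
data:
  config.toml: |-
{{ .Values.config | default (.Files.Get "files/config.toml") | indent 4 }}
```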

Does it look good to you?

@lukasmrtvy

Yes, the default config should be defined in values.yaml to get started with, and if configFrom is defined, it should be used instead.

I guess the ultimate solution would be to merge TOML with a tool like https://github.com/TomWright/dasel; this way one could edit a single property like address.

# will be merged with default config
overrideConfig: |
    address = "my/path/containerd-nydus/containerd-nydus-grpc.sock"

I also noticed that RBAC is missing; for example, this https://github.com/containerd/nydus-snapshotter/blob/main/misc/snapshotter/nydus-snapshotter-rbac.yaml could improve the experience by adding an additional node label after nydus installation.

WDYT?

@changweige
Member Author

changweige commented Jun 9, 2025

@lukasmrtvy

Yes, the default config should be defined in values.yaml to get started with, and if configFrom is defined, it should be used instead.

I guess the ultimate solution would be to merge TOML with a tool like https://github.com/TomWright/dasel; this way one could edit a single property like address.

Good idea. I am not sure Helm gives us a chance to merge the two when rendering templates; however, at the very least, I can merge the default nydus-snapshotter TOML config with the user's overrides (maybe a set of override values) when setup.sh runs, before exec'ing containerd-nydus-grpc. In fact, dasel is already utilized in setup.sh to modify containerd's config.toml.

# will be merged with default config
overrideConfig: |
    address = "my/path/containerd-nydus/containerd-nydus-grpc.sock"

I also noticed that RBAC is missing; for example, this https://github.com/containerd/nydus-snapshotter/blob/main/misc/snapshotter/nydus-snapshotter-rbac.yaml could improve the experience by adding an additional node label after nydus installation.

Yes, this PR does not contain any role definitions yet. What kind of node labels do you think would be helpful?

WDYT?
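A merge step in setup.sh could look roughly like the following. This is a hypothetical sketch: the config path is illustrative, and the dasel flags shown follow its v2-style CLI but are not verified here, so check the installed dasel version's exact syntax.

```shell
# Hypothetical sketch: override a single property of the default snapshotter
# config with dasel before exec'ing containerd-nydus-grpc, mirroring how
# setup.sh already edits containerd's config.toml. Paths and dasel flags are
# illustrative assumptions.
CONFIG=/etc/nydus-snapshotter/config.toml
NEW_ADDRESS="/run/containerd-nydus/containerd-nydus-grpc.sock"

# Build rather than execute the command, since dasel may not be installed in
# every environment this snippet runs in.
MERGE_CMD="dasel put -f $CONFIG -t string -v $NEW_ADDRESS address"
echo "$MERGE_CMD"
```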
