
Capacity limit is ignored #345

Open
phlegx opened this issue Jun 22, 2023 · 11 comments


phlegx commented Jun 22, 2023

Hi there!

In the README I read:

> Cons
> No support for the volume capacity limit currently.
> The capacity limit will be ignored for now.

This means the Local Path Provisioner does not respect the limits at the moment, right?

Are there any plans to implement this, and if so, when could we expect it? This would be a much-needed feature. Otherwise one has no control, at least not within Kubernetes, to avoid a potential disk overflow.

best
Martin

phlegx changed the title to Capacity limit is ignored on Jun 22, 2023

jcox10 commented Jul 14, 2023

This is one of the main reasons we switched to Longhorn. Local Path really isn't useful beyond testing.

phlegx (Author) commented Jul 17, 2023

> This is one of the main reasons we switched to Longhorn. Local Path really isn't useful beyond testing.

Hi @jcox10! Thanks for the info! Can Longhorn offer the same as the Local Path Provisioner with regard to local volumes, meaning creating local volumes on the hard disk of every server?

And a second question, if this holds true: is there an easy way to switch from Local Path Provisioner volumes to Longhorn ones?

thanks
Martin

derekbit (Member) commented:

> This is one of the main reasons we switched to Longhorn. Local Path really isn't useful beyond testing.

> Hi @jcox10! Thanks for the info! Can Longhorn offer the same as the Local Path Provisioner with regard to local volumes, meaning creating local volumes on the hard disk of every server?
>
> And a second question, if this holds true: is there an easy way to switch from Local Path Provisioner volumes to Longhorn ones?
>
> Thanks, Martin

@phlegx

Can you provide more information on your environment, including the network (1Gbps or 10Gbps) and the underlying storage (SSD or HDD)?

phlegx (Author) commented Jul 17, 2023

Hi @derekbit

Rancher 2.6.8
Provider: RKE1
Kubernetes Version: v1.23.8
NVMe SSD on each of our 3 servers in the Kubernetes cluster
Network: 1Gbps

We are currently using the Local Path Provisioner to create volumes directly on our SSD disks, which works perfectly. The only thing missing is the limit.

derekbit (Member) commented:

> Hi @derekbit
>
> Rancher 2.6.8, Provider: RKE1, Kubernetes Version: v1.23.8, NVMe SSD on each of our 3 servers in the Kubernetes cluster, Network: 1Gbps
>
> We are currently using the Local Path Provisioner to create volumes directly on our SSD disks, which works perfectly. The only thing missing is the limit.

I see. In Longhorn, each volume has one or more replicas. Your network is only 1Gbps, which is a bit slow for Longhorn. However, in your use case you can use a volume with a single local replica (set the volume's dataLocality to strict-local), so the network is not an issue.
You can try Longhorn v1.4.3 first and see if everything is OK.
https://longhorn.io/docs/1.4.3/
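
For reference, here is a minimal sketch of what such a StorageClass could look like when created with client-go; the class name longhorn-strict-local is just an illustration, and strict-local assumes a single replica (Longhorn v1.4.x):

```go
// Sketch only: create a Longhorn StorageClass whose volumes keep their single
// replica on the node running the workload (dataLocality: strict-local),
// so volume I/O does not cross the 1Gbps network.
package main

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sc := &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "longhorn-strict-local"}, // illustrative name
		Provisioner: "driver.longhorn.io",
		Parameters: map[string]string{
			// strict-local requires exactly one replica, kept on the
			// node where the workload runs.
			"numberOfReplicas": "1",
			"dataLocality":     "strict-local",
		},
	}
	if _, err := cs.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The same parameters can of course be put in a plain manifest and applied with kubectl instead.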


ATP3530 commented Oct 6, 2023

Is there any future plan to support the capacity limit?


RipperSK commented Dec 4, 2023

Is there any future plan to support capacity limits, for example by using underlying filesystem quotas?
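
For illustration, here is a minimal sketch (not part of local-path-provisioner; the helper name, paths, and project ID are hypothetical) of how an XFS project quota could enforce such a limit, assuming the volume directory sits on an XFS filesystem mounted with the prjquota option:

```go
// Sketch only: cap a provisioned volume directory with an XFS project quota.
// Requires the xfs_quota tool and a filesystem mounted with -o prjquota.
package main

import (
	"fmt"
	"os/exec"
)

// setProjectQuota (hypothetical helper) registers volumeDir under an XFS
// project ID and sets a hard block limit for it, e.g. "10g".
func setProjectQuota(mountPoint, volumeDir string, projectID int, hardLimit string) error {
	cmds := [][]string{
		// Assign the directory (recursively) to the project ID.
		{"xfs_quota", "-x", "-c", fmt.Sprintf("project -s -p %s %d", volumeDir, projectID), mountPoint},
		// Enforce a hard limit on blocks used by that project.
		{"xfs_quota", "-x", "-c", fmt.Sprintf("limit -p bhard=%s %d", hardLimit, projectID), mountPoint},
	}
	for _, args := range cmds {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %w: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Example: limit a hypothetical PVC directory to 10 GiB.
	err := setProjectQuota("/opt/local-path-provisioner", "/opt/local-path-provisioner/pvc-example", 1001, "10g")
	if err != nil {
		fmt.Println(err)
	}
}
```

The PVC's requested size would map to the hard limit, so writes beyond it fail with ENOSPC instead of filling the node's disk.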

derekbit (Member) commented Dec 4, 2023

@ATP3530 @RipperSK
I don't currently have time to work on the capacity limit because of Longhorn development.
Any contribution is appreciated.


github-actions bot commented Jun 6, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions bot added the stale label on Jun 6, 2024
github-actions bot commented:

This issue was closed because it has been stalled for 5 days with no activity.

yuvipanda commented:

Can this be re-opened? I think this would be a wonderful feature to add. It's listed in the README as a drawback.

derekbit reopened this on Jan 25, 2025
github-actions bot removed the stale label on Jan 25, 2025