Pre-processing leaks memory #58

Open
pushred opened this issue Aug 22, 2013 · 6 comments

pushred commented Aug 22, 2013

[chart: memory usage]


pushred commented Aug 23, 2013

Here's overnight without any preprocessors running:

[chart: memory usage with no preprocessors running]

Fauntleroy commented

I'm assuming this is some sort of problem involving dying VMs. I'll have to make some test cases and see what's up.


pushred commented Sep 17, 2013

Looks like a preprocessor need not even do anything to leak memory; merely being referenced from the page is enough. Below are the contents of such a preprocessor and its memory consumption after several hours (the drop at the end is from a restart):

/* player */
//{ url: insta_vid_list[random].url, username: insta_vid_list[random].username}

[chart: memory usage over several hours, dropping after a restart]
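
For reference, a preprocessor that truly does nothing would look something like the sketch below. This is an assumption about the setup, not the actual file from the site; it follows the usual Solidus convention of a module that exports a function receiving the page context and returning it.

/* player.js — a minimal no-op preprocessor (hypothetical sketch) */
module.exports = function( context ){
  // No transformation at all; per the observation above, simply being
  // referenced from a page appears to be enough for memory to climb.
  return context;
};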


pushred commented Jan 8, 2014

2 days running:

[chart: memory usage over 2 days]

pushred closed this as completed Jan 8, 2014
joanniclaborde reopened this Jul 7, 2015
joanniclaborde commented

After increasing the VM's memory size to 1GB, the memory usage on Carrie peaked at 557MB after 5 hours. Also, I was never able to reproduce a memory leak on my machine after a full day of playing around and trying different configurations.

I think Solidus is just hungry. Each worker can take up to 70MB of memory, from what I could tell. The rest of the system would need to be tested to see where that memory goes.

joanniclaborde commented

After digging through the code, I can't find any real problem. The memory grows simply because memory is available; V8 doesn't bother releasing it. For example, when I trace the memory usage:

$ NODE_ENV=production node start.js
118.02 after server is started
243.63 after templates are loaded
252.02 after page is rendered
... // many many requests
518.34 after page is rendered
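
(For reproducibility, here's a sketch of how a trace like the one above could be produced. Whether the original figures were RSS or heap size is an assumption; RSS is used here, and the helper name is hypothetical:)

// memory-trace helper (hypothetical sketch)
function logMemory( label ){
  var mb = process.memoryUsage().rss / 1024 / 1024;
  console.log( mb.toFixed(2) + ' ' + label );
}

logMemory('after server is started');
// ... called again after templates are loaded and after each render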

And when I do the same thing but force a garbage collection after the templates are loaded, there's a big drop in memory:

$ NODE_ENV=production node --expose-gc start.js 
118.36 after server is started
243.83 after templates are loaded
115.39 after forcing garbage collection
133.93 after page is rendered
... // many many requests
563.52 after page is rendered
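
(The forced collection in that run can be triggered like this. global.gc is only defined when node is started with --expose-gc, and logMemory is the hypothetical helper from the sketch above:)

// force a full garbage collection after the templates are loaded
if ( typeof global.gc === 'function' ){
  global.gc();
  logMemory('after forcing garbage collection');
}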

Forcing garbage collection is not really a solution; we should let V8 take care of that. But we can tell it the maximum amount of memory it can use:

$ NODE_ENV=production node --max_old_space_size=256 start.js 
118.05 after server is started
191.21 after templates are loaded
192.64 after page is rendered
... // many many requests
277.50 after page is rendered

I'll send an email to Modulus about this. It doesn't make sense to start a Node process on a 396MB machine without changing the default max_old_space_size, which is much higher.
