Use ulimit to limit process written bytes
Problem
You want to restrict the resources a process can use on your server.
Solution
The ulimit builtin can help with that. You can check its options with the ulimit --help command in Linux:

```shell
$ ulimit --help
```
Some options won’t work on certain platforms. For example, if you use ulimit -T in Linux, it shows an “invalid option” error. If you then wonder how to limit the “number of threads” in Linux, you can use the -u option, which limits the maximum number of user processes, because in Linux a process and a thread are essentially the same thing — both are called a “task” inside the kernel.
You can set the soft limit and the hard limit for a certain resource with the -S and -H options. To set both, you can combine them as -SH: for example, to set both the soft and hard limits of the “maximum open FDs” to 100, use ulimit -SHn 100. Internally, it uses this struct (see getrlimit(2)):

```c
struct rlimit {
    rlim_t rlim_cur;  /* soft limit */
    rlim_t rlim_max;  /* hard limit (ceiling for rlim_cur) */
};
```
Example
1) Set the number of open files
To get the current ulimit values, use ulimit -a. To see a specific value, for example the “maximum open FDs”, use ulimit -Sn for the soft limit and ulimit -Hn for the hard limit.
Another example gets the limit from C code (reconstructed here from context — a getrlimit call on RLIMIT_NOFILE):

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft: %llu\n", (unsigned long long)rl.rlim_cur);
    printf("hard: %llu\n", (unsigned long long)rl.rlim_max);
    return 0;
}
```
Compile and run:

```shell
$ gcc main.c -o main
$ ./main
```
You can set the limit with an extra parameter, like ulimit -Sn 2000; after that, ulimit -Sn will show 2000.
2) Set written file size
To see the current file size limit, use ulimit -Sf; it shows “unlimited”. In the ulimit -a output, it appears like this:

```shell
file size               (blocks, -f) unlimited
```
So if we set a number for this, e.g. 10, the written file size will be limited to 10 blocks. Now the question is: what is the size of “a block”? To find out, let’s start with a Python script:
```python
#!/usr/bin/env python3
# writefile.py
with open("output.txt", "wb") as f:
    f.write(b"a" * 1024)
```
Here we write 1024 bytes to a file. Now let’s limit the “file size” to 1 block and run the script:
```shell
$ ulimit -Sf 1
$ python3 writefile.py
```
No error. Now change 1024 to 1025 in the script and run python3 writefile.py again:
```shell
$ python3 writefile.py
```
Oops, now we get an error — apparently it exceeded the “max file size” limit. So now we know the “block” size here is 1024 bytes.
The block size is operating-system dependent. The above code was tested on Ubuntu 22.04 x86_64. I also tested the same code on macOS Monterey (version 12.5), where the block size is actually 512 bytes.
To set the limit back to unlimited, use ulimit -Sf unlimited.
An unsolved problem
I tried Getrlimit in Go (version 1.19.1) on the same machine (program reconstructed here from context — the same RLIMIT_NOFILE query):

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	fmt.Println("soft:", rl.Cur)
	fmt.Println("hard:", rl.Max)
}
```
Run:

```shell
$ go run main.go
```
While the C code above gave:

```shell
soft: 1024
```
I don’t know why this happened 😟. I looked at the relevant Go source code but didn’t see a problem, so I posted a question on Stack Overflow to see if we can get an answer.
Update: that question has been answered. It turns out that since 1.19, Go raises the soft NOFILE limit to the hard limit at startup, see:
Quote:
Some systems set an artificially low soft limit on open file count, for compatibility with code that uses select and its hard-coded maximum file descriptor (limited by the size of fd_set). Go does not use select, so it should not be subject to these limits. On some systems the limit is 256, which is very easy to run into, even in simple programs like gofmt when they parallelize walking a file tree. After a long discussion on go.dev/issue/46279, we decided the best approach was for Go to raise the limit unconditionally for itself, and then leave old software to set the limit back as needed. Code that really wants Go to leave the limit alone can set the hard limit, which Go of course has no choice but to respect.