Posted by Steve on Thu 27 Sep 2007 at 06:32
When a GNU/Linux machine runs out of physical memory it will start to use any configured swap space. This is usually a sign of trouble, as swap files and partitions are significantly slower to access than physical memory; still, having some swap is generally better than having none at all. The size allocated to swap files or partitions is usually chosen arbitrarily, with many people adopting the "double the memory size" rule of thumb. Using a dynamic system can ease the maintenance of this sizing.
The relatively unknown dphys-swapfile package contains a simple script which will create and activate a swap file at boot time, sized appropriately for your system.
The advantage of this dynamic creation is that the swap will be resized automatically if you upgrade your memory and forget to do it yourself.
On recent kernels there doesn't appear to be a significant penalty to using swap files as opposed to swap partitions. With this in mind I'd recommend files rather than partitions, to give yourself more flexibility.
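If you'd rather see what the package automates, the usual sequence for setting up a swap file by hand looks something like this (a minimal sketch: the path /var/swap.manual and the 1 GB size are arbitrary choices for illustration, and swapon requires root):

```shell
# Create a 1 GB file, give it a swap signature, and enable it (run as root).
dd if=/dev/zero of=/var/swap.manual bs=1M count=1024
chmod 600 /var/swap.manual          # swap files should not be world-readable
mkswap /var/swap.manual             # write the swap signature
swapon /var/swap.manual             # activate; verify with swapon -s
```

To make such a file permanent you'd also add a matching `swap` line to /etc/fstab; dphys-swapfile saves you all of this bookkeeping.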
To get started first remove any existing swap you have allocated. You can view any swap space which is in use by running:
skx@vain:~$ /sbin/swapon -s
Filename                        Type            Size    Used    Priority
/dev/md1                        partition       2931768 557428  -1
Here we see that there is swap allocated on the RAID volume /dev/md1. We can disable that by running:
root@vain:~# /sbin/swapoff /dev/md1
Once it is gone we can install the package; upon installation the system will create and activate the new swap:
root@vain:~# apt-get install dphys-swapfile
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed
  dphys-swapfile
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 9572B of archives.
After unpacking 111kB of additional disk space will be used.
Get: 1 http://apt-cache sid/main dphys-swapfile 20061020-1 [9572B]
Fetched 9572B in 0s (60.5kB/s)
Selecting previously deselected package dphys-swapfile.
(Reading database ... 116206 files and directories currently installed.)
Unpacking dphys-swapfile (from .../dphys-swapfile_20061020-1_all.deb) ...
Setting up dphys-swapfile (20061020-1) ...
Starting dphys-swapfile swapfile setup ...
computing size, want /var/swap=1876MByte, generating swapfile ... of 1876MBytes done.
Now whenever you boot you'll have /var/swap created at a size of twice your amount of physical memory, automatically.
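The "twice physical memory" figure is easy to reproduce yourself. A sketch of the same calculation, assuming a Linux /proc/meminfo (which reports sizes in kB):

```shell
# Read total RAM in kB from /proc/meminfo and double it, reporting in MB.
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
swap_mb=$(expr "$mem_kb" / 1024 \* 2)
echo "would create a ${swap_mb} MB swap file"
```

On the machine shown above this works out to the 1876 MB that the package reported at install time.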
You can verify this yourself with the swapon command we demonstrated earlier:
skx@vain:~$ /sbin/swapon -s
Filename                        Type            Size    Used    Priority
/var/swap                       file            1921016 0       -3
If you wish to change the location or size of the generated swap file, create the file /etc/dphys-swapfile and give it contents such as this:
# /etc/dphys-swapfile - user settings for dphys-swapfile package
# author Neil Franklin, last modification 2006.09.15
# copyright ETH Zuerich Physics Departement
# use under either modified/non-advertising BSD or GPL license
# this file is sourced with . so full normal sh syntax applies
# where we want the swapfile to be, this is the default
CONF_SWAPFILE=/swap.file
# size we want to force it to be, default (empty) gives 2*RAM
CONF_SWAPSIZE=2048
##
# Give yourself three times the memory size of swap?
#
# mem=$(grep MemTotal /proc/meminfo | awk '{print $2}')  # MemTotal is in kB
# CONF_SWAPSIZE=$(expr $mem / 1024 \* 3)                 # CONF_SWAPSIZE is in MB
#
Depends on your kernel, your load, and your configuration. Like I said, the general rule of thumb has been double-your-memory. By the time you're swapping, IO will kill performance, so in some cases it might be best to have none, and just add real memory.
In fact, I had the same thought: now my /var partition must have room for 2*my RAM to host my swap file... and what happens if it gets bigger? It'll fill my partition and I'll run into trouble.
I really don't see the point, but nice tip.
:eric:
http://blog.sietch-tabr.com
You're not missing much, but having the resize done automatically if you ever increase physical memory isn't a bad thing.
Really, swapping is to be avoided if at all possible, and if you wanted to set it up manually via files/LVM/whatever that is just as good.
I was just surprised I'd never heard of the package and figured a brief introduction might be useful to somebody.
I have a hard enough time writing for this site, let alone another one.
I'm happy for people to resubmit things elsewhere if they leave credit intact, but submitting to multiple places is probably beyond me.
Russell Coker made an insightful comment about swap space on Planet Debian (http://planet.debian.net).
Basically he says the 'double-your-RAM' rule of thumb is entirely wrong nowadays, and that it was only right for some early systems derived from 4.3 BSD, such as Ultrix.
Let me paste his comments here:
There is a wide-spread myth that swap space should be twice the size of RAM. This might have provided some benefit when 16M of RAM was a lot and disks had average access times of 20ms. Now disks can have average access times less than 10ms but RAM has increased to 1G for small machines and 8G or more for large machines. Multiplying the seek performance of disks by a factor of two to five while increasing the amount of data stored by a factor of close to 1000 is obviously not going to work well for performance.
A Linux machine with 16M of RAM and 32M of swap MIGHT work acceptably for some applications (although when I was running Linux machines with 16M of RAM I found that if swap use exceeded about 16M then the machine became so slow that a reboot was often needed). But a Linux machine with 8G of RAM and 16G of swap is almost certain to be unusable long before the swap space is exhausted. Therefore giving the machine less swap space and having processes be killed (or malloc() calls fail - depending on the configuration and some other factors) is probably going to be a better situation.
There are factors that can alleviate the problems such as RAID controllers that implement write-back caching in hardware, but this only has a small impact on the performance requirements of paging. The 512M of cache RAM that you might find on a RAID controller won't make that much impact on the IO requirements of 8G or 16G of swap.
I often make the swap space on a Linux machine equal the size of RAM (when RAM is less than 1G) and be half the size of RAM for RAM sizes from 2G to 4G. For machines with more than 4G of RAM I will probably stick to a maximum of 2G of swap. I am not convinced that any mass storage system that I have used can handle the load from more than 2G of swap space in active use.
The myths about swap space size are due to some old versions of Unix that used to allocate a page of disk space for every page of virtual memory. Therefore having swap space less than or equal to the size of RAM was impossible and having swap space less than twice the size of RAM was probably a waste of effort (see this reference [1]). However Linux has never worked this way; in Linux the virtual memory size is the size of RAM plus the size of the swap space. So while the 'double the size of RAM' rule of thumb gave virtual memory twice the size of physical RAM on some older versions of Unix, it gave three times the size of RAM on Linux! Also swap spaces smaller than RAM have always worked well on Linux (I once ran a Linux machine with 8M of RAM and used a floppy disk as a swap device).
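The arithmetic in that paragraph can be spelled out with a concrete (hypothetical) figure of 1024 MB of RAM under the 'double the RAM' rule:

```shell
# Compare total virtual memory under the two models for 1024 MB of RAM.
ram=1024
swap=$(expr $ram \* 2)            # "double the RAM" rule: 2048 MB of swap
bsd_vm=$swap                      # old 4.3-BSD-style: every VM page needs a swap page
linux_vm=$(expr $ram + $swap)     # Linux: VM = RAM + swap
echo "BSD-style virtual memory: ${bsd_vm} MB"   # prints 2048
echo "Linux virtual memory:     ${linux_vm} MB" # prints 3072
```

So the same rule that gave 2x RAM of virtual memory on old BSD-derived systems gives 3x RAM on Linux, which is the point Russell is making.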
As far as I recall some time ago (I can't remember how long) the Linux kernel would by default permit overcommitting of memory. For example if a program tried to malloc() 1G of memory on a machine that had 64M of RAM and 128M of swap then the system call would succeed. However if the program actually tried to use that memory then it would end up getting killed.
The current policy is that /proc/sys/vm/overcommit_memory determines what happens when memory is overcommitted, the default value 0 means that the kernel will estimate how much RAM and swap is available and reject memory allocation requests that exceed that value. A value of 1 means that all memory allocation requests will succeed (you could have dozens of processes each malloc 2G of RAM on a machine with 128M of RAM and 128M of swap). A value of 2 means that a different policy will be followed, incidentally my test results don't match the documentation for value 2.
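The current policy on a given machine is easy to inspect (a sketch; assumes a Linux /proc filesystem, and changing the value requires root):

```shell
# Show the current overcommit policy: 0, 1 or 2 as described above.
cat /proc/sys/vm/overcommit_memory
# To change it (as root), e.g. to the always-succeed mode:
# echo 1 > /proc/sys/vm/overcommit_memory
```

The same setting is reachable as `vm.overcommit_memory` via sysctl, which is the usual way to make it persistent in /etc/sysctl.conf.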
Now if you run a machine with /proc/sys/vm/overcommit_memory set to 0 then you have an incentive to use a moderately large amount of swap, safe in the knowledge that many applications will allocate memory that they don't use, so the fact that the machine would deliver unacceptably low performance if all the swap was used might not be a problem. In this case the ideal size for swap might be the amount that is usable (based on the storage speed) plus a percentage of the RAM size to cater for programs that allocate memory and never use it. By 'moderately large' I mean something significantly less than twice the size of RAM for all machines less than 7 years old.
If you run a machine with /proc/sys/vm/overcommit_memory set to 1 then the requirements for swap space should decrease, but the potential for the kernel to run out of memory and kill some processes is increased (not that it's impossible to have this happen when /proc/sys/vm/overcommit_memory is set to 0).
The debian-administration.org site has an article about a package to create a swap file at boot [2] with the aim of making it always be twice the size of RAM. I believe that this is a bad idea, the amount of swap which can be used with decent performance is a small fraction of the storage size on modern systems and often less than the size of RAM. Increasing the amount of RAM will not increase the swap performance, so increasing the swap space is not going to do any good.
* [1] http://wombat.san-francisco.ca.us/faqomatic/cache/53.html
Very interesting!
I wonder if the choice of size and the partition-vs-file argument should be revised if we also want suspend-to-disk.
IIRC, many (laptop) systems copy the whole RAM content to swap before going to sleep. In that case, a swap smaller than RAM would not be appropriate. And at wake-up time, I do not know if resuming from file instead of partition is possible at all.
Can someone advise?
Well, I don't know the theory, but I can give some empirical evidence.
For 2 years, I was using a laptop with
swap size = size of RAM + size of video RAM
and suspend-to-disk worked with no problems (IIRC, suspend-to-disk also copies the video RAM to swap).
That was with a swap partition, but I can see no reason why the same wouldn't work with a swap file.
Actually, it is possible to use a swap file for suspend-to-disk, at least with the Suspend2 implementation by Nigel Cunningham.
It is tricky, because you have to tell the kernel where to resume from. If you use a swap partition, it is as easy as adding
resume=/dev/hda2

to your grub's menu.lst file.
But with a swap file, you have to specify something like:
resume=/dev/hda2:0x4f7810

where the hex number is the disk sector where the swap file begins.
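Putting the two cases together, the kernel lines in menu.lst might look something like this (the device names, kernel image path, and sector offset are illustrative values taken from the examples above, not something to copy verbatim):

```
# /boot/grub/menu.lst - kernel line with a resume parameter
# Resuming from a swap partition:
kernel /boot/vmlinuz root=/dev/hda1 resume=/dev/hda2
# Resuming from a swap file (offset is the file's starting sector):
kernel /boot/vmlinuz root=/dev/hda1 resume=/dev/hda2:0x4f7810
```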
I wrote an article advocating the use of swap files over swap partitions and utilized some of the information in this article to build my case. I hope it answers some of the questions raised in the comments here.