Weblog entry #82 for dkg

the bleeding edge: btrfs (poor performance, alas)
Posted by dkg on Tue 10 May 2011 at 18:17
I'm playing with btrfs to get a feel for what's coming up in linux filesystems. To be daring, i've configured a test machine using only btrfs for its on-disk filesystems. I really like some of the ideas put forward in the btrfs design. (i'm aware that btrfs is considered experimental-only at this point).

I'm happy to report that despite several weeks of regular upgrade/churn from unstable and experimental, i have yet to see any data loss or other serious forms of failure.

Unfortunately, i'm not impressed with the performance. The machine feels sluggish in this configuration, compared to how i remember it running with previous non-btrfs installations. So i ran some benchmarks. The results don't look good for btrfs in its present incarnation.

UPDATE: see the comments section for revised statistics from a quieter system, with the filesystems over the same partition (btrfs is still much slower).

The simplified test system i'm running has Linux kernel 2.6.39-rc6-686-pae (from experimental), 1GiB of RAM (no swap), and a single 2GHz P4 CPU. It has one parallel ATA hard disk (WDC WD400EB-00CPF0), with two primary partitions (one btrfs and one ext3). The root filesystem is btrfs; the ext3 filesystem is mounted at /mnt.

I used bonnie++ to benchmark the ext3 filesystem against the btrfs filesystem as a non-privileged user.
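Roughly, each run looked like this (a sketch only -- the benchmark directories and output filenames here are illustrative, not necessarily what i actually typed, and the wrapper skips the run if bonnie++ isn't installed):

```python
import shutil
import subprocess

def bonnie_cmd(directory, machine="loki"):
    # bonnie++ -d <dir> writes its test files under <dir>;
    # -m sets the machine name shown in the report.
    return ["bonnie++", "-d", directory, "-m", machine]

# illustrative targets: the btrfs root and the ext3 filesystem on /mnt
for outfile, directory in [("bonnie-stats.btrfs", "/home/consoleuser/bench"),
                           ("bonnie-stats.ext3", "/mnt/bench")]:
    cmd = bonnie_cmd(directory)
    print(" ".join(cmd), ">", outfile)
    if shutil.which("bonnie++"):  # only run where the tool actually exists
        with open(outfile, "w") as f:
            subprocess.run(cmd, stdout=f, check=True)
```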

Here are the results on the test ext3 filesystem:

consoleuser@loki:~$ cat bonnie-stats.ext3 
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
loki          2264M   331  98 23464  11 10988   4  1174  85 39629   6 130.4   5
Latency             92041us    1128ms    1835ms     166ms     308ms    6549ms
Version  1.96       ------Sequential Create------ --------Random Create--------
loki                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  9964  26 +++++ +++ 13035  26 11089  27 +++++ +++ 11888  24
Latency             17882us    1418us    1929us     489us      51us     650us
1.96,1.96,loki,1,1305039600,2264M,,331,98,23464,11,10988,4,1174,85,39629,6,130.4,5,16,,,,,9964,26,+++++,+++,13035,26,11089,27,+++++,+++,11888,24,92041us,1128ms,1835ms,166ms,308ms,6549ms,17882us,1418us,1929us,489us,51us,650us
consoleuser@loki:~$ 
And here are the results for btrfs (on the main filesystem):
consoleuser@loki:~$ cat bonnie-stats.btrfs 
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
loki          2264M    43  99 22682  17 10356   6  1038  79 28796   6  86.8  99
Latency               293ms     727ms    1222ms   46541us     504ms   13094ms
Version  1.96       ------Sequential Create------ --------Random Create--------
loki                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1623  33 +++++ +++  2182  57  1974  27 +++++ +++  1907  44
Latency             78474us    6839us    8791us    1746us      66us   64034us
1.96,1.96,loki,1,1305040411,2264M,,43,99,22682,17,10356,6,1038,79,28796,6,86.8,99,16,,,,,1623,33,+++++,+++,2182,57,1974,27,+++++,+++,1907,44,293ms,727ms,1222ms,46541us,504ms,13094ms,78474us,6839us,8791us,1746us,66us,64034us
consoleuser@loki:~$ 
As you can see, btrfs is significantly slower in several categories:
  • writing character-at-a-time is *much* slower: 43K/sec vs. 331K/sec
  • reading block-at-a-time is slower: 28796K/sec vs. 39629K/sec
  • all forms of file creation and deletion are nearly an order of magnitude slower
  • random seeks are almost as fast, but they swamp the CPU (99% CPU for btrfs vs. 5% for ext3)
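The machine-readable CSV summary line at the end of each bonnie++ run makes these comparisons easy to compute; for example (field offsets as they appear in the 1.96 CSV lines above):

```python
# The two CSV summary lines quoted above, truncated to the throughput fields.
ext3 = "1.96,1.96,loki,1,1305039600,2264M,,331,98,23464,11,10988,4,1174,85,39629,6,130.4,5".split(",")
btrfs = "1.96,1.96,loki,1,1305040411,2264M,,43,99,22682,17,10356,6,1038,79,28796,6,86.8,99".split(",")

PUTC, BLOCK_READ = 7, 15  # per-char write and block read offsets in this format
for name, idx in [("per-char write", PUTC), ("block read", BLOCK_READ)]:
    e, b = float(ext3[idx]), float(btrfs[idx])
    print(f"{name}: ext3 {e:.0f} K/s vs btrfs {b:.0f} K/s ({e / b:.1f}x)")
```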
I'm hoping that i just configured the test wrong somehow, or that i've done something grossly unfair in the system setup and configuration. (or maybe i'm mis-reading the bonnie++ output?) Maybe someone can point out my mistake, or give me pointers for what to do to try to speed up btrfs.

I like the sound of the features we will eventually get from btrfs, but these performance figures seem like a pretty rough tradeoff.

 

Comments on this Entry

Posted by Anonymous (87.194.xx.xx) on Tue 10 May 2011 at 19:25
I'm not suggesting that it will help a lot, but you need to test on the same partition. Format once as btrfs and test, then format as ext3 and test. Different sections of the disk will have different performance characteristics.

[ Parent | Reply to this comment ]

Posted by Anonymous (84.153.xx.xx) on Tue 10 May 2011 at 19:43
More to the point, the farther toward the outside of the platter a sector is, the faster it is to read/write it.

I just tested a bunch of disks today and confirmed this, once again.

[ Parent | Reply to this comment ]

Posted by Anonymous (108.57.xx.xx) on Wed 11 May 2011 at 02:43
O.K., I do agree.
However, where can one get this information for modern hard drives?
Does anyone publish disc sector maps?
Can this be done with LBA?
I know that I can run lots of automated speed tests on very small groups of sectors, which should show me the locations of inner vs. outer tracks.
But does any manufacturer actually publish this for just this purpose of building really fast filesystems?
Does anybody really care anymore, besides me or you, that is?
Thanks for an interesting comment.

[ Parent | Reply to this comment ]

Posted by etbe (118.209.xx.xx) on Wed 11 May 2011 at 11:32
[ Send Message ]
The program zcav (which is part of Bonnie++) will show you the speeds for contiguous IO. Contiguous IO speed is directly related to the number of sectors per track, which is inversely related to the number of tracks the heads must move for a seek, and is therefore correlated with a reduction in average access time.
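A toy model makes that relationship concrete (the zone sector counts below are invented for illustration, not measured from any real drive):

```python
# Toy ZCAV model: at constant rotational speed, sequential throughput is
# sectors/track * revolutions/sec * sector size, so outer zones (more
# sectors per track) are faster than inner ones.
RPM, SECTOR_BYTES = 7200, 512
zones = {"outer": 1200, "middle": 900, "inner": 600}  # sectors/track (made up)

for zone, spt in zones.items():
    mb_s = spt * (RPM / 60) * SECTOR_BYTES / 1e6
    print(f"{zone} zone: {mb_s:.1f} MB/s")
```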

[ Parent | Reply to this comment ]

Posted by dkg (216.254.xx.xx) on Wed 11 May 2011 at 22:13
[ Send Message | View dkg's Scratchpad | View Weblogs ]
Is this information guaranteed to remain constant over the lifetime of the disk? I had the impression that modern disks might remap logical blocks to different physical blocks without ever consulting or informing the upper layers; this seems to imply that there's no guarantee that a "fast address" would remain fast.

[ Parent | Reply to this comment ]

Posted by etbe (118.209.xx.xx) on Thu 12 May 2011 at 01:13
[ Send Message ]
If you have any significant number of sectors being remapped then you should replace the disk. I've just checked a few systems; smartctl didn't show any sensible data for the number of remapped sectors, so I guess I just have to rely on errors that result in drives being removed from RAID sets as an indication.

The number of remapped sectors since the time of purchase will be small when compared to the total disk capacity. Presumably some sectors are remapped before the disk leaves the factory; zcav isn't accurate enough to measure the impact of such sectors on overall performance.

I presume that manufacturers have spare zones that are distributed throughout the disk to avoid significant seek penalties for remapped sectors.

While there's no guarantee that a single block will remain fast (it could end up having one of its sectors remapped somewhere that requires a seek), as a general rule you can rely on a 1GB range of data located at a fast part of the disk (usually the low sector numbers) remaining relatively fast - even if it has a few remapped sectors.

[ Parent | Reply to this comment ]

Posted by dkg (2001:0xx:0xx:0xxx:0xxx:0xxx:xx) on Tue 10 May 2011 at 21:29
[ Send Message | View dkg's Scratchpad | View Weblogs ]
That's a good point. I also realized that maybe other activity on the system was confounding things, so i shut down everything including the graphical session, cron, etc., and re-ran the tests over filesystems built atop the same partition. The results changed a bit, but not a lot. Again, ext3 first:
consoleuser@loki:~$ cat bonnie-stats.2.ext3  
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
loki          2264M   290  98 24403  12 12580   5  1308  99 44483   7 140.1   5
Latency               116ms    1123ms    1692ms   26977us     168ms    6488ms
Version  1.96       ------Sequential Create------ --------Random Create--------
loki                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 15663  40 +++++ +++ 21690  39 16036  39 +++++ +++ 20955  39
Latency             22223us    1415us    2046us     355us      56us     902us
1.96,1.96,loki,1,1305056984,2264M,,290,98,24403,12,12580,5,1308,99,44483,7,140.1,5,16,,,,,15663,40,+++++,+++,21690,39,16036,39,+++++,+++,20955,39,116ms,1123ms,1692ms,26977us,168ms,6488ms,22223us,1415us,2046us,355us,56us,902us
And here's btrfs:
consoleuser@loki:~$ cat bonnie-stats.2.btrfs
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
loki          2264M    94  99 26102  18 12079   9   422  94 36563   9 126.6  53
Latency               202ms     420ms     523ms     135ms     138ms   12659ms
Version  1.96       ------Sequential Create------ --------Random Create--------
loki                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  4844  80 +++++ +++  3615  92  5132  78 +++++ +++  7089  89
Latency              2926us    6800us    7076us    1209us      61us    1256us
1.96,1.96,loki,1,1305054222,2264M,,94,99,26102,18,12079,9,422,94,36563,9,126.6,53,16,,,,,4844,80,+++++,+++,3615,92,5132,78,+++++,+++,7089,89,202ms,420ms,523ms,135ms,138ms,12659ms,2926us,6800us,7076us,1209us,61us,1256us
consoleuser@loki:~$ 

[ Parent | Reply to this comment ]

Posted by toobuntu (208.252.xx.xx) on Tue 10 May 2011 at 21:18
[ Send Message ]
At present zfs is more mature. Have you checked out the native zfs (CDDL-licensed) port to Linux (i.e., not FUSE)? It now has a POSIX layer and the latest Solaris 10 features. Still free software, just not redistributable. See:
How-tos for native zfs root:
[0]Ubuntu + ZFS native + root filesystem
Excellent guide showing real-world zfs usage examples in addition to the zfs installation and zpool setup.
[1]HOWTO install Ubuntu to a Native ZFS Root Filesystem
By Darik Horn, of DBAN fame!
Re: licensing:
[3]http://zfsonlinux.org/faq.html#WhatAboutTheLicensingIssue
"modified to build as a CDDL licensed kernel module which is not distributed as part of the Linux kernel. This makes a Native ZFS on Linux implementation possible if you are willing to download and build it yourself."
Source:
[4]https://github.com/behlendorf/zfs
[5]https://github.com/behlendorf/spl
PPA [maverick packages are said to be binary compatible with squeeze]:
[6]https://launchpad.net/~dajhorn/+archive/zfs
[7]https://launchpad.net/~dajhorn/+archive/zfs-grub
PPA maintained by Darik Horn, of DBAN fame!
Apt-clone:
Nifty: the folks at Nexenta have integrated zfs snapshotting with apt. See the perl script for apt-clone (and man page) within their modified apt source, usage is identical to apt-get:
[8]http://apt.nexenta.org/wip/dists/unstable/main/source/admin/apt_0.8.0nexenta8.tar.gz

[ Parent | Reply to this comment ]

Posted by dkg (2001:0xx:0xx:0xxx:0xxx:0xxx:xx) on Tue 10 May 2011 at 21:24
[ Send Message | View dkg's Scratchpad | View Weblogs ]
Sorry, but i'd rather not rely on non-redistributable code for something as crucial as my filesystem. That sounds like a recipe for having very few testers other than myself, which sounds like trouble :/

I'd be very happy if zfs was licensed to work with the linux kernel and distributed by upstream, though.

[ Parent | Reply to this comment ]

Posted by jmtd (128.240.xx.xx) on Thu 12 May 2011 at 11:35
[ Send Message ]
"Still free software, just not redistributable."

Consider: you are violating the copyright of the software authors. When this is done in the context of musicians, it's called "stealing music" and is treated rather seriously.

[ Parent | Reply to this comment ]

Posted by dkg (2001:0xx:0xx:0xxx:0xxx:0xxx:xx) on Thu 12 May 2011 at 14:37
[ Send Message | View dkg's Scratchpad | View Weblogs ]
I think you might be misinterpreting some of the common free software licenses. It's worth reading the free software definition if you haven't already. In particular, the licenses in question all respect freedom 0 -- the freedom of the user to run the software however they wish -- and freedom 1 -- the freedom to modify it however they wish. This includes modification by combining it with proprietary software.

The "keep it free" (or "viral") provisions of the GPL have specific triggers -- for example, if you redistribute the software (modified or not), you must ensure that the redistributed software is itself under the terms of the GPL. This isn't doable if the modifications make it impossible to satisfy the licenses.

I'd actually argue that the combined software that is not redistributable is not ultimately free software, since you no longer have freedom 2 -- the ability to redistribute copies. This is the same reason that debian wouldn't include such a beast in the debian repositories.

But none of this means that a user who uses a privately-made combination of mutually-incompatibly-licensed software is in fact violating the copyright of the software authors; the authors have already granted the user the freedom to do what they like with the tools. But the user is no longer using free software, because redistribution (or other triggering actions) is now impossible without a copyright violation.

[ Parent | Reply to this comment ]

Posted by etbe (118.209.xx.xx) on Wed 11 May 2011 at 11:36
[ Send Message ]
When comparing different bonnie++ runs you might want to consider the output of the bon_csv2html program. It produces a web page using an HTML TABLE that contains the data and uses background colors to indicate the relative performance of items in a column.

Including the plain text output as well is good because, last time I checked, no web browser displayed such tables in a manner fit for a Braille reader. But for those of us who use graphical browsers, the output of bon_csv2html is much more useful.
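The idea can be sketched in a few lines (a toy converter, not the actual bon_csv2html program, and without its performance colouring):

```python
import html

def csv_row_to_html(line):
    """Render one bonnie++ CSV line as a bare HTML table row (toy sketch)."""
    cells = "".join(f"<td>{html.escape(f)}</td>" for f in line.split(","))
    return f"<tr>{cells}</tr>"

row = csv_row_to_html("1.96,1.96,loki,1,1305040411,2264M,,43,99,22682,17")
print(f"<table>{row}</table>")
```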

[ Parent | Reply to this comment ]

Posted by dkg (216.254.xx.xx) on Wed 11 May 2011 at 21:54
[ Send Message | View dkg's Scratchpad | View Weblogs ]
Thanks for the pointer, etbe. Here's the bon_csv2html output of my second (cleaner) run of tests:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ext3          2264M   331  98 23464  11 10988   4  1174  85 39629   6 130.4   5
Latency             92041us    1128ms    1835ms     166ms     308ms    6549ms
btrfs         2264M    43  99 22682  17 10356   6  1038  79 28796   6  86.8  99
Latency               293ms     727ms    1222ms   46541us     504ms   13094ms
Version  1.96       ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
ext3             16  9964  26 +++++ +++ 13035  26 11089  27 +++++ +++ 11888  24
Latency             17882us    1418us    1929us     489us      51us     650us
btrfs            16  1623  33 +++++ +++  2182  57  1974  27 +++++ +++  1907  44
Latency             78474us    6839us    8791us    1746us      66us   64034us

[ Parent | Reply to this comment ]

Posted by etbe (118.209.xx.xx) on Thu 12 May 2011 at 01:22
[ Send Message ]
To avoid the ++++ sections in the small file tests you could run it with "-n 64:4096:4096".

Also, tests with much larger values for -n would be interesting; -n1024 (a million files) would be a good test if you have plenty of time.

[ Parent | Reply to this comment ]

Posted by dkg (2001:0xx:0xx:0xxx:0xxx:0xxx:xx) on Fri 13 May 2011 at 05:31
[ Send Message | View dkg's Scratchpad | View Weblogs ]
etbe, i should have said this before, but: Thank you for bonnie++! It's great to have a benchmarking tool which is easy to run and can still report pretty sophisticated data.

Below are the results for -n 1024 (it took about 8 hours on this machine to run it against ext3, ext4, and btrfs). Interestingly, with the larger numbers (and compared against ext4 as well), the results don't seem as clear cut.

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ext3          2264M   291  98 25491  12 12152   5   900  99 43078   6 140.6   5
Latency               127ms    1022ms    1964ms   37681us     283ms    8986ms
ext4          2264M   288  98 27934   9 13119   5  1297  97 42403   6 143.5   4
Latency             31764us     422ms     429ms   30167us     356ms    8491ms
btrfs         2264M    93  99 27444   9 12186   7   270  96 35813   9 124.5  52
Latency               101ms     826ms     823ms   32888us     379ms   11605ms
Version  1.96       ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
ext3           1024  8921  30  2126   3   442   1  8821  30  2974   5   308   1
Latency              4363ms     477ms   58147ms    3235ms     289ms   35291ms
ext4           1024  9554  34 18894  30   509   1  9670  34 11519  20   368   1
Latency              3516ms     333ms   39248ms    3758ms     416ms   34514ms
btrfs          1024   801  15 18312  82   562  17   569  11 19251  90    94   5
Latency             26820ms   64847us   23235ms   29521ms   23745us   32222ms
And here are the same results in plain text:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ext3          2264M   291  98 25491  12 12152   5   900  99 43078   6 140.6   5
Latency               127ms    1022ms    1964ms   37681us     283ms    8986ms
Version  1.96       ------Sequential Create------ --------Random Create--------
ext3                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
               1024  8921  30  2126   3   442   1  8821  30  2974   5   308   1
Latency              4363ms     477ms   58147ms    3235ms     289ms   35291ms
1.96,1.96,ext3,1,1305221427,2264M,,291,98,25491,12,12152,5,900,99,43078,6,140.6,5,1024,,,,,8921,30,2126,3,442,1,8821,30,2974,5,308,1,127ms,1022ms,1964ms,37681us,283ms,8986ms,4363ms,477ms,58147ms,3235ms,289ms,35291ms
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ext4          2264M   288  98 27934   9 13119   5  1297  97 42403   6 143.5   4
Latency             31764us     422ms     429ms   30167us     356ms    8491ms
Version  1.96       ------Sequential Create------ --------Random Create--------
ext4                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
               1024  9554  34 18894  30   509   1  9670  34 11519  20   368   1
Latency              3516ms     333ms   39248ms    3758ms     416ms   34514ms
1.96,1.96,ext4,1,1305224451,2264M,,288,98,27934,9,13119,5,1297,97,42403,6,143.5,4,1024,,,,,9554,34,18894,30,509,1,9670,34,11519,20,368,1,31764us,422ms,429ms,30167us,356ms,8491ms,3516ms,333ms,39248ms,3758ms,416ms,34514ms
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
btrfs         2264M    93  99 27444   9 12186   7   270  96 35813   9 124.5  52
Latency               101ms     826ms     823ms   32888us     379ms   11605ms
Version  1.96       ------Sequential Create------ --------Random Create--------
btrfs               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
               1024   801  15 18312  82   562  17   569  11 19251  90    94   5
Latency             26820ms   64847us   23235ms   29521ms   23745us   32222ms
1.96,1.96,btrfs,1,1305234245,2264M,,93,99,27444,9,12186,7,270,96,35813,9,124.5,52,1024,,,,,801,15,18312,82,562,17,569,11,19251,90,94,5,101ms,826ms,823ms,32888us,379ms,11605ms,26820ms,64847us,23235ms,29521ms,23745us,32222ms

[ Parent | Reply to this comment ]

Posted by mcortese (20.142.xx.xx) on Wed 15 Jun 2011 at 11:40
[ Send Message | View Weblogs ]
Am I wrong, or does the %CPU field not always get correctly coloured? Take for example the "Random Create / Create" test: 11% is red, although 30% and 34% are worse.

[ Parent | Reply to this comment ]

Posted by dkg (216.254.xx.xx) on Wed 15 Jun 2011 at 16:11
[ Send Message | View dkg's Scratchpad | View Weblogs ]
Hmmm, it's 11% of the CPU for only 569 creations per second, instead of 30% of the CPU for 8K creations/sec. So in terms of CPU cycles per creation, the 30% should be "better". Maybe etbe can explain the rationale for the coloring better?

[ Parent | Reply to this comment ]

Posted by etbe (203.122.xx.xx) on Wed 15 Jun 2011 at 16:22
[ Send Message ]
From memory, the issue is NOT the raw CPU score, but the ratio of %CPU to performance. So if you have two tests that show the same amount of CPU, but one gives 10x the performance of the other, then one can be green and the other red.

This algorithm makes more sense for the per-char tests than for some of the other ones.
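Concretely, for the disputed "Random Create" cells from the -n 1024 run (569 creations/sec at 11% CPU for btrfs vs. 8821/sec at 30% for ext3):

```python
# Creations per percentage point of CPU: the metric that (roughly)
# drives the colouring, per the explanation above.
btrfs_eff = 569 / 11   # ~52 creations/sec per %CPU
ext3_eff = 8821 / 30   # ~294 creations/sec per %CPU
print(f"btrfs: {btrfs_eff:.0f} creations/sec per %CPU")
print(f"ext3:  {ext3_eff:.0f} creations/sec per %CPU")
assert ext3_eff > btrfs_eff  # so the 11% cell is rightly coloured worse
```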

[ Parent | Reply to this comment ]

Posted by Anonymous (66.165.xx.xx) on Fri 13 May 2011 at 08:05
What btrfs mount options are you setting? Are you using the 'compress=lzo' and 'space_cache' options? These provide a large improvement over the defaults.

Google 'Phoronix btrfs lzo' and check out the first link.
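(For reference, a sketch of how those options would be applied -- the device UUID here is a placeholder:)

```
# remount the running btrfs root with the suggested options:
mount -o remount,compress=lzo,space_cache /

# or persistently, via /etc/fstab:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,compress=lzo,space_cache  0 0
```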

[ Parent | Reply to this comment ]

Posted by dkg (216.254.xx.xx) on Fri 13 May 2011 at 16:20
[ Send Message | View dkg's Scratchpad | View Weblogs ]
Do you mean this writeup on LZO performance in btrfs? You're aware that Google is under no obligation to keep their results page static (or even to display the same results to two users simultaneously), right? Direct URLs are more useful than search terms.

That article doesn't show me convincing gains for LZO over gzip except in one or two benchmarks (some of which seem dubious, like the massive write tests -- i wonder if the data being written in that particular benchmark happens to compress well with LZO), and it shows a pretty convincing failure in "multithreaded random writes", which sounds like the closest thing to real-world activity :/

I haven't read anything about space_cache yet; got any pointers? Maybe you want to try to run btrfs with these different options and report back your results?

[ Parent | Reply to this comment ]