
B. Workload Dependent Performance Evaluation of the Btrfs and ZFS Filesystems

This paper [6] assessed the performance of the ZFS and BTRFS filesystems under different workload environments. The author measured both throughput and efficiency, where efficiency was calculated as throughput divided by CPU utilization.

1) Experimental setup: The experiment used two configurations: a single SSA disk (driven by 4 concurrent IO threads) and a RAID 10 array of 8 disks (driven by 8 concurrent IO threads). The CFQ IO scheduler was used throughout.

ZFS pools devices together into a construct it calls a zpool. Here the zpool contained four virtual devices of two mirrored disks each, forming the RAID 10 configuration. The BTRFS, ZFS, and EXT4 filesystems were created with default options and mounted. ZFS on OpenSolaris and Nexenta Core Platform 2 was tested against BTRFS and EXT4 on Linux. The Flexible File System Benchmark (FFSB) was used as the test tool, and the following tests were conducted: sequential read (40MB files, 4KB read requests); sequential write (100MB files, 4KB write requests); random read (40MB files, 4KB read requests, 1MB of random reads per IO thread); random write (40MB files, 4KB write requests, 1MB of random writes per IO thread); and a mixed workload (60% read, 30% write, 10% delete; file sizes 4KB to 1MB, 4KB requests).
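For reference, a pool of this shape, four two-way mirrors striped together, can be created with the standard zpool syntax. The sketch below is illustrative only; the device names are hypothetical, as the paper does not list them.

```python
import subprocess

# Hypothetical device names; the paper does not name the actual disks.
disks = ["c1t0d0", "c1t1d0", "c1t2d0", "c1t3d0",
         "c1t4d0", "c1t5d0", "c1t6d0", "c1t7d0"]

# Build "mirror d0 d1 mirror d2 d3 ...": the pool stripes across four
# two-way mirror vdevs, which is the RAID 10 layout described above.
vdevs = []
for a, b in zip(disks[0::2], disks[1::2]):
    vdevs += ["mirror", a, b]

subprocess.run(["zpool", "create", "tank"] + vdevs, check=True)
subprocess.run(["zpool", "status", "tank"], check=True)
```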

2) Results: Each test was repeated 10 times and the mean values were reported as the result.

The coefficient of variation across runs was below 4%, so the results can be considered reliable. Efficiency, again, was calculated as throughput divided by CPU utilization; it captures the throughput a filesystem delivers relative to the CPU it consumes. Both metrics are illustrated in the sketch below.
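As a concrete illustration of the two metrics, the sketch below computes mean throughput, the coefficient of variation, and efficiency from a set of repeated runs. The sample values are hypothetical, not figures from the paper.

```python
import statistics

# Hypothetical throughput samples (MB/s) from 10 repeated benchmark runs,
# and the average CPU utilization (%) observed during those runs.
throughput_mb_s = [182.1, 179.4, 184.0, 181.2, 178.9,
                   183.5, 180.7, 182.8, 179.9, 181.6]
cpu_utilization_pct = 23.5

mean_tp = statistics.mean(throughput_mb_s)
stdev_tp = statistics.stdev(throughput_mb_s)

# Coefficient of variation: standard deviation relative to the mean.
# The paper treats results with a CV below 4% as trustworthy.
cv_pct = 100.0 * stdev_tp / mean_tp

# Efficiency as defined in the paper: throughput / CPU utilization.
efficiency = mean_tp / cpu_utilization_pct

print(f"mean throughput: {mean_tp:.1f} MB/s")
print(f"coefficient of variation: {cv_pct:.2f}%"
      f" ({'OK' if cv_pct < 4 else 'too noisy'})")
print(f"efficiency: {efficiency:.2f} MB/s per % CPU")
```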


On the single disk, ZFS outperformed BTRFS in sequential read and in random IO, but not in sequential write. The overall efficiency of ZFS was better than that of BTRFS for both sequential and random IO, and ZFS also beat EXT4 in efficiency and latency behaviour. BTRFS and EXT4 performed similarly in sequential IO, but BTRFS generated far more CPU demand. Additional benchmarks were run with BTRFS's nodatacow and nodatasum mount options (sketched below): throughput improved, but without copy-on-write and data checksumming BTRFS loses much of its feature set. With the RAID 10 configuration, on the other hand, BTRFS could not compete with ZFS or EXT4 and had the lowest efficiency values. The author attributes this to BTRFS spending more time in the Linux bio layer and suffering from higher lock contention. ZFS again performed well with random IO, whereas EXT4 did well with sequential and mixed IO.
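The nodatacow and nodatasum options mentioned above are ordinary BTRFS mount options, so a run of the kind the author describes could be mounted as in the following sketch (device and mount point are hypothetical):

```python
import subprocess

DEVICE = "/dev/sdb1"          # hypothetical test device
MOUNTPOINT = "/mnt/btrfs-test"

# nodatacow disables copy-on-write for file data (which also implies no
# data checksums); nodatasum disables data checksumming alone. Both
# trade BTRFS features away for throughput, as observed in the paper.
subprocess.run(["mount", "-o", "nodatacow,nodatasum", DEVICE, MOUNTPOINT],
               check=True)
```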

The author suggested that, since ZFS and BTRFS both generate heavy CPU demand, they should not be used on servers that face CPU-intensive workloads. They are also memory intensive because of the checksumming, compression, and decompression features they bring with them. The filesystem should therefore be chosen carefully according to the workload.

C. BTRFS: The Linux B-tree Filesystem

This paper [8] explains the core concepts of the BTRFS filesystem, such as the copy-on-write (CoW) mechanism, snapshot creation, the use of reference counting, the defragmentation algorithm, and checksums for strong data integrity. It also tests BTRFS performance under a variety of workloads, with EXT4 and XFS as references.
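Since snapshots and reference counting are central to the paper, the sketch below shows the standard btrfs-progs commands for creating a subvolume and snapshotting it; the paths are hypothetical, and the code is only a minimal illustration of the CoW behaviour described above.

```python
import subprocess

MNT = "/mnt/btrfs"  # hypothetical BTRFS mount point

# Create a subvolume, then take a snapshot of it. Thanks to CoW and
# reference counting, the snapshot initially shares all of its blocks
# with the original; blocks are copied only when either side modifies them.
subprocess.run(["btrfs", "subvolume", "create", f"{MNT}/data"], check=True)
subprocess.run(["btrfs", "subvolume", "snapshot",
                f"{MNT}/data", f"{MNT}/data-snap"], check=True)
```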

1) Experimental setup: The experiment covered three storage configurations: a hard disk, an SSD, and multiple disks in a RAID 10. The hard disk tests consisted of building the Linux kernel, an FFSB test mimicking a mail server, and sequential and random writes with a varying number of threads using the Tiobench tool. For the SSD, a test creating 32 million files of varying metadata sizes was run on two kernel versions, 3.3 and 3.4. The Filebench toolkit was used to test different workloads (mail server, web server, file server, and OLTP) on these two configurations, and each workload was run five times for more reliable results. The authors also included BTRFS with the nodatacow mount option in the OLTP workload testing. For the RAID 10 configuration, four 3GB SATA disks were used, and the FIO tool sequentially created two 32GB files and then sequentially read them back with two threads (a sketch of such a run follows). The RAID was created in three ways: native BTRFS RAID, BTRFS over the Linux MD layer, and XFS over MD.
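The exact FIO parameters are not given in this summary, so the following sketch is an assumption-laden reconstruction of a run of that shape: two jobs, each sequentially writing and then reading a 32GB file.

```python
import subprocess

TESTDIR = "/mnt/raid10"  # hypothetical mount point of the RAID 10 array

common = [f"--directory={TESTDIR}", "--size=32g", "--numjobs=2",
          "--bs=1m", "--ioengine=libaio", "--direct=1",
          "--group_reporting"]

# Two jobs each sequentially write a 32GB file ...
subprocess.run(["fio", "--name=seq-write", "--rw=write"] + common, check=True)
# ... then sequentially read the files back.
subprocess.run(["fio", "--name=seq-read", "--rw=read"] + common, check=True)
```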

2) Results: In the first two hard disk tests, building the Linux kernel and the mail server workload, EXT4 outperformed BTRFS and XFS, though BTRFS stayed close behind EXT4. The Tiobench test wrote a 2000MB file sequentially and randomly; BTRFS was faster in both cases and dominated in random writes. The authors attribute this behaviour to BTRFS's write-anywhere nature. However, as the number of threads increased, performance dropped due to contention on the shared inode mutex. On the SSD with kernel 3.3 and a 4K metadata file size, BTRFS did not perform notably well, but on kernel 3.4 with 4K and 16K metadata sizes it did: random reads dropped to a few, occurring mostly during checkpointing, and writes were smooth. The authors explain that BTRFS and the Linux virtual memory subsystem disagreed on how pages were handled, which reduced the effectiveness of the page cache and hence increased the number of reads. The Filebench results on the HDD were similar for the file server and OLTP workloads. BTRFS suffered in both the mail and web workloads because its fsync implementation was slower; moreover, EXT4's HTree directory structure is only two levels deep and therefore fast to traverse, whereas BTRFS's tree is much deeper and requires more random IO. On the SSD, the file server and web server results were similar; BTRFS again lagged in the mail workload for the very same reason, and the OLTP workload again hurt it because its CoW mechanism is not well suited to that access pattern. On both the HDD and the SSD, BTRFS with the nodatacow option performed better than default BTRFS. In the RAID 10 tests, native BTRFS was the clear winner, though the authors stated that in more complex workloads MD might perform better and faster than BTRFS.

III. ASSESSMENT INVOLVING HIERARCHY OF VOLUME MANAGERS AND DISK ACCESSING PROTOCOLS

A. A Performance Comparison of ZFS and BTRFS on Linux

The purpose of this project [2] was to evaluate the IO performance of the BTRFS and ZFS filesystems using their own built-in volume managers.

1) Experimental setup: The experiment used two storage configurations, RAID 5 and RAID 10, with a single disk as the baseline. BTRFS's RAID 5 support was known to be unstable at the time, so including this configuration and analyzing its performance was considered necessary. The author used Filebench for the workload tests (file server, web server, and OLTP, i.e. a relational database server) and IOZone for the operation tests (write, re-write, read, re-read, random write, random read, and strided read). The chosen record sizes ranged from 64KB to 16MB. The XFS and EXT4 filesystems were included as baselines for better comparison.

The test system ran Ubuntu Server 14.04.2 LTS with kernel version 3.19.4, 2GB of RAM, and a 4GB test file size. The RAID configurations for XFS and EXT4 were created with the Linux multiple-device (MD) driver. The author ran each test twenty times, discarding the first run to reach a steady state, and measured throughput. An IOZone invocation of the kind used here is sketched below.
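As a rough illustration, an IOZone sweep over that record-size range with a 4GB file could look like the sketch below. The flag selection (-i test numbers, -r record size, -s file size) follows standard IOZone usage, but the project's actual invocation is not given, so treat this as an assumption.

```python
import subprocess

TESTFILE = "/mnt/test/iozone.tmp"  # hypothetical test file location

# Record sizes from 64KB to 16MB, doubling each step (64k ... 16384k).
record_sizes = [f"{64 * 2**i}k" for i in range(9)]

for rec in record_sizes:
    subprocess.run(
        ["iozone",
         "-i", "0",   # write / re-write
         "-i", "1",   # read / re-read
         "-i", "2",   # random read / random write
         "-i", "5",   # strided read
         "-r", rec,   # record size for this pass
         "-s", "4g",  # 4GB test file, as in the experiment
         "-f", TESTFILE],
        check=True)
```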

2) Results: The results indicated that BTRFS did well in most of the experiments and has improved more over the years than ZFS. Despite its stability issues with RAID 5, it outperformed ZFS in that configuration. BTRFS offers checksumming but, unlike ZFS, no self-healing; it uses out-of-band deduplication and proved better for file server workloads. ZFS, on the other hand, did well in the OLTP workload. It can expand but not shrink a volume, uses in-band deduplication, and provides self-healing data: it stores a block's checksum in the parent block and verifies it whenever the block is read (illustrated below). It also features transaction-based storage, which eliminates the possibility of a bad (partially completed) write. Both filesystems implement the copy-on-write (CoW) mechanism, though in different ways.
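The self-healing mechanism described here, where a block's checksum is kept in its parent's block pointer and checked on every read, can be illustrated with a toy Merkle-style structure. This is a conceptual sketch, not ZFS code:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class BlockPointer:
    """Toy analogue of a ZFS block pointer: the parent stores the
    child's checksum, so corruption in the child is detectable."""
    def __init__(self, data: bytes):
        self.data = data                        # child block contents
        self.stored_checksum = checksum(data)   # kept "in the parent"

    def read(self) -> bytes:
        if checksum(self.data) != self.stored_checksum:
            # Real ZFS would try to self-heal here by fetching a good
            # copy from a mirror or reconstructing it from parity.
            raise IOError("checksum mismatch: block is corrupt")
        return self.data

bp = BlockPointer(b"hello, zfs")
print(bp.read())           # verifies and returns the data
bp.data = b"hellp, zfs"    # simulate silent on-disk corruption
try:
    bp.read()
except IOError as e:
    print(e)               # the corruption is caught on read
```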

B. Performance Comparison of Btrfs and Ext4 Filesystems

This thesis [4] studied the BTRFS filesystem with EXT4 as the baseline. The study measured IO throughput, compression efficiency, and the effectiveness of BTRFS's defragmentation tool.

1) Experimental setup: The experiment was run on a single disk and on a two-disk volume. The tools used were IOZone, Gaussian09, Blktrace, Blkparse, and Seekwatcher. IOZone ran tests for sequential read/write/re-write, random read/write, and strided read, with record sizes of 64K-16384K and file sizes of 8192K-8GB, together with the corresponding compression tests. Sequential read/write tests on an 11GB file and on a 499MB directory tree of 2254 subdirectories were also run for both configurations. The Gaussian09 computational chemistry package was used to measure IO throughput and elapsed time for very large sequential IO. The efficiency of BTRFS's built-in defragmentation tool was checked using a Perl script that created and deleted files to produce a fragmented disk (the idea is sketched below). Compression efficiency was tested for files of 6GB, 15GB, and 200MB and for directories of 489MB and 1019MB. Bzip2 was used for compression on EXT4, while BTRFS's transparent compression algorithms, LZO and ZLIB, were chosen along with BTRFS's compress-force mount option. The filesystem was unmounted and remounted between consecutive IOZone tests to avoid cache effects, and a reboot was performed before each file and directory read/write test. Default mount options were used for both filesystems. The Blktrace utility ran independently for the IOZone and file-directory tests. A single machine with three 80GB hard disks was used: one disk for the OS and the other two reserved for the experiment. Each experimental disk had three partitions: one serving as the single-disk configuration for the respective filesystem, and the other two combined to form the volume for the second configuration.
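The thesis's Perl script is not reproduced in this summary; the following Python sketch captures the same idea, interleaving file creation and deletion to fragment the disk and then invoking BTRFS's built-in defragmenter. All paths and sizes are assumptions.

```python
import os
import random
import subprocess

MNT = "/mnt/btrfs-test"  # hypothetical BTRFS mount point

# Interleave file creation and random deletion so that surviving files
# end up scattered across non-contiguous extents (a fragmented disk).
paths = []
for i in range(2000):
    p = os.path.join(MNT, f"frag-{i:05d}.bin")
    with open(p, "wb") as f:
        f.write(os.urandom(random.randint(4096, 1 << 20)))  # 4KB-1MB
    paths.append(p)
    if i % 3 == 0:  # delete roughly a third of the files as we go
        os.remove(paths.pop(random.randrange(len(paths))))

# Run BTRFS's built-in defragmenter recursively over the mount point.
subprocess.run(["btrfs", "filesystem", "defragment", "-r", MNT], check=True)
```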

2) Results: The tests were repeated 10 times for IOZone, 10 times for the file-directory copy, and 5 times for the file-directory compression. The IOZone IO tests showed that BTRFS performs better when sequentially reading large files, randomly writing large files, and sequentially writing small files. Seekwatcher showed that EXT4 performed similarly for reads and writes to a single file; EXT4 also made fewer seeks than BTRFS and was better with volumes as well. The IOZone compression test revealed that LZO performed well for both small and large file writes, whereas ZLIB was better for large files only. The file-directory read/write test showed that BTRFS was better overall, but EXT4 wrote to a single disk even better; the two read similarly on the volume. BTRFS also wrote large files better (in contrast to the IOZone result), while EXT4 wrote directories better. The file-directory compression test showed that Bzip2 gave the highest space saving but took the longest to run; BTRFS with the compress-force mount option gave better space saving than the default mount option, and in acceptable average time. The Gaussian09 tests found that EXT4 gave better throughput, with fewer seeks, even for an IO-intensive application. Finally, BTRFS's defragmentation tool proved very efficient in both speed and the reduction of fragmentation in the filesystem.

IV. PERFORMANCE EVALUATION OVER HYPERVISOR

A. Competition of Virtualized Ext4, Xfs and Btrfs Filesystems Under Type-2 Hypervisor

This paper [3] aims to measure the read and write throughput of EXT4, XFS, and BTRFS on a type-2 hypervisor.

1) Experimental setup: The authors used VMware Workstation. The host machine ran three VMs, each with 2GB of RAM, a 200GB virtual disk, and a Linux guest OS. A 150GB filesystem was created and mounted in each VM, and only one VM ran at a time. The Postmark tool was used for mail server tests with file sizes varying over the ranges 1B-1KB, 1-100KB, and 1-300KB (a sample configuration is sketched below). Bonnie++ was used for sequential output and input, random reads and writes, and random seeks (a mix of read and write operations). The output and input tests measured the effectiveness of the filesystem cache and the data transfer speed, while the seek test reported the number of seeks per second.
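Postmark is driven by a small command script, so a test in the 1-100KB range could be set up roughly as below. The location, file count, and transaction count are assumptions, since the paper does not list them.

```python
import subprocess
import textwrap

# Hypothetical Postmark configuration approximating the 1KB-100KB test:
# file sizes are given in bytes, "number" is the pool of simultaneous
# files, and "transactions" is the mix of reads/appends/creates/deletes.
config = textwrap.dedent("""\
    set location /mnt/test
    set size 1024 102400
    set number 1000
    set transactions 5000
    run
    quit
""")

with open("pm.cfg", "w") as f:
    f.write(config)

# Postmark reads its commands from the file named on the command line.
subprocess.run(["postmark", "pm.cfg"], check=True)
```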
