
/g/ - Technology

File: 14 KB, 384x398, XFS.png
No.70699251

Use XFS now.

>> No.70699261

Reiserfs is better
Lead developer was based

>> No.70699291

>XFS is best FS
that's an odd way to spell btrfs

>> No.70699293

XFS is not exactly resistant to error, and there is not much in the way of recovery tools. Ext4 or BTRFS, depending on your use case, are the only real choices.

>> No.70699308

I've had my XFS partition survive two power outages, and I'm not using any kind of UPS system. It's pretty fucking sturdy. You don't need recovery tools for any file system if you make regular backups.

>> No.70699318

Reiserfs murdered my data

>> No.70699333

XFS dedupe is awesome

>> No.70699350

I had a recent power failure just before I got new batteries for my UPS. Feels fucking corrupted man.

>> No.70699373

I already do anon. I would never use xfs for my / and other system partitions (you *do* separate /var and /tmp, right?), those should always be ext4, and for home I use btrfs. However, xfs finds its place on my 2TB HDD full of important media and other things I don't want to lose. It's better for storing media than ext4 and is also still more reliable than btrfs. Maybe in the future I'll switch over to btrfs when it's as tried and tested, but for now xfs is a perfect fit for this drive.
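For reference, a layout like that might look something like this in /etc/fstab. Device names, mount points, and options here are purely illustrative placeholders, not a recommendation:

```
# system partitions on ext4, /home on btrfs, bulk media on xfs
# (devices are placeholders; use UUIDs in practice)
/dev/sda2  /           ext4   defaults         0 1
/dev/sda3  /var        ext4   defaults         0 2
/dev/sda4  /tmp        ext4   defaults,noexec  0 2
/dev/sda5  /home       btrfs  defaults         0 0
/dev/sdb1  /mnt/media  xfs    defaults         0 0
```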

>> No.70699377
File: 19 KB, 270x180, stratis_sidebar.png

Don't just use XFS. Use Stratis!

>> No.70699392
File: 195 KB, 681x1029, Puffystock.gif

What is a choice in file systems?

>> No.70699393
File: 995 KB, 500x250, 1521774717129.gif

Just use ext4 you tryhard snowflakes.

>> No.70699403

Another thing some people give up on to claim "security".

>> No.70699404

>Not using RHEL defaults

>> No.70699414

I made this thread to shill XFS. Fuck off and make your own, BSD cuck.

XFS is faster on SSDs. EXT4 is a slow, buggy piece of shit that should be deprecated.

>> No.70699416

>not using OpenSUSE defaults

>> No.70699418

no thanks i already upgraded to zfs (endgame for you cucks)

>> No.70699420

Your filesystem is shit and you are shit

>> No.70699423

Objectively false. ZFS is the only filesystem that doesn't suffer from bit rot and it's the most feature complete file system that exists.

>> No.70699442

>doesn't suffer from bit rot
Knock, knock, it's non-ECC RAM and it would like to have a word with you.

>> No.70699445
File: 27 KB, 1639x1045, Btrfs_logo.png

>ZFS is the only filesystem that doesn't suffer from bit rot
imagine actually believing this.

>> No.70699447

ZFS isn't an option on Linux because of licensing faggotry on Oracle's part. It's nice on systems like OpenIndiana that actually support it.

>> No.70699459

btrfs doesn't have any of the bit rot protection that ZFS has

>> No.70699460

You can install it in like one command on Ubuntu Server.

>> No.70699462

ZFS on Linux works great now.

Btrfs is memeware that was created because ZFS wasn't available on Linux at the time. Red Hat dropped support as soon as ZFS became stable on Linux.

>> No.70699464


>> No.70699472

Yea why don't I just wipe my entire hard drive so I can use a different file system and perceive absolutely no difference between my previous file system and my current one but it's OK because now the lifespan of my drive has decreased from having to backup and restore all my files and not only that my fucking time is totally wasted and I will never be able to get it back good idea retard why don't I just stick my shit in my nose and take a deep whiff and piss all over my motherboard while I'm at it just because you said I should on the internet, fucking retard

>> No.70699475

I'm not going to argue why you're dumb but there's a reason CERN uses ZFS and not Btrfs, and I can assure you they've put more thought into the matter than you.

>> No.70699479

>Red Hat dropped support as soon as ZFS became stable on Linux.
No. They dropped support because they didn't like it. They don't and aren't going to officially support running some off-license thing on their systems. Somehow Ubuntu gets away with having it in the repos, but whatever.
The point is, Red Hat is actually using neither, and developing their own Stratis storage system mentioned earlier in this thread.

>> No.70699484

lol u mad bro?

>> No.70699492

>there's a reason CERN uses ZFS
Because they probably started using it back when it was a Solaris thing, so why stop now?
I'm not saying that's the reason, but I'm not saying that it's not the reason.

>> No.70699519

If he ever needs cataract surgery, I will get to see him at my office--assuming they let him out

>> No.70699528

No, they use ZFS on Linux. And they use it because ZFS + Ceph is fucking crazy scalable with zero data degradation. Nothing else comes close.

>> No.70699531

you're a shill and you don't use zfs, fuck off

>> No.70699605

xfs_repair works great, the catch is you have to run it manually. It's not automatic on boot like fsck.ext4.

XFS has been more reliable than ext4 since Linux 3.x era.

>> No.70699615

Yea but the actual GNU/Linux operating system is tested primarily against ext4. If you want things to 'just werk' it's the best option for root.

>> No.70699636

Is ext4 finally able to create filesystems over 16TB? I ran into this limit back in 2011 and was really pissed, so I've used xfs since then.

>> No.70699669

>In Red Hat Enterprise Linux 7, XFS is the default file system and is supported on all architectures.
In the enterprise world xfs is the standard.

>> No.70700950
File: 163 KB, 684x684, zoe-cramond-1216871.jpg


>> No.70701025
File: 152 KB, 466x492, 1411811137441.png

How do I diagnose my ext4 systems for bit rot if I don't have hashes of the files to compare to? The thought of backing up bitrotted files scares me.
But in general, ext4 has always been reliable for me; I never had an issue even though I used to power off my system by unplugging the power cord for years.
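You can't detect rot retroactively without a reference, but you can start now: build a checksum manifest of the drive today and re-verify it before each backup. A minimal sketch (the manifest layout and function names here are just an illustration, not any standard tool):

```python
# Sketch: record SHA-256 hashes of every file under a root directory,
# then re-check them later to spot files whose contents changed.
import hashlib
import os

def hash_file(path, chunk=1 << 20):
    # Hash the file in 1 MiB chunks so large files don't blow up memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    # Map relative path -> hex digest for every regular file under root.
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            manifest[os.path.relpath(p, root)] = hash_file(p)
    return manifest

def verify(root, manifest):
    # Return the relative paths whose current hash no longer matches.
    return [rel for rel, digest in manifest.items()
            if hash_file(os.path.join(root, rel)) != digest]
```

Note this only flags silent changes; it can't tell rot apart from a legitimate edit, and it can't repair anything on its own.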

>> No.70701111

Dunno about creation, but expanding works fine; I have 20TB.
You can test its limits by creating a thinly provisioned volume.

>> No.70701513

I really like JFS.
Also you can't shrink XFS.

>> No.70701541

>licensing faggotry on Oracle's part.

It's incredible how many problems have this as a source.

>> No.70701932
File: 83 KB, 900x900, dxl2ui5v2r611.jpg

You're a fucking idiot. "MUH CERN" does not change the fact that btrfs writes checksums to the disk and DOES use them to protect against bitrot.

>> No.70702480

The truly intelligent choice is making your whole drive swap and then mounting everything as tmpfs.

>> No.70702894

exFAT is better
Werks on everything
And doesn't have size limit

>> No.70703024

Why would you want to use exFAT?

>> No.70703220

I seriously hope you guys don't do this.

>> No.70703264

Found your problem.

>> No.70703302

And that problem is...?

>> No.70703820

I've had a 12TB RAID5 btrfs array for a year now and it's fine. I've used "normal" btrfs since 2014 and haven't lost a single piece of data yet. I can understand not wanting to use BTRFS's raid as it's still rather experimental but normal btrfs is fine and has been for a while now.

>> No.70704187

Mobile phones and card readers
And being able to access it on any OS
Including Windows linux macOS

>> No.70704424

I don't want to defend Oracle, but the license is from Sun.

>> No.70704444
File: 95 KB, 547x435, 1323006610943.png


>> No.70704481

You're very wrong. BTRFS doesn't even compare, and ext4 is at most equal to xfs in terms of robustness.

XFS is an amazing filesystem overall. Quite "boring" except for tunables that let you turn shit off, but so very reliable and predictable overall.

>> No.70704522

Sure, you can't. OTOH unless you have a backup that is still intact, just *detecting* bit rot is rather useless.

If it's on the backup side (like with borg or restic, which you should use anyhow), you can actually use these if you feel like it.

But generally it's much nicer if you have erasure coding (e.g. with snapraid or par2 or ceph) to actually detect changes and fix stuff.
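The core idea behind those tools can be shown with the simplest possible case: single-parity (XOR) protection, where checksums locate the one bad block and parity rebuilds it. A toy sketch, not how snapraid/par2 actually work internally (they use real erasure codes that survive multiple failures):

```python
# Toy single-parity sketch: XOR all equal-sized data blocks into one
# parity block. Checksums detect which block changed; XOR-ing the
# parity with the remaining good blocks reconstructs the bad one.
import hashlib

def parity(blocks):
    # XOR all data blocks together (all blocks must be the same length).
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def checksums(blocks):
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def repair(blocks, sums, par):
    # Locate the single corrupted block via checksum mismatch, then
    # rebuild it as parity XOR all the still-good blocks.
    bad = [i for i, b in enumerate(blocks)
           if hashlib.sha256(b).hexdigest() != sums[i]]
    if len(bad) != 1:
        raise ValueError("single parity can only fix exactly one bad block")
    i = bad[0]
    fixed = bytearray(par)
    for j, b in enumerate(blocks):
        if j != i:
            for k, byte in enumerate(b):
                fixed[k] ^= byte
    blocks[i] = bytes(fixed)
    return blocks
```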

>> No.70704588


>> No.70704598

Not yet.
But soon™

>> No.70704609

will be upstreamed into the Linux kernel in 2019

>> No.70704618

What's wrong with ext4?

>> No.70704645

Ext4 is also predictable and reliable, so XFS being reliable isn't enough to get me to switch to it, it has to be clearly, obviously better than ext4 in some way. And as far as I can tell, it isn't. They're both filesystems, they both work. Why switch to it?

>> No.70704651

oh nice, I have been using ZFS on my servers for years, but for a new storage server I am building (for a hypervisor cluster for about 200 VMs) I was planning on using ceph. I thought I was not going to be able to use ceph with zfs; I guess I was wrong. So thanks for the info.

>> No.70704700

does linux have support for both xfs and zfs?

>> No.70704702

Xfs is the most reliable filesystem and extremely robust.

That doesn't mean that the also reliable ext4 or the surely quite okay btrfs isn't reliable enough *for you*. I really don't care much if you switch or not. Switch as needed.

>> No.70704708

That would be really cool. Do you have a source on that?
Also, there's no need to say "Linux kernel" as Linux is always a kernel.

>> No.70704726

Yes, but I caution against ZFS.

That thing is pretty slow, it scales/manages poorly and so on. For example, unlike with mdadm / snapraid / ceph ... you can't even add more drives to RAIDZ arrays. And it takes a massive amount of hardware to get ~5-6 drives' worth of IO performance out of a 12-drive RAIDZ3 array. And it gets worse if you intend to use deduplication and such; then the hardware requirements and slowdown get pretty extreme.

>> No.70704755

ZFS isn't even terribly scalable, that it has a huge address space doesn't mean it works well. Even xfs and ext4 scale better.

Ceph on the other hand is performing quite well and has cool features, but it's not really stable software. Tons of issues overall, and a pretty confusing messy CLI on top of that. There's no generally better alternative, but it probably won't be a smooth ride. Lizard/Moosefs or something may still be more comfy if you want a distributed filesystem with less features but more stable.

>> No.70705024


>> No.70705122

You will hate yourself when it inevitably errors, corrupts all your data and no recovery tool works.
Woo, at least it was 3MB/s faster on your 500MB/s SSD. What a deal.

>> No.70705143

>You will hate yourself when it inevitably errors, corrupts all your data and no recovery tool works.
It's fucking XFS, the filesystem where this is more or less the LEAST likely to happen.

>> No.70705155
File: 36 KB, 547x509, ZFS.png

I beg to differ.
