Hi all!

I will soon acquire a pretty beefy unit compared to my current setup (a 3-node cluster, each node with 16 cores, 512 GB RAM, and 32 TB of storage).

Currently I run TrueNAS and Proxmox on bare metal and most of my storage is made available to apps via SSHFS or NFS.

I recently started looking for “modern” distributed filesystems and found some interesting S3-like/compatible projects.

To name a few:

  • MinIO
  • SeaweedFS
  • Garage
  • GlusterFS

I like the idea of abstracting the filesystem to allow me to move data around, play with redundancy and balancing, etc.

My most important services are:

  • Plex (Media management/sharing)
  • Stash (Like Plex 🙃)
  • Nextcloud
  • Caddy with Adguard Home and Unbound DNS
  • Most of the Arr suite
  • Git, Wiki, File/Link sharing services

As you can see, a lot of downloading/streaming/torrenting of files across services. Smaller services run on a Docker VM on Proxmox.

Currently the setup is messy due to its organic evolution, but since I will be upgrading to brand-new metal, I was looking for suggestions on the pillars.

So far, I am considering installing a Proxmox cluster with the 3 nodes and host VMs for the heavy stuff and a Docker VM.

How do you see the file storage portion? Should I take a full/partial plunge into S3-compatible object storage? What architecture/tech would be interesting to experiment with?

Or should I stick with tried-and-true, boring solutions like NFS Shares?

Thank you for your suggestions!

    • MajorSauce@sh.itjust.worksOP · 5 months ago

      You are 100% right, I meant for the homelab as a whole. I do it for self-hosting purposes, but the journey is a hobby of mine.

      So exploring more experimental technologies would be a plus for me.

      • just_another_person@lemmy.world · 5 months ago

        Most of the things you listed require some very specific constraints to even work, let alone work well. If you’re working with just a few machines, no storage array or high bandwidth networking, I’d just stick with NFS.

        • mitchty@lemmy.sdf.org · 5 months ago

          As a recently former HPC/supercomputer dork: NFS scales really well. All this talk of encryption etc. is weird; you normally just do that at the link layer if you’re worried about security between systems. That, plus v4 to reduce some metadata chattiness, and you’re good to go. I’ve tried scaling Ceph and S3 for latency on 100/200G links; NFS is by far the easiest of them to scale. For a homelab? NFS and call it a day. All the clustered filesystems will make you do a lot more work than just throwing `hard` into your NFS mount options and letting clients block I/O while you reboot, which for home is probably the easiest.
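          For reference, the `hard` option mentioned above is one line in `/etc/fstab`. A sketch, where the server name and paths are placeholders:

```shell
# /etc/fstab — NFSv4 mount with "hard": clients block on I/O (instead of
# returning errors to apps) while the server is down, e.g. during a reboot.
# Hostname, export path, and mount point below are assumptions.
nas.lan:/export/media  /mnt/media  nfs4  hard,noatime,_netdev  0  0
```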

  • nesc@lemmy.cafe · 5 months ago

    Gluster is shit, really bad; Garage and MinIO are great. If you want something tested and insanely powerful, go with Ceph, it has everything. Garage is fine for smaller installations, but it’s very new and not that stable yet.
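    For what it’s worth, Proxmox ships its own Ceph tooling, so on a 3-node cluster the bootstrap is mostly a few `pveceph` commands. A rough sketch, where the network and device names are assumptions for your hardware:

```shell
# On each node: install Proxmox's bundled Ceph packages
pveceph install

# Once, on the first node: initialize Ceph (dedicated storage network assumed)
pveceph init --network 10.0.0.0/24

# On each node: create a monitor, plus one OSD per data disk
pveceph mon create
pveceph osd create /dev/sdb

# Create a replicated pool and register it as Proxmox storage for VM disks
pveceph pool create vmdata --add_storages
```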

    • MajorSauce@sh.itjust.worksOP · 5 months ago

      Darn, Garage is the only one I successfully deployed a test cluster with.

      I will dive more carefully into Ceph. The documentation is a bit heavy, but if the effort is worth it…

      Thanks.

      • nesc@lemmy.cafe · 5 months ago

        I had a great experience with Garage at first, but it crapped itself after a month. That was about half a year ago and the problem has since been fixed, but it still left me with a bit of anxiety.

  • catloaf@lemm.ee · 5 months ago

    What are you hosting the storage on? Are you providing this storage to apps, containers, VMs, proxmox, your desktop/laptop/phone?

    • MajorSauce@sh.itjust.worksOP · 5 months ago

      Currently, most of the data is on a bare-metal TrueNAS.

      Since the nodes will come with 32 TB of storage each, this would be plenty for the foreseeable future (currently only using 20 TB across everything).

      The data should be available to Proxmox VMs (for their disk images) and selfhosted apps (mainly Nextcloud and Arr apps).

      A bonus would be to have a quick/easy way to “mount” some volume to a Linux Desktop to do some file management.
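      On the desktop-mount bonus: if you do end up on an S3-compatible store, rclone can expose a bucket as a regular directory. A sketch, assuming a Garage endpoint; the remote name, endpoint, keys, and bucket are all placeholders:

```shell
# One-time: define an S3-compatible remote (endpoint and keys are placeholders)
rclone config create garage s3 provider=Other \
    endpoint=http://garage.lan:3900 \
    access_key_id=YOUR_KEY secret_access_key=YOUR_SECRET

# Mount a bucket like a normal folder; the VFS write cache makes
# ordinary file-manager operations behave sanely on object storage
rclone mount garage:media /mnt/media --vfs-cache-mode writes
```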

    • forbiddenlake@lemmy.world · 5 months ago

      By default, NFS is unencrypted and unauthenticated, and permissions rely on IDs the client can fake.

      May or may not be a problem in practice, one should think about their personal threat model.

      Mine are read-only and unauthenticated because they’re just media files, but I did add (unneeded) encryption via kTLS, because it wasn’t too hard to do (I already had a valid certificate to reuse).

        • 486@lemmy.world · 5 months ago

          If someone compromises the host system you are in trouble.

          Not only the host. You have to trust every client to behave, as @forbiddenlake already mentioned, NFS relies on IDs that clients can easily fake to pretend they are someone else. Without rolling out all the Kerberos stuff, there really is no security when it comes to NFS.
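          For completeness, once a KDC and keytabs are in place, the “Kerberos stuff” mostly shows up as `sec=` options on both ends. A sketch; paths, subnet, and hostname are placeholders:

```shell
# /etc/exports on the server — krb5p requires Kerberos auth,
# integrity checking, and encryption of all traffic
/export/private  192.168.1.0/24(sec=krb5p,rw)

# Client side: mount with the matching security flavor
mount -t nfs4 -o sec=krb5p nas.lan:/export/private /mnt/private
```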