I’ve recently been looking at options to upgrade (completely replace) my current NAS, as it’s currently more than a little bit jank and frankly kinda garbage. I have a few questions about that, and about migrating my current TrueNAS SCALE installation, or at least its settings, over.

Q1: Does the physical order of the drives matter? I.e., the order they’re plugged into the SATA ports.

Q2: Since I have TrueNAS SCALE installed on a USB flash drive (yeah, I know you’re not supposed to, but it is what it is), how bad of an idea would it be to just… unplug it from my current NAS and plug it into the new one?

Q3: If all else fails, how reliable is TrueNAS SCALE’s importing of ZFS pools, and are there any gotchas with it?

Q4: Would moving to a virtualized solution like Proxmox and installing TrueNAS SCALE on top of that in a VM make more sense on a beefier server?

Edit: Thank you all for the replies, the migration went smoothly :)

  • Possibly linux@lemmy.zip · 6 months ago

    Don’t try to move TrueNAS to a new host; that’s not going to work well. You should set up the new NAS and then do a ZFS send and receive to move the data.

    If you are reusing the disks you can just export and then import the pool.
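
    A rough sketch of both paths, assuming a pool named tank and a destination pool named newtank (both names are placeholders; yours will differ):

    ```shell
    # Option A: reusing the same disks in the new machine.
    zpool export tank    # run on the old NAS before pulling the disks
    zpool import tank    # run on the new NAS after moving the disks

    # Option B: copying to a new pool with send/receive over SSH.
    zfs snapshot -r tank@migrate    # recursive snapshot of every dataset first
    zfs send -R tank@migrate | ssh new-nas zfs recv -F newtank
    ```

    -R sends all child datasets, snapshots, and properties; -F lets the receiving side roll back to match the incoming stream.
    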

  • NeoNachtwaechter@lemmy.world · 6 months ago

    Don’t forget to ‘export’ the zpool before moving the disks. Afterwards, you ‘import’ it on the new system. That’s all it needs.

    If you use Proxmox, then TrueNAS is kinda redundant, since Proxmox can manage your zpool as well.

  • pete_the_cat@lemmy.world · 6 months ago (edited)

    Q1: No, it shouldn’t matter, as long as you didn’t import the pool using device names (sda, sdb, etc.) rather than labels or UUIDs (the better option for portability’s sake). If the pool does happen to use device names, just export it and then re-import it on the same system using labels or UUIDs.
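
    A sketch of that export/re-import fix (“tank” is a placeholder pool name):

    ```shell
    zpool export tank
    # Re-import, scanning stable by-id paths instead of sdX device names:
    zpool import -d /dev/disk/by-id tank
    zpool status tank    # the vdevs should now be listed by their by-id paths
    ```
    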

    Q2: It should work just fine, assuming you’re not using device names for your pools.

    Q3: it’s just as robust as FreeBSD’s implementation. Once again, see the answer to Q1.

    Q4: IMO virtualizing your NAS just adds more headaches and performance overhead compared to running it on bare metal.

    In my years of running TrueNAS on and off, I’ve always had issues with it when doing anything other than using it purely as a storage box. I tried 24.04 a few weeks ago, thinking that most of the issues I had originally when SCALE was launched would be resolved. They weren’t. So I went back to Arch with OpenZFS… again.

    • Presi300@lemmy.world (OP) · 6 months ago (edited)

      I’ve been running TrueNAS SCALE for a while, and my only issue with it has been having to create a virtual bridge so that my VMs can ping the host and vice versa; it’s been a pretty smooth experience other than that. As for the performance overhead… my replacement server is VERY beefy compared to my old one, so I couldn’t care less lol.

      • pete_the_cat@lemmy.world · 6 months ago

        I agree, the VM management could be easier. I don’t understand why I can’t have two NICs in the same subnet as long as they have different IPs.

        The bigger annoyance for me was that there was no way to tell which disk is attached where in the VM device listings, since it only shows the boot order and not labels or paths.

          • pete_the_cat@lemmy.world · 6 months ago

            Yeah, but I should be able to have them separate as well, like I can in every other Linux distro. In TrueNAS they force you to have them in separate subnets for some reason.

  • Shdwdrgn@mander.xyz · 6 months ago

    I’ve never used TrueNAS, but my experience with ZFS is that it couldn’t care less what order the drives are detected by the operating system. You could always shut down the machine, swap two drives around, boot back up, and see if the pool comes back online. If it fails, shut it back down and put the drives in their original locations.

    If you are moving your data to new (larger) drives, before anything else you should take the opportunity to play with the new drives and find the ZFS settings that work well. I think the sector size (ashift) is autodetected these days, but maybe for your use case things like dedup, atime, and relatime can be turned off, and do you need xattr? If you’re using 4096-byte sectors, did you partition the drives starting at sector 2048? Did you turn off compression if you don’t need it? Also consider your hardware: if you have multiple connection ports, can you get a speed increase by spreading out the drives so you don’t saturate any particular channel?
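
    For reference, a sketch of toggling the properties mentioned above on a pool (“tank” is a placeholder; whether each change is worth making depends entirely on your workload):

    ```shell
    zfs set atime=off tank        # stop recording access times on every read
    zfs set xattr=sa tank         # store extended attributes in the dnode (Linux)
    zfs set dedup=off tank        # dedup is off by default and very RAM-hungry
    zfs set compression=lz4 tank  # lz4 is nearly free; disable only if truly unneeded
    zfs get recordsize,atime,dedup,compression tank   # verify the results
    ```
    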

    Newer hardware by itself can make a huge difference too. My last upgrade took me from PCIe x4 to x16 slots, allowing me to upgrade to SAS3 cards, and overall went from around 70MB/s to 460MB/s transfer speeds with enough hardware to manage up to 40 drives. Turns out the new configuration also uses much less power, so a big win all around.

    • Presi300@lemmy.world (OP) · 6 months ago

      I am not gonna be changing my drives, just the server itself, and I just wanna make sure I don’t screw up my ZFS pools, as that does not sound fun. As for the hardware I’m looking at, it’s not new (a used server), but it’s sure as hell better than what I have now.

      • Shdwdrgn@mander.xyz · 6 months ago

        Nothing wrong with used servers; that’s the only thing I’ve ever run. eBay has provided a ton of equipment to me.

  • AreaKode@lemmy.world · 6 months ago (edited)

    I can answer Q1. The order definitely does not matter. All the drives are aware of who they are, so when you import, as long as they are all present, you’re good.

  • xyguy@startrek.website · 6 months ago
    1. The order doesn’t matter as long as they are the same drives, you don’t have a USB dock or RAID card in front of them (i.e. SATA/SAS/NVMe only), and you have enough of them to rebuild the array. Ideally all of them, but in a dire situation you can rebuild from two out of three drives of a RAIDZ1.

    2. You can do that; you shouldn’t, but you can. I’ve done something similar before in a nasty recovery situation and it worked, but don’t do it unless you have no other option. I highly recommend just downloading the config file from your current TrueNAS box and importing it into a fresh install on a proper drive on your new machine.

    3. Sort of already mentioned it, but you can take your drives and plug them into your new machine. Install a fresh TrueNAS SCALE and then just import the config file from your current setup, and you should be off to the races. Your main gotcha is if the pool is encrypted: if you lose access to the key, you are donezo forever. If not, the import has always been pretty straightforward and I’ve never had any issues with it.
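
    Before wiping the old boot drive, it may be worth checking how the pool’s encryption keys are sourced (“tank” is a placeholder pool name):

    ```shell
    # List encryption state and key sources for every dataset in the pool:
    zfs get -r -t filesystem encryption,keylocation,keystatus tank
    # keylocation=prompt  -> you will need the passphrase at import time
    # keylocation=file:// -> back up that key file before decommissioning the old box
    ```
    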

    4. Lots of people virtualize TrueNAS, and lots of people virtualize firewalls too. To me, the ungodly number of stupid edge cases, especially with consumer hardware, that break hardware passthrough on disks (which TrueNAS/ZFS needs to work properly) is never worth it.