I’ve returned after taking everyone’s advice on what I should do with my basic VPS into consideration and… listened to none of it, for reasons. Well, I had a good reason. I wanted to diversify my internet consumption amid all this Reddit API mess and have gone back, in part, to RSS. In that vein I have stood up Miniflux and Wallabag on said VPS. Both are excellent if you need an RSS reader and a read-it-later app, and they can tightly integrate with one another, which is rad.

So now that I’ve got it set up the way I want, what is the recommended method for backing it up in case of failure or data loss? It’s running Ubuntu 20.04, if that helps. I have Google Drive space as well as Backblaze B2 that I could leverage; I just need to know which direction to look for solutions. The VPS is rented through Racknerd, and I’ve confirmed they don’t offer a snapshot function, unfortunately.

  • Freeman@lemmy.pub
    1 year ago

    Personally I would use rsync for config files and schedule database dumps, then rsync those too.

    In fact, that’s exactly how I do it.
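    Roughly, that looks like a small script run nightly from cron. Everything below is a placeholder — the database names, config paths, and remote destination are assumptions for a Miniflux/Wallabag setup on Postgres (use mysqldump instead if you’re on MariaDB):

    ```shell
    #!/bin/sh
    # Nightly backup sketch: dump the databases, then rsync dumps and
    # configs off-box. Database names, paths, and the remote host are
    # all placeholders — adjust for your setup.
    set -eu

    BACKUP_DIR="${BACKUP_DIR:-$HOME/backups}"
    REMOTE="user@backup-host:backups/"   # placeholder destination

    mkdir -p "$BACKUP_DIR"

    dump_databases() {
        # pg_dump takes a consistent snapshot while the apps keep running
        pg_dump -U miniflux miniflux > "$BACKUP_DIR/miniflux-$(date +%F).sql"
        pg_dump -U wallabag wallabag > "$BACKUP_DIR/wallabag-$(date +%F).sql"
    }

    sync_offsite() {
        # -a preserves ownership, permissions, and timestamps
        rsync -a /etc/miniflux /etc/wallabag "$BACKUP_DIR/configs/"
        rsync -a "$BACKUP_DIR/" "$REMOTE"
    }

    # Deploy: uncomment the calls below, save as e.g.
    # /usr/local/bin/nightly-backup.sh, and cron it:
    #   0 3 * * * /usr/local/bin/nightly-backup.sh
    # dump_databases
    # sync_offsite
    echo "backup dir ready: $BACKUP_DIR"
    ```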

  • dillydogg@lemmy.one
    1 year ago

    On my VPS, every night I shut down the Docker containers, back up everything (including the Postgres and MariaDB databases) with Borg via borgmatic, upload to Backblaze B2, then restart the containers.
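    In script form the nightly job is roughly this — the compose directory is a placeholder, and it assumes the Borg repo and B2 upload are already configured in borgmatic’s config.yaml:

    ```shell
    #!/bin/sh
    # Nightly job sketch: quiesce the containers so database files on
    # disk are consistent, back up with borgmatic, then bring the stack
    # back up. COMPOSE_DIR is a placeholder; the Borg repo and B2
    # credentials are assumed to live in borgmatic's config.yaml.
    set -eu

    COMPOSE_DIR="${COMPOSE_DIR:-/opt/stack}"

    nightly_backup() {
        cd "$COMPOSE_DIR"
        docker compose stop          # stop Postgres/MariaDB cleanly first
        borgmatic --verbosity 1      # create + prune archives per config.yaml
        docker compose start
    }

    # Deploy: add the call below, then cron the script nightly, e.g.:
    #   30 2 * * * /usr/local/bin/nightly-backup.sh
    # nightly_backup
    echo "flow defined for $COMPOSE_DIR"
    ```

    Stopping the containers first is the simple way to guarantee consistency; the tradeoff is a few minutes of downtime each night.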

  • flatbield@beehaw.org
    1 year ago

    You might look at what your hosting provider will do for you too. I’m at Linode; I pay them $2/month, just turn on backups, and they handle it. Plus I can take one of my own snapshots any time. Like someone else said, if state matters, think about that too — i.e. dumping databases, or shutting the VM or services down and snapshotting it yourself.

    I like the Backblaze idea too but have not done that yet.

  • paperemail@links.rocks
    1 year ago

    You should do application-level backups and put those in Backblaze B2:

    • For Postgres, look here.
    • Look at all the software you’re running and what it says about making backups.
    • For files that don’t change often, making an archive (with tar) is probably good enough. But if a file changes while the archive is being made, the backup will be inconsistent.
    • Think about your RPO: how much data are you willing to lose in case of a crash? One day? Two hours? Fifteen minutes? Schedule your backups to be at least that frequent.
    • Don’t forget to test your backups! Otherwise you’ll only find out that a backup is unusable when you need it most…
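    That last point is easy to automate: periodically restore a backup somewhere disposable and verify it. A minimal tar round-trip, using throwaway demo data standing in for real files — point SRC at actual data in practice:

    ```shell
    #!/bin/sh
    # Restore-test sketch: archive a directory with tar, restore it into
    # a scratch directory, and check the contents survived the round
    # trip. Demo data only; SRC would be real app data in practice.
    set -eu

    SRC=$(mktemp -d)
    DEST=$(mktemp -d)
    ARCHIVE=$(mktemp)

    echo "hello backups" > "$SRC/note.txt"

    tar -C "$SRC" -czf "$ARCHIVE" .     # take the "backup"
    tar -C "$DEST" -xzf "$ARCHIVE"      # restore it elsewhere

    # the check that matters: the restored file matches the original
    cmp -s "$SRC/note.txt" "$DEST/note.txt" && echo "restore OK"
    ```

    For database dumps the same idea applies: load the dump into a scratch database and run a sanity query, rather than trusting that the file exists.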