Good for her
Physics is like sex: sure, it may give some practical results, but that’s not why we do it.
— Richard P. Feynman
I think the same is true for a lot of folks and self-hosting. Sure, having data in our own hands is great, and yes, avoiding vendor lock-in is nice. But at the end of the day, it’s nice to have computers seem “fun” again.
At least, that’s my perspective.
99% of people want computers to serve them, not to be fun. My SO couldn’t care less how much fun I have setting up home assistant. They just want to turn on the lights.
Sure, but did your SO set up home assistant?
No. They just want to buy an Apple home thingy 🥹
Yeah, that kinda reinforces their point.
Well, yes, most people want computers to be unnoticeable and boring. I agree, we need more boring tech that just does a job and doesn’t bother us. That said, plenty of people find self-hosting to be fun - your SO and mine excepted, of course.
most people want computers to be unnoticeable and boring. I agree, we need more boring tech
professional UI designers don’t seem to agree. they always feel the urge to come up with the next worst design
For me it’s not even about better or worse, but about different. For them it’s a nice iteration after many years, but for me it is one of the dozens of apps I use irregularly that suddenly behaves and works differently and forces me to relearn things I don’t gain anything from. Since each of the different apps gets that treatment every once in a while, I end up having to adjust all the damn time for something else.
I would really like it if we could go back to functional applications being sold as-is, without forced updates. I do not need constant changes all the time. WinAmp hasn’t changed in 20 years and still does exactly what it is supposed to. I could probably spin up an old MS Word 2000 and it would work just like it did 20 years ago.
Many modern apps however change constantly. No wonder they all lean towards subscriptions if they “have to” work on it all the time. But I, as a user, don’t even want that. I want to buy the thing that does what it’s supposed to and then I want it to stay that way.
My SO watches free tier youtube.
Escaping vendor lock-in. It’s why people hate the cloud when it used to be the answer for everything. You make a good product that can only be used with your hardware/software, whatever, and people run from that shit because it’s abused more often than not.
Apple is the biggest example of this. Synology is getting worse and worse. Plex not far behind either.
I recently discovered that Plex no longer works over local network, if you lose internet service. A) you can’t login without internet access. B) even if you’re already logged in, apps do not find and recognize your local server without internet access. So, yeah, Plex is already there.
KODI is calling.
A lot of people that run Plex have a Jellyfin container on standby, or they’ll use Plex for friends and family and use JF at home.
even if you’re already logged in, apps do not find and recognize your local server without internet access.
You’ve set your server in those apps’ settings not to use a direct connection, so they’re being routed through Plex’s servers
When you select your Plex libraries from the drop-down there are usually 2 options, one will be the local IP and say (direct), that’s always the best choice if you’re able
I just turned off my Internet connection to my Chromecast and tested, no issues with accessing my media
Nice! How are you using a chromecast without internet? Mine screams at me to get a google account.
Chromecast w/ google TV, sorry, so like a fire stick but different branding. Once you’re signed into apps on it it’ll remember you cuz it’s a full android device
What!?! Damn. I didn’t know it got that enshitty already.
I’d say plex is up there. “Want to use your hardware and bandwidth to view your own files? Pay us!”
Nothing wrong with having to pay for software if the prices are reasonable. It’s a product like any other, with real people working on it.
I’m down for paying for a piece of software. I bought a lifetime subscription back in the day, and I feel like until recently it served me pretty well. And to be fair, they are caching the movie database, providing SSL keys, EPG, a low-speed proxy through CGNAT for people; there’s quite a bit to their cloud operations that they do deserve money for.
What pisses me off is the mining of my watch habits, and the slow enshittification of features.
14 years of lifetime Plex pass for $75, they don’t really owe me anything, but I am moving on.
I’m slowly digging my way out of sites with algorithms; clawing my way out of Google is particularly difficult. I’m considering spinning up my own Alexa with Whisper.
People are looking to reclaim their agency and autonomy. We over-relied on corpos and they used that as an opportunity to price-gouge us.
I wanted to ask where the border of selfhosting is. Do I need to have the storage and computing at home?
Is a cheap VPS on hetzner where I installed python, PieFed and its Postgres database, but also nginx and letsencrypt manually by myself, and pointed my domain to it, selfhosting?
I would say yes, it’s still self-hosting. It’s probably not “home labbing”, but it’s still you responsible for all the services you host yourself, it’s just the hardware which is managed by someone else.
Also don’t let people discourage you from doing bare-metal.
That’s actually a good point, self hosting and home lab are similar things but don’t necessarily mean the same thing
Self hosting just means maintaining your own instance of a web service instead of paying for someone else’s.
As long as you don’t pay Hetzner for an explicitly fully-managed Nextcloud server, it doesn’t matter whether the OS you’re running it on is a VM or a bare-bones server.
It’s self hosting as long as you are in control of the data you’re hosting.
I would say there’s no value in assigning such a tight definition to self-hosting, in saying that you must use your own hardware and have it on premises.
I would define self-hosting as setting up software/hardware to work for you when turn-key solutions exist, for one reason or another.
Netflix exists. But we self-host Jellyfin. Doesn’t matter whether it’s on our hardware or not. What matters is that we’re not using Netflix.
It depends who you ask (which we can already tell hehe), but I’d say YES, because you’re the one running the show – you’re free to grab all of your bits and pieces at any time, and move to a different provider. That flexibility of not being locked into one specific cloud service (which can suddenly take a bad turn) is what’s precious to me.
And on a related note, I also make sure that this applies to my software-stack too – I’m not running anything that would be annoying to swap out if it turns bad.
Is a cheap VPS on hetzner where I installed python, PieFed and its Postgres database, but also nginx and letsencrypt manually by myself, and pointed my domain to it, selfhosting?
I don’t get hung up on the definitions and labels. I run a hybrid of 3 vps and one rack in the closet. I’m totally fine with you thinking that is not selfhosting or homelabbing. LOL I have a ton of fun doing it, and that’s the main reason why I do it; to learn and have fun. It’s like producing music, or creating bonsai, or any of the other many hobbies I have.
I’d say you need storage. Once you get storage, use cases start popping up into view over time.
Your stuff is still in the cloud, so I would say no. It’s better than using the big tech products, but I wouldn’t say it’s fully “self hosted”. Not that that really makes much of a difference. You’re still pretty much in control of everything, so you should be fine.
Where is the tipping point though? If I have a server at my parents’ house (they live in Germany and I in Korea), does my dad host it then, because he is paying for the electricity and the access to the internet and makes sure those things work?
Your parents’ house isn’t the cloud, so yeah, it’s self hosted. The “tipping point” is whether you’re using a hosting provider.
They are using a hosting provider - their dad.
“The cloud” is also just a bunch of machines in a basement. Lots of machines in lots of “basements”, but still.
“hosting provider” in this instance I think means “do you pay them (whoever has the hardware in their possession) a monthly/quarterly/yearly fee”
otherwise you can also say “well ACTUALLY your isp is providing the ability to host on the wan so they are the real hosting provider” and such…
Their dad is not a hosting provider. I mean, maybe he is, but that would be really weird.
Isn’t my dad the hosting provider? I ordered the hardware, he connected it to his switch and his electricity and pressed the button to start it the first time. From there on I logged in to his VPN and set up the server like I would at Hetzner.
But you’re right, it doesn’t really make a difference. I feel the only difference it makes for me is where I post my questions on Lemmy: in a !selfhosting community or a !linux community.
From a feeling perspective, even if I use Hetzner’s cloud, I feel I self-host my single-user PieFed instance (and Matrix, my other websites, Mastodon, etc.) because I have to perform basically the same steps as for the things I’m really hosting at home, like open-webui, Immich and PeerTube.
A hosting provider is a business. If your dad is a business and you are buying hosting services from him, then yes, he is a hosting provider and you are not self hosting. But that’s not what you’re doing. You’re hosting on your own hardware on your family’s internet. That’s self hosting.
When you host on Hetzner, you’re hosting on their hardware using their internet. That’s not self hosting. It’s similar, cause like you said, you have to do a lot of the same administration work, but it’s not self hosting.
Where it gets a little murky is rack space providers. Then you’re hosting on your own hardware, but it’s not your own internet, and there’s staff there to help you… kinda iffy whether you’re self hosting, but I’d say yeah, since you own the hardware.
Personally, I’d say no. At that point you are administering it, not hosting it yourself.
Why wouldn’t you just use Docker or Podman?
Manually installing stuff is actually harder in a lot of cases
Yeah why wouldn’t you want to know how things work!
I obviously don’t know you, but to me it seems that a majority of Docker users know how to spin up a container, but have zero knowledge of how to fix issues within their containers, or to create their own for their custom needs.
That’s half the point of the container… You let an expert set it up so you don’t have to know it on that level. You can manage far more containers this way.
OK, but I’d rather be the expert.
And I have no trouble spinning up new services, fast. Currently sitting at around ~30 Internet-facing services, 0 docker containers, and reproducing those installs from scratch + restoring backups would be a single command plus waiting 5 minutes.
I’d rather be the expert
Fair, but others, unless they are getting paid for it, just want their shit to work. Same as people who take their cars to a mechanic instead of wrenching on it themselves, or calling a handyman when stuff breaks at home. There’s nothing wrong with that.
I literally get paid to do this type of work and there is no way for me to be an expert in all the services that our platform runs. Again, that’s kind of the point. Let the person who writes the container be the expert. I’ll provide the platform, the maintenance, upgrades, etc… the developer can provide the expertise in their app.
A lot of times it is necessary to build the container oneself, e.g., to fix a bug, satisfy a security requirement, or because the container as-built just isn’t compatible with the environment. So in that case would you contract an expert to rebuild it, host it on a VM, look for a different solution, or something else?
reproducing those installs from scratch + restoring backups would be a single command plus waiting 5 minutes.
Is that with Ansible or your own tooling or something else?
NixOS :)
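To give a flavor: the whole machine is one declarative config, and `sudo nixos-rebuild switch` makes the running system match it. A minimal sketch (not my actual config; hostname, port and email are made up):

```nix
# /etc/nixos/configuration.nix (hypothetical fragment)
{ config, pkgs, ... }:
{
  # declare a TLS-terminating reverse proxy; NixOS wires it all up on rebuild
  services.nginx = {
    enable = true;
    virtualHosts."example.org" = {
      enableACME = true;   # Let's Encrypt, handled declaratively
      forceSSL = true;
      locations."/".proxyPass = "http://127.0.0.1:8096";
    };
  };
  security.acme = {
    acceptTerms = true;
    defaults.email = "admin@example.org";  # made-up address
  };
}
```

Roll the same file onto a fresh machine and you get the same system, which is where the “single command plus 5 minutes” comes from.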
Maybe I should have clarified that liking bare-metal does not imply disliking abstraction
I’ve been wanting to tinker with NixOS. I’m stuck in the stone ages, automating VM deployments on my Proxmox cluster using Ansible. One line and about 30 minutes (the CUDA install is a beast) to build a reproducible VM running llama.cpp with llama-swap.
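The task doing the heavy lifting is roughly this shape (a sketch, not the actual playbook; the API host, node, template and storage names are made up):

```yaml
# hypothetical Ansible task: clone a new VM from an existing Proxmox template
- name: Clone a llama.cpp VM from a template
  community.general.proxmox_kvm:
    api_host: pve.example.lan   # made-up API endpoint
    api_user: root@pam
    api_password: "{{ vault_proxmox_password }}"
    node: pve
    clone: llama-template       # existing template to clone from
    name: llama-cpp-01          # name of the new VM
    storage: local-lvm
    state: present
```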
30, that’s cute. I currently have 70 containers running on my home server. That doesn’t include any lab I run or the stuff I use at work. Containers make life much easier. I also guarantee you don’t know those apps as well as you think you do either. Just being able to install and configure something doesn’t mean you know the inner workings of them.
I used to do the same thing you do. Eventually, I decided I would rather spend my time doing other things, or learning certain things more in-depth and being okay with a working knowledge of others. It can be fun and rewarding to do things the hard way, but don’t kid yourself and think you’re somehow superior for doing it that way.
Containers != services.
I don’t think I am better than anyone. I jumped into these comments because docker was pushed as superior, unprompted.
Installing and configuring does not an expert make, agreed; but that’s not what I said.
I would say I’m pretty knowledgeable about the things I host though, seeing as I am a contributor and / or package maintainer for a number of them…
Correct, not all containers are for services. I would never say that docker is superior. I would however say that containers are (I can be pedantic too). They’re version-controlled, they come with the correct dependencies, etc… There are many reasons why developing with containers is superior and I’m sure you’re aware of them already. Everyone is moving to do exactly that. There are always edge cases, but those are few and far between these days.
You can customize or build custom containers with a Dockerfile
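Something like this, e.g. to drop your own config or an extra tool on top of an upstream image (image, file and package here are placeholders):

```dockerfile
# hypothetical example: extend a stock image instead of rebuilding it
FROM nginx:1.27

# overlay your own config on top of the upstream defaults
COPY my-site.conf /etc/nginx/conf.d/default.conf

# add a tool the stock image lacks
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
```

Then `docker build -t my-nginx .` and run it like any other image.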
Also, I want to know how containers work. That’s way more useful.
I use apps on my phone, but have no clue how to troubleshoot them. I have programs on my computer that I hardly know how to use, let alone know the inner workings of. How is running things in Docker any different? Why put down people who have an interest in running things themselves?
I know you’re just trying to answer the above question of “why do it the hard way”, but it struck me as a little condescending. Sorry if I’m reading too much into it!
No, I actually think that is a good analogy. If you just want to have something up and running and use it, that’s obviously totally fine and valid, and a good use-case of Docker.
What I take issue with is the attitude which the person I replied to exhibits, the “why would anyone not use docker”.
I find that to be a very weird reaction to people doing bare metal. But also I am biased. ~30 Internet facing services, 0 docker in use 😄
This is interesting to me. I run all of my services, custom and otherwise, in docker. For my day job, I am the sole maintainer of all of our docker environment and I build and deploy internal applications to custom docker containers and maintain all of the network routing and server architecture. After years of hosting on bare metal, I don’t know if I could go back to the occasional dependency hell that is hosting a ton of apps at the same time. It is just too nice not having to think about what version of X software I am on and to make sure there isn’t incompatibility. Just managing a CI/CD workflow on bare metal makes me shudder.
Not to say that either way is wrong, if it works it works imo. But, it is just a viewpoint that counters my own biases.
I did that first, but it always required many more resources than doing it yourself, because every Docker container starts its own database and its own nginx/apache server in addition to the software itself.
Now I have just one PostgreSQL instance running, with many users and databases on it. Also just one Nginx, which does all the virtual-host stuff in one central place. Both the things I install with apt and the things I install manually are set up similarly.
I use one Docker setup for firefox-sync, but only because doing it manually is not documented, and even the Docker way took quite some time of research.
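Adding a new app to the shared instance is just a couple of statements (names and password are placeholders):

```sql
-- run once per app on the shared PostgreSQL instance
CREATE USER piefed WITH PASSWORD 'change-me';
CREATE DATABASE piefed OWNER piefed;
-- then point the app at e.g. postgres://piefed:change-me@localhost:5432/piefed
```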
What? No it doesn’t… You could still have just one PostgreSQL database if you wanted just one. It is a bit antithetical to microservices, but there is no reason you can’t do it.
But then you can’t just use the containers provided by the service developers and have to figure out how to redo their container, which in the end is more work than just running it manually.
I have very rarely run into such issues. can you give an example of something that works like that? it sounds very half-assed by the developer. only pihole comes to mind right now (except for the db part, because I think it uses sqlite)
edit: I now see your examples
Typically, the container image maintainer will provide environment variables which can override the database connection. This isn’t always the case but usually it’s as simple as updating those and ensuring network access between your containers.
Some examples:
- Lemmy: https://github.com/LemmyNet/lemmy-ansible/blob/main/templates/docker-compose.yml#L81
- Firefox Sync: https://github.com/mozilla-services/syncstorage-rs/blob/master/docker-compose.mysql.yaml#L13
- TinyTinyRSS: https://gitlab.tt-rss.org/tt-rss/tt-rss/-/blob/master/docker-compose.yml?ref_type=heads#L10
- Mastodon: https://github.com/mastodon/mastodon/blob/main/docker-compose.yml#L5
- PeerTube: https://github.com/Chocobozzz/PeerTube/blob/develop/support/docker/production/docker-compose.yml#L71
and many more.
all of these run the database in a separate container, not inside the app container. the latter would be hard to fix, but the first is just that way to make documentation easier, to be able to give you a single compose file that is also functional in itself. none of them use their own builds of the database server (though lemmy with its postgres variant may be a bit of an outlier), so they are relatively easy to configure for an existing db server.
all I do in cases like this is look up the database initialization command (in the docker compose file), run that in my primary postgres container, then create a new docker network and attach it to both the postgres stack and the new app’s stack (stack: the container composition defined by the docker compose file). and then I tell the app container, usually through envvars or command line parameters embedded in the compose file, that the database server is at hostname xy, and docker’s internal DNS server will know that for hostname xy it should return the IP address of the container named xy, through the appropriate docker network. and also the user and pass for the connection. from then on, from the app’s point of view, my database server in that other container is just like a dedicated physical postgres machine you put on the network with its own cable going to a switch.
unless there are very special circumstances where the app needs a custom build of postgres, they can share a single instance just fine. but in that case you would have to run 2 postgres instances even without docker, or migrate to the modified postgres, which is an option with docker too.
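as a concrete sketch of the app side (hypothetical service and envvar names; real apps document their own, as in the links above):

```yaml
# app's docker-compose.yml, pointed at the shared postgres container
services:
  someapp:
    image: example/someapp:latest
    environment:
      DB_HOST: postgres    # docker's DNS resolves the container name
      DB_USER: someapp
      DB_PASS: change-me
      DB_NAME: someapp
    networks:
      - shared-db

networks:
  shared-db:
    external: true         # created once with: docker network create shared-db
```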
Well, yes that’s best practice. That doesn’t mean you have to do it that way.
You absolutely can. It’s not like the developers of postgresql maintain a version of postgresql that only allows one db. You can connect to that db and add however many things you want to it.
Truly awesome that this hobby is getting coverage! I’m very very lazy when it comes to self-hosting, by far my largest project was moving off Spotify and archiving all my playlists.
Rotating 3 API keys for spotdl and a YTP free trial for that sweet sweet 256kbps AAC, then MusicBrainz Picard to correctly label all the music (automatic tagging was almost always wrong), then automating rebuilding the m3u8 playlists, followed by the insane work of correcting all the little imperfections. Must’ve taken me like 2-3 weeks of just working on it most of the day.
But the result? A proper offline music library with all my main playlists, each song at the proper position and order, with the correct (Spotify) metadata, using the correct versions of the songs in at least 256kbps AAC (and in many cases FLAC, and where available non-vinyl hi-res).
Tossed it all on an old Dell workstation I got for £50, hosting Navidrome, where my JF, qBittorrent-nox and Immich also live. Using Symfonium on my phone. Can access remotely via OpenVPN. Couldn’t be happier.
Dude, Navidrome is so great. I hooked my decades’ worth of music collection up to it and now I can stream b-side tracks and indie bands that weren’t on Spotify. Plus, when I hit random, I know it’s actually random and not some algo to sell the newest slop that Spotify is pushing.
I didn’t think I’d get as much of a kick out of knowing that my random shuffle is truly random, but I do.
Self hosting music that I purchased is a really liberating feeling
I grew out of just about everything in my old digital library so it’s been long gone, but I didn’t realize just how much stuff I had on my old bandcamp account already. Grabbed all of that, bought a bunch more, obtained everything else from my Tidal rotations and slapped it all into Navidrome.
The initial setup is definitely a pain but the payoff has been tremendous. Not financially though - I spent more buying new shit from small artists than I would spend on a streaming service in a year. But that goes so much further for them than streaming does anyway.
Do you need to buy it again next year? Great investment!
I’m curious if this community would do a community survey.
If it didn’t ask me irrelevant personal information
What’s your favorite color?
Not relevant
I refuse to answer that or any other question.
What’s your favorite color?
Ethan Sholly has done surveys before on his website selfh.st
edit: I’m an idiot
Don’t be hard on yourself.
Learn Podman since Docker has some licensing restrictions in some cases.
Quadlet is a game changer
It is less user-friendly, but theoretically more powerful and more secure.
The learning curve can be steep but if you have ever worked with config files it isn’t bad.
The worst part about quadlets, IMO, is that they don’t use the same keywords as podman run does, so turning a working podman container into a quadlet can be challenging.
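For reference, a quadlet is a systemd unit file rather than a CLI invocation, so the flags map to keys (e.g. -p becomes PublishPort=). A minimal sketch, with a made-up image and paths:

```ini
# ~/.config/containers/systemd/myapp.container (hypothetical example)
[Unit]
Description=My self-hosted app

[Container]
Image=docker.io/example/myapp:latest
# equivalent of 'podman run -p 8080:8080'
PublishPort=8080:8080
# equivalent of 'podman run -v /home/user/myapp-data:/data'
Volume=/home/user/myapp-data:/data
Environment=TZ=Europe/Berlin

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it starts like any other unit: `systemctl --user start myapp`.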