So after we’ve extended the virtual cloud server twice, we’re at the max for the current configuration. And with this crazy growth (almost 12k users!!), even this server is steadily reaching capacity.
Therefore I decided to order a dedicated server. Same one as used for mastodon.world.
So the bad news… we will need some downtime. Hopefully not too much. I will prepare the new server, copy (rsync) everything over, stop Lemmy, do a last rsync, and change the DNS. If all goes well, it should take maybe 10 minutes of downtime, 30 at most. (With mastodon.world it took 20 minutes, mainly because of a typo :-) )
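For the curious, the cutover described above can be sketched roughly like this. Hostnames and paths here are made up for illustration; the real setup surely differs:

```python
# Hypothetical sketch of the minimal-downtime cutover described above.
# "new-server.example" and "/var/lib/lemmy" are invented placeholders.

def migration_steps(data_dir="/var/lib/lemmy", new_host="new-server.example"):
    """Return the ordered shell commands for the move, as argument lists."""
    return [
        # 1. Initial sync while Lemmy is still running (moves the bulk of the data)
        ["rsync", "-avz", data_dir + "/", f"{new_host}:{data_dir}/"],
        # 2. Stop Lemmy so no new writes land on the old server
        ["systemctl", "stop", "lemmy"],
        # 3. Final sync: only transfers what changed since step 1, so it's fast
        ["rsync", "-avz", "--delete", data_dir + "/", f"{new_host}:{data_dir}/"],
        # 4. (Then point DNS at the new server via the DNS provider's panel)
    ]

for cmd in migration_steps():
    print(" ".join(cmd))
```

The trick that keeps downtime short is step 3: the second rsync only copies the delta, so Lemmy is only offline for that final, small transfer.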
For those who would like to donate, to cover server costs, you can do so at our OpenCollective or Patreon
Thanks!
Update: The server was migrated. It took around 4 minutes of downtime. For those who asked, it now runs on a dedicated server with an AMD EPYC 7502P 32-core “Rome” CPU and 128GB RAM. Should be enough for now.
I will be tuning the database a bit, which may cause a few extra seconds of downtime, but just refresh and it’s back. After that I’ll investigate further into the cause of the slow posting. Thanks @[email protected] for assisting with that.
How to contribute? Do you have Patreon?
Like many others, I came from Reddit and was initially hesitant to try it out, but I love this place so much! It really feels like the “worse” parts of Reddit have been skimmed off, and that definitely shows with how nice people seem here! Thank you so much!
how nice people seem here
yes! I love the culture of this place so far
Truth is, for me, as someone who has used Reddit for about the last 16 years, it very much feels like the early days of Reddit again.
Which is a very good thing, because that’s what I originally signed up for compared to a metric fuckton of karma farming spam bots.
I just hope it gains enough traction to be sustainable in the long run, especially considering that it’s relying on donations for funding, I believe?
> metric fuckton of karma farming spam bots.
People are hard at work writing bots for lemmy so don’t worry, you’ll be able to enjoy your regular hogwash again really soon.
Personally I think lemmy should go as far out of its way as possible to make bots in any and all forms just about impossible.
Yeah, we can enjoy it while it lasts, because with more users, more questionable content will come
Found one Russian troll already. Oh well…
Edit: lol, was not referring to OP, it was some world news post comment with a Chinese username that spread misinformation about the Russian war in Ukraine. I just added my thoughts on the community.
you can easily block any user by clicking the 🚫 sign under their comment, and never have to deal with their bs again
Lesson learned today: never take anything for granted—if there’s a chance to be massively misunderstood, it will eventually happen lol
what about that post made you think they were a russian troll?
I think they meant they’ve seen one Russian troll on Lemmy already, not that skidface is a Russian troll.
I … Have to assume so, anyway
Can confirm I am not a Russian troll ;)
Wow that was fast.
this is very Reddit-y of you
Redditors made such memes a thing, we’re taking them with us where we go.
When a volunteer can run a server better than a big tech company
unsurprising pikachu face
To be fair the volunteer isn’t trying to squeeze value out of the users to inflate his IPO.
Won’t somebody think about the corporations!
This is literally a Wikipedia moment for social media, thank you @Ruud
Ruud is rad!!
Thanks for the awesome work!
Just donated $10! Appreciate all the work you all are doing to keep up with the growth.
Hello, I still don’t quite grasp the concept of federation and how the fediverse works.
But does it mean that one instance can only run on one server?
Say lemmy.world is running on Server A and lemmy.ml on Server B.
Users can register on whichever they want and can see posts from both Server A and Server B.
But when Server A reaches maximum capacity, can it scale up or distribute the load across multiple servers?
How can we solve the issue of computing power when more and more users migrate to these services?
Thank you 😀
Sorry if it’s a dumb question, but the whole federation concept is still new to me. I created multiple accounts to log in to beehaw, mastodon, lemmy.world, and lemmy.ml at first, because I didn’t know that with one account I can see communities from other instances.
I’m trying to figure out why I even saw this post! I’ve never been to lemmy.world - I’m logged in to (and currently browsing) sh.itjust.works. Not sure why it’s showing me this post.
Gonna take a while to wrap my redditor brain around this stuff!
That’s what we mean when we talk about federation!
All the instances are interconnected (unless they block each other). You can post, vote, comment, and even become a moderator of a community on any other instance.
In many ways, it’s all one big site. In many ways it’s also not, but to the end user who just wants to browse around, it’s not as important as people make it out to be.
There are some rough edges around community discovery, cross-instance linking, etc. But the devs are working hard on fixing those issues.
I understood that I would see remote (is that the right word?) communities to which I had subscribed. Am I also seeing communities to which my local users have subscribed? I don’t think I’d want that.
There’s a few tabs at the top of the feed (on the site, apps might be different)
“Local” shows all content from communities on your instance.
“All” shows content from all communities on all instances that your instance has “discovered”. Your instance will discover a remote community once at least one member of your instance searches for it and subscribes.
“Subscribed” shows content from communities you’ve subscribed to, both local and remote.
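A toy sketch of how those three tabs filter content, assuming a home instance of lemmy.world. The post and community names are made up, and real Lemmy does this server-side with a database, not in-memory lists:

```python
# Toy model of the Local / All / Subscribed feed tabs.
# HOME, the posts, and the community names are invented for illustration.

HOME = "lemmy.world"

posts = [
    {"community": "memes",      "instance": "lemmy.world"},
    {"community": "technology", "instance": "lemmy.ml"},
    {"community": "gaming",     "instance": "beehaw.org"},
]

# Communities this user subscribed to (local and remote)
subscribed = {("memes", "lemmy.world"), ("technology", "lemmy.ml")}

# "Discovered" = at least one local user has searched for / subscribed to it
discovered = {("memes", "lemmy.world"), ("technology", "lemmy.ml"),
              ("gaming", "beehaw.org")}

def feed(tab):
    if tab == "Local":
        return [p for p in posts if p["instance"] == HOME]
    if tab == "All":
        return [p for p in posts if (p["community"], p["instance"]) in discovered]
    if tab == "Subscribed":
        return [p for p in posts if (p["community"], p["instance"]) in subscribed]

print(len(feed("Local")), len(feed("All")), len(feed("Subscribed")))  # prints: 1 3 2
```

The key point is the `discovered` set: “All” is not literally everything in the fediverse, only what your instance has learned about so far.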
Thank you! That’s tremendously helpful.
Optimal would be if users spread out over many servers, instead of all coming to Lemmy.world. But most users don’t fully understand the federation concept, so they think they need to register here to see the content here.
I think the current server can handle a lot of users. It’s just the software that isn’t ready for it… but that will improve. If ever this server gets too small, next step would be to scale using Kubernetes, but also that requires the software to be better prepared for that.
You’re seeing mostly CPU bottlenecks I assume?
What’s your RAM and storage situation looking like?
Hello, after reading all the comments, I realized that I share the same questions (sort of) with the others.
Thank you for replying and clarifying things
Cheers Ruud. And thank you 😊
It does matter where you call home though because beehaw just defederated and there was quite a lot of good content there.
Perhaps having the Lemmy main site suggest servers with less load in a dynamic way would help with this. Instance xyz is now recommended on the main page due to having fewer users. The main problem I see with that is that instances are differently “themed”, and what is suggested may not match up with the user’s preferences and tastes.
Thanks for setting up and managing the instances.
This is already the case I think. But the server must meet certain requirements, including specifically opting into being recommended.
I’m not too familiar with Lemmy’s codebase, but I am a devops engineer. Is the software written in any way to support horizontal scaling? If so, I’d be happy to consult/help to get the instance onto an autoscaling platform eventually.
Doesn’t support HA or horizontal scaling yet from what I read. Unsure if kbin does. Probably would have to add support for horizontal scaling to have that auto scaling do anything.
Yeah, that’s what I was afraid of. Understandable though, since horizontal scaling/HA usually isn’t a priority when developing a new application.
The code is open source on GitHub and the backend is written in Rust.
I have no idea how it goes in terms of scaling…
Apparently it’s not ideal at horizontal scaling (that’s what I’ve picked up from reading stuff here, could be wrong)
I think they can horizontally scale the Postgres maybe? Postgres is probably the biggest performance bottleneck.
Have they implemented the postgres? Last I read they were still using websockets (I think I’m not a programmer and don’t know what all that means lmfao)
Came here from Reddit and I already love it so much more! :)
For less tech-savvy newbies (like me), in case there is some confusion affecting your urge to engage/donate… My friend gave me a great explanation:
- Lemmy the platform is planet Earth
- “Instances” like lemmy.world, lemmy.ml, beehaw.org, etc. are like the different countries on Earth
- When someone signs up, the user picks one instance to be a part of, like how an Earthling becomes a citizen of a country
- If you register at lemmy.world, that means your home instance / “home country” is lemmy.world, but you can “travel” to lemmy.ml, another instance / “country”, to check out and subscribe to their communities
- When you subscribe to a community on an instance that’s not your home instance, you can still participate in its content, and other people will be able to see which instance / “country” you’re from
- Each instance can have its own version of the same “subreddit”, so you can have a c/Memes in your home instance that is different from a c/Memes in another instance. But you can subscribe to both separately
- c/[community name] is the naming convention used here, I think, like r/[subreddit name] on Reddit. If talking about a community on a different instance, it’s c/[community name]@[instance name], so like c/[email protected]
- Donations will help with the cost of running lemmy.world only, not lemmy.ml, beehaw.org, etc.
Someone please correct any of this if any of it is wrong, I’ll happily edit
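To make the naming convention concrete, here’s a minimal sketch of how a c/name or c/name@instance reference could be split apart. This is just to illustrate the idea; the real Lemmy parsing may differ:

```python
# Hypothetical parser for the c/[community] and c/[community]@[instance]
# convention described above. The home_instance default is an assumption.

def parse_community(ref, home_instance="lemmy.world"):
    """Split 'c/memes' or 'c/memes@lemmy.ml' into (name, instance)."""
    if not ref.startswith("c/"):
        raise ValueError("expected a c/ prefix")
    body = ref[2:]
    if "@" in body:
        name, instance = body.split("@", 1)
    else:
        # A bare community name refers to your own (home) instance
        name, instance = body, home_instance
    return name, instance

print(parse_community("c/memes"))           # ('memes', 'lemmy.world')
print(parse_community("c/memes@lemmy.ml"))  # ('memes', 'lemmy.ml')
```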
Is there a way to view c/Memes in all instances at once, in aggregate? I don’t want to miss out on what other instances are doing.
Not yet, although there is ongoing discussion about this
What kind of server configuration are you guys running? A single instance?
Does it work on water now that it has MORE POWA?
Thank you. Feels super responsive.
Just curious, what sort of hardware is lemmy.world using/moving to? Wondering if there’s a good way to predict load based on number of users.
Yes. It’s called performance testing. Basically, an engineer would need to set up test user transactions to simulate live traffic and load test the system to see how everything scales, where it breaks, etc. Then you can use the results of the tests to figure out how big of an instance you should use for your projected number of users.
JMeter and locust.io are two of the biggest open-source performance test tools.
The alternative is take a wild guess. See how the system behaves, and make adjustments in real time… like what @[email protected] is currently doing.
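The core idea can be shown with just the standard library. This is a bare-bones sketch, not a substitute for JMeter or locust.io: the request function here is a stub that sleeps instead of hitting a real server, so the numbers are fake until you swap in an actual HTTP call:

```python
# Minimal load-test sketch: N concurrent "users" each issue a few requests,
# and we report latency percentiles. simulated_request is a stand-in stub.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request():
    """Stand-in for an HTTP call to the instance; replace with a real request."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the server took ~10 ms to respond
    return time.perf_counter() - start

def load_test(users=50, requests_per_user=4):
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(lambda _: simulated_request(),
                                  range(users * requests_per_user)))
    return {
        "requests": len(latencies),
        "p50_ms": statistics.median(latencies) * 1000,
        # 19th of 20 quantile cut points ~ the 95th percentile
        "p95_ms": statistics.quantiles(latencies, n=20)[-1] * 1000,
    }

print(load_test())
```

Run it with increasing `users` values and watch where p95 latency starts climbing: that knee in the curve is roughly the capacity you’re trying to estimate.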