

For VR, apparently AMD is the way to go, but they don’t have HDR over HDMI 2.1, so no matter what there are tradeoffs currently, at least on my setup
To be honest, a lot of my issues are probably just getting used to Plasma over GNOME. The atomic part so far hasn’t really been inconvenient
Yeah, apparently the current fix is to just get an AMD card or a wireless headset. I’m trying to get a SteamVR alternative running, which is non-trivial on Bazzite, but it just “avoids” the bugs in the Nvidia wired drivers
Bazzite for the past two days has not been as easy as everyone makes it sound, and I say this as a software engineer who works with Linux five days a week. Some of the UI choices are just weird, and VR support with Nvidia is so horrible I may end up having to dual boot.
My wife’s Pixel, I think a 7a, had the battery issue. It started going from 20% to 0 instantly, then the back expanded and smelled like burning. Google replaced the battery, but the cracked rear case was not covered… It was less than 1.5 years old
We do already know about model collapse, though: GenAI is essentially eating its own training data. And we do know that you need a TON of data to do even one thing well. Even then, it only does well on things strongly matching its training data.
Most people throwing around the word “agents” have no idea what they mean vs. what the people building and promoting them mean. Agents have been around for decades, but what most are building is just using GenAI for natural language processing to call scripted Python flows. The only way to make them look coherent reliably is to remove as much responsibility from the LLM as possible. Multi-agent systems just compound the errors. The current best practice for building agents is “don’t use an LLM; if you do, don’t build multiple.” We will never get beyond the current techniques essentially being seeded random generators, because that’s what they are intended to be.
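The pattern described above (“remove as much responsibility from the LLM as possible”) can be sketched roughly like this. Everything here is hypothetical for illustration: the LLM call is stubbed out with a keyword match so the example actually runs, but in a real system `classify_intent` would be a single constrained model call, and all the response logic would stay in deterministic Python:

```python
def classify_intent(user_text: str) -> str:
    """Stand-in for the one LLM call: map free text to a known intent.
    In a real agent this would be a constrained prompt; here it is a
    trivial keyword match so the sketch is self-contained and runnable."""
    text = user_text.lower()
    if "refund" in text:
        return "refund"
    if "status" in text:
        return "order_status"
    return "fallback"

# Scripted, deterministic flows -- the LLM never touches this logic.
def handle_refund() -> str:
    return "Opening a refund request."

def handle_order_status() -> str:
    return "Looking up your order status."

def handle_fallback() -> str:
    return "I can only help with refunds or order status."

FLOWS = {
    "refund": handle_refund,
    "order_status": handle_order_status,
    "fallback": handle_fallback,
}

def agent(user_text: str) -> str:
    # The only probabilistic step is intent classification; the response
    # path is fully scripted, which is what keeps the output coherent.
    return FLOWS[classify_intent(user_text)]()
```

The point of the structure is that the model’s output space is reduced to picking one key out of `FLOWS`; the actual behavior is plain code you can test.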
It seems like you are implying that models will follow Moore’s law, but as someone working on “agents” I don’t see that happening. There is a limit to how much can be encoded while still producing things that look like coherent responses. Where we would reliably get exponentially more training data is another issue. We may get “AI,” but it isn’t going to be based on LLMs
I tried Mint and Ubuntu on a jailbroken Chromebook and neither had audio; Fedora worked out of the box