cross-posted from: https://lemmy.dbzer0.com/post/50693956
Transcript
A post by @[email protected] saying: courtesy of @[email protected], Proton is now the only privacy vendor I know of that vibe codes its apps: In the single most damning thing I can say about Proton in 2025, the Proton GitHub repository has a “cursorrules” file. They’re vibe-coding their public systems. Much secure! I am once again begging anyone who will listen to get off of Proton as soon as reasonably possible, and to avoid their new (terrible) apps in any case. https://circumstances.run/@davidgerard/114961415946154957
It has a reply by the author saying: in an unsurprising update for those familiar with how Proton operates, they silently rewrote their monorepo’s history to purge .cursor and hide that they were vibe coding: https://github.com/ProtonMail/WebClients/tree/2a5e2ad4db0c84f39050bf2353c944a96d38e07f
given the utter lack of communication from Proton on this, I can only guess they’ve extracted .cursor into an external repository and continue to use it out of sight of the public
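For anyone wondering what “rewrote their monorepo’s history to purge .cursor” means mechanically: a .cursorrules file is just a plain-text instruction file that the Cursor editor feeds to its AI assistant, and purging a path from an entire git history is usually done with a history-rewriting tool. Here is a minimal sketch using git-filter-repo; the actual commands Proton ran, if any, are not public, so this is purely illustrative:

```
# Sketch only -- git-filter-repo is a separate tool, not built into git.
# This rewrites every commit in the repo, dropping the .cursor path from history.
git filter-repo --invert-paths --path .cursor

# The rewritten branch then has to be force-pushed over the old one.
git push --force origin main
```

Rewriting history only moves the branch pointer, though: GitHub tends to keep orphaned commits reachable by their SHA for some time, which is why the old tree in the link above still loads.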
Mastodon at it again with pitchforks and torches for the slightest inconvenience.
Using Cursor doesn’t prove anything. Many people use Cursor as an advanced autocomplete, nothing else. It’s not like they’re hammering out random AI-generated code and merging it without thinking. “Vibe coding” means generating barely-working code you don’t understand just to get things working.
This shit is why I hate the Mastodon community; it’s always strawmen and “you’re one of THEM” style witch hunts with them.
Here I am just thinking I’m a better programmer without AI (LLMs).
For me it’s just glorified autocomplete. I haven’t tried it in any real capacity, but my colleagues did and I’ve seen some examples. It’s all basic shit I already know. At no point did I feel compelled, nor have I seen anything really useful. It can give you a head start, but I already have the knowledge to have a head start.
Some colleagues are using it for SQL, because they’re unfamiliar with it, and I’m like, it’s all good if it works for you, but you’re not gonna learn properly if you don’t try to write stuff yourself.
This touches on another point I don’t see raised too often: I code because I like solving problems. If I outsource that, then what’s the point? And it’s exactly this that makes me a competent, and dare I say, good programmer.
Another issue for me is this chat bot format. I don’t want a chat bot! If I have to go out of my way to coerce a fucking chat bot into being a useful tool, then it has already lost its usefulness. The only acceptable format for AI coding is better autocomplete, i.e. the ability to autofill boilerplate more often, more accurately and, most importantly, as seamlessly as the current solutions in modern IDEs.
In general I don’t feel threatened by AI, and when the tools catch up I’ll gladly use them, or even retire and code just for fun.
See my comment here.
The anti-AI circlejerk, even here on Lemmy, is now just about as bad as the pro-AI circlejerk in the general public: no room for nuance or rational thinking, just dunking on everyone who says anything remotely positive about AI, like when I said I like the autocomplete feature of Copilot.
I’m a pretty big generative AI hater when it comes to art and writing. I don’t think generative AI can make meaningful art because it cannot come up with new concepts. Art is something that AI should be freeing up time in our lives for us to do. But that’s not how it’s shaping up.
However, AI is very helpful for understanding codebases and doing things like autocompletion. This is because code is less expressive than human language and it’s easier for AI to approximate what is necessary.
You’re not alone. Nuance is just harder to convey; it takes more effort to post something nuanced, and so people do it less, myself included. But truthfully, I think many people are not so stuck in one circlejerk or the other. It’s lovely to see people in this thread who are annoyed by both.
Natural language processing makes TTS way more usable for people with reading disabilities. But there are absolutely no good uses of AI.
What about cancer research? AI is bad when it’s being used to find cures?
People refer to generative AI when they just say “AI” nowadays.
There are a ton of small, single purpose neural networks that work really well, but the “general purpose” AI paradigm has wiped those out in the public consciousness. Natural language processing and modern natural sounding text to speech are by definition AI as they use neural networks, but they’re not the same as ChatGPT to the point that a lot of people don’t even consider them AI.
Also AI is really good at computing protein shapes. Not in a “ChatGPT is good enough that it’s not worth hiring actual writers to do it better” way, in a “this is both faster and more accurate than any other protein folding algorithm we had” way.
Yeah, people don’t realize how huge this kind of thing is. We’ve been trying for YEARS to figure out how to correctly model protein structures of novel proteins.
Now, people have trained a network that can do it, and, using the same methods used to generate images (diffusion models), they can also describe an arbitrary set of protein properties/shapes and the AI will generate the string of amino acids most likely to produce it.
The LLMs and diffusion models that generate images are neat little tech toys that demonstrate a concept. The real breakthroughs are not as flashy and immediately obvious.
For example, we’re starting to see AI robotics: networks trained to operate a specific robot body in dynamic situations. Manually programming robotics is HARD and takes a lot of engineers and math. Training a neural network to operate a robot is, comparatively, a simple task which can be done without the need for experts (once there are pretrained foundational models).
I’m personally scared of AI (not angry or hateful, actually scared by just how fast it’s advancing) and that definitely clouds my judgement of it and makes nuance difficult.
It’s like a deal with the devil. You see all these amazing benefits but you just know you’re the one being taken advantage of, because, like the devil, AI corporations by definition only think about how you can be of use to them.
Yep, anyone who assumes that the presence of a .cursor directory automatically means a team is vibe coding is either arguing in bad faith or has no idea what they’re talking about.
It could be something as simple as one dev trying out Cursor (an editor that’s literally just a VS Code fork with AI features) and accidentally committing their .cursor directory, which is really easy to do (the cleanup is sketched after this comment).
Also, I don’t think most people understand just how ineffective true vibe coding is. I tried it a few times and could barely get something slightly more complex than a demo todo app working, and even when it worked, the user experience was barely prototype quality. There is zero chance somebody is deploying vibe-coded features into a large, serious production system without suffering major and immediate consequences, because shit just wouldn’t work at all.
The best you’re going to get out of it is that it shortens the amount of time wasted on tiny adjustments to the UI or something.
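On the accidental-commit point above, the cleanup is a couple of git one-liners, assuming you only want the directory untracked going forward (a sketch, not anyone’s actual workflow):

```
# Untrack an accidentally committed .cursor directory, keeping the local files.
git rm -r --cached .cursor

# Ignore it from now on.
echo ".cursor/" >> .gitignore

git add .gitignore
git commit -m "Stop tracking .cursor editor config"
```

Note this only removes the directory from new commits; it stays in the old ones, which is where a full history rewrite like the one sketched earlier comes in.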
This gets into the question of what, if anything, AI “should” be used for.
I’ve heard responses to this go both ways. Some people argue that saving time on repetitive simple tasks is exactly what AI “should” be used for. Others say that if you can’t even manage something as simple and repetitive as a tiny UI adjustment, you shouldn’t be in a development job to begin with; or that you’re stealing from the programmers whose code was scraped for training data and who aren’t being paid while you are, and that maybe you should be fired and they should be hired instead.
IDK what the right answer is; I think this is something I’ll struggle with for ages while unscrupulous people use AI for everything and anything.
Seriously, WTF is this elitism?
Do these people also walk everywhere because they think a bike, train, or car is somehow disingenuous? What hypocrites.