• neclimdul@lemmy.world
    18 hours ago

    Explain this to me, AI. Reads back exactly what’s on the screen, including the comments, somehow with more words but less information. Ok…

    Ok, this is tricky. AI, can you do this refactoring so I don’t have to keep track of everything. No… That’s all wrong… Yeah, I know it’s complicated, that’s why I wanted it refactored. No, you can’t do that… fuck, now I can either toss all your changes and do it myself or spend the next 3 hours rewriting it.

    Yeah, I struggle to see how anyone finds this garbage useful.

    • SpaceCowboy@lemmy.ca
      13 hours ago

      You shouldn’t think of “AI” as intelligent and ask it to do something tricky. The boring stuff that’s mostly just typing, that’s what you get the LLMs to do. “Make a DTO for this table <paste>” “Interface for this JSON <paste>”

      I just have a bunch of conversations going where I can paste stuff into and it will generate basic code. Then it’s just connecting things up, but that’s the fun part anyway.
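      That kind of prompt tends to come back as a small, predictable blob of boilerplate. A minimal sketch of the sort of thing an LLM might return for a pasted JSON payload (the ProductDTO name and fields here are invented for illustration, and Python is used as a stand-in for whatever language the prompt asks for):

```python
from dataclasses import dataclass

# Hypothetical DTO for a JSON payload like:
#   {"id": 7, "name": "widget", "price": 9.99}
@dataclass
class ProductDTO:
    id: int
    name: str
    price: float

    @classmethod
    def from_dict(cls, data: dict) -> "ProductDTO":
        # Pull only the known fields, so extra keys in the JSON are ignored
        return cls(
            id=int(data["id"]),
            name=str(data["name"]),
            price=float(data["price"]),
        )
```

      Small details like ignoring unknown keys are worth checking by hand, since the model may or may not include them.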

      • neclimdul@lemmy.world
        6 minutes ago

        Most IDEs have done the boring stuff with templates and code generation for like a decade, so that’s not so helpful to me either, but if it works for you.

    • Damaskox@lemmy.world
      14 hours ago

      I have asked questions, had conversations for company and generated images for role playing with AI.

      I’ve been happy with it, so far.

      • neclimdul@lemmy.world
        4 minutes ago

        That’s kind of outside the software development discussion but glad you’re enjoying it.

    • FreedomAdvocate@lemmy.net.au
      14 hours ago

      Sounds like you just need to find a better way to use AI in your workflows.

      Github Copilot in Visual Studio for example is fantastic and offers suggestions including entire functions that often do exactly what you wanted it to do, because it has the context of all of your code (if you give it that, of course).

  • FreedomAdvocate@lemmy.net.au
    14 hours ago

    “Using something that you’re not experienced with and haven’t yet worked out how to best integrate into your workflow slows some people down”

    Wow, what an insight! More at 8!

    As I said on this article when it was posted to another instance:

    AI is a tool to use. Like with all tools, there are right ways and wrong ways and inefficient ways and all other ways to use them. You can’t say that they slow people down as a whole just because some people get slowed down.

  • (des)mosthenes@lemmy.world
    23 hours ago

    no shit. AI will hallucinate shit, I’ll hit tab by accident and spend time undoing that, or it’ll hijack tab on new lines inconsistently

  • kescusay@lemmy.world
    1 day ago

    Experienced software developer, here. “AI” is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I’m not worried about clobbering existing code) and I don’t want to do it by hand, it saves me time.

    And… that’s about it. It sucks at code review, and will break shit in your repo if you let it.

    • billwashere@lemmy.world
      1 day ago

      Not a developer per se (mostly virtualization, architecture, and hardware) but AI can get me to 80-90% of a script in no time. The last 10% takes a while but that was going to take a while regardless. So the time savings on that first 90% is awesome. Although it does send me down a really bad path at times. Being experienced enough to know that is very helpful in that I just start over.

      In my opinion AI shouldn’t replace coders but it can definitely enhance them if used properly. It’s a tool like everything. I can put a screw in with a hammer but I probably shouldn’t.

      • kescusay@lemmy.world
        1 day ago

        Like I said, I do find it useful at times. But not only shouldn’t it replace coders, it fundamentally can’t. At least, not without a fundamental rearchitecting of how they work.

        The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.

        On top of that, spoken and written language are very imprecise, and there’s no way for an LLM to derive what you really wanted from context clues such as your tone of voice.

        Take the phrase “fruit flies like a banana.” Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?

        It’s a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we’ve got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don’t.

        • FreedomAdvocate@lemmy.net.au
          13 hours ago

          The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.

          Not quite true - GitHub Copilot in VS for example can be given access to your entire repo/project/etc and it then “knows” how things tie together and work together, so it can get more context for its suggestions and created code.

          • kescusay@lemmy.world
            5 hours ago

            That’s still not actually knowing anything. It’s just temporarily adding more context to its model.

            And it’s always very temporary. I have a yarn project I’m working on right now, and I used Copilot in VS Code in agent mode to scaffold it as an experiment. One of the refinements I included in the prompt file to build it is reminders throughout for things it wouldn’t need reminding of if it actually “knew” the repo.

            • I had to constantly remind it that it’s a yarn project, otherwise it would inevitably start trying to use NPM as it progressed through the prompt.
            • For some reason, when it’s in agent mode and it makes a mistake, it wants to delete files it has fucked up, which always requires human intervention, so I peppered the prompt with reminders not to do that, but to blank the file out and start over in it.
            • The frontend of the project uses TailwindCSS. It could not remember not to keep trying to downgrade its configuration to an earlier version instead of using the current one, so I wrote the entire configuration for it by hand and inserted it into the prompt file. If I let it try to build the configuration itself, it would inevitably fuck it up and then say something completely false, like, “The version of TailwindCSS we’re using is still in beta, let me try downgrading to the previous version.”

            I’m not saying it wasn’t helpful. It probably cut 20% off the time it would have taken me to scaffold out the app myself, which is significant. But it certainly couldn’t keep track of the context provided by the repo, even though it was creating that context itself.

            Working with Copilot is like working with a very talented and fast junior developer whose methamphetamine addiction has been getting the better of it lately, and who has early onset dementia or a brain injury that destroyed their short-term memory.

            • FreedomAdvocate@lemmy.net.au
              1 hour ago

              Adding context is “knowing more” for a computer program.

              Maybe it’s different in VS code vs regular VS, because I never get issues like what you’re describing in VS. Haven’t really used it in VS Code.

    • MangoCats@feddit.it
      1 day ago

      I have limited AI experience, but so far that’s what it means to me as well: helpful in very limited circumstances.

      Mostly, I find it useful for “speaking new languages” - if I try to use AI to “help” with the stuff I have been doing daily for the past 20 years? Yeah, it’s just slowing me down.

      • Balder@lemmy.world
        24 hours ago

        I like the saying that LLMs are good at stuff you don’t know. That’s about it.

    • FreedomAdvocate@lemmy.net.au
      13 hours ago

      I’ve found it to be great at writing unit tests too.

      I use github copilot in VS and it’s fantastic. It just throws up suggestions for code completions and entire functions etc, and is easily ignored if you just want to do it yourself, but in my experience it’s very good.

      Like you said, using it to get the meat and bones of an application from scratch is fantastic. I’ve used it to make some awesome little command line programs for some of my less technical co-workers to use for frequent tasks, and then even got it to make a nice GUI over the top of it. Takes like 10% of the time it would have taken me to do it - you just need to know how to use it, like with any other tool.
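      Unit tests are a good fit because the output is mostly repetitive, case-by-case scaffolding. A hedged sketch of what LLM-generated tests typically look like (normalize_email and every test case here are invented for illustration):

```python
import unittest

def normalize_email(raw: str) -> str:
    """Hypothetical function under test: trim whitespace and lowercase."""
    return raw.strip().lower()

class TestNormalizeEmail(unittest.TestCase):
    # One small test per behavior: exactly the kind of boilerplate
    # that is tedious to type but easy to review.
    def test_lowercases(self):
        self.assertEqual(normalize_email("Foo@Example.COM"), "foo@example.com")

    def test_strips_whitespace(self):
        self.assertEqual(normalize_email("  a@b.com "), "a@b.com")

    def test_already_normalized(self):
        self.assertEqual(normalize_email("x@y.org"), "x@y.org")
```

      The review burden stays on the human: each generated case still has to be read to confirm it asserts the behavior you actually want.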

    • Alex@lemmy.ml
      1 day ago

      Sometimes I get an LLM to give a patch series a quick once-over before I send it. I would estimate about 50% of the suggestions are useful and about 10% are based on “misunderstanding”. Last week it was suggesting a spelling fix I’d already made, because it didn’t understand that the - in the diff meant I’d changed the line already.

    • lIlIlIlIlIlIl@lemmy.world
      1 day ago

      Exactly what you would expect from a junior engineer.

      Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.

      Something something craftsmen don’t blame their tools

      • Feyd@programming.dev
        1 day ago

        AI tools are way less useful than a junior engineer, and they aren’t an investment that turns into a senior engineer either.

        • FreedomAdvocate@lemmy.net.au
          13 hours ago

          They’re tools that can help a junior engineer and a senior engineer with their job.

          Given a database, AI can probably write a data access layer in whatever language you want quicker than a junior developer could.
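          As a sketch of what such a generated data access layer might look like for one table (the UserRepository name and schema are invented for illustration, and this assumes Python’s built-in sqlite3 rather than whatever database the comment has in mind):

```python
import sqlite3
from typing import Optional

# Hypothetical single-table data access layer of the kind an LLM
# can generate from a schema paste.
class UserRepository:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users ("
            "id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
        )

    def add(self, name: str) -> int:
        # Parameterized query; returns the new row's id
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def get(self, user_id: int) -> Optional[str]:
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None
```

          Mechanical code like this is quick to verify, which is why it suits the tool better than anything requiring design judgment.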

        • MangoCats@feddit.it
          1 day ago

          AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really “graduated” to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers “find their niche” doing something other than engineering with their engineering job titles, and that’s great, but don’t ever trust them to build you a bridge or whatever it is they seem to have been hired to do.

          Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

          Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn’t seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it definitely is still growing more powerful / capable even as it struggles with bloat and vulnerabilities.

          The question I have is: will AI continue to write “human compatible” software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.

          • AA5B@lemmy.world
            3 hours ago

            I’m seeing exactly the opposite. It used to be that junior engineers understood they had a lot to learn. With AI, however, they confidently try entirely wrong changes. They don’t understand how to tell when the AI goes down the wrong path, don’t know how to fix it, and it takes me longer to fix.

            So far ai overall creates more mess faster.

            Don’t get me wrong, it can be a useful tool; you have to think of it like autocomplete or internet search. Just like those tools, it provides results, but the human needs judgement and needs to figure out how to apply the appropriate results.

            My company wants metrics on how much time we’re saving with ai, but

            • I have to spend more time helping the junior guys out of the holes dug by ai, making it net negative
            • it’s just another tool. There’s not really a defined task or set time. If you had to answer how much time autocomplete saved you, could you provide any sort of meaningful answer?
            • MangoCats@feddit.it
              28 minutes ago

              I’ve always had problems with junior engineers (self included) going down bad paths, since before there was Google search - let alone AI.

              So far ai overall creates more mess faster.

              Maybe it is moving faster, maybe they do bother the senior engineers less often than they used to, but for throw-away proof of concept and similar stuff, the juniors+AI are getting better than the juniors without senior support used to be… Is that a good direction? No. When the seniors are over-tasked with “Priority 1” deadlines (nothing new) does this mean the juniors can get a little further on their own and some of them learn from their own mistakes? I think so.

              Where I started, it was actually the case that the PhD senior engineers needed help from me fresh out of school - maybe that was a rare circumstance, but the shop was trying to use cutting-edge stuff that I knew more about than the seniors. Basically, everything in 1991 was cutting edge, and it made the difference between getting something that worked or having nothing if you didn’t use it. My mentor was expert in another field, so we were complementary that way.

              My company (now) wants metrics on a lot of things, but they also understand how meaningless those metrics can be.

              I have to spend more time helping the junior guys out of the holes dug by ai, making it net negative

              https://clip.cafe/monsters-inc-2001/all-right-mr-bile-it/

              Shame. There was a time that people dug out of their own messes; I think you learn more, faster, that way. Still, I agree - since 2005 I have spent a lot of time taking piles of Matlab, Fortran, and Python that had been developed over years to reach critical mass - add anything else to them and they’ll go BOOM - and translating those into commercially salable / maintainable / extensible Qt/C++ apps. And I don’t think I ever had one “mentee” through that process who was learning how to follow in my footsteps; the organizations were always just interested in having one thing they could sell, not really a team that could build more like it in the future.

              it’s just another tool.

              Yep.

              If you had to answer how much time autocomplete saved you, could you provide any sort of meaningful answer?

              Speaking of meaningless metrics, how many people ask you for Lines Of Code counts, even today?

          • Feyd@programming.dev
            1 day ago

            Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

            LOL sure

            • MangoCats@feddit.it
              1 day ago

              LOL sure

              I’m not talking about the ones that get hired in your 'leet shop, I’m talking about the whole damn crop that’s just graduated.

          • Feyd@programming.dev
            1 day ago

            It is based on my experience, which I trust immeasurably more than rigged “studies” done by the big LLM companies with clear conflict of interest.

            • queermunist she/her@lemmy.ml
              22 hours ago

              Okay, but like-

              You could just be lying.

              You could even be a chatbot, programmed to hype AI in comments sections.

              So I’m going to trust studies, not some anonymous commenter on the internet who says “trust me bro!”

              • Feyd@programming.dev
                22 hours ago

                Huh? I’m definitely not hyping AI. If anything it would be the opposite. We’re also literally in the comment section for a study about AI productivity, which is the first remotely reputable study I’ve ever seen. The rest have been rigged marketing stunts. As far as judging my opinion about the productivity of AI versus junior developers against studies goes, why don’t you bring me one that isn’t “we made an artificial test then directly trained our LLM on the questions so it will look good for investors”? I’ll wait.

        • errer@lemmy.world
          1 day ago

          Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…

          • Feyd@programming.dev
            1 day ago

            The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.

          • finalarbiter@lemmy.dbzer0.com
            1 day ago

            This line of thought is short sighted. Your senior engineers will eventually retire or leave the company. If everyone replaces junior engineers with ai, then there will be nobody with the experience to fill those empty seats. Then you end up with no junior engineers and no senior engineers, so who is wrangling the ai?

            • errer@lemmy.world
              22 hours ago

              This isn’t black and white. There will always be some junior hires. No one is saying replace ALL of them. But hiring 1 junior engineer instead of 3? Maybe…and that’s already happening to some degree.

              • queermunist she/her@lemmy.ml
                22 hours ago

                And when the current senior programmers retire the field of juniors that are coming to replace them will be much smaller.

                • bitwize01@reddthat.com
                  22 hours ago

                  Not that I agree, but if you believe that the LLMs will continuously improve, then in 5-10 years you may only need 1/3rd the seniors, to oversee and prompt. Again, that’s what these CEOs are relying on.

          • lIlIlIlIlIlIl@lemmy.world
            1 day ago

            Even at $100/month you’re comparing to a > $10k/month junior. 1% of the cost for certainly > 1% functionality of a junior.

            You can see why companies are tripping over themselves to push this new modality.

            • errer@lemmy.world
              1 day ago

              I was just ballparking the salary. Say it’s only 100x. Does the argument change? It’s a lot more money to pay for a real person.

      • 5too@lemmy.world
        1 day ago

        The difference being junior engineers eventually grow up into senior engineers.

      • corsicanguppy@lemmy.ca
        1 day ago

        Exactly what you would expect from a junior engineer.

        Except junior engineers become seniors. If you don’t understand this … are you HR?

        • lIlIlIlIlIlIl@lemmy.world
          1 day ago

          They might become seniors for 99% more investment. Or they crash out as “not a great fit” which happens too. Juniors aren’t just “senior seeds” to be planted

          • FreedomAdvocate@lemmy.net.au
            13 hours ago

            Interesting downvotes, especially how there are more than there are upvotes.

            Do people think “junior” and “senior” here just relate to age and/or time in the workplace? Someone could work in software dev for 20 years and still be a junior dev. It’s knowledge and skill level based, not just time-in-industry based.

  • Feyd@programming.dev
    1 day ago

    Fun how the article concludes that AI tools are still good anyway, actually.

    This AI hype is a sickness

    • AA5B@lemmy.world
      3 hours ago

      For some of us that’s more useful. I’m currently playing a DevSecOps role, and one of the defining characteristics is that I need to know all the tools. On Friday I was writing some Java modules, then some Groovy glue, then spent the afternoon writing a Python utility. While I’m reasonably good about jumping among languages and tools, those context switches are expensive. I definitely want AI help with that.

      That being said, AI is just a step up from search or autocomplete; it’s not magical. I’ve had the most luck with it generating unit tests, since they tend to be simple and repetitive (also a major place for juniors to screw up: AI doesn’t know whether the slop it’s pumping out is useful). You do need to guide it and understand it, and you really need to cull the dreck.

  • astronaut_sloth@mander.xyz
    1 day ago

    I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren’t detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I’ll give its output a once over to check it with an eye to the details of implementation. It’s nice to get the boilerplate out of the way quickly.

    Don’t get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution–a silver bullet–and it’s not.

    This leads to my biggest fear for the AI field of Computer Science: reality won’t live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML use cases will be stopped, and real academic research will dry up.

    • 5too@lemmy.world
      1 day ago

      My fear for the software industry is that we’ll end up replacing junior devs with AI assistance, and then in a decade or two, we’ll see a lack of mid-level and senior devs, because they never had a chance to enter the industry.

      • squaresinger@lemmy.world
        1 day ago

        That’s happening right now. I have a few friends who are looking for entry-level jobs and they find none.

        It really sucks.

        That said, the future lack of developers is a corporate problem, not a problem for developers. For us it just means that we’ll earn a lot more in a few years.

        • 5too@lemmy.world
          1 day ago

          You’re not wrong, and I feel like it was a developing problem even before AI - everybody wanted someone with experience, even if the technology was brand new.

          That said, even if you and I will be fine, it’s still bad for the industry. And even if we weren’t the ones pulling up the ladder behind us, I’d still like to find a way to start throwing ropes back down for the newbies…

          • CosmicTurtle0@lemmy.dbzer0.com
            1 day ago

            They wanted someone with experience, who can hit the ground running, but didn’t want to pay for it, either with cash or time.

            • cheap
            • quick
            • experience

            You can only pick two.

          • squaresinger@lemmy.world
            1 day ago

            You’re not wrong, and I feel like it was a developing problem even before AI - everybody wanted someone with experience, even if the technology was brand new.

            True. It was a long-standing problem that entry-level jobs were mostly found in dodgy startups.

            Tbh, I think the biggest issue right now isn’t even AI, but the economy. In the 2010s we had pretty much no interest rate at all while having a pretty decent economy, at least for IT. The 2008 financial crisis hardly mattered for IT, and Covid was a massive boost for IT. There was nothing else to really spend money on.

            IT always has more projects than manpower, so with enough money to spend, they just hired everyone.

            But the sanctions against Russia in response to their invasion of Ukraine really hit the economy, and rising interest rates to combat inflation meant that suddenly nobody wanted to invest anymore.

            With no investments, startups dried up and large corporations also wanted to downsize. It’s no coincidence that return-to-work mandates only started after the invasion and not in the two years prior, when lockdowns had already been revoked. Work from home worked totally fine for two years after the Covid lockdowns, and companies even praised how well it worked.

            Same with AI. While it can improve productivity in some edge cases, I think it’s mostly a scapegoat to make mass firings sound like a great thing to investors.

            That said, even if you and I will be fine, it’s still bad for the industry. And even if we weren’t the ones pulling up the ladder behind us, I’d still like to find a way to start throwing ropes back down for the newbies…

            You are totally right with that, and any chance I get I will continue to push for hiring juniors.

            But I am also over corporate tears. For decades they have been crying about a lack of skilled workers in IT and pushing for more and more people to join, so that they can dump wages, and as soon as the economy is bad, they instantly u-turn and dump employees.

            If corporations want to be short-sighted and make people suffer for it, they won’t get compassion from me when it fails.

            Edit: Remember, we are not the ones pulling the ladder up.

            • knexcar@lemmy.world
              22 hours ago

              Was it really Russia’s invasion, or just because the interest rates went up to prevent too much inflation after the COVID stimulus packages? Hard to imagine Russia had that much demand for software compared to the rest of the world.

              • squaresinger@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                18 hours ago

                Did you not read what I wrote?

                Inflation went up due to the knock-on effects of the sanctions. Specifically prices for oil and gas skyrocketed.

                And since everything runs on oil and gas, all prices skyrocketed.

                Covid stimulus packages had nothing to do with that, especially in 2023, 2024 and 2025, when there were no COVID stimulus packages, yet inflation was much higher than at any time during COVID.

                Surely it is not too much to ask that people remember what year stuff happened in, especially if we are talking about things that happened just 2 years ago.

        • Feyd@programming.dev
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 day ago

          I would say that “replacing with AI assistance” is probably not what is actually happening. It is economic factors reducing hiring. This isn’t the first time it has happened and it won’t be the last. The AI boosters are just claiming responsibility for marketing purposes.

          • AA5B@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            3 hours ago

            It may also be self-fulfilling. Our new CEO said all upcoming projects must save 15% using AI, and while we’re still hiring, it’s only in India.

            So 6 months from now we will have status reports talking about how we saved 15% on every project.

    • bassomitron@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      3
      ·
      1 day ago

      Couldn’t have said it better myself. The amount of pure hatred for AI that’s already spreading is pretty unnerving when we consider future/continued research. Rather than direct the anger towards the companies misusing and/or irresponsibly hyping the tech, people direct it at the tech itself. And the C-suite will of course never accept the blame for their poor judgment, so they, too, will blame the tech.

      Ultimately, I think there are still lots of folks with money that understand the reality and hope to continue investing in further research. I just hope that workers across all spectrums use this as a wake up call to advocate for protections. If we have another leap like this in another 10 years, then lots of jobs really will be in trouble without proper social safety nets in place.

      • Feyd@programming.dev
        link
        fedilink
        English
        arrow-up
        14
        ·
        1 day ago

        People specifically hate having tools they find more frustrating than useful shoved down their throats, having the internet filled with generative AI slop, and watching glaciers melt in the context of climate change.

        This is all specifically directed at LLMs in their current state and will have absolutely zero effect on any research funding. Additionally, OpenAI etc. would be losing less money if they weren’t selling (at a massive loss) the hot garbage they’re selling now and focused on research.

        As far as worker protections, what we need actually has nothing to do with AI in the first place and has everything to do with workers/society at large being entitled to the benefits of increased productivity that has been vacuumed up by greedy capitalists for decades.

    • Alex@lemmy.ml
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      1 day ago

      They can be helpful when using a new library or development environment you’re not familiar with, but I’ve noticed a tendency for them to make up functions that arguably should exist but often don’t.
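
      A quick illustration of that tendency (the method names below are hypothetical, the kind of plausible-but-fake API an LLM might invent because it feels like it should exist):

```python
import json

# Plausible-but-fake names an LLM might invent because they "should" exist.
# Neither is a real Python API; the checks below confirm that at runtime.
print(hasattr(json, "load_from_url"))  # no such helper: you fetch, then json.loads
print(hasattr(str, "truncate"))        # no such method: you slice instead, s[:n]
```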

    • MrPoopyButthole@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      1 day ago

      Excellent take. I agree with everything. If I give Claude a function signature, types and a description of what it has to do, 90% of the time it will get it right. 10% of the time it will need some edits or efficiency improvements but still saves a lot of time. Small scoped tasks with correct context is the right way to use these tools.
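
      As a sketch of what that looks like in practice (Python here, with an invented task; the commenter doesn’t say what stack they use), the prompt is essentially just the stub and docstring, and the model fills in the body:

```python
from collections import Counter

def top_n_words(text: str, n: int) -> list[tuple[str, int]]:
    """Return the n most common lowercase words in text, with counts."""
    # A small, fully specified task like this is where the model
    # usually gets it right on the first try.
    return Counter(text.lower().split()).most_common(n)

print(top_n_words("the cat and the hat", 1))  # [('the', 2)]
```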

    • ipkpjersi@lemmy.ml
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      5
      ·
      edit-2
      1 day ago

      They aren’t detail oriented enough to write full applications or complicated scripts.

      I’m not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT, very rarely did I have to step in and do things myself.

      In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I’ll give its output a once over to check it with an eye to the details of implementation. It’s nice to get the boilerplate out of the way quickly.

      Yep, I agree with that.

      There are definitely people misusing AI, and there is definitely lots of AI slop out there which is annoying as hell, but they also can be pretty capable for certain things too, even more than one might think at first.

      • squaresinger@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        2
        ·
        1 day ago

        Greenfielding webapps is the easiest, most basic kind of project around. That’s something you task a junior with and expect them to do it with no errors. And after that you instantly drop support, because webapps are shovelware.

        • ipkpjersi@lemmy.ml
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          1 day ago

          So you’re saying there’s no such thing as complex webapps and that there’s no such thing as senior web developers, and webapps can basically be made by a monkey because they are all so simple and there’s never any competent developers that work on them and there’s no use for them at all?

          Where do you think we are?

            • ipkpjersi@lemmy.ml
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              1
              ·
              edit-2
              1 day ago

              Who says I made my webapp with ChatGPT in an afternoon?

              I built it iteratively using ChatGPT, much like any other application. I started with the scaffolding and then slowly added more and more features over time, just like I would have done had I not used any AI at all.

              As everybody knows, Rome wasn’t built in a day.

  • xep@fedia.io
    link
    fedilink
    arrow-up
    21
    ·
    1 day ago

    Code reviews take up a lot of time, and if I know a lot of code in a review is AI generated I feel like I’m obliged to go through it with greater rigour, making it take up more time. LLM code is unaware of fundamental things such as quirks due to tech debt and existing conventions. It’s not great.

    • AA5B@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      2 hours ago

      Code reviews seem like a good opportunity for an LLM; it’s the kind of thing they should be good at. I’ve actually spent the last half hour googling for tools.

      I’ve spent literally a month in reviews with this junior guy on one stupid feature, and so much of it has been so basic. It’s a combination of him committing AI slop without understanding or vetting it, and being too junior to consider maintainability or usability. It would have saved so much of my time if AI could have done some of those review cycles without me.

      • homura1650@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        37 minutes ago

        This has been solved for over a decade. Include a linter and static analysis stage in the build pipeline. No code review until the checkbox goes green (or the developer has a specific argument for why a particular finding is a false positive)
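
        A minimal sketch of such a gate, assuming ruff and mypy as the linter and static analyzer (substitute whatever tools the project actually uses; the pipeline only cares about the exit code):

```python
import subprocess
import sys

def gate(checks: list[list[str]]) -> bool:
    """Run each check command; review is blocked unless all of them pass."""
    ok = True
    for cmd in checks:
        # subprocess.run returns a CompletedProcess; nonzero returncode = failure.
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            ok = False
    return ok

# In the pipeline, the gate's result would drive the build's exit status, e.g.:
# sys.exit(0 if gate([["ruff", "check", "."], ["mypy", "."]]) else 1)
```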

  • FancyPantsFIRE@lemmy.world
    link
    fedilink
    English
    arrow-up
    14
    ·
    1 day ago

    I’ve used cursor quite a bit recently in large part because it’s an organization wide push at my employer, so I’ve taken the opportunity to experiment.

    My best analogy is that it’s like micro managing a hyper productive junior developer that somehow already “knows” how to do stuff in most languages and frameworks, but also completely lacks common sense, a concept of good practices, or a big picture view of what’s being accomplished. Which means a ton of course correction. I even had it spit out code attempting to hardcode credentials.

    I can accomplish some things “faster” with it, but mostly in comparison to my professional reality: I rarely have the contiguous chunks of time I’d need to dedicate to properly ingest and do something entirely new to me. I save a significant amount of the onboarding, but lose a bunch of time navigating to a reasonable solution. Critically that navigation is more “interrupt” tolerant, and I get a lot of interrupts.

    That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.

    • Feyd@programming.dev
      link
      fedilink
      English
      arrow-up
      8
      ·
      edit-2
      1 day ago

      That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.

      This is the most frustrating problem I have. With a few exceptions, LLM use seems to be inversely proportional to skill level, and having someone tell me “chatgpt said ___” when asking me for help, because clearly chatgpt is not doing it for their problem, makes me want to just hang up.

    • AA5B@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 hours ago

      I had to sort over 100 lines of data hardcoded into source (don’t ask) and it was a quick function in my IDE.

      I feel like “sort” is common enough everywhere that AI should quickly identify the right Google results, and it shouldn’t take 3 min

    • bassomitron@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      1 day ago

      By having it write a quick function to do so or to sort them alphabetically within the chat? Because I’ve used GPT to write boilerplate and/or basic functions for random tasks like this numerous times without issue. But expecting it to sort a block of text for you is not what LLMs are really built for.

      That being said, I agree that expecting AI to write complex and/or long-form code is a fool’s hope. It’s good for basic tasks to save time and that’s about it.
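
      The sort of throwaway helper meant here might look like this (the hardcoded entries are invented stand-ins for the 100+ lines from the anecdote):

```python
# Invented stand-in for the hardcoded data from the anecdote above.
ENTRIES = [
    "zebra,4",
    "Apple,9",
    "mango,2",
]

def sort_entries(entries: list[str]) -> list[str]:
    """Return the lines sorted alphabetically, ignoring case."""
    return sorted(entries, key=str.lower)

print(sort_entries(ENTRIES))  # ['Apple,9', 'mango,2', 'zebra,4']
```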

      • doxxx@lemmy.ca
        link
        fedilink
        English
        arrow-up
        3
        ·
        edit-2
        1 day ago

        I’ve actually had a fair bit of success getting GitHub Copilot do things like this. Heck I even got it to do some matrix transformations of vectors in a JSON file.
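
        For instance (a sketch of the kind of task described; the data and matrix are made up), applying a 2×2 transformation to every vector in a JSON document:

```python
import json

# Made-up sample standing in for the JSON file from the anecdote.
raw = '{"vectors": [[1, 0], [0, 1], [2, 3]]}'

# Scale x by 2 and y by 3: a simple 2x2 transformation matrix.
M = [[2, 0],
     [0, 3]]

def transform(vec, matrix):
    """Multiply a 2D vector by a 2x2 matrix."""
    return [matrix[0][0] * vec[0] + matrix[0][1] * vec[1],
            matrix[1][0] * vec[0] + matrix[1][1] * vec[1]]

data = json.loads(raw)
data["vectors"] = [transform(v, M) for v in data["vectors"]]
print(json.dumps(data))  # {"vectors": [[2, 0], [0, 3], [4, 9]]}
```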

      • BrianTheeBiscuiteer@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        23 hours ago

        The tool I use can rewrite code given basic commands. Other times I might say, “Write a comment above each line” or “Propose better names for these variables” and it does a decent job.