• 4 Posts
  • 290 Comments
Joined 2 years ago
Cake day: July 4th, 2023


  • I have a nature reserve near my house and I walk there quite frequently. It’s nice to get away from the noise of the cars and enjoy the quiet sounds of trees, birds, and the wind.

    Unfortunately for many people in this country, the only places within walking distance of home are paved urban sprawl. It is not particularly safe to walk there, nor is it pleasant, given the lack of shade, constant vehicle noise, and urban heat in the summer.

    In my experience, areas with good public transport have safer walking paths that are often surrounded by nature (even if it’s sometimes just a short strip on each side), while areas with poor public transport have little more than roads, with few plants and few safe walking paths.

    I don’t want to drive for 2 hours to the countryside every time I want some peace and quiet; I want that where I live, all the time. I also shouldn’t have to give up the benefits of living in a city to get away from car-dependent suburbia.
    There are many countries with quiet, safe cities, precisely because they have adequate alternatives to driving.

    My point is that all cars contribute to cities that are hostile to humans, and that adequate public transport, along with walking and cycling paths, is a far better answer than an electric car.



  • I often use LLMs to give me code snippets in a language I don’t know.

    When I started programming (back in the dark days when StackOverflow was helpful), it took me months to learn a language well enough to do what I wanted, and there were several weeks where I was frustrated because I just couldn’t figure out what I was doing wrong, or didn’t know the name of the thing I wanted so that I could search for it.

    AI has allowed me to drastically cut my learning time for new languages, at the expense of not really understanding much. I’ll accept that compromise if I just want one script, but it’s a hard habit to drop when actual understanding is needed.

    Aside from telling me what language features exist, or showing me the correct syntax (exactly what a language model is designed for), I have found AI is mostly just confidently wrong.


  • One of the main problems I found was that AI would sometimes write code that looked good, was well documented, and even worked flawlessly. But it would take 15-20 complicated lines to perform a task that was already a language or standard-library feature and could have been done with a single function call (there’s a made-up sketch of what I mean at the end of this comment).

    Other times it would write code that appeared to work at first glance, and after a reasonable inspection still seemed fine. Only after trying to rewrite that task myself did I realize the AI had missed a critical but subtle edge case. I wouldn’t have even thought to test for that edge case if I hadn’t tried to design the function myself.

    I’ve also heard someone else mention that AI will often duplicate the same logic in several places (often with subtle differences) instead of writing a function once and calling it several times. The AI code may look clean, but it’s not nearly as maintainable as well-written code by humans.
    I do have to admit that it is significantly better than poorly written code by overworked and underpaid humans.

    All of this is ignoring the many times the code just didn’t compile, or had basic logic errors that were easy to spot but very difficult to get the AI to fix. It was often quicker to write everything myself than to try to fix the AI’s code.
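
    To make the first point concrete, here’s a made-up sketch (the task, names, and code are invented for illustration, not taken from any real AI output) of the pattern I keep running into: a hand-rolled loop where the standard library already has the feature, plus the kind of quiet edge case that’s easy to miss.

        from collections import Counter

        # The verbose version: looks reasonable, is commented,
        # but reimplements something the standard library already provides.
        def most_common_word_verbose(words):
            counts = {}
            for word in words:
                if word in counts:
                    counts[word] += 1
                else:
                    counts[word] = 1
            best_word = None
            best_count = 0
            for word, count in counts.items():
                if count > best_count:
                    best_word = word
                    best_count = count
            # Subtle edge case: an empty list silently returns None,
            # and nothing here documents or tests for that.
            return best_word

        # The same task as a single standard-library call.
        def most_common_word(words):
            return Counter(words).most_common(1)[0][0] if words else None

    Both behave the same on normal input; it was only when I sat down to write the short version that I noticed the empty-list case at all.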