Update: engineers updated the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.

  • theneverfox@pawb.social · 3 points · 7 hours ago

    You can change jobs if the new one also sponsors you, and it’s my understanding that xAI tapped people from Tesla, but I might be wrong about that

    Anyways, what’s happening sure looks like malicious compliance to me… It’s really not that hard to get an AI to list far right talking points, it’s just hard to bake it into the model

    So you have people that made a pretty good model, but also can’t figure out basic AI infrastructure? I find that very hard to believe

    • wise_pancake@lemmy.ca · 1 point · 6 hours ago

      Had no idea they were doing that, but that’s plausible

      And yes, it would shock me that they can build this model this well and still fuck this up.

      I just hold little sympathy for the employees.

      • theneverfox@pawb.social · 2 points · 5 hours ago

        I mean… It is genuinely hard to work for someone who isn’t evil. Let’s say you’re an AI engineer… Meta is probably the best because most of the non-corporate LLMs flow from there… But they’re also using it to build personalized echo chambers, which is horrible

        OpenAI is at the top and Microsoft has shown every inclination to make it a monopoly, so I could understand wanting to work on competitors

        You could go smaller and work somewhere like Anthropic, but then you don’t have the resources to be on the cutting edge (depending on your specialty)

        I blame people who buy Teslas more than those who work at Tesla at this point. Especially when they slow-walk the bad things… I mean, Twitter would probably be less Nazi if more talent had stayed onboard to resist institutionally