Meanwhile, some new details emerged about the days leading up to Altman’s firing. “In the weeks leading up to his shocking ouster from OpenAI, Sam Altman was actively working to raise billions from some of the world’s largest investors for a new chip venture,” Bloomberg reported. Altman reportedly was traveling in the Middle East to raise money for “an AI-focused chip company” that would compete against Nvidia.

As Bloomberg wrote, “The board and Altman had differences of opinion on AI safety, the speed of development of the technology and the commercialization of the company, according to a person familiar with the matter. Altman’s ambitions and side ventures added complexity to an already strained relationship with the board.”

“According to people familiar with the board’s thinking, members had grown so untrusting of Altman that they felt it necessary to double-check nearly everything he told them,” the WSJ report said. The sources said it wasn’t a single incident that led to the firing, “but a consistent, slow erosion of trust over time that made them increasingly uneasy,” the WSJ article said. “Also complicating matters were Altman’s mounting list of outside AI-related ventures, which raised questions for the board about how OpenAI’s technology or intellectual property could be used.”

  • redcalcium@lemmy.institute · +52/-2 · 7 months ago

    OpenAI said the “new initial board” will consist of D’Angelo, economist Larry Summers, and former Salesforce co-CEO Bret Taylor, who will be the chair.

    Those pesky board members with their annoying AI safety ideals are gone, replaced by new board members with excellent experience in squeezing profits. Next they’ll probably attempt to turn the non-profit parent org into a for-profit corporation so they can get equity/stock grants. Yay!

    I guess OpenAI will get enshittified next year.

  • j4k3@lemmy.world · +50/-5 · 7 months ago

    He’s a billionaire. There are no honest billionaires. Things will only get worse when billionaires go unchecked.

    • Heresy_generator@kbin.social · +33/-7 · 7 months ago

      Because “AI” hype is what the venture capitalists are feeding the financial and tech press these days, and Sam is the venture capitalists’ biggest “AI” star because he’s a good snake oil salesman.

      • theherk@lemmy.world · +25/-1 · 7 months ago

        While not inaccurate, that is extremely reductive. The rapid improvement of AI at the transformer level is currently one of the most interesting things happening across many fields, including the arts and sciences, and it has the widest gap between potential good and potential harm. OpenAI and its complex governance model are directly at the center of that growth and embroiled in one of the most fascinating governance struggles in recent history.

        This drama, combined with how disruptive this technology is likely to be across a wide range of markets and the world’s economies, makes it interesting, and it has the added benefit of being a news departure from the bombings and other terrible stuff going on around the world. Much more fun for popcorn and chat than wars and such.

      • Lmaydev@programming.dev · +20/-9 · 7 months ago

        We are way beyond hype at this point.

        It’s a total game changer.

        As a developer, I’ve found ChatGPT has completely changed my workflow and massively increased my productivity.

        • micka190@lemmy.world · +27/-4 · 7 months ago

          As a developer, comments that talk about how ChatGPT is changing the development game confuse the hell out of me. What are you people doing that ChatGPT makes your workflow massively more productive?

          • It gets documentation/help wrong or straight-up makes shit up
            • Same thing with having it generate actual code
          • If “generating code I’d normally copy/paste” is such a game changer, your architecture/design needs a rework
            • Yes, even for tests (seriously, we’ve had ways to pass arrays of inputs into tests for years; having it copy/paste the same test a hundred times with different values is fucking atrocious; see the sketch after this list)
          • Code “assistant” suggestions have been fucking horrid from my experience with them (and I end up disabling it every time I give it a try)
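
          To make the parameterized-test point concrete, here’s a minimal pytest sketch (Python rather than the C# discussed elsewhere in the thread); `parse_port` is a made-up function purely so there’s something to test:

          ```python
          import pytest

          # Hypothetical function under test.
          def parse_port(value: str) -> int:
              port = int(value)
              if not 0 < port < 65536:
                  raise ValueError(f"port out of range: {port}")
              return port

          # One test body fed an array of inputs, instead of the same test
          # copy/pasted once per value.
          @pytest.mark.parametrize("raw, expected", [
              ("80", 80),
              ("443", 443),
              ("65535", 65535),
          ])
          def test_parse_port_valid(raw, expected):
              assert parse_port(raw) == expected

          # Invalid inputs collapse into a single test as well.
          @pytest.mark.parametrize("raw", ["0", "-1", "65536", "http"])
          def test_parse_port_invalid(raw):
              with pytest.raises(ValueError):
                  parse_port(raw)
          ```
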
          • Lmaydev@programming.dev · +17/-4 · 7 months ago

            When using any new language or framework, I can get up and running very quickly.

            It used to take time to read the intro docs and then dig around trying to find the features I need. Now I can just ask it how to do certain things, what’s supported, and what the best practices are.

            If I see a block of code I don’t understand, I can ask it to explain, and it will write out line by line what it’s doing. No more hunting for articles with similar constructs or patterns.

            It’s amazing at breaking down complex SQL.

            It can handle many tedious refactoring tasks.

            It’s very good at creating mappers between classes, because it can pick up matching properties from context even when the types and names don’t match exactly.
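
            For example (a hypothetical Python sketch; the idea is the same in C#):

            ```python
            from dataclasses import dataclass, fields

            @dataclass
            class UserEntity:    # e.g. what the data layer returns
                user_id: int
                full_name: str
                email: str

            @dataclass
            class UserDto:       # e.g. what the API exposes
                user_id: int
                full_name: str

            def map_entity_to_dto(entity: UserEntity) -> UserDto:
                # Copy every property whose name matches. This is the tedious
                # part an assistant can generate, and extend with explicit
                # assignments when names or types don't line up.
                values = {f.name: getattr(entity, f.name) for f in fields(UserDto)}
                return UserDto(**values)
            ```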

            Generating a class from a DB table, and vice versa.

            If you have a specific problem to solve, rather than googling around for other people’s solutions you can ask it for existing methods. This can save days or more of discovery and trial and error.

            It’s really good at generating test cases from a method.

            Recently I implemented a C# IDictionary with change tracking built in. I pasted the code in; it analysed it, pointed out a bug, and then wrote all the tests for the change tracking.

            It did better than I expected, covering lots of chains of actions, which again turned up a bug.
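
            For flavour, the change-tracking idea looks roughly like this (a Python sketch with invented names; the actual code was C# and isn’t reproduced here):

            ```python
            # A dict that remembers which keys have been modified since the
            # last time changes were accepted.
            class ChangeTrackingDict(dict):
                def __init__(self, *args, **kwargs):
                    super().__init__(*args, **kwargs)
                    self.changed_keys = set()

                def __setitem__(self, key, value):
                    # Only record a change when the value is new or different.
                    if key not in self or self[key] != value:
                        self.changed_keys.add(key)
                    super().__setitem__(key, value)

                def __delitem__(self, key):
                    super().__delitem__(key)
                    self.changed_keys.add(key)

                def accept_changes(self):
                    # Forget tracked changes, e.g. after persisting them.
                    self.changed_keys.clear()

            d = ChangeTrackingDict(a=1)
            d["a"] = 1   # same value: not tracked
            d["b"] = 2   # new key: tracked
            assert d.changed_keys == {"b"}
            ```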

            It’s fairly good at optimising code as well.

            As for mistakes, you should be able to spot them and ask it to correct them. If it does something invalid, tell it so and it will fix it.

            You have to treat it like a conversation, not just ask it questions.

            Like Google, you have to learn how to use it correctly.

            We also have Bing enterprise, which uses search results and cites the sources for its answers, so I can look at the actual web results and read through them.

            The hallucination thing is basically a meme at this point, repeated by people who haven’t really used it properly.

            • Whoresradish@lemmy.world · +2 · 7 months ago

              When I google an issue, I quickly get a list of possible solutions with other developers commenting on them with corrections. People can upvote and downvote answers to indicate whether they work, and whether they have stopped working.

              With AI I get a single source of information without the equivalent of peer review. The answer may be out of date, and it may misunderstand my request. It may also make the same mistake I’m making, one I would have caught with a quick googling.

              The AI may occasionally produce boilerplate code without too much rework, but boilerplate code is not that hard to write in the first place.

              The AI is massively more expensive to run than a search engine, and I have not seen any indication that will change soon. This is the biggest problem in my mind. I don’t ever expect to have to pay for Google. I expect that in the future the AI will need to be paid for somehow, and I have a feeling they will have to charge too much to justify using it for software development work.

              AI has plenty of good uses, but I do not believe software development is the winner. Blockchain-style hash chaining, for instance, is genuinely useful in Git repositories, but not for many of the crazy things companies attempted to use it for.

              • Lmaydev@programming.dev · +2 · 7 months ago

                If you use Bing’s search AI, it cites the sources for its answers. It basically does what you would do when looking through sources and ratings, but when you find the info you want, you can click the link it used to generate the answer.

                It’s also free, I believe.

                • Whoresradish@lemmy.world · +1 · 7 months ago

                  Right now, AI like that is heavily subsidized by investors. My concern about AI’s feasibility is that training is so expensive that it won’t be able to stay free. Remember, training only stops being necessary if the field stops developing. Also, if the AI has to source its answer with a link, has it really provided a new service that’s better than a search engine?

            • ZahzenEclipse@kbin.social · +2 · 7 months ago

              As a newer developer it has been amazing for me, and a lot of experienced developers also recognize how much benefit it provides, so I’m honestly confused by your standpoint.

              • Lmaydev@programming.dev · +1 · 7 months ago

                You’ll find the old guard hates change and will shit on things like this without even trying them.

          • cashew@lemmy.world · +9/-7 · 7 months ago

            Failing to understand why it helps doesn’t make you right to ignore it.

            Learning how to use AI tools is another meta-skill, just like learning how to use a search engine such as Google. The latter is widely accepted as a must-know for software developers.

        • ZahzenEclipse@kbin.social · +3 · 7 months ago

          If you’re not actively using AI in a tech job, you’re leaving yourself behind. It’s like ignoring Google.

    • Bezerker03@lemmy.bezzie.world · +5 · 7 months ago

      ChatGPT was one of the biggest game changers in tech in ages. Seeing the company implode overnight has been interesting.

    • yildo@kbin.social · +4 · 7 months ago

      Because Microsoft and VC types have thrown many billions of US dollars at this and similar companies, a lot of (their) money is at stake.

    • misk@sopuli.xyz (OP) · +4/-1 · 7 months ago

      While large language models and similar “AI” technologies are very overhyped, they are already plenty usable for things like deepfakes, which, if left unchecked, have significant potential to be weaponised and to destabilize societies.

      OpenAI is a non-profit that’s behind those machine learning models and practical applications like ChatGPT. In principle it should govern development so that it’s safe and responsible. There are many allegations that Sam Altman became focused on profit, betraying the non-profit mission.

      While OpenAI is not technically controlled by commercial entities (Microsoft holds a 49% stake in its for-profit arm), it’s entirely dependent on them for funding, which likely led to it being strong-armed into letting Altman regain control.

  • BigMacHole@lemm.ee · +18/-2 · 7 months ago

    Why didn’t the board mention any of this when they were asked about why he was fired?

        • killeronthecorner@lemmy.world · +2/-2 · 7 months ago

          No. They destabilized the value of the company and put their partnerships at risk. It was a dumb move and has had the expected outcome.

          • ZahzenEclipse@kbin.social · +2 · 7 months ago

            The whole purpose of the board is to provide a safety valve. If the CEO is hiding stuff from the board, that seems like a completely legitimate reason to throw out a CEO. It’s hard to be a safety valve when the CEO is actively hiding the information the board needs to make its decisions.

  • edric@lemm.ee · +15 · 7 months ago

    I’m still confused how their chief scientist was part of the coup to remove Altman and at the same time was one of the signatories on the letter demanding his return.

    • webghost0101@sopuli.xyz · +1 · 7 months ago

      I actually think it was because of Greg Brockman, the board’s previous chairman, who quit after hearing the news about Altman.

      They told him he was vital to the company even as they fired Sam and removed him from the board.

      Ilya, their chief scientist, officiated Greg and his wife’s wedding. Apparently Greg’s wife pleaded with Ilya to support their return.

      I think the main issue here is OpenAI’s stated goal of developing safe AGI to benefit all of humanity; even the destruction of the company and making no profit would be in line with that.

      However, with so many players developing for-profit AI and catching up, it is probably safer to have an OpenAI taking risks than to have no OpenAI at all.

      Ilya probably hoped that Greg and most co-workers would stay on without Altman, but once it was clear they wouldn’t, the prospects looked bad enough to regret the decision.

  • AutoTL;DR@lemmings.world (bot) · +3 · 7 months ago

    This is the best summary I could come up with:


    The three who are leaving the board are OpenAI Chief Scientist Ilya Sutskever, entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.

    OpenAI’s interim CEO, Emmett Shear, who led the company for a few days, wrote, "I am deeply pleased by this result, after ~72 very intense hours of work."

    “In the weeks leading up to his shocking ouster from OpenAI, Sam Altman was actively working to raise billions from some of the world’s largest investors for a new chip venture,” Bloomberg reported.

    As Bloomberg wrote, "The board and Altman had differences of opinion on AI safety, the speed of development of the technology and the commercialization of the company, according to a person familiar with the matter."

    A Wall Street Journal behind-the-scenes report noted that the nonprofit board’s mission is to “ensur[e] the company develops AI for humanity’s benefit—even if that means wiping out its investors.”

    The sources said it wasn’t a single incident that led to the firing, “but a consistent, slow erosion of trust over time that made them increasingly uneasy,” the WSJ article said.


    The original article contains 772 words, the summary contains 184 words. Saved 76%. I’m a bot and I’m open source!