• breaks@lemmy.studio
    1 year ago

    But for large website operators, the choice to block large language model (LLM) crawlers isn’t as easy as it may seem. Making some LLMs blind to certain website data will leave gaps of knowledge that could serve some sites very well (such as sites that don’t want to lose visitors if ChatGPT supplies their information for them), but it may also hurt others. For example, blocking content from future AI models could decrease a site’s or a brand’s cultural footprint if AI chatbots become a primary user interface in the future. As a thought experiment, imagine an online business declaring that it didn’t want its website indexed by Google in the year 2002—a self-defeating move when that was the most popular on-ramp for finding information online.

    Really curious how this will end up.

• axibzllmbo@beehaw.org
      1 year ago

      That’s an interesting point that I hadn’t considered. The comparison to Google indexing in the early 2000s may prove to be very apt, given the number of people I’ve seen using ChatGPT as a search engine.