• besselj@lemmy.ca · 12 hours ago

    So if I have two machines running the same local LLM and I pass a prompt between them, I’ve achieved data compression by transmitting the prompt rather than the LLM’s expected response to the prompt? That’s what I’m understanding from the article.

    Neat idea, but what if you want to transmit some information that an LLM can’t tokenize and generate accurately?
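    A minimal sketch of the scheme as described above, under the assumption that both machines run the same model with fully deterministic (greedy, temperature-zero) decoding. `toy_generate` is a hypothetical stand-in for a real local LLM; all names here are illustrative:

```python
def toy_generate(prompt: str, steps: int = 20) -> str:
    """Deterministic stand-in for greedy LLM decoding."""
    out = prompt
    for _ in range(steps):
        # pick the "next token" as a pure function of the text so far,
        # the way greedy decoding with a fixed model would
        out += chr(ord('a') + (sum(map(ord, out)) % 26))
    return out

prompt = "describe the weather"       # what actually gets transmitted
sender_text = toy_generate(prompt)    # the long text the sender wants to convey
receiver_text = toy_generate(prompt)  # regenerated independently on machine 2

# identical model + identical decoding => identical output
assert sender_text == receiver_text
print(len(prompt), "bytes sent instead of", len(sender_text))
```

    The catch the comment points at: this only works for text the model can actually reproduce token-for-token; arbitrary binary data or out-of-distribution strings have no short prompt that regenerates them exactly.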

    • taladar@sh.itjust.works · 11 hours ago

      And how do I derive, from the data itself, a prompt that will reliably regenerate that data? Compression normally starts from the uncompressed data; here we'd somehow need to already have the compressed version.
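      A hedged sketch of this objection: "compressing" existing data under this scheme means inverting generation, i.e. searching for a prompt whose deterministic output equals the data. Even against a toy stand-in model (not a real LLM; `toy_generate` and `find_prompt` are illustrative names), exhaustive search over prompts blows up exponentially with prompt length:

```python
import itertools
import string

def toy_generate(prompt: str, steps: int = 4) -> str:
    """Deterministic stand-in for greedy LLM decoding."""
    out = prompt
    for _ in range(steps):
        out += chr(ord('a') + (sum(map(ord, out)) % 26))
    return out

def find_prompt(target: str, max_len: int = 3):
    """Brute-force search over short lowercase prompts.

    Already 26 + 26**2 + 26**3 = 18,278 candidates for three characters;
    real prompts over a real token vocabulary are hopeless to enumerate.
    """
    for n in range(1, max_len + 1):
        for cand in itertools.product(string.ascii_lowercase, repeat=n):
            p = "".join(cand)
            if toy_generate(p) == target:
                return p
    return None  # no short prompt regenerates this data

data = toy_generate("hi")         # data that happens to have a short prompt
assert find_prompt(data) == "hi"  # recoverable only because one exists by construction
```

      For arbitrary data there is no guarantee any prompt reproduces it exactly, which is why practical LLM-based compression schemes instead feed the model's next-token probabilities into an entropy coder rather than searching for prompts.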