• LoveSausage@discuss.tchncs.de · 6 hours ago (edited)

    It would cost trillions and halve the battery life. Just because you don’t understand something doesn’t make you right. Your entire argument is shattered in the link I provided you earlier. It’s not a few KB that’s needed, and if done locally it’s a huge battery eater. Not to mention that the cost to get any use out of it would exceed the entire value of the ad market.

    There are plenty of people who can find shit in the noise on Wireshark if there were anything like what you are suggesting.

    Also, there is a teapot in orbit around Jupiter. Prove me wrong.

    • CeeBee_Eh@lemmy.world · 3 hours ago

      Just because you don’t understand

      Lol. My dude, I’m a developer who specializes in AI.

      It would cost trillions

      I have no clue how you came to that number. I could whip up (and partially have whipped up) a prototype in a few days.

      halve the battery life

      Hardly. Does Google Assistant halve battery life? No, so why would this? Besides, you would just need to listen to the mic and record audio only when the sound is above a certain volume threshold. Then, once every few hours, batch-process the audio. Then send the resulting text data (in the KBs) up to a server.

      The average ad data that’s downloaded for in-app display is orders of magnitude larger than what would be uploaded.
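
      Roughly the kind of thing I’m describing, as a sketch (read_mic_chunk, transcribe_locally and upload are hypothetical stand-ins, not any real app’s code):

      ```python
      import gzip
      import json
      import random
      import time

      RMS_THRESHOLD = 0.02          # assumed "worth recording" loudness level
      BATCH_INTERVAL_S = 3 * 3600   # process every few hours, not continuously

      def read_mic_chunk():
          """Stand-in for reading ~1 s of audio; returns (rms_level, raw_bytes)."""
          return random.random() * 0.05, b"\x00" * 16000

      def transcribe_locally(clips):
          """Stand-in for one on-device speech-to-text pass over all saved clips."""
          return ["coffee", "vacation"]

      def upload(payload: bytes):
          """Stand-in for shipping the compressed text alongside normal app traffic."""
          print(f"uploading {len(payload)} bytes")

      def run(demo_seconds: float = 5.0, batch_interval_s: float = BATCH_INTERVAL_S):
          clips, last_batch = [], time.time()
          deadline = time.time() + demo_seconds
          while time.time() < deadline:
              level, chunk = read_mic_chunk()
              if level > RMS_THRESHOLD:              # silence never gets recorded
                  clips.append(chunk)
              if clips and time.time() - last_batch > batch_interval_s:
                  words = transcribe_locally(clips)  # the heavy work happens rarely
                  payload = gzip.compress(json.dumps(words).encode())
                  upload(payload)                    # kilobytes of text, not audio
                  clips, last_batch = [], time.time()
              time.sleep(0.05)

      if __name__ == "__main__":
          run(batch_interval_s=2.0)  # shortened interval so the demo prints something
      ```

      The only thing that leaves the device in this sketch is a few KB of compressed text, which is the point of the comparison above.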

      There are plenty of people who can find shit in the noise on Wireshark

      How are they going to see data that’s encrypted and bundled with other innocuous data?

      • LoveSausage@discuss.tchncs.de · 3 hours ago

        Literally all your questions are answered in the link I pointed out twice now. Try it. “Hey Google” doesn’t take much; 1k wake words take a lot more… your math doesn’t add up anywhere close to reality.

        • CeeBee_Eh@lemmy.world · 3 hours ago

          I don’t have any questions. This is something I know a lot about at a very technical level.

          The difference between one wake word and one thousand is marginal at most. At the hardware level the mic is still listening non-stop, and the audio is still being processed. It *has* to do that, otherwise it wouldn’t be able to listen for even one word. And from there it doesn’t matter if it’s one word or 10k; it’s still processing the audio data through a model.

          And that’s the key part: it doesn’t matter whether the model has one output or thousands, the data still bounces through each layer of the network. The processing requirements are exactly the same (assuming the exact same model).
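
          As a back-of-envelope sketch (the layer sizes are assumptions for illustration, not any specific product’s model), you can count where the multiply-accumulates actually go:

          ```python
          # Compare per-window MAC counts for a small always-on audio model with a
          # 1-word output head vs. a 1,000-word output head. Sizes are assumed.

          def mac_count(layer_sizes):
              """Multiply-accumulates for a simple fully connected stack."""
              return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

          FEATURES = 40 * 49                       # e.g. 40 mel bins x 49 frames per window (assumed)
          BACKBONE = [FEATURES, 512, 256, 64]      # assumed hidden layers, identical in both cases

          one_word = mac_count(BACKBONE + [1])     # single wake-word head
          thousand = mac_count(BACKBONE + [1000])  # 1,000-keyword head

          print(f"1 output:     {one_word:,} MACs per window")
          print(f"1000 outputs: {thousand:,} MACs per window")
          print(f"ratio:        {thousand / one_word:.2f}x")
          ```

          With sizes like these, the shared backbone accounts for most of the per-window work either way; only the final head changes.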

          This is the part you simply do not understand.

          • LoveSausage@discuss.tchncs.de · 19 minutes ago (edited)

            Seems you don’t, and you started your line with a question and kept asking despite being given answers repeatedly. Is there some kink of roleplaying as an AI dev? You don’t really seem to have done your homework for it.

            Despite what some believe, keyword detection like “Hey Google” is only used to wake a device from a low-power state to perform more powerful listening; it’s not helpful for data tracking. Increasing the number of keywords to thousands or more (which you would need to cover the range of possible ad topics) requires more processing power and therefore defeats the purpose. Your battery would drain very noticeably if your phone were always listening for thousands of possible words.
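
            For a rough sense of scale (all numbers below are assumed, illustrative values, not measurements of any device):

            ```python
            # Back-of-envelope battery math: an always-on wake-word detector is sized to
            # run on a low-power DSP, while continuous large-vocabulary listening would
            # have to keep the main SoC busy. Figures are assumptions, not measurements.

            BATTERY_WH = 15.0            # roughly a 4,000 mAh phone battery at ~3.85 V

            def hours_to_drain(extra_watts: float) -> float:
                """Hours the battery would last if this were the only draw."""
                return BATTERY_WH / extra_watts

            dsp_wake_word_w = 0.002      # assumed: tiny keyword detector on a DSP (~2 mW)
            soc_listening_w = 0.5        # assumed: main SoC continuously decoding open speech

            print(f"wake-word DSP only:      ~{hours_to_drain(dsp_wake_word_w):,.0f} h to drain")
            print(f"continuous SoC decoding: ~{hours_to_drain(soc_listening_w):,.0f} h to drain")
            ```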