• brucethemoose@lemmy.world · 14 hours ago

      Machine learning has been a field for years, as others said, yeah, but Wikipedia would be a better place to expand on the topic. In a nutshell, it’s largely about predicting outputs based on trained input examples.
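      To make that concrete, here’s a toy sketch of the train-then-predict loop using scikit-learn (the numbers are made up, just to show the idea):

      ```python
      # Toy supervised learning: fit a model on example inputs/outputs,
      # then predict the output for an unseen input.
      # (Made-up numbers, purely to illustrate the train -> predict loop.)
      from sklearn.linear_model import LinearRegression

      # training examples: hours studied -> exam score
      X_train = [[1], [2], [3], [4], [5]]
      y_train = [52, 60, 68, 74, 83]

      model = LinearRegression()
      model.fit(X_train, y_train)     # "learn" from the examples

      print(model.predict([[6]]))     # predict the score for 6 hours
      ```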

      It doesn’t have to be text. For example, astronomers use it to find certain kinds of objects in raw data feeds. Object recognition (identifying things in pictures with little bounding boxes) is an old art at this point. Series prediction models are a thing, and LanguageTool uses a tiny model to detect commonly confused words for grammar checking. And yes, image hashing is another, though not entirely machine learning based. IDK what TinEye does in their backend, but there are some more “oldschool” approaches using more traditional programming techniques, generating signatures for images that can be easily compared in a huge database.
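      I don’t know what TinEye actually runs, but a classic non-ML signature trick looks something like this “average hash”: shrink the image, threshold each pixel against the mean, and compare signatures by how many bits differ (the filenames are just placeholders):

      ```python
      # Simple "average hash": shrink the image, threshold each pixel against
      # the mean, and pack the bits into a 64-bit signature. Similar images
      # give signatures that differ in only a few bits (small Hamming distance).
      from PIL import Image

      def average_hash(path, size=8):
          img = Image.open(path).convert("L").resize((size, size))
          pixels = list(img.getdata())
          avg = sum(pixels) / len(pixels)
          bits = 0
          for p in pixels:
              bits = (bits << 1) | (1 if p > avg else 0)
          return bits

      def hamming(a, b):
          return bin(a ^ b).count("1")

      # e.g. hamming(average_hash("a.jpg"), average_hash("b.jpg")) < 10
      # would suggest the two images are near-duplicates
      # ("a.jpg" / "b.jpg" are placeholder filenames).
      ```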

      You’ve probably run ML models in photo editors, your TV, your phone (voice recognition), desktop video players or something else without even knowing it. They’re tools.

      Separately, image similarity metrics (like SSIM or LPIPS) that measure the difference between two images as a single number (where, say, 1 would be a perfect match and 0 totally unrelated) are common components in machine learning pipelines. Most of these are not themselves machine learning based, barring a few exceptions like LPIPS and VMAF (which Netflix developed for video).
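      For example, SSIM is basically a one-liner with scikit-image (a sketch; the filenames are placeholders, and real pipelines usually align and crop first):

      ```python
      # Compare two grayscale images with SSIM: 1.0 means identical,
      # values near 0 mean structurally unrelated.
      from skimage import io
      from skimage.metrics import structural_similarity

      a = io.imread("original.png", as_gray=True)
      b = io.imread("compressed.png", as_gray=True)

      score = structural_similarity(a, b, data_range=a.max() - a.min())
      print(score)
      ```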

      Text embedding models do the same with text. They are ML models.
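      A rough sketch with the sentence-transformers library (the checkpoint name is just one common public model, not the only choice):

      ```python
      # Embed two sentences as vectors and compare them with cosine similarity:
      # closer to 1.0 means more similar in meaning.
      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("all-MiniLM-L6-v2")
      vecs = model.encode(["The cat sat on the mat.",
                           "A kitten is resting on a rug."])

      print(util.cos_sim(vecs[0], vecs[1]))
      ```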

      LLMs (aka models designed to predict the next ‘word’ in a block of text, one at a time, as we know them) in particular have an interesting history, going back to (if I even remember the name correctly) BERT in Google’s labs. There were also tiny LLMs people did run on personal GPUs before ChatGPT was ever a thing, like the infamous Pygmalion 6B roleplaying bot, a finetune of GPT-J 6B. They were primitive and dumb, but it felt like witchcraft back then (before AI Bros marketers poisoned the well).
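      That “one word at a time” loop looks roughly like this with Hugging Face transformers; GPT-2 stands in for any small LLM here, and greedy decoding is just the simplest strategy:

      ```python
      # Next-token prediction, one step at a time: the model scores every
      # possible next token, we pick one, append it, and repeat.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      ids = tok("Machine learning is", return_tensors="pt").input_ids
      with torch.no_grad():
          for _ in range(20):
              logits = model(ids).logits        # scores for every token at every position
              next_id = logits[0, -1].argmax()  # greedy: take the single most likely token
              ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

      print(tok.decode(ids[0]))
      ```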

    • Zwuzelmaus@feddit.org · 12 hours ago

      I don’t remember too much tbh, just that we heard about the theory at university and tried out some of the mathematical methods. They were tiresome ;)

      Today I would recommend starting your studies with the Wikipedia pages about Markov models and about machine learning.
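      If it helps, the smallest possible Markov model for text is just counting which word follows which and then sampling. A toy sketch (the corpus is made up):

      ```python
      # Word-level Markov chain: count which word follows which, then sample
      # the next word based only on the current word (no deeper context).
      import random
      from collections import defaultdict

      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      follows = defaultdict(list)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev].append(nxt)

      word = "the"
      out = [word]
      for _ in range(8):
          word = random.choice(follows[word])   # pick a next word seen after `word`
          out.append(word)

      print(" ".join(out))
      ```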

    • howrar@lemmy.ca · 11 hours ago

      Yann LeCun gave us convolutional neural networks (CNNs) in 1998. These are the models used for pretty much all specialized computer vision tasks even today. TinEye came into existence ten years later, in 2008. I can’t tell you if they used CNNs, but CNNs were certainly available by then.
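      Not TinEye’s actual code, but a minimal LeNet-style CNN looks something like this in PyTorch (layer sizes are arbitrary picks for 28×28 grayscale input):

      ```python
      # A LeNet-style CNN: stacked convolution + pooling layers learn local
      # image features, then a fully connected layer classifies from them.
      import torch
      import torch.nn as nn

      class TinyCNN(nn.Module):
          def __init__(self, num_classes=10):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 6, kernel_size=5),   # 1x28x28 -> 6x24x24
                  nn.ReLU(),
                  nn.MaxPool2d(2),                  # -> 6x12x12
                  nn.Conv2d(6, 16, kernel_size=5),  # -> 16x8x8
                  nn.ReLU(),
                  nn.MaxPool2d(2),                  # -> 16x4x4
              )
              self.classifier = nn.Linear(16 * 4 * 4, num_classes)

          def forward(self, x):
              x = self.features(x)
              return self.classifier(x.flatten(1))

      # e.g. TinyCNN()(torch.randn(1, 1, 28, 28)) gives 10 class scores
      ```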