• 0 Posts
  • 70 Comments
Joined 3 years ago
Cake day: June 18th, 2023


  • You want me to explain the bloom filter? So that you can say “see, I told you it makes sense to use as a memory-efficient, guaranteed-no-false-negatives check for whether a user has seen a video before. Dumbass”.

    Then I’d reply something like “yeah, I know… The point isn’t whether or not bloom filters can make sense here… What’s being discussed is whether this was generated by a human or an LLM… That, even if someone were making a diagram of a system where a bloom filter was used to check if you’ve already seen a video, it would still be weird to present it the way it’s presented in the diagram, for a human, but not so weird for an LLM, if you consider how LLMs work by associating concepts, where ‘creator filter’ and ‘bloom filter’ are linguistically more connected, which explains why they’re used similarly in the diagram, even though the latter is hardly considered a filter by anyone who has actually used one, not to mention that the actual ‘filter’ here would be the concept of ‘seen/not seen before’… and again, who’d give a shit about which data structure is used to most space-efficiently perform that evaluation… for fuck’s sake”

    Then, you’ll be all non sequitur and accuse me of some ad hominem like “you use big words on purpose to seem smart, but you are dumb”.


  • Nah. You mistook my “these are the parts that really don’t make sense for a human to make” for “I don’t understand the subject, or what this complex concept can mean”.

    If you don’t see the difference, you’re just going in a loop of arguing the wrong point. I was hoping to save you the trouble of the “you don’t get it” line by saying “trust me, I do get it, I’m a god damn expert”.

    I’m happy to indulge in explaining things to people who want to learn something. I happily fuck with people who seem disingenuous about that goal. If I was wrong and you genuinely meant to ask “why doesn’t this make sense”, then I’m sorry. I misread your intentions, and I’ll keep it in mind.




  • You still think that’s a relevant point? Did I not also point out to what extent it does make sense in that context, why it’s still weird, and why an LLM might do that weird thing but a human wouldn’t?

    Maybe start at the beginning and read what I wrote again. This time, do it with the assumption that I know what I’m talking about. Also, since you’re already learning stuff: read about how LLMs and transformers work. Maybe that might help. I don’t know. Either way, fine by me. Fingers crossed you figure it out.


  • Let me ask you this tho. When you say “do in fact make sense”, are you basing that on what you think this diagram is saying? Or do you mean “do in fact make sense” in the context of knowing how such an algorithm would actually be constructed?

    You still keep missing my points, and they aren’t difficult points either. The fancy jargon words were a basic-ass description of what a bloom filter does. So you’re kinda making my argument for me, which is funny for reasons I’m sure won’t be appreciated.

    I’m also not tangentially an expert, for fuck’s sake. I’m the kind whose day job is to design simpler things than what this diagram is trying to “explain”, and I’m telling you that it comes across as if made with a toddler’s understanding. I also didn’t say this was 100% guaranteed to be LLM; I said it smelled like it. I have suggested other possible explanations: stupidity, incompetence, and even a mental stroke.

    Your take on being tangentially an expert might be a whoosh moment.

    I’m also out of shits to give at this point. Literally.


  • I think you might have missed my point. I wasn’t listing stuff I had trouble understanding. I was listing stuff that didn’t make much sense. The distinction is relevant. Even if you manage to find some excuse that extends the already generous benefit of the doubt, the end result still isn’t anything useful or informative.

    I’m also not using fancy words (or…?). The only fancy thing that stands out is the “Bloom filter”, which isn’t a fancy word. It’s just a thing, in particular a data structure (see the sketch below). I referenced it because it’s an indication of an LLM behaving like the stochastic parrot that it is. LLMs don’t know anything, and no transformer-based approach will ever know anything. The “filter” part of “bloom filter” will have associations to other “filters”, even though it actually isn’t a “filter” in any normal use of that word. That’s why you see “creator filter” in the same context as “bloom filter”, even though “bloom filter” is something no human expert would put there.
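    Since “what even is it then” keeps coming up: here’s a toy bloom filter sketch in Python. The sizes and hash choices are made up for illustration, not anything from the diagram. The point is that it’s a bit array plus some hash functions, answering “definitely not seen” or “probably seen”. Nothing gets “filtered”.

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: a bit array plus k derived hash positions. Illustrative only."""

    def __init__(self, size_bits=1 << 20, num_hashes=7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k bit positions from one SHA-256 digest (double hashing).
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.size for i in range(self.num_hashes)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False => definitely never added (no false negatives).
        # True  => probably added (false positives are possible).
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

seen = BloomFilter()
seen.add("video:12345")
print(seen.might_contain("video:12345"))  # always True once added
print(seen.might_contain("video:99999"))  # False with very high probability
```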

    The most amusing and annoying thing about AI slop is that it’s loved by people who don’t understand the subject. They mistake an observation of slop (made by people who… know the subject) for “ah, you just don’t get it” (made by people who don’t).

    I design and implement systems and “algorithms” like this as part of my job. Communicating them efficiently is also part of that job. If anyone had come to me with this diagram pre-2022, I’d be genuinely concerned about whether they were OK, or had suffered some kind of stroke. After 2022, my LLM-slop radar is pretty spot on.

    But hey, you do you. I needed to take a shit earlier and made the mistake of answering. Now I’m being an idiot who should know better. Look up Brandolini’s law, if you need an explanation for what I mean.


  • I’m not too happy to spend time pointing out flaws in AI slop. That kind of bullshit asymmetry feels a bit too much like work. But since you’re polite about it, and seem to be asking in good faith…

    First of all, this is presented as a technical infographic on an “algorithm” for how a recommendation engine will work. As someone whose job it is to design similar things, I can tell you it explains pretty much nothing of substance. It does, however, include many concepts that would be part of something like this, with fuzzy boxes and arrows that make very little sense, plus some minor trivial parts you can assume from the problem description itself. It’s all just weird and confusing. And “confusing” not in the “skill issue” sense.

    So let’s see what this suggested algorithm is.

    1. It starts out with “user requests the feed”, and depending on whether or not you have “preference” data (prior interests or choices, etc.), you give either a selection based on something generic, or something that you can base recommendations on. Well… sure. So far, silly and trivial.

    2. “Scoring and ranking engine”. And below this, a pie diagram with four categories. Why are there lines between only the two top categories and the engine box? Seems weird, but OK. I suppose all four are equally connected, which would be clearer without the lines. Also, what are the ratios here? Weights for importance, of some sort? “Time-Decayed”? I hope that’s not the term that stuck for measuring retention/attention time. (There’s a sketch of the least-weird reading of all this after the list.)

    3. The three horizontal “Source Streams” arrows coming in from the left are all just weird. The source streams are going to be… generated content, no? But let’s give it the benefit of the doubt and assume it’s suggesting that, given generated content, some of it might be considered relevant for “personal preference” and gets a “filter: hidden creators”. Still, none of that makes any sense; the scoring and ranking engine is already suggested to do this part… The next one is “Popular (high scores) filter: bloom filter (already seen)”, which mixes concepts. A bloom filter is the perfect thing to confuse an LLM, because it has nothing to do with filters in the exact same context “filters” was used in for the stream above. Something intelligent wouldn’t make this mistake. But it does statistically parrot its way into suggesting that a bloom filter might have something to do with a cost-effective predicate function that could make sense for a “has seen before” check. However, why is this here?
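    For what it’s worth, if I squint, the least-weird reading of boxes 1–3 is something like the sketch below. Every weight, decay rate, and field name here is my own guess, not anything Loops has published, and `seen` is the toy bloom filter from my other comment. Note how the whole “already seen” box collapses into one line of code:

```python
import math
import time

# Hypothetical weights for the four pie categories. Pure guesswork on my part.
WEIGHTS = {"engagement": 0.4, "recency": 0.3, "preference": 0.2, "diversity": 0.1}

def score(candidate, user_prefs, now=None):
    now = now or time.time()
    age_hours = (now - candidate["created_at"]) / 3600
    signals = {
        "engagement": candidate["engagement"],        # likes/views, normalized 0..1
        "recency": math.exp(-age_hours / 24.0),       # one guess at "time-decayed"
        "preference": user_prefs.get(candidate["topic"], 0.0),
        "diversity": candidate["novelty"],            # penalize more-of-the-same
    }
    return sum(WEIGHTS[k] * v for k, v in signals.items())

def build_feed(user_prefs, candidates, seen, n=20):
    # Box 1: no preference data => generic/popular fallback.
    if not user_prefs:
        return sorted(candidates, key=lambda c: c["engagement"], reverse=True)[:n]
    # The entire "bloom filter (already seen)" box, as one predicate.
    fresh = [c for c in candidates if not seen.might_contain(c["id"])]
    # Box 2: weighted scoring, then rank.
    return sorted(fresh, key=lambda c: score(c, user_prefs), reverse=True)[:n]
```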

    I’ll just leave it at that. This infographic would make a lot of sense if it was created by some high schoolers who were tasked with doing something like this: they came up with some relevant-sounding concepts and didn’t fully understand any of them. Which is also exactly the kind of stuff LLMs do.

    I don’t think Loops hired a bunch of kids, so LLM it is.

    And the line “Our new For You algorithm is pretty complex, so we created this infographic to make it easier to understand!” doesn’t help the case against LLM either. There are many complex parts to a recommendation engine, but none of the things in this infographic explain or illuminate those complex parts…

    But I might be wrong, and this is their earnest attempt at explaining how their algorithm works. In which case, they are just bad at either explaining it or designing it, most likely both. Then again, if I’m right and this was generated by an LLM, it still gives the same impression, but leaves some room for “someone who isn’t technical asked an LLM and phoned it in because it looked cool, and people who don’t know any better will think so too!”




  • It might seem like it, especially for late diagnoses, but I don’t think the ratio of individuals with ADHD is increasing.

    They are getting hit with an information overload unlike anything previous generations have had to deal with. And what was possible to cope with earlier in life now, in mid-life, results in a choice between a mental breakdown from exhaustion or medication that helps deal with the symptoms.

    There are some technological and cultural trends that exacerbate the issue, especially short-form social media, which I think governments have failed to protect the younger generation from. It’s not like it’s an easy thing to fix. Try banning sugar for the sugar-addicted children whose sense of identity and self-worth is made out of sugar. Not to mention capitalistic forces salivating over how dirt cheap and easy it is to manipulate them en masse.