• 3 Posts
  • 108 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • Posts must be relevant to operating systems running the Linux kernel. GNU/Linux or otherwise.
    No misinformation
    No NSFW content
    No hate speech, bigotry, etc

    In my defence, I did check the rules to see if memes were allowed!


  • webghost0101@lemmy.fmhy.ml (OP) to Linux@lemmy.ml · I use Arch by the way · edited · 1 year ago

    I am really sorry I pissed you all off. I just recently switched on a whim while I was getting super into being a Windows power user, and I swear I have nothing but love <3. I saw a really cool Hyprland interface; it was fast and beautiful. I dig that. I installed it, and except for work I have only used Windows as a virtual desktop 3 times in the month I have been doing this.


  • OK, guys, I am sorry. I was actually looking for a different meme with more of a "heck yeah" attitude, but then I stumbled onto this template and thought it would be hilarious. I sort of made the switch recently and I learned a lot. I don't wanna go back.

    I also thought we were doing old memes today or something?


  • Well, there are two things.

    First there is speed, for which they do indeed rely on many thousands of super-high-end industrial Nvidia GPUs. And since the $10 billion investment from Microsoft, they have likely expanded that capacity. I've read somewhere that ChatGPT costs about $700,000 a day to keep running.

    There are a few other tricks and caveats here though, like decreasing the quality of the output when there is high load.

    For that quality of output they do deserve a lot of credit, because they train the models really well and continuously manage to improve their systems to create even higher-quality and more creative outputs.

    I don't think GPT-4 is the biggest model out there, but it does appear to be the best that is available.

    I can run a small LLM at home that is much, much faster than ChatGPT… that is, if I want to generate some unintelligent nonsense.
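
    For anyone curious, here is a rough sketch of what "a small LLM at home" can look like, using the llama-cpp-python bindings; the model path is just a placeholder for whatever quantized model file you have downloaded:

    ```python
    # Rough sketch: running a small quantized LLM locally with
    # llama-cpp-python (pip install llama-cpp-python).
    # The model path below is a placeholder, not a file the library ships.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/7b-chat.q4.gguf")  # placeholder path

    # Fast on a home machine, but don't expect ChatGPT-level answers.
    result = llm("Q: What is the Linux kernel? A:", max_tokens=64, stop=["Q:"])
    print(result["choices"][0]["text"])
    ```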

    Likewise, there might be a way to redesign GPT-4 to run on a consumer graphics card with high-quality output… if you don't mind waiting a week for a single character to be generated.

    I actually think some of the open-source, locally runnable LLMs like LLaMA, Vicuna, and Orca are much more impressive if you judge them on quality versus power requirements.
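
    If you want to try one of those open models yourself, a minimal sketch with the Hugging Face transformers library might look like this (the exact model id is an assumption on my part, and you need a GPU with enough VRAM, or a lot of patience on CPU):

    ```python
    # Minimal sketch: trying an open local LLM via Hugging Face transformers.
    # The model id below is an assumption; substitute whichever open model
    # (LLaMA, Vicuna, Orca, ...) you actually have access to.
    from transformers import pipeline

    generator = pipeline("text-generation", model="lmsys/vicuna-7b-v1.5")
    out = generator("The main advantage of running an LLM locally is",
                    max_new_tokens=50)
    print(out[0]["generated_text"])
    ```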