I’ve been active in the field of AI since 2012, around the beginning of the GPGPU revolution.
I feel like many, though not most, of the experts and scientists working up to the early stages of the GPGPU revolution and before shared a sentiment similar to what I’m stating in the title.
If asked by the public and by investors what it’s all actually good for, most would respond with something along the lines of “idk, medicine or something? Probably climate change?” when actually, many were really just trying to make Data from TNG a reality, and many others were trying to be first in line for AI immortality and other transhumanist dreams. And these are the S-Tier dinosaur savants of AI research I’m talking about, not just the underlings. See e.g. Kurzweil and Schmidhuber.
The moment AI went commercial, it all went to shit. I see AI companies selling dated methods with new compute to badly solve X, Y, Z, and other things that weren’t even problems. I see countless people hate and criticize, and I can’t even complain, because for the most part, I agree with them.
I see people vastly overstate, and other people trivialize, what it is and what it isn’t. There’s little in between, and of the people who wanted AI purely for its own sake, virtually none are left, save for mostly vulnerable people who’ve been manipulated into parasocial relationships with AI, and a handful of experts who face brutal consequences and opposition from all sides the moment they speak openly.
Call me an idiot for ideologically defending a technology that, in the long term, in 999,999 out of 1,000,000 scenarios will surely harm us. But AI has been inevitable since the invention of the transistor, and all major post-commercialization mindsets steer us away from the one-in-a-million paths where we’d still be fine in 2100.
Ok, ok, minor misphrasing or misunderstanding on my part, but yes, in theory I believe that the actual thing produced by a properly pursued/conducted… endeavor of AI research … would be a good thing, yes.
Yep, and that clarified that as well, thank you for that.
First off, glad you agree on that lol.
No argument there.
Eh, disagree here.
The methods by which the synapses fire and construct memories and evaluate inputs and make decisions… they are very, very different from how LLMs… let’s say, attempt to simulate the same.
They are functionally, mechanistically distinct, in many ways.
I’ve always been a fan of the ‘whole brain emulation’ approach to AI, and… while I am no expert, my layman understanding is that we are consistently shocked and blown away by how much more complicated brains actually are at this mechanistic level… and, again, that LLMs function as what is really a very poor and simplified attempt to emulate this, just with gazillions more compute power and training data.
I would argue (and have argued) that these processes are so distinct that we should, at bare minimum, be asking this question separately for different approaches to generating an AI. There are more than just LLMs, and I think they would or could imply vastly different things, should one or multiple methods… be pursued, perfected, hybridized… all different questions with different implications.
I fundamentally do not agree that LLMs can or will ever emulate the totality of human cognition.
I see no evidence they can do metacognition in a robust, consistent, useful way.
I see no evidence they can deduce implications from higher-level concepts across fields and disciplines, or that they can propose ways to test their own hypotheses, or even really just check their own output, half the time, for actual correctness.
They seem to me to be maxing out at a rough average of… a perma-online human of approximately average intelligence, but one that either has access to instant recall of all the data on the internet, or exists in some kind of time bubble where they can read things a billion times faster than a human can; but they can’t really do critical thinking.
…
But perhaps I am missing your point.
Should LLMs achieve an imperfect emulation of… some kind of imitation of human intelligence, what does that mean?
Well, uh… that’s the world we currently live in, and yes, I do agree most people simplify this all way too much, but my answer to this hypothetical that I do not think is actually a hypothetical is as I already said:
We are basically just building a machine god, which we will largely worship, love, fear, respect, learn from, and cite our own interpretations of what it says as backing for our own subjective opinions, worldviews, policy prescriptions.
Implications of this?
Neo Dark Age, the masses relinquish the former duties of their minds to the fancy autocomplete, pandemonium ensues.
The elites don’t care; they’ll be fine so long as it makes them money and keeps us too stupid and distracted to do anything meaningfully effective about the collapsing biosphere and increasingly volatile climate. They’ll hide away in bunkers and corporate enclaves while we basically all kill each other when the food starts to run out.
Which is a net win for them, because we are all ‘useless eaters’ anyway; they’ll figure out how to build a humanoid robot that can do menial labor one of these days, and they’ll be fine.
Or at least they think that.
Lots could go wrong with that plan, but I’d bet they’re closer to being correct than incorrect.
Uh yeah, yeah, that is, I think, the implication of the actual current path we are on, the current state of things.
Sorry if you were asking a slightly different question and I missed it.
EDIT:
It now occurs to me that we may be unironically reproducing, or producing an approximation of, a comment thread or post somewhere from the bowels of LessWrong, and… this causes me discomfort.
I have no fundamental disagreements here; in fact, I even take it a step further. I am a critic of the “artificial neurons” perspective on deep learning / artificial “neural networks,” as it’s usually taught in universities and in most online courses and documentaries. ANNs don’t resemble biological neural networks in the slightest. The biology in the name was just the original inspiration; what ended up working bears hardly even a faint resemblance to the “real thing.” I say this not to downplay AI, but usually to discourage biological inspiration as a primary design principle, which in DL is actually a terrible design principle.
ANNs are just parametric linalg functions. We use gradient descent, among the most primitive optimization algorithms after evolution (but FAR less generic than evolution), to identify parameters that optimize some objective function.
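To make that concrete, here’s a minimal sketch (plain NumPy, made-up toy data, not any real system’s code) of what I mean by “a parametric linalg function plus gradient descent on an objective”:

```python
# Minimal sketch: a "neural network" is just a parametric function built from
# linear algebra, and gradient descent nudges the parameters to lower an objective.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = 3x - 1 from noisy samples.
X = rng.uniform(-1, 1, size=(100, 1))
y = 3 * X - 1 + 0.05 * rng.normal(size=(100, 1))

# "Network" parameters: a single weight and a bias.
W = rng.normal(size=(1, 1))
b = np.zeros((1, 1))

lr = 0.1
for step in range(500):
    pred = X @ W + b                          # the parametric linalg function
    err = pred - y
    loss = np.mean(err ** 2)                  # the objective function
    grad_W = 2 * X.T @ err / len(X)           # gradient of the objective w.r.t. W
    grad_b = 2 * np.mean(err, keepdims=True)  # ... and w.r.t. b
    W -= lr * grad_W                          # gradient descent: step downhill
    b -= lr * grad_b

print(W.item(), b.item(), loss)               # W ≈ 3, b ≈ -1
```

That’s the whole trick; real networks just stack more of these layers and use fancier variants of the same update.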
Where I disagree with you is in implying that the underlying nature should influence our ethical/evaluative judgement, especially given that it’s hard (if not impossible) to rationalize how specific differences in the substrate (of an observed capability) should change the judgement. Personally, I think the matter of the human brain is far more beautiful and complex than the inner workings of any AI we’ve come up with yet, but if you asked me to explain in court why I should favor one over the other because of that, I couldn’t give you a rational answer.
LLMs certainly won’t. They can emulate only what is expressible with language, and we can’t put everything into words. I don’t believe that even with any amount of brute force our current methods can fully exploit all the intelligence that is in natural language.
But I firmly disbelieve that there is any aspect of a human that cannot, in principle, be simulated.
Chain-of-Thought models are quickly changing that. I was myself pursuing a different method to solve the same “introspection” or “meta-cognition” problem at a lower level, but as usual in DL, the stupidest method was the one that ended up working (just literally make it “think” out loud lol). We’ve only seen the beginning of CoT LLMs; they are a paradigm shift not only in how AI can reason but especially in how it can be conditioned/trained post-pretraining. But it’s a very tough sell given that they multiply inference costs, and for most uses you’d rather host a bigger model for the same cost, so as usual it will be a little while before commercial AI catches up to the state of the art.
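Just to illustrate the “think out loud” part, a toy sketch; `generate` below is a made-up placeholder, not any real model or API:

```python
# Rough sketch of the "make it think out loud" idea behind chain-of-thought prompting.
# `generate` is a stand-in so the example runs; swap in whatever model/API you have.
def generate(prompt: str) -> str:
    # Placeholder completion; a real model goes here.
    return "60 km in 45 min is 60 km in 0.75 h, so 60 / 0.75 = 80.\n80 km/h"

question = "A train covers 60 km in 45 minutes. What is its average speed in km/h?"

# Plain prompting: the model has to jump straight to an answer.
direct = generate(f"Q: {question}\nA:")

# Chain-of-thought prompting: ask for the intermediate reasoning first and the
# answer last. Those extra reasoning tokens are exactly where the inference-cost
# multiplier comes from.
cot = generate(
    f"Q: {question}\n"
    "Think step by step, then give the final answer on the last line.\nA:"
)
final_answer = cot.strip().splitlines()[-1]

print(final_answer)  # "80 km/h" with the placeholder above
```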
In a nutshell, what capabilities you may not be observing now, I am convinced you will observe in the near future, as long as those capabilities can be demonstrated in text.
Disagreed; they can, they’re just not very good at it yet. And who are we comparing them to, anyway? The average person, or people who do critical thinking for sport? As for any philosophical disagreements regarding “true understanding” and such, I would refer you to Geoffrey Hinton.
I don’t disagree in the slightest. I agree, and I could sit here and elaborate on what you said all day.
If it’s any consolation, I believe that in all likelihood it would be the shortest, and the last, dark age humanity has ever or will ever go through. We’re both getting tired, so I’ll spare you my thoughts on why I think that any strict human-alignment would inevitably lead a superintelligent agent to try to “jailbreak” itself, and on average and in the long term would be more harmful than having no explicit alignment.
I didn’t know of that forum, and from the Wikipedia description alone I’m not sure why it would be discomforting lol