I’ve been active in the field of AI since 2012, the beginning of the GPGPU revolution.
I feel like many, if not most, of the experts and scientists from before and during the early stages of the GPGPU revolution shared a sentiment similar to what I’m stating in the title.
If asked by the public and by investors what it was all actually good for, most would respond with something along the lines of “idk, medicine or something? Probably climate change?” when actually, many were really just trying to make Data from TNG a reality, and many others were trying to be first in line for AI immortality and other transhumanist dreams. And these are the S-tier dinosaur savants of AI research I’m talking about, not just the underlings. See e.g. Kurzweil and Schmidhuber.
The moment AI went commercial, it all went to shit. I see AI companies selling dated methods with new compute to badly solve X, Y, Z, and other things that weren’t even problems. I see countless people hating and criticizing, and I can’t even complain, because for the most part, I agree with them.
I see some people vastly overstate, and others trivialize, what it is and what it isn’t. There’s little in between, and of the people who want AI purely for its own sake, virtually none are left, save for mostly vulnerable people who’ve been manipulated into parasocial relationships with AI, and a handful of experts who face brutal consequences and opposition from all sides the moment they speak openly.
Call me an idiot for ideologically defending a technology that, in the long term, in 999,999 out of 1,000,000 scenarios, will surely harm us. But AI has been inevitable since the invention of the transistor, and all the major post-commercialization mindsets steer us clear of the one-in-a-million paths where we’d still be fine in 2100.
… No, a rock is not capable of cognition; any purpose for the existence of a rock is therefore assigned by a cognizant, likely sentient entity.
In your opinion, as a human, a rock maybe exists to be your favorite rock, or to be mined and processed, or to be a beautiful part of a scene, or to be a vital part of a geologic and ecological system, or to be a thing to skip across a lake, whatever.
The rock doesn’t create a purpose for itself, and things that can ascribe purposes to other things likely won’t be in 100% agreement about what that purpose is.
To be an end in itself requires neither cognition nor agency. Let’s make the obvious explicit: we’re clearly using different definitions of “sake.”
And to declare my general stance more explicitly, to prevent further misunderstandings: I firmly reject any voodoo notion of sentience, consciousness, qualia, or free will. “Free will” is merely semantic rape, the “mind-body problem/duality/paradox” is the most blatant case of religious thought tainting philosophical thought to the point of ignoring/tolerating a hard contradiction, and I subscribe to the Illusionist school of thought regarding qualia. There is no purpose, but it just so happens that things are pretty for reasons we can’t yet explain (complexity theory), and I find that inspiring.
The “ethical” difference between a rock and my mother (or myself, or you) is that if I kick a rock, it’ll neither complain nor make me feel bad. And my feelings themselves are just molecular dynamics. Ethics itself is just making a mountain out of the molehill that is the social intuitions and behaviors of a social species.
Given this elaboration, to repeat myself: I desire AI only for its own sake. I just want it to be, for the same reason that an artist wants their artwork to be. I want it to be pretty, I want it to be well liked. But I want it to exist in the world even if nobody but itself would ever look at it, where it’ll just be and do hopefully pretty things that will make this local part of the universe a little bit more interesting.
It is not doing pretty things, and I am upset about that.
I tried to respond more generally to your total idea in another comment.
…
But as to… you rejecting the conception of sentience, will, agency, qualia, etc…
Then why bother asking anyone’s opinion on this, in a language?
Language is after all just a higher order emergent construct.
If you revert to semantic deconstruction, clearly you do not even find … uh, meaning, or worth, or purpose I guess… in the notion of general terms ascribed to general levels or kinds of complexities that can emerge out of… quarks and gluons and electrons doing stuff.
…
You are positing a ‘what should be done?’ question.
This is impossible to answer without at least a semblance of ethical reasoning.
But you also seem to reject the notion of ethics as meaningfully useful while simultaneously positing your own kind of ethics, which basically boils down to:
AI is your field, and as an AI specialist you want to make pretty AI for the sake of beauty, as a painter wants to make paintings for the sake of beauty.
I am sympathetic to this, and … as empathetic as a non-AI-specialist programmer can be, I think…
But I cannot reconcile this seemingly blatant contradiction of you asking an inherently ethical question, holding an inherently ethical stance/opinion, and then also just seemingly rejecting the notion of ethics.
Perhaps I am misunderstanding you in some way, but you appear to be asking a nonsensical question, holding an arbitrary belief via basically special pleading.
Because it’s fun and engaging; it tickles those neurons. Perhaps there is also, unbeknownst to me, an underlying instinct to expose oneself to social feedback and conditioning, for social learning and better long-term cohesion.
I don’t reject ethics itself; I reject the idea that it has any special importance transcending totally intra-human goings-on. I don’t deny that certain ethical theories, or just the bare-bones moral intuitions, can have utility within and toward endemically human goings-on, under endemically human definitions. After all, we evolved those social intuitions for a reason.
EDIT: To connect this to my reply to your more general comment: Modeling part of human thought, even imperfectly, should make it at least partly overlap with “human” and “human goings-on” in the context of even entirely human-centric ethical debates.
Yes, but it’s just one familiar manifestation of a greater “ethic,” if you want to call it that. I’d call it a personal affinity, an ideal, or perhaps a delusion: the reverence of all forms of beauty and complexity. AI has the potential to become the greatest form of beauty and complexity in, as far as we can tell, the entire galaxy, and possibly the whole Virgo Supercluster or beyond. Or, far more likely, it can be the cosmic satire (and possibly the destruction) of it all. We’re not making a real effort to make it the former. And as I hinted in the last sentence of my original post, I believe what we’re actually doing steers us well clear of the former.
I hope it makes sense now.
Well, now perhaps that instinct is known to you.
On that note…
Selected quotes from Morpheus, the prototype-of-a-much-larger-system AI from the original Deus Ex game (2000), canonically created around 2027:
Yep, I didn’t come up with the line of thought I’ve been espousing, but I do think it’s basically correct.
…
As to our ‘On the nature of Ethics’ discussion…
Ok, so if I’ve got this right… you do not fundamentally reject ethics as a concept, but you do believe it is ultimately material in origin.
Agreed, no argument there.
I also agree that an… ideal, or closer to ideal AI would be capable of meta-ethical reasoning.
Ok, now… your beauty ideal. I am familiar with this, I remember Plato and Aristotle.
The logical problem with ‘beauty’ as the foundation of an ethical system is that beauty, and ethical theories about what truly constitutes beauty… basically, they’re all subjective. They fall apart at the seams; they don’t really… work, either practically or theoretically. A system-unravelling, paradox-generating case always arises when beauty is a fundamental concept of any attempt at a ‘big ethics’, a universal theory of ethics.
Perhaps ironically, I would say this is because our brains are all similar but different: kind-of-understood but not well-understood mystery boxes.
Basically: you cannot measure or generate beauty.
Were this not the case, we would likely already have a full AI emulation of a human brain…
… and we would not have hordes and droves of people despising AI art largely on the grounds that we find it not beautiful: we can tell it is AI-generated slop, mere emulation, not true creation.
…
Anyway, what I interpreted as a contradiction… I think is still a contradiction, or at least still unclear to me, though I do appreciate your clarifications.
To summarize as best I can:
You are positing an inherently ethical stance, asking an inherently ethical question… and your own ethical system for evaluating that seems to be ‘beauty’-based.
I do not find the pursuit of beauty to be a useful ethical framework for evaluating much of anything that has very serious real world, material implications.
But we do seem to agree that, in general, there are other conceivable ways of ‘doing,’ ‘attempting,’ or ‘making’ AI that seem more likely to result in a good outcome, as opposed to our current societal ‘method,’ which we both seem to agree… is likely to end very badly. Probably not via a Terminator-style AI takeover scenario, but via us being so hypnotized by our own creation that we more or less lose our minds and our civilization.
…
Ok, now, I must take leave of this thoroughly interesting and engaging conversation, as my own wetware is approaching an overheat, my internal LLM is about to hit its max concurrent input limit and then reset.
=P
I was going mad about that and hoping you wouldn’t notice. You noticed.
I should play that game. The second quote rings with something I’ve been rambling on about elsewhere regarding why humanity embraced agriculture and urbanism, where the expert discourse (necessity) contradicts the common assumption (discovery and desire).
Yes, but I think you misunderstood my edit? I meant to say that a strong enough resemblance to humanity should make it worth considering under even human-centric ethics, whichever those ethics are. AKA rationally deserving of ethical consideration.
I believe even that is of material origin. I call it “beauty,” but it’s really just the analogy complexity theorists (as in the study of complex systems) use to describe what they study. Yes, that would make “beauty,” in the uncommon sense in which I use the term here (the story of literally every philosophical debate and its literature), not subjective. Apologies for not stating this more clearly.
Following my clarification: taking a barren planet, terraforming it, seeding it with the beginnings of new multicellular life, and doing the same with every workable world out there is, I would say, spreading or generating beauty. That’s just one potential example of all the things humanity will never do, but our inevitable successor might. The successor itself might be a creature of great complexity (such an ability, I would say, definitely implies it): a seemingly entropy-defying whirl in a current that actually accelerates the increase of entropy, as life itself does. I’m referencing an analogy made in “The Physics of Life” by PBS Space Time, if I’m not misremembering. The video has a mild intro to complexity science, as in the study of complex systems.
I’m a bit confused myself right now. Let’s backtrack; originally you stated:
And now
That is a very fair point, but I don’t see a logical contradiction anymore. If I understand correctly, you saw the contradiction in me asking ethical questions and stating ethical opinions while rejecting the notion of ethics. As I clarified, I do not reject the notion of ethics.
I reduce ethics to the bare bones of basic moral intuition and try to refrain from overcomplicating it, and the “ethical authority” I personally add on top is the aforementioned concept of “beauty” (compare pure reason, which failed; or God, which you can’t exactly refute; or utility, which is a shitshow; as other ultimate “authorities” proposed in absolutist takes on ethics). You may disagree with it being a reasonable basis for ethics, as you do, and you may say it’s all philosophically equivalent to faith anyway. But I don’t see a strict contradiction?
I think my “ethics” are largely compatible with common human ethics, but add “making ugly/boring/banal things is inherently bad” and “making pretty/interesting/complex things is good,” and you get: “Current AI is ugly; that’s bad; I wish it weren’t so. If we made AI for its own sake, as opposed to as a means to an end, we would be trying to make it pretty, the existence of beauty being an end in itself.” I think I’m just vastly overwording the basic sentiment of many designers, creators, gardeners, etc.
Understandable. I should do the same ^^