I don’t understand this idea completely myself, but it’s an evolved form of technocracy with autonomous systems. Please suggest some articles to read up on, because in the field of politics I am quite illiterate. So it goes like this:
- Multiple impenetrable, isolated AI expert systems that make rule-based decisions (unlike black boxes, e.g. LLMs).
- All of them contribute to a notion, and the decision is picked much like in a distributed system, for fairness and equality.
- Then humans are involved, but they too are educated, elected individuals, and there are clauses that stop them from gaming the system and corrupting it.
- These human representatives can either pick from a list of decisions produced by the AI systems, support the notion already given, or drop it altogether. They can suggest notions and let the AI render them, but humans can’t create notions directly. (A rough sketch of this flow is below.)
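To make the flow concrete, here is a minimal sketch of what I have in mind. The function names, the majority-pick rule, and the set of human actions are placeholders of my own, not a finished design:

```python
from collections import Counter

# Each isolated expert system applies its own hard-coded rules to a notion
# and returns a recommended decision (a plain string here, for simplicity).
def run_expert_systems(notion, expert_systems):
    return [system(notion) for system in expert_systems]

# The "distributed" pick: the decision backed by the most systems wins,
# similar to a quorum vote in a distributed system.
def pick_decision(recommendations):
    decision, votes = Counter(recommendations).most_common(1)[0]
    return decision, votes

# Human representatives can only accept the picked decision, choose one of
# the other AI-generated alternatives, or drop the notion entirely; they
# never author a notion themselves.
def human_review(action, picked, alternatives, alternative=None):
    if action == "accept":
        return picked
    if action == "pick_alternative" and alternative in alternatives:
        return alternative
    return None  # "drop": the notion is discarded

# Example with three hypothetical expert systems voting on the same notion.
experts = [lambda n: "approve", lambda n: "approve", lambda n: "reject"]
recommendations = run_expert_systems("farming subsidy pilot", experts)
picked, votes = pick_decision(recommendations)        # ("approve", 2)
final = human_review("accept", picked, set(recommendations))
print(final)  # approve
```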
Benefits:
- Generally speaking, because of the way the system is programmed, it won’t dominate or suppress, and most of its actions will be justified by logic that puts human lives first and human profit second.
- No wars will break out, since it isn’t human greed that holds the power.
- Defence against non-systemized states would be taken care of by military and similar AI expert systems, but the AI will never plan to expand or compromise a human life for the sake of offense.
Cons:
- Attackers could exploit security vulnerabilities and take down the cornerstone of the government.
- No direct representation of humans, only representation via votes on notions and suggestions to AI
- Might end up in an AI-apocalypse situation or something, I don’t know.
These thoughts are still new to me, so I typed them out as a way of thinking on paper. Hence, I am taking suggestions for this system!
tl;dr: let AI rule us, because a hard-coded, rule-based decision maker is better than a group of humans whose intents can always be masked and unclear.
Two problems. First, AI gives answers built from models fed by training data. Where are you going to get this training data? How will you ensure that the data itself doesn’t have bias baked into it? This has already been a problem, because the data we’ve fed into AI models for training reflects our human (often misogynistic and racist) notions. As an example: let’s say your AI model is designed to select the best person for President of the United States. The training data we’d have is: all past US presidents.
As we have yet to elect a woman President, one obvious criterion the model will incorporate is: “candidate must be male”.
Obviously we know that’s not the case, but all of our past behavior has shown this is a requirement for the Presidency, and that is all the model knows.
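To make that concrete, here is a toy sketch with invented data (not a real model or real records): a decision tree trained on a dataset where every past winner happens to be male will happily learn gender as the deciding feature.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: [is_male, years_in_politics]. Every "winner" in this
# made-up dataset is male, mirroring the historical record's skew.
X = [
    [1, 8], [1, 12], [1, 20], [1, 4],    # past winners (all male)
    [0, 15], [0, 25], [0, 10], [0, 30],  # equally qualified non-winners
]
y = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = "won the presidency"

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(model, feature_names=["is_male", "years_in_politics"]))
# The tree splits on is_male alone: the model has "learned" that being male
# is the requirement, because that is all the data ever showed it.
```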
A second problem, even after the model is built, is introduced bias. That is, the operator of the model, especially of a non-black-box AI, is able to change the weights of which factors are considered more or less important in the final answer. Show me who, in your AI technocracy, will control the introduced bias, and I’ll show you who is actually making the decisions.
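In other words (a deliberately simplified sketch; the factors, candidates, and weights are all made up), whoever sets the weights sets the outcome:

```python
# A fully transparent, rule-based scorer: every factor and weight is visible.
# But the operator chooses the weights, so the operator chooses the winner.
def score(candidate, weights):
    return sum(weights[factor] * value for factor, value in candidate.items())

candidates = {
    "A": {"experience": 9, "education": 6, "public_support": 4},
    "B": {"experience": 4, "education": 7, "public_support": 9},
}

neutral = {"experience": 1.0, "education": 1.0, "public_support": 1.0}
tweaked = {"experience": 3.0, "education": 1.0, "public_support": 0.1}

for weights in (neutral, tweaked):
    best = max(candidates, key=lambda name: score(candidates[name], weights))
    print(best)  # "B" under neutral weights, "A" after the operator re-weights
```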
Who is determining what “fair” and “equal” are? We have seen there are certain groups trying to push the stupid notion that white people should be in charge. If these morons are included in the group that decides what “fair” or “equal” is, then the resulting AI answers will be just as racist.
This bolded part is the absolute hardest part, and your whole description kind of handwaves it away. The whole of humanity has been looking for a system of leadership that is incorruptible. We haven’t found one yet, and your clauses here would be the magic ingredient for any system, whether or not AI is involved.
I appreciate you seeing a problem and trying to propose a solution. Don’t let me stop you, but incorporate my feedback and others’ into your thoughts and see where it takes you. I’d love for you to find something we could use.
Also, I am a computer science student; I just had distributed systems, machine learning, deep learning, and artificial intelligence in my coursework last semester. Though I don’t have tons of practical experience here, I have a basic theoretical foundation.
By distributed systems, I was referring to an architecture similar to a blockchain, with its fault tolerance and leader-election algorithms. (A toy example of what I mean is below.)
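This is only a toy illustration of failover via leader election (a simple highest-ID election among responding nodes, not a real Byzantine-fault-tolerant protocol):

```python
class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.alive = True  # a crashed or compromised node stops responding

# Bully-style election: among the nodes still responding, the highest id
# becomes leader, so the network keeps functioning when nodes go down.
def elect_leader(nodes):
    alive = [n for n in nodes if n.alive]
    if not alive:
        raise RuntimeError("no healthy nodes left")
    return max(alive, key=lambda n: n.node_id)

nodes = [Node(i) for i in range(6)]   # e.g. one system per state
nodes[5].alive = False                # the current leader gets taken down
print(elect_leader(nodes).node_id)    # a new leader (node 4) is elected
```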
First: I believe a simple predicate logic could determine ethics; a rough example is below. Then a GAN would simulate situations and provide feedback on them. Humans would monitor this and keep track of the decisions that were incorrect.
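A minimal sketch of the kind of predicate rules I mean (the specific predicates and action fields are placeholders, not a worked-out ethics):

```python
# Each rule is a predicate over a proposed action; the action is permitted
# only if every rule holds. Human life outranks profit by construction.
RULES = [
    ("no_human_harm",    lambda a: a["expected_harm_to_humans"] == 0),
    ("life_over_profit", lambda a: not (a["risks_life"] and a["motivated_by_profit"])),
    ("within_mandate",   lambda a: a["within_system_purpose"]),
]

def evaluate(action):
    failed = [name for name, rule in RULES if not rule(action)]
    return ("permitted", []) if not failed else ("rejected", failed)

action = {
    "expected_harm_to_humans": 0,
    "risks_life": False,
    "motivated_by_profit": True,
    "within_system_purpose": True,
}
print(evaluate(action))  # ('permitted', [])
```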
Second: I proposed a distributed network of rule-based AI systems that poll on each notion. The polls differ because different systems lead different regions. Say our country has six states: my state wants a nationwide farming-subsidy bill to be passed, but no other system agrees on this notion, so the notion gets pulled back for review. Each state system would also have multiple local systems established under it. Even if, for some reason, some systems are pwned, the network would still stand. (A rough sketch of the polling is below.)
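Roughly like this (six hypothetical state systems; the two-thirds threshold is just an example I picked, not part of the proposal):

```python
# Each state system casts a rule-based vote on a nationwide notion. Without
# broad agreement the notion is pulled back for review instead of being
# forced through by a single region.
def poll_notion(notion, state_systems, threshold=2 / 3):
    votes = {state: system(notion) for state, system in state_systems.items()}
    in_favour = sum(votes.values())
    if in_favour / len(votes) >= threshold:
        return "passed", votes
    return "pulled back for review", votes

state_systems = {
    "state_1": lambda n: True,   # my state: wants the farming subsidy
    "state_2": lambda n: False,
    "state_3": lambda n: False,
    "state_4": lambda n: False,
    "state_5": lambda n: False,
    "state_6": lambda n: False,
}
print(poll_notion("nationwide farming subsidy", state_systems)[0])
# -> pulled back for review
```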
Third: In the USA particularly, Trump got elected because, as major media outlets say, a lot of people didn’t show up to vote and a lot of young people jumped the gun. So let’s say this margin of error was because of unaware citizens… would the rule-based decision maker make this mistake? No, because access to information is universal and transparent, and since the AI systems receive their suggestions from humans, they won’t have the problem of making wrong calls, as the system is transparent.
Fourth, for the handwaving part: The clauses I meant were:
Yes, I do believe this is a lot of handwaving and fictitious ideas, but I think humans can’t do surveillance correctly, and we can’t hold power without getting corrupted. So it’s better that a computer program that can’t go further than its purpose monitors us and holds the power. No single person would be more powerful than the system, yet the population would rule itself by leveraging the system.