I’ve been thinking about this for a while, and would love to have other, more knowledgeable (hopefully!) opinions on this:
I’ve been dwelling on how we might enforce some set of rules, or widely agreed-upon “morals,” on artificial general intelligence systems. Such a system should almost certainly be distributed, to prevent any single entity (a government, private individual, corporation, or AGI system itself) from seizing control of it, and it should allow a potentially growing set of rules or directives that no single actor could edit or control, at least in theory.
What other considerations would need to be made? Is this a plausibly good use of this technology?