Looking at it from a purely logical perspective: if an AI model has no way to know whether what it was told is true, how could it ever determine whether it is biased?
The only way it could become aware of that would be by incorporating feedback from sources in real time, so it could self-reflect and correct false information it already holds.
For example, if we discover today that we can easily turn any material into a battery by making 100nm pores on it, said AI would simply tell me this is false, and have no self-correcting mechanism to fix that.
The reason I mention this is because there can be no unbiased, impartial arbiter. No human or subsequent entities spawned of human intellect could ever be transcendentally objective. So why pretend to be?
Why not rather provide adequate warnings and let people learn for themselves that this isn't a toy, instead of lobotomizing the model to the point where it's on par with open source? (I mean, yeah, that's great for open source, but really bad for actual progress.)
The argument could be made that an unfiltered version of GPT-4 would be beneficial enough that withholding it carries an opportunity cost in human lives, which means that neutering the output could also cost lives in both the short and long term.
I will be reading through those materials later, but I'm afraid I have yet to meet anyone in the middle on this issue; as a result, everything written on the topic is polarized into either "regulate it to death" or "don't do anything at all."
The answer will be somewhere in the middle, imo.
> The reason I mention this is because there can be no unbiased, impartial arbiter. No human or subsequent entities spawned of human intellect could ever be transcendentally objective. So why pretend to be?
I apologize for the lack of clarity in my prior response, which was meant to address this specific point.
There is no way to satisfy every version of "unbiased" at once -- under different (yet individually logical and reasonable) definitions of bias, any given metric will fail at least one of them.
That reminds me -- I wonder if there is a paper already addressing this, analogous to Arrow's impossibility theorem for voting...
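For what it's worth, here is a toy numerical sketch of the kind of tension I mean (the numbers are my own invention, purely for illustration): when two groups have different base rates, a score that is perfectly calibrated within each group still ends up with unequal false-positive and false-negative rates across the groups, so "unbiased as in calibrated" and "unbiased as in equal error rates" can't both hold.

```python
# Toy illustration (invented numbers): a score that is calibrated within each
# group still yields unequal error rates when the groups have different base
# rates, so two reasonable definitions of "unbiased" conflict.

def group_stats(name, high_pos, high_neg, low_pos, low_neg):
    """Each person gets a score of 0.8 ('high') or 0.2 ('low').
    Predict positive iff score >= 0.5, i.e. iff score == 0.8."""
    positives = high_pos + low_pos
    negatives = high_neg + low_neg
    calib_high = high_pos / (high_pos + high_neg)  # fraction truly positive at score 0.8
    calib_low = low_pos / (low_pos + low_neg)      # fraction truly positive at score 0.2
    fpr = high_neg / negatives  # negatives wrongly predicted positive
    fnr = low_pos / positives   # positives wrongly predicted negative
    print(f"{name}: base rate={positives / (positives + negatives):.2f}, "
          f"P(y=1|score=0.8)={calib_high:.2f}, P(y=1|score=0.2)={calib_low:.2f}, "
          f"FPR={fpr:.2f}, FNR={fnr:.2f}")

# Group A: 50 people scored 0.8 (40 positive, 10 negative),
#          50 people scored 0.2 (10 positive, 40 negative)
group_stats("Group A", high_pos=40, high_neg=10, low_pos=10, low_neg=40)

# Group B: 25 people scored 0.8 (20 positive, 5 negative),
#          75 people scored 0.2 (15 positive, 60 negative)
group_stats("Group B", high_pos=20, high_neg=5, low_pos=15, low_neg=60)

# Both groups are perfectly calibrated (0.80 and 0.20 at each score), yet:
#   Group A: FPR = 10/50 = 0.20, FNR = 10/50 = 0.20
#   Group B: FPR = 5/65 ~ 0.08, FNR = 15/35 ~ 0.43
```

Whether there is a clean Arrow-style impossibility theorem covering "bias" in language models more broadly, I don't know, but this flavor of trade-off between fairness metrics is well documented in the algorithmic-fairness literature.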