I'm wondering now... if you could replace just an ant's brain with an artificial, neuron-for-neuron copy, such that, viewed from the outside, the ant would continue to behave identically in identical situations, what if you went one step further and replaced the hardware neural network with a virtual one running on a general-purpose, ant-brain-sized CPU? Or what if you left one third of the original ant's brain intact, replaced one third with artificial neurons, and virtualized the last third? Would you end up with three separate, yet closely interacting, consciousnesses?
For that matter, if you take a human being and sever the corpus callosum, the bundle of fibers connecting the two hemispheres, you end up with two separate minds, as split-brain experiments have demonstrated. Presumably you end up with two separate consciousnesses too.
If you replaced the neurons in your head one by one (say, 1% per day, over 100 days) with tiny machines functionally equivalent to neurons, what would be the effect from your point of view? To an outside observer, you would remain the same throughout.
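To make the "functional equivalence" premise concrete, here is a toy sketch (not from the original discussion, and purely illustrative): if each replacement unit computes exactly the same input-output mapping as the original, then by construction the outward behavior of the whole system never changes at any point during the gradual swap. The neuron functions and the summed "brain output" below are hypothetical stand-ins, not a model of any real neuron.

```python
def biological_neuron(x):
    """Stand-in for an original neuron: some fixed input-output mapping."""
    return max(0.0, x)  # a ReLU-like response, purely illustrative

def artificial_neuron(x):
    """Functionally equivalent replacement: same mapping, different 'substrate'."""
    return x if x > 0.0 else 0.0

def brain_output(neurons, stimulus):
    """The externally observable behavior: the summed response of all units."""
    return sum(n(stimulus) for n in neurons)

neurons = [biological_neuron] * 100        # day 0: fully biological
baseline = brain_output(neurons, 0.5)

for day in range(1, 101):                  # replace one unit (1%) per day
    neurons[day - 1] = artificial_neuron
    # Outward behavior is identical at every stage of the replacement.
    assert brain_output(neurons, 0.5) == baseline

print("After 100 days, behavior unchanged:", brain_output(neurons, 0.5) == baseline)
```

Of course, the puzzle is precisely that this behavioral invariance tells us nothing, one way or the other, about what happens to the experience on the inside.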
This line of inquiry is generally known as the Ship of Theseus argument. The underlying philosophical question, whether identity survives the replacement of parts, arises even for inanimate objects.
But the argument has also been extensively reapplied to brains, bodies, and minds. The Chinese Room thought experiment is a common reference here: a system follows purely formal rules to produce fluent responses to Chinese text, framed in a way that casts doubt on whether any understanding of Chinese is actually present.