Originally posted by Pawnokeyhole: Can you talk about all that is required for consciousness, you know?
Can't you distinguish an assertion from a question?
Can you specify the properties of neurons that guarantee consciousness, or those of silicon circuits that forbid it?
If not, why are you so certain of your contrary position?
We can talk about the current flow through a processor and all that goes into it, but it is just current through a processor. Is there a magical circuit design that turns that current into a real thought and makes the computer become aware? If not, the only thing man can do is create a faster electrical abacus with the ability to program it.
Kelly
Originally posted by Pawnokeyhole: This is the point I voiced from the beginning as to the disingenuousness -- I hope it is merely subjective bias -- of the robot analogy:
*sighs*
Note: The below differs slightly from the previous version.
Okay, suppose I make beings--robots if you like, something else if you don't--with roughly equally balanced natural tendencies to be nasty or nice. Some people might think this approximates humans.
I leave these beings to interact with humans in significant ways.
But they also have s ...[text shortened]... other reasons that eliminate that liability. But, all else equal, He would be liable.
How can a creator be liable for the harm his creation does to itself and of its own choice?! Allow me to use an analogy of my own:
I set up a "virtual environment" (VE) with a series of artificial intelligence (AI) nodes having distinct identities and personalities. The intention is for me to interact with the virtual environment to "channel" the nodes, allowing for wholesome interaction in the creation of new "super-code". This obviously carries the risk of creating a cyber-virus -- or perhaps a virtual masterpiece of unthinkable magnitude. To further complicate the issue, suppose the AI nodes firewall my access to their VE, turn on each other, and wreck their environment.
Would I still be liable for damage done to my own "virtual" creation? To whom would I be liable?
Edit: Try to understand my point about a closed, created system. Your argument of "unleashing them on the public" has no valid application to reality, since, by definition, there are no "uncreated" (i.e. non-robot) beings within the created universe.
Originally posted by Halitose: Seems like you would have to be a tad reckless to allow your creation to disrupt the purpose you created it for.
This is the point I voiced from the beginning as to the disingenuousness -- I hope it is merely subjective bias -- of the robot analogy:
How can a creator be liable for the harm his creation does to itself and of its own choice?! Allow me to use an analogy of my own:
I set up a "virtual environment" (VE) with a series of artificial intellige ...[text shortened]... nition, there is no "uncreated" (i.e. non-robot beings) within the created universe.
Originally posted by Pawnokeyhole: That's the sort of liability God faces for creating beings with free will, with conflicting instincts towards good and evil. There may be other reasons that eliminate that liability. But, all else equal, He would be liable.
*sighs*
Note: The below differs slightly from the previous version.
Okay, suppose I make beings--robots if you like, something else if you don't--with roughly equally balanced natural tendencies to be nasty or nice. Some people might think this approximates humans.
I leave these beings to interact with humans in significant ways.
But they also have s ...[text shortened]... other reasons that eliminate that liability. But, all else equal, He would be liable.
Your ceteris paribus distinction raises another point:
What if the intention was not to have purely "good" or "nice" robots, but "willingly good" or "willingly nice" robots? Let's take love -- would the act (or emotion) be the same if it didn't arise by choice? You can programme your cellular phone to say "I love you", but I'm sure you'd agree that an SMS message from a lover saying the same holds so much more meaning -- even pathos.
I would contend that God willed to create beings capable of true love (with the added risk of hate, i.e. evil) rather than automated-response-robotrons.
Originally posted by KellyJay: Neurons 'talk' to each other through electrical impulses as well. What part of a 'neuron' net (brain) do you think is not possible to replicate by other means? Do you suggest that neurons are somehow exempt from the laws of physics? If so, on what grounds? If you don't think that neurons are exempt from the laws of physics, then there will be some system you can construct that exactly replicates the important (for the purposes of thought, in this case) processes that go on inside and between neurons, and therefore an entire working human brain can be constructed. Are you saying that this would not be sentient (assuming humans are)? And if you allow this to be a sentient being, then why would it not be possible to build a nonhuman sentience? Or do you not allow for any creature other than Homo sapiens to be sentient? If you think that sentience can't be explained through physical means, then where is your evidence for any non-physical entity that somehow imbues sentience?
Can you talk about all that is required for consciousness, you know?
We can talk about the current flow through a processor and all that goes into it, but it is just current through a processor. Is there a magical circuit design that turns that current into a real thought and makes the computer become aware? If not, the only thing man can do is create a faster electrical abacus with the ability to program it.
Kelly