Originally posted by scottishinnz
As I said before, if we were designed to accomplish such a task it would be possible. However, based upon our current inability to even fathom what consciousness is, I think it highly unlikely that we were designed to accomplish such a task.
How do you know this? Is it not possible that we'll recreate consciousness in a computer one day, and it'll take its future progress into its own hands?
Originally posted by scottishinnz
I’m not sure I can go there, Scott—but then again, I may be misreading your response here, taking it out of context... (In none of this am I positing a supernatural god, or the supernatural at all.)
100%
If we think in terms of logos—the logos of the Stoics, the logos of nature—then we are ourselves engendered from, and of, that logos (my axiom of non-duality). Therefore, it is reasonable to conclude that our logos is coherent with the logos of the totality. To that extent, we can certainly expect to learn and know things—perhaps the most fundamental things—about that logos/totality.
But we are nonetheless, I think, limited by perspective (I’m thinking along the lines of Nietzsche’s perspectivism here). To use another metaphor, we can only comprehend the “text” in terms of our own “grammatical” capabilities (though, again, one would think that our grammar is coherent with that of the cosmos as a whole).
No matter how close we get to mapping the totality, the only perfectly accurate map of the totality is the totality itself—which also seems to be dynamic. I see no reason to assume that ours is the singular consciousness whose cognitive capacity—whose logos—is exhaustive vis-à-vis the whole.
You might have to program the computer so that it is capable of generating its own grammar, one that transcends our own... The computer might then know everything, but that doesn’t mean that we would, even if the computer is able to translate the knowledge back into the limits of our grammar. (This is not to deny the possibility of AI, just to say that an AI whose cognitive capacities are the same as ours—even with superior computing and programming abilities—is as limited as we are; and that an AI whose cognitive capacities exceed ours still cannot give us knowledge beyond those capacities.)
On the other hand, I also see no reason to assume that what today is unknown must remain unknowable—who knows? 🙂 Nor do I have a need to posit a supernatural, or extra-natural, category in order to “preserve the mystery.” Nor do I propose a limit to inquiry. It is, however, the mystery of what we now do not know, as well as our wonder at what we do know, that spurs both philosophical and scientific inquiry, and aesthetic response—art, myth, poetry.
And that may be another aspect of the Camusian absurdity inherent in the human existential dilemma: we want to know everything, but to the extent that the inquiry itself adds richness to our lives, knowing everything could have psychologically, if not existentially nihilistic consequences.
On the other other hand—as I just riff “out loud” here—that is the very kind of nihilism that Nietzsche sought a solution for. His solution was largely a kind of passionate stoicism (amor fati) and heroic aesthetics. When the scientists and the philosophers have figured it all out—the artists will still have free play. Camus’ Sisyphus composes tunes as he walks back down the mountain...
There: now I feel better... I got the Stoics, non-duality, Nietzsche and Camus all on one page—and that’s a strange grammar.
There seems to be a fundamental split here on two different levels: firstly, that between the organic and inorganic nature of consciousness; and secondly, between the materialist version of a mind and that of the (for want of a better word) etherealist version of a mind. Allow me to indulge myself a little here. I'm neither a psychologist nor a computer programmer; nevertheless I think it's important to have a go at slicing the problem up à la Phaedrus.
The nature of consciousness seems to be the key to what we derive our humanity from; it convinces us that we are alive, individual and mostly free to act as we choose. With this come emotions, and here, for me at least, is the essence of consciousness. To act logically is easy: it requires merely an understanding of rules which, if followed, will yield a correct result. But we are not particularly logical creatures; we prefer to act emotionally, using induction and an organic appraisal of the situation; a 'feel' for it. To act inductively requires some sort of feedback system, a gauge of credibility and a contextual pyramid of past experience, and for this we use emotion a great deal. The feedback mechanism is what we call consciousness. *
If, however, we look at computers, we see logic machines: machines which require a set of rules and make decisions based upon them. There is no emotion here, no consciousness, no awareness of the situation. So it would seem impossible to give a computer the rules for an organic process, because simulating consciousness doesn't seem to be the same as actually having consciousness; it would still be a logical rule-based system. Hold that thought.
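The contrast here between a fixed rule-based system and a feedback-driven one can be sketched in code. This is only an illustrative toy—the names, the temperature example and the update step are all made up, and nobody is suggesting a shifting threshold amounts to consciousness—but it shows the structural difference: one system's rules never change, while the other adjusts itself in response to being told it was wrong.

```python
# A fixed rule-based decision: the rule never changes, no matter what happens.
def rule_based(temperature):
    # Hard-coded rule: anything above 30 degrees counts as "hot".
    return "hot" if temperature > 30 else "cold"

# A feedback-driven decision: a threshold that shifts with experience.
class FeedbackClassifier:
    def __init__(self, threshold=30.0, step=1.0):
        self.threshold = threshold
        self.step = step

    def decide(self, temperature):
        return "hot" if temperature > self.threshold else "cold"

    def feedback(self, temperature, correct_label):
        # Error-driven feedback: only adjust when the decision was wrong.
        if self.decide(temperature) == correct_label:
            return
        if correct_label == "hot":
            self.threshold -= self.step   # become more willing to say "hot"
        else:
            self.threshold += self.step   # become more willing to say "cold"
```

After enough corrective feedback the second system classifies a 25-degree day as "hot" even though its starting rule said otherwise; the first system can never do that without a programmer rewriting it.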
The second part of this is the materialist/etherealist view of minds. A materialist view says that the mind is just the chemical compounds, physical structure, electrical pathways etc. that inhabit the soft, squishy, pink, wrinkly blob in your skull. The etherealist view of minds is that there is some unquantifiable notion of 'mind', which philosophers like Descartes subscribed to. It is a property of the brain that cannot be found in the cerebrum by scientific means, and yet gives us our uniqueness, our humanity, our consciousness. I personally take issue with this view, but I know a number of very clever philosophy lecturers who still subscribe to it.
So, with these two distinctions made, let's return to the nature of Artificial Intelligence. I think this is poor terminology; Artificial Consciousness would perhaps be better. I asked you to keep a thought in mind, namely that giving a computer the rules for being conscious seems to be contrary to the essence of a conscious being. Well, not, I think, in light of the distinctions I have made. Allow me to digress slightly now into the domain of language—and this is where it becomes more musing and conjecture, but bear with me. There is evidence to support language being an innate property of the human mind (if anyone's interested I'll go deeper into this; vistesd, I think this might interest you). If this is the case, it seems to me that this is evidence for the brain having a materialist function on an organic substrate. No surprise there, but if the brain organically codes complex communicative processes which can be replicated in inorganic machines, why can we not create organic processes in the same way by making organic machines?
If the mind is material, we don't even have to map the totality of the mind to do this, we can take the correct substrate and give it the correct stimuli to begin building the same pathways and feedback systems that the human brain has. That brain can evolve and learn simply by existing and receiving stimuli; there's no logical rules, no inorganic hardware, just whatever material processes are wrought by genetics (the rules that genetics provide are a further level of this ever increasingly complex piece of musing so I'm not going to go into them now, or I'll never finish this). The feedback mechanism required to function as an organic material brain is consciousness and it is my contention that this arises naturally from the brain in the same way as language is innately coded.
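The picture of a blank substrate whose pathways get carved out purely by stimuli resembles what neuroscience calls Hebbian learning ("cells that fire together wire together"). The toy model below is only an illustration of that principle with made-up numbers—three abstract units and arbitrary stimulus patterns—not a model of a real brain, but it shows how structure can arise from experience alone, with no pre-programmed rules about what the connections should be.

```python
import itertools

def hebbian_update(weights, activity, rate=0.1):
    """Strengthen the connection between every pair of units
    that are active at the same time (Hebb's rule)."""
    for i, j in itertools.combinations(range(len(activity)), 2):
        weights[(i, j)] = weights.get((i, j), 0.0) + rate * activity[i] * activity[j]
    return weights

# A "blank brain": no connections at all to begin with.
weights = {}

# Repeated stimuli in which units 0 and 1 fire together,
# while unit 2 is mostly silent.
stimuli = [[1, 1, 0], [1, 1, 0], [1, 1, 1], [1, 1, 0]]
for activity in stimuli:
    weights = hebbian_update(weights, activity)

# The 0-1 pathway has now been built up by stimulation alone.
```

The point of the sketch is that the final connection strengths were nowhere in the program; they are a record of what the "brain" was exposed to.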
So finally—and reading back, there's about as much clear train of thought in this post as there are honest men in the White House—we get to some sort of conclusion(?). If we set out to make artificially conscious machines by inorganic processes, I think we will fail. If, however, we use a biological substrate with the same organic makeup as our own minds (a blank brain, if you will), I believe that the nature of consciousness is an innate feedback mechanism, built on a material mind, and that it can arise organically, given the correct physical structure. So we may yet achieve artificial consciousness in non-humans.
The more interesting thing for me is what rights these beings might have in the world. Would they be allowed to go to church? Vote? Be protected under law? etc.
Phew, all done 🙂
*This really should have been dealt with more thoroughly, but I got all caught up in one thing and couldn't see the best way to lay it out. I wanted to talk a little about Gödel's incompleteness theorem (very little - I'm not a mathematician) as put out in "Gödel, Escher, Bach" by Douglas Hofstadter, which deals with artificial intelligence in a myriad of ways, from maths to art to music, using the notion of feedback mechanisms.
Originally posted by Starrman
Very good post.
On a side note, it's interesting that some researchers, such as Antonio Damasio (whose books are fascinating, by the way), are conducting research on both the importance of emotions to our consciousness and the 'mechanical' nature of emotions.
Originally posted by Starrman
Excellent post.
To continue with Palynka's commentary on this there are some things I'd like to add.
The brain basically works through the complex interplay of different neurons interacting with each other. Neurons are of course always making and breaking connections, allowing us to learn new things and forget others. And although its units are discrete, we experience an 'analogue' impression of the world, in the same way that a digital TV can render a circle, or complex gradients of colour, from discrete pixels.
Also, of course, we exhibit "learning", unlike a digital computer. A digital computer uses a preformed logical pathway for each calculation. Our brain doesn't have to do that: one of the tricks it has evolved, to free up processing capacity and to save time (especially in difficult or dangerous situations), is to rely on past experience—something current computers don't do (although some programmers are getting better at this type of thing).
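The "rely on past experience instead of re-running the full pathway" shortcut does have a loose software analogue in memoization: compute an answer once, store it, and reuse it on the next encounter. This is only an analogy—the function below is an arbitrary stand-in for some expensive, step-by-step evaluation—but it makes the time-saving concrete.

```python
import functools

call_count = 0  # counts how many times the full evaluation actually runs

@functools.lru_cache(maxsize=None)
def slow_judgement(situation):
    """Stand-in for an expensive, step-by-step logical evaluation."""
    global call_count
    call_count += 1
    # Imagine many rule applications happening here...
    return len(situation) % 2 == 0

# First encounter: the full computation runs.
slow_judgement("snake in the grass")
# Second encounter: the cached 'past experience' is reused instantly,
# without redoing the logical pathway.
slow_judgement("snake in the grass")
```

The difference, of course, is that a cache only replays identical situations, whereas the brain generalises from merely similar ones—which is part of why the gap the poster describes is real.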
Perhaps if we had some kind of biological system, like Starrman suggests, we could have a system that works more like the human brain, able to make and break connections, and to become truly conscious.
Originally posted by Starrman
I second that. Very good post. I had similar thoughts about the matter but never attempted to relay them. 🙂
Originally posted by Palynka
Both a second to your nomination, and a note that I too find Damasio fascinating. I like the multi-level way that he treats consciousness—e.g., we are not just aware of “the movie in our heads” (his phrase, if I recall), but also aware of being aware of that movie.
As a non-dualist, I think I would have a hard time being an “etherealist,” treating consciousness as some kind of ghost in the machine. There does seem to be a difference between strict materialism and physicalism—the latter, for example, I think would accommodate Starrman’s metaphor of our being electrical currents within an electromagnetic field (if I mangled that, Starr, set me right!).
Also, I too think that language probably arises as a natural aspect of our consciousness.