Artificial Consciousness Forum

AC FORUM


This Forum is sponsored by the GreySmith Institute of Advanced Studies: Artificial Consciousness Project

By using this Forum you agree to publish your comments under the following License

CC-By-SA 3.0


FORUM

Place topics of interest near the start of this file and housekeeping items near the end, to reduce viewer fatigue.


Topics of Interest

Best Theories of Consciousness (Scientific Consensus)

There is a group of volunteers working on a survey project whose goal is to rigorously measure whatever 'scientific consensus' there might be for the best theories of consciousness. They are using the open survey system at canonizer.com, which is being developed specifically for such purposes. It is a wiki system with 'camps' and survey capabilities added. Anyone can join, help develop, and defend their favorite 'camp' as a team. Expertise is determined via a peer-ranking process in topics such as the Mind Experts topic, where peers can rank each other in a top-ten fashion. This quantitative measure of expertise is used in the "Mind Experts" canonizer algorithm, which can be selected on the sidebar to rigorously measure how much scientific consensus there is for various camps. These measures can be compared to what the general population believes (the one-person-one-vote canonizer algorithm) and to other ways of 'canonizing' things.

There is currently a topic on theories of consciousness where some tentative leading scientific-consensus 'camps' have emerged, such as the Consciousness is representational and real camp. The growing number of supporters in this camp already includes leading thinkers such as Steven Lehar, John Smythies, and others. A growing number of diverse camps are being created, but to date this particular camp continues to extend its lead, apparently showing that there might be a significant amount of expert consensus on theories of consciousness after all.

Of course, the more people who participate in the development of this survey and 'canonize' their views, the more comprehensive and useful the information becomes, so everyone is encouraged to do so. There will be huge advantages to knowing, concisely and quantitatively, what everyone thinks on such important issues as ever more demonstrable scientific achievements are accomplished in this field. Brent.Allsop 18:14, 5 July 2009 (UTC)

Thank you for this posting. Please sign it when you get the time. I would probably be in the camp that says that consciousness is representational and partly illusory.--Graeme E. Smith 17:12, 5 July 2009 (UTC)
What do you mean by "partly illusory"? Brent.Allsop 19:01, 5 July 2009 (UTC)
Well, it is simple really. If we assume that consciousness is representational, then we find all these areas where the representation breaks down: things like the blind spot where the optic nerve leaves the retina, the way time seems to be flexible depending on what we are doing, and other subjective impressions of the world that do not jibe with our experimental findings about the world. I think that the philosophers who are pushing Phenomenalism are, for instance, being fooled by the tendency of the brain to bind elements into thinking that those elements are unitary. Things like that. My belief is that these illusions, which take away from the strict relationship between reality and representation, are necessary simplifications, many of which are detectable only after conscious description and detection of errors. Therefore consciousness itself might be thought of as being partly representational and partly made up of these illusions. Does that help?--Graeme E. Smith 19:21, 5 July 2009 (UTC)
Yes, that helps immensely. Things like the blind spot, how time 'seems' to be flexible, and other 'impressions' whether mistaken or not, are proof that our knowledge is representational, are they not? I completely agree with you about the idea of representations being 'unitary' being a mistaken idea. It sounds like we agree on much, but disagree on the proper conclusions to draw from such? --Brent.Allsop 02:59, 20 July 2009 (UTC)
Perhaps, or perhaps I am just a fresh look at an old problem. The fact is that these boundary problems define the limits of the representation. The brain glosses over them, giving us the impression, for instance, that there is no blind spot in the visual field. This gives us the impression, among other things, of the unitary nature of the representation. I stress that illusion because it makes the Phenomenalists more understandable, and also because it tells us something about the nature of the representation: the fact that it is not necessarily an accurate representation. Dr. Aleksander's work with ICONIC memory, for instance, suggests that the neural networks should represent the contents of vision fairly exactly. Yet here we see something that indicates the ICONIC model might be an oversimplification.--Graeme E. Smith 15:05, 20 July 2009 (UTC)
What would you say phenomenal red is? Is it a property of something we are looking at? Or is it a property of our knowledge of such? Or something else entirely? --Brent.Allsop 02:34, 24 July 2009 (UTC)
I guess part of the problem with answering that question is the problem of the definition of 'phenomenal'. Some people use it to mean experiential, some people use it to mean indivisible, and some people have even more arcane definitions for it. I have tried to work within the definitions that I was given, and have been soundly rebuffed by others who use a different definition. However, let us look at the "Experience of Red", if you want to take phenomenal in that way. There is a part of the brain that is sensitized to color, and there is a sensitivity in the retina that links a certain range of colors to a certain type of neuron. This means that red-colored light always enters the brain of the individual at the same neural location, whereupon it immediately trips a form of content-addressable implicit memory that is triggered when shades of red are part of the optical stimulation.
This means that perception is not necessarily about the nature of the light; it is also about the previous experiences of light and how they are interpreted. Now we come to the conversion between implicit memory and explicit memory. Because of the nature of implicit memory it can't be retrieved, and so can't be addressed except in the reactive phase, and then only as a large field of data presented in parallel, which I call the Data Cloud because of its nebulous organization yet extreme density of content. In the data cloud it is impossible to winkle the red out, because there is no address for it.
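The "data cloud" idea above, implicit memory that cannot be addressed, only triggered reactively, could be sketched roughly as follows. This is a toy illustration of the concept, not an implementation of any real system; all names (`ImplicitMemory`, `react`, the feature strings) are made up for the example.

```python
class ImplicitMemory:
    """Toy implicit memory: traces can be learned but never addressed."""

    def __init__(self):
        self.traces = []  # stored stimulus patterns (sets of active features)

    def learn(self, pattern):
        self.traces.append(set(pattern))

    def react(self, stimulus):
        """Reactive recall: every trace overlapping the stimulus fires at
        once, returning one unaddressed parallel field (the 'data cloud')."""
        stimulus = set(stimulus)
        cloud = set()
        for trace in self.traces:
            if trace & stimulus:   # any shared feature triggers the trace
                cloud |= trace     # its whole content joins the cloud
        return cloud               # no per-trace addresses survive

mem = ImplicitMemory()
mem.learn(["red", "apple", "round"])
mem.learn(["red", "fire-engine", "loud"])
mem.learn(["green", "leaf"])

cloud = mem.react(["red"])  # everything red-related fires in parallel
```

Note that the result is a single undifferentiated set: the red from the apple and the red from the fire engine cannot be told apart, which is exactly the "no address for it" problem.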
We can, however, filter the data cloud according to environmental location, even though the data is expressed in stove-pipe sensory modalities, by linking different sensory modalities to each other when they represent the same environmental zone. I call this the creation of salience zones, because the zones that are picked for isolation, by filtering all the others out, are the zones that have the highest instinctive pull. The data-cloud components that remain are called functional clusters, and are probably tagged with a synchronous oscillation in the gamma range, called in the industry a GSO, or Gamma Synchronous Oscillation.
The GSO is then used to isolate the functional cluster by suppression, at the ACC, of every frequency but the selected GSO. This works by actually suppressing the cerebral cortex in the other zones, leaving the selected salience zone intact. Obviously red doesn't mean anything at this point, although the information is carried in the data cloud.
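The GSO-tagging-and-suppression scheme described above can be sketched in a few lines. The frequencies and cluster contents here are illustrative assumptions (real gamma oscillations are roughly 30-100 Hz, but nothing below is a measured value), and `acc_select` is a hypothetical name for the suppression step.

```python
import itertools

GAMMA_BAND = itertools.count(start=40)   # Hz; illustrative gamma-range tags

def tag_clusters(clusters):
    """Assign each functional cluster a unique gamma-synchronous tag (GSO)."""
    return {next(GAMMA_BAND): cluster for cluster in clusters}

def acc_select(tagged, chosen_gso):
    """Sketch of the ACC step: suppress every cluster whose tag differs
    from the selected GSO, leaving only one salience zone intact."""
    return {gso: cluster for gso, cluster in tagged.items()
            if gso == chosen_gso}

clusters = [{"red", "round"}, {"loud", "siren"}]
tagged = tag_clusters(clusters)      # each cluster gets its own frequency
selected = acc_select(tagged, 40)    # everything but the 40 Hz cluster is gone
```

The design point is that selection is done by frequency tag alone: nothing here inspects the cluster's content, which matches the claim that red carries no meaning yet at this stage.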
The isolated salience zone is then converted to an addressable form by isolating each individual neural group that is part of the data cloud. What we get is a map of the neural-group locations that are active in the functional cluster. I call this list of neural groups a CHUNK, in historical reference to Miller's chunking, because the effect is very similar. The primary difference is that the CHUNK by itself has no meaning; the CONTENT it refers to is still stored in implicit memory.
I won't go into the micro-anatomy of the brain at this point, but suffice it to say that there are tissues called Allocortical Tissues with three layers that I suggest are pure implicit memory and tissues called Isocortical Tissues with usually around 6 layers that I say are implicit memory with explicit addressing. The top three layers are very similar in both types of tissue, which leads me to believe that explicit addressing is just a special mode of implicit memory.
Thus when we rehearse the CHUNK for a specific functional cluster, we get almost the same data cloud that was originally found when we converted the functional cluster in the first place. Any differences stem from the fact that the implicit memory is constantly learning as new stimuli are entered, and so the contents are not stable.
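The CHUNK-and-rehearsal idea above amounts to a small data-structure distinction: the CHUNK holds only addresses, while the content stays in implicit memory and may drift. A minimal sketch, with made-up addresses and contents:

```python
# Hypothetical store: neural-group address -> currently learned content.
# Because implicit memory keeps learning, these values can change over time.
implicit_store = {
    7:  {"red"},
    12: {"round", "shiny"},
    31: {"sweet"},
}

def make_chunk(active_addresses):
    """A CHUNK carries only neural-group addresses, no meaning of its own."""
    return tuple(sorted(active_addresses))

def rehearse(chunk):
    """Replaying the chunk re-reads implicit memory, regenerating
    (approximately) the data cloud the chunk was formed from."""
    cloud = set()
    for address in chunk:
        cloud |= implicit_store.get(address, set())
    return cloud

chunk = make_chunk([12, 7])   # addresses active in one functional cluster
cloud = rehearse(chunk)       # contents come back from implicit memory
```

If `implicit_store` were updated between `make_chunk` and `rehearse`, the regenerated cloud would differ from the original, which is the instability the paragraph describes.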
Being a data cloud again does nothing for interpretation, so red still does not mean anything. But because the CHUNK contains a list of neural-group addresses, if it can be selectively rehearsed, we can isolate specific combinations of neural-group addresses. Feedback between the selective element of the attention system and the addressing element goes through a selective inverting filter, which allows selection of sub-chunk zones of salience. These sub-chunk zones are again tagged with a unique GSO, so that the ACC can suppress everything else, allowing the data cloud to be restricted. At this point it is quite reasonable to assume that red-colored elements could be segregated out, and since red has a greater salience than other colors, the effect would be a tendency to draw the attention to red elements. But despite the fact that we can detect the zone that has the higher salience associated with red, there still is no meaning to red.
Now we map the salience zones together and form clusters of similar content, by comparing a signal against a previous signal, and isolate the zones that are actually red from the zones that merely have as high a salience as red. Red still doesn't have a meaning, but it has a location. The next step is to associate the location with the color. It is only at this point that we can isolate red elements and determine that they have a similarity to other red elements. But we still can't say that we have captured the meaning of red. Nor can we say that we have "experienced" red yet, because all this has been done automatically as part of the perceptual process. Part of the problem with dealing with the phenomenal nature of red is that we gloss over all this detail, which happens before we can even select an item that is red to experience.
I am going to stop there and let you relieve the pressure of disbelief that this approach to memory usually causes, before I go on to explain how red gains significance beyond its salience value.--Graeme E. Smith 14:13, 24 July 2009 (UTC)
As is usual, we're struggling with definitions and understanding what each of us means by various terms.
I think I can map my understanding of a bunch of these concepts into your terminology, but I may be incorrect in some instances.
From the perspective of someone in the Nature has ineffable phenomenal Properties camp, everything you are talking about so far is about abstractable cause-and-effect behavioral properties and systems. Even your talk of the 'salience' of red seems to have to do only with behavior, rather than with what red is really 'like'.
This all seems consistent with your saying that something more is required before you go on to talk about how red 'gains significance beyond its salience value'?
From my perspective, you still need to talk about what red is 'like'. I'm hoping that is yet to come?
So far, it looks like we are in agreement, that whatever it is that is 'like' red, is definitely not a property of the strawberry we are looking at - but it is a property of our knowledge, or the final result of this perception process? -- Brent.Allsop 21:13, 26 July 2009 (UTC)
Red isn't "like" anything at all at this stage, if only because we haven't fully perceived it. That is part of the problem I have with the Ineffable Phenomena crowd: they try to crowd "likeness" in before the reactive process is completed. Part of the problem lies in the fact that we can pick red out and address it, but we still don't really know what it means, that it IS red that we are picking out, not just a salience zone that is more interesting. Now this is where I get into the Naive Machine doctrine. A naive machine that has no map between its input and the demand memory formed from that input has no choice but to build a map the hard way, and the simplest system that can do that is one with random choice, in this case balanced against salience zoning, which allows it to revisit areas of higher salience more often in its random walk among the sub-elements.
This means we have two uniquely different memories that contain the same contents, a memory that is implicit, and a memory that we can demand but which we have to build a map of in order to make use of it. It also means that our most basic motivational drive is one based on random impulse.
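The naive machine's map-building strategy, random exploration biased so higher-salience zones are revisited more often, is easy to sketch. The zone names and salience values below are invented for illustration:

```python
import random

def salience_walk(zones, steps, seed=0):
    """Random walk over zones, weighted by salience; returns visit counts.
    Higher-salience zones get proportionally more visits over time."""
    rng = random.Random(seed)          # seeded for a reproducible sketch
    names = list(zones)
    weights = [zones[name] for name in names]
    counts = {name: 0 for name in names}
    for _ in range(steps):
        visited = rng.choices(names, weights=weights)[0]
        counts[visited] += 1
    return counts

# Made-up salience values: red pulls harder than brown, per the discussion.
zones = {"red_patch": 5.0, "brown_patch": 1.0, "edge": 2.0}
visits = salience_walk(zones, steps=800)
```

With weights 5:1:2, the walk still reaches every zone eventually (so a complete map can be built), but it dwells on the red patch, matching the claim that the eyes track the red ball more often than the brown ones.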
At the cortex level, we can see this as a primary sensory zone, called a Core zone, in each sensory processing center, and immediately associated with that, a Belt zone that stores the isolatable elements from the salience zone that have caught our attention. Now we know that salience directs our attention to some extent, and that red is more salient than brown, so we are not surprised to find that the eyes track the red ball more often than the brown ones.
Have you ever heard of Hofstadter's SLIPNET experiment? Essentially he tried an experiment with a workspace and an associative network that was "slippery", where associations were tentative and flexible, and used it to build anagrams by learning rules of association. Anyway, so-called associative areas are associated with each belt, as the belts are associated with the cores. I haven't worked out the exact architecture of the transfer between these areas, if only because the book I had on the architectonics of the human telencephalic cortex was due back in two weeks, so I didn't have time to study it as deeply as I would have liked. Worse, it was rather uninformative about the detail of the structure of the associative areas, except to note a drop in myelin density in lamina I.
Anyway, it is probably in these association areas that a slipnet-like alignment of similar elements along some associational vector is created to form a rule of association. In other words, it is here that we first associate what red is like with the storage elements that have been grouped together in the Belt areas. Candy-apple red is like fire-engine red because there is a slippery rule that says they are somehow associated by having a relationship with the color receptors in the eyes that favor the red receptors, if they exist. This still doesn't deal with what red is like, but it does tell us how we know that one red is like other reds. I am still a little ambivalent about where in the attention system the association areas fit. Sometimes I think I am missing an attention phase, and sometimes I think I am right to associate them with my complicit attention phase. I won't really know until I get a lot closer to the architectonics and connectomics of the belt-associative connection. Of interest is the myelin connection between the third lamina of the Belt area and lamina I of the association areas. Is there a direct connection, in which case we might need another attention phase, or is there an indirect connection, i.e. via the corpus callosum? The reduction in myelination of lamina I is significant because it reduces the number of brain areas that feed into the association areas, allowing them to concentrate on the data flowing in from the belt. The more I think of it, the more likely the architecture is to support a local connection rather than a remote connection via the callosum.
This means I have a fourth phase of explicit Attention to account for... Better stop there and let the idea percolate a bit.--Graeme E. Smith 01:42, 27 July 2009 (UTC)
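The "slippery" rule of association invoked in the exchange above, tentative links that strengthen with co-activation and decay otherwise, in the spirit of Hofstadter's Slipnet, could be sketched like this. The class name, decay and boost constants, and pair labels are all illustrative assumptions, not anything from Hofstadter's actual implementation:

```python
class SlipLink:
    """Toy 'slippery' associative links: strengths rise with co-activation
    and decay otherwise, so rules of association can form and dissolve."""

    def __init__(self, decay=0.9, boost=0.3):
        self.strength = {}                    # frozenset pair -> strength
        self.decay, self.boost = decay, boost

    def observe(self, active_pairs):
        """One cycle: decay every link, then strengthen the co-active ones."""
        for pair in self.strength:
            self.strength[pair] *= self.decay
        for a, b in active_pairs:
            key = frozenset((a, b))
            self.strength[key] = self.strength.get(key, 0.0) + self.boost

    def associated(self, a, b, threshold=0.5):
        """A rule of association 'exists' once the link is strong enough."""
        return self.strength.get(frozenset((a, b)), 0.0) >= threshold

net = SlipLink()
for _ in range(3):  # candy-apple red and fire-engine red keep co-firing with red
    net.observe([("candy_apple", "red"), ("fire_engine", "red")])
```

After three co-activations each link's strength is about 0.81, above the threshold, so both reds are now associated with red, while candy-apple and fire-engine, never directly co-active, remain unlinked: likeness arrives via the shared receptor link, as the paragraph argues.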
I completely agree with you that "Red isn't like anything at all at this stage." But everything you are talking about here is only abstractable behavioral stuff, and none of it is phenomenally like anything. So even though all this stuff is surely very interesting, it has nothing to do with phenomenal red and what it is like, it seems to me. Brent.Allsop 22:09, 3 August 2009 (UTC)

Well, yes and no. Gradually we are building up relationships between red things, relationships that we can then use in judgements later. So while there is still no experiential basis for red at this point, we have the primitives that we can use later to make experiential judgements. Does that make sense? We need to have a link between candy and apple, between apple and red, and between the candy-apple color on a car and red, in order to make the link between the candy on the apple and the candy-apple color. The candy-apple name exists because someone made those links and named the color after the sickeningly sweet candy coating over an apple at a fairground. In other words, these primitives are what is needed to make the statement "Candy-apple paint is like the candy on a candy apple at the fairground." However, to make the full linkage we need a sequence of retrievals of these smaller relationships, or some way to analyze them that brings out the relationships when they all fire in parallel.

So here we have the binding problem: how does the candy-apple color on a car get linked to the candy color on a fairground candied apple? The phenomenalists will tell you that this is a big problem. But I tend to disagree; in my model the linkages are already bound by the time they are linked. In other words, the concept of a quale is thought to be a show-stopper for physicalism, yet in my model qualar output is the result of parallelism, pure and simple. Because of parallelism, the link between the word candy and the red confection coating the apple is firing in parallel with the link between the red color of the confection and the basic linkage for red, which is firing in parallel with the color of the car, which is firing in parallel with the linkage between the color of the candy and the color of the car. In other words, there are a lot of parallel outputs available, and the problem is not how to bind them together, since they are bound by simultaneity; the problem is how to sort out which ones are most important.
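The "bound by simultaneity" claim above can be made concrete in a few lines: anything firing in the same cycle falls out as one bound set for free, and the remaining work is ranking, not gluing. The link tuples, cycle numbers, and salience values below are all invented for the sketch:

```python
def fire_parallel(links, cycle):
    """Everything active in the same cycle comes out as one bound set:
    binding costs nothing, it is just simultaneity."""
    return {link for link, fired_at in links.items() if fired_at == cycle}

def prioritise(bound, salience):
    """The actual problem, per the model: rank the already-bound links."""
    return sorted(bound, key=lambda link: salience.get(link, 0.0), reverse=True)

links = {                              # link -> cycle on which it fired
    ("candy", "red-coating"): 3,
    ("red-coating", "red"): 3,
    ("car-colour", "red"): 3,
    ("green", "leaf"): 1,              # fired on an earlier cycle: not bound
}
salience = {("car-colour", "red"): 0.9,
            ("candy", "red-coating"): 0.7,
            ("red-coating", "red"): 0.4}

bound = fire_parallel(links, cycle=3)  # the three red links bind for free
ranked = prioritise(bound, salience)   # importance ordering is the real work
```
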

Here we get into the idea of meaning. Meaning is a judgement about the significance of the data we have generated about the perception. Essentially, we prioritize the parallel information so that we know which information is most important. To get to meaning we have to present the parallel data to specialized processing modules that determine, for instance, whether fire-engine red is prettier than candy-apple red. Usually we agree that it isn't; there is something about the luster of candy-apple red that makes it more attractive. Of course, age them for a while and they both lose their luster, but that is another question altogether.

But even at this point, when we are making the decisions about what is important in the parallel perceptions of candy-apple red, we are not "experiencing" it. This is all still reactive processing. To experience something and know that we have experienced it, we need to tell ourselves about these important bits of information. We need a reflection of the choices back into the brain, where they can be analyzed. It is only at the time of reflection that we know that candy-apple red is like the red of a candied apple, and is more beautiful than fire-engine red.

Let me leave those thoughts with you and see how you feel about them.--Graeme E. Smith 02:03, 4 August 2009 (UTC)


What do you mean when you say "the phenomenalists will tell you..."? Are you implying that you are not a "phenomenalist"? Are you saying you do not believe there is such a thing as qualia? I would like to know, concisely, what you believe on this issue.
The theory described in the Consciousness is Representational and Real camp has, by far, more scientific consensus than any other theory of consciousness so far. Would you consider the experts on theories of mind in this camp to be "phenomenalists"?
The people in that camp would argue that you are confusing abstractable behavioral representations with what qualia are like. The proof is in the way you say things like "we are building up relationships between red things" and so on. What you say is obvious, but has nothing to do with what red is like. And red could be like red even with absolutely no relationships of this kind. It is also a mistaken idea to think that there are any 'primitives' to red. Red is simply our knowledge of what we are looking at. It is the final result of all the intermediate or 'primitive' representations. There is nothing that is 'primitive' to red. The same goes for 'meaning': there could be a phenomenal machine that experiences red with no meaning associated with it of any kind. All this reveals your confusion, and it is distracting you from what is important about the quality of red. -- Brent.Allsop 22:42, 10 August 2009 (UTC)

Ah... then you are one who would at least argue the same line for interest's sake. OK, for the purposes of this discussion let us say that there is the Experience of Red, and then there is the Representation of a specific red. I do not care about the experience of red all that much, because it is a phenomenon of the illusions related to perception. What I am interested in is what is representational. Those who you say are of the 'representation is real' school don't make the same distinction, and as a result get all tied up in the "Experience". Phenomenalism, the stressing of the experience of red over the representation, is an error as far as I am concerned, and because along with that basic error comes the error of qualia that are not representational, you get the misunderstanding that red doesn't have primitives. Red itself may not, but any specific red color does, in that it is not made up solely of red. The representation of red has to combine the outputs from multiple sensory clusters associated with different wavelengths of light to assemble red. How can it do so if we negate the contributions of specific frequencies from the representation? As far as I am concerned, saying that there are no subcomponents, no primitives, to red is flying in the face of accepted wisdom as to the nature of light. Remember, the sensory cluster that detects green gives an output even if no green is detected, so there is no such thing as a red representation that does not also contain a value for green, even if that value is 0.

It is an illusion that red is itself a primitive and thus qualar. If you don't realize that the experience of being human is an illusion, then you get all tied up in things like the poor guy on another board who thinks that the SELF is a whole-body thing and that it wouldn't exist without dark matter. It feels to him as if his self is his whole body, so that must be true, even though the brain is the focal point of the nervous system that presents the image of self to consciousness.

Difficulty of Making a Demand Memory out of a Neural Network

You might be surprised just how difficult it is to implement a demand memory system on a neural network.--Graeme E. Smith 16:44, 2 February 2009 (UTC)


Cognitive Architecture

You might want to take a look at the GreySmith Virtual Architecture and compare it to the CLARION Architecture and the Franklin IDA Architecture, which are two of the most successful cognitive architectures yet produced.--Graeme E. Smith 21:09, 18 March 2009 (UTC)

Housekeeping

GreySmith Institute General Interest

Color Scheme

Who was the idiot that thought up this ugly color scheme!

Actually, the color scheme has changed very little from the Tabbed Portal template I used to make the portal. It had ugly grey tabs for the second and third pages; I changed the second tab to yellow, and my eyes went buggy when I wrote the second page, so I toned it down to gold, and I lightened the ugly grey to silver, but that is all I changed.--Graeme E. Smith 16:58, 2 February 2009 (UTC)

General Layout

You used the "Tabbed portal" template, but where are all the fancy boxes?

Yeah, I did, and I had to hack it to get it to go to multiple levels. But frankly, I think all those pretty boxes take away from the sense of control I get from simple links. If you think it would look better with pretty boxen, then let's discuss it and I will put them back (but not today!)

--Graeme E. Smith 16:29, 2 February 2009 (UTC)

What's with the organization into schools, divisions, subdivisions, and departments?

What it is, is a suggested framework for thinking about Artificial Consciousness's base science. This is, after all, on a computer and on Wikiversity, so it can be changed in a heartbeat. But don't demand changes now; let's discuss it first, so we know what they should be. I am dreading going back and changing all the files, and if I had to do it every week I would die of boredom. Later, once we know what we are doing, some of you can help me, especially at the department and project levels.--Graeme E. Smith 16:29, 2 February 2009 (UTC)