What could be wrong with a strictly information processing view of consciousness?

 
d0rkyd00d
Total Posts:  354
Joined  15-03-2016
17 April 2017 17:25
 

I’m being an irresponsible forum user, as I have not yet read all of the replies before pitching in here. 

Conscious awareness is separate from the thoughts running through your head, as I believe Harris argues in “Waking Up,” and this can be demonstrated through a certain kind of meditation.  I’m not an expert on meditation, so feel free to correct me, but I know one form of meditation involves simply observing the thoughts that go through your head.  In order to do this, there has to be an observer that is separate from the thought itself.  So the brain is very proficient at generating thoughts, but there is a higher perch from which to observe these thoughts as they are generated.  I think THAT is the conscious self. 

I have not read refutations to the Chinese room problem in a long time, but of what I do remember, I was not convinced by their arguments.  I still think the thought experiment poses a hurdle that seems difficult to clear.  If you aren’t familiar with it, here’s a link:

http://www.iep.utm.edu/chineser/#H1

Also, my intuition (and perhaps some (mis?)information) tells me that a large aspect of our consciousness grew out of our emotions, and I assume our emotions possibly precipitated from our nervous system.  I also would think that we would not want the AI we create to feel things like pain.  So if pain is a necessary component of emotion, and emotion of the experience of consciousness, is it then still possible?

Sorry for the ride here people, I live in Colorado and the marijuana here is plentiful and good.  I’ll check back later to see if any of this is coherent.

 
diding
Total Posts:  241
Joined  07-01-2016
18 April 2017 06:27
 

I’m guessing that all of those weird things that our mind does are somehow related to the type of material our brain is made of.  It affects processing speed and storage capacity, and I think most of it evolved for staying alive.  An AI with greater capability for speed and storage might never have thoughts that just seem to appear out of nowhere.  Its mind might never wander.  It might not need to dream.  It probably won’t get emotionally attached to its “arm” or any other piece of its “anatomy”, physical or data.  I’m guessing it won’t even get attached to the idea of its own existence.  I don’t know why it would.  Does anyone have any explanation why an AI would care if it were or weren’t?

[ Edited: 18 April 2017 06:31 by diding]
 
Antisocialdarwinist
Total Posts:  5765
Joined  08-12-2006
18 April 2017 17:47
 
Giulio - 16 April 2017 10:52 PM

You mention that the act of recall “involves a series of reconstructive steps…” If I understand you correctly, then I think these reconstructive steps are what I’m calling consciousness: the process of constructing the model, or reconstructing reality in our imagination.

My point here is the following. Look first at what you called recognition. Let’s picture this as:
              UnconsciousActivity (UA) -> Experience of the OutputOfUnconsciousActivity (EOUA)

The conscious experience, the EOUA, doesn’t seem to add to the process, it is just an experience at the end. The work was done unconsciously.

Yes, I agree the work was done subconsciously, or unconsciously if you prefer. The EOUA is the end; the UA the means. The experience of recognition is a feeling, often accompanied by other feelings like fear or pleasure. From the standpoint of utility, recognition allows us to react appropriately to different stimuli without the need for conscious activity. You recognize the sound of a rattlesnake and you’re instantly on high alert, even before you’re aware that you’re hearing a rattlesnake. You fail to recognize a person in your house and you’re instantly wary, before you’re even aware that you can’t recall who this person is. You recognize that the sights are aligned on the target and instantly pull the trigger, before you’re even aware of the sight alignment or of pulling the trigger—and if you don’t realize this is how you’re supposed to shoot, you’ll think you got lucky when you accidentally pulled the trigger.

Giulio - 16 April 2017 10:52 PM

Now to recall. Is it just a composite process that is a sequence of steps of the above type:
    (UA -> EOUA) -> (UA -> EOUA) -> ... -> (UA -> EOUA)

For example, recalling what’s on your desk might correspond to:

    (UA -> “my laptop”) -> (UA -> “and my laptop has a mouse”) -> ... -> (UA -> “and I picked up that book on Shackleton, what was it called?”) -> (UA -> “Endurance”)

It may feel like the conscious activities in this composite process (ie all the EOUA’s) are actually actively driving the whole process, doing the hard work, but are they? Or is consciousness just in the passenger seat, noticing the experiences along the way, with all the hard work still being done unconsciously?

You’re missing an important first step here. Before the first UA is initiated, you must deliberately intend to recall all the things on your desk. (Recognition, on the other hand, happens automatically, without intention.) It’s this intent that initiates the process of constructing a model of your desk in your imagination—i.e., the process I’m calling consciousness. I guess I’m using the word, “model” somewhat loosely. It doesn’t have to be an image of your desk. It might be a kind of narrative, like you’re implying: “I recall that my laptop has a mouse; therefore, there must be a mouse on my desk. And I recall leaving the book about Shackleton on my desk; therefore, the book about Shackleton must also be on my desk.”
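This intent-then-reconstruction sequence can be sketched as a toy program. Everything below is invented purely for illustration; the lambdas are stand-ins for unconscious retrieval steps we obviously cannot actually implement:

```python
# Toy sketch of recall as: intent -> (UA -> EOUA) -> (UA -> EOUA) -> ...
# Each "unconscious step" below is an invented stand-in for brain activity;
# only its output (the EOUA) reaches the list of experiences.

def recall_desk(intent, unconscious_steps):
    """Run a chain of unconscious steps, collecting what surfaces consciously."""
    experiences = []   # the EOUAs: what is actually experienced
    cue = intent       # the deliberate intent initiates the process
    for step in unconscious_steps:
        cue = step(cue)            # UA: hidden work, not experienced directly
        experiences.append(cue)    # EOUA: only the output surfaces
    return experiences

# Invented stand-ins for unconscious retrieval steps.
steps = [
    lambda cue: "my laptop",
    lambda cue: "my laptop has a mouse",
    lambda cue: "that book on Shackleton... 'Endurance'",
]

print(recall_desk("recall what's on my desk", steps))
```

The point of the sketch is only the shape of the process: the deliberate intent seeds the chain, the work happens inside the steps, and only each step’s output surfaces as experience.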

 
 
Giulio
Total Posts:  158
Joined  26-10-2016
18 April 2017 18:43
 
Antisocialdarwinist - 18 April 2017 05:47 PM
[...]

You’re missing an important first step here. Before the first UA is initiated, you must deliberately intend to recall all the things on your desk. (Recognition, on the other hand, happens automatically, without intention.) It’s this intent that initiates the process of constructing a model of your desk in your imagination—i.e., the process I’m calling consciousness. I guess I’m using the word, “model” somewhat loosely. It doesn’t have to be an image of your desk. It might be a kind of narrative, like you’re implying: “I recall that my laptop has a mouse; therefore, there must be a mouse on my desk. And I recall leaving the book about Shackleton on my desk; therefore, the book about Shackleton must also be on my desk.”

[UA] -> “Let me recall what’s on my desk”

You don’t think this happens?

 

 
Antisocialdarwinist
Total Posts:  5765
Joined  08-12-2006
18 April 2017 20:11
 
Giulio - 18 April 2017 06:43 PM

[UA] -> “Let me recall what’s on my desk”

You don’t think this happens?

If it did, it would seem as though the desire to recall what’s on your desk came out of the blue, like an epiphany. Is that what happened? No, your desire to recall what’s on your desk (if in fact you experienced it) came from my suggestion to try to recall everything on your desk. I cannot recall a time when my desire to recall something came out of the blue like an epiphany.

So, no, I don’t think it happens—at least not to me. Does it happen to you?

 
 
Giulio
Total Posts:  158
Joined  26-10-2016
19 April 2017 06:14
 
Antisocialdarwinist - 18 April 2017 08:11 PM
Giulio - 18 April 2017 06:43 PM

[UA] -> “Let me recall what’s on my desk”

You don’t think this happens?

If it did, it would seem as though the desire to recall what’s on your desk came out of the blue, like an epiphany. Is that what happened? No, your desire to recall what’s on your desk (if in fact you experienced it) came from my suggestion to try to recall everything on your desk. I cannot recall a time when my desire to recall something came out of the blue like an epiphany.

So, no, I don’t think it happens—at least not to me. Does it happen to you?

Certainly we may feel the desire (in a nonverbal way) to do or think something before we do/think it. But that felt desire I suspect is a manifestation of UA. And of course, perhaps more relevant to your comments, we construct narratives to make sense of what has happened (including our thoughts), and these narratives are themselves felt experiences.

Eg I read your suggestion to recall the items on my desk -> UA -> I felt the desire to try recalling -> UA -> I verbalised this almost immediately afterwards…. Later, perhaps very shortly afterwards, I recall this episode as if I had consciously directed the decision to recall my desk in response to your suggestion…

It’s interesting you have brought up recall and memory.

I have for a while believed (influenced I suppose largely by stuff I have read, but also from a little introspection) that conscious experiences (nonverbal felt experiences, sensations and emotions, as well as semantic narratives and thoughts) are like the ripples or waves on water that are caused by under water currents (these being the unconscious activities). But I have asked myself, if this is the case, what on earth could be the evolutionary purpose of conscious experience if it is just a manifestation of something else?

I wonder now if conscious experience in the moment itself has no evolutionary purpose, but evolved as part of the process of forming and then recalling long term memories - specifically of information relevant to the things our bodies and mind can do in the context of the environment we live in.

Maybe animals developed the ability to experience smell only so they could recall it (that was evolution’s way of doing it); or rather the experience of smelling something is actually part of an animal’s memory formation process of that smell - that’s essentially what it developed for; not because having the experience itself had any value.

Similarly, maybe the experience of having a thought or constructing a narrative in our minds is there to play a part in the memory formation of that thought process (which is going on unconsciously, like the currents under the water). The thought process could occur just unconsciously, but the experience of it is just how we form the memory of it. It’s not as if we have to have conscious experiences of thinking to necessarily think; though conscious thinking is presumably of a specially constrained form (constraints, if flexible and modifiable, being a potentially powerful characteristic eg as in language). We must have many other unconscious mental processes that we can’t form memories of.

These are just ramblings at this point, and I am not sure I believe all of it, and I have no idea if there is any evidence of this. [I don’t think disorders where people can have conscious experiences but cannot form long term memories refutes this though.]

 

 
Antisocialdarwinist
Total Posts:  5765
Joined  08-12-2006
21 April 2017 09:09
 
Giulio - 19 April 2017 06:14 AM

Certainly we may feel the desire (in a nonverbal way) to do or think something before we do/think it. But that felt desire I suspect is a manifestation of UA. And of course, perhaps more relevant to your comments, we construct narratives to make sense of what has happened (including our thoughts), and these narratives are themselves felt experiences.

Eg I read your suggestion to recall the items on my desk -> UA -> I felt the desire to try recalling -> UA -> I verbalised this almost immediately afterwards…. Later, perhaps very shortly afterwards, I recall this episode as if I had consciously directed the decision to recall my desk in response to your suggestion…

Be that as it may, the point is that the act of recalling the items on your desk requires the process of constructing a “model” of your desk in your imagination, which is A) not required for the act of recognizing any given item on your desk; and B) the process I’m calling consciousness. You may not be aware of everything your brain does to construct the model, but you are—you must be—aware of the model.

Here’s an alternative to recalling all the items on your desk. Suppose you spent an hour or so memorizing a list of everything on your desk by name: “light, printer, telephone…” When prompted by the stimulus, “Name all the items on your list,” you’d reply by spitting out the list, sort of like singing a song. You could do that by rote, without any awareness of your desk or the items on it—i.e., without constructing the model of your desk in your imagination.

It’s like the difference between memorizing a list of directions (“left on Main Street, right on Sunset Boulevard, third house on the left”) and holding an image of a map of the neighborhood in your imagination.
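The rote-list versus imagined-model contrast can be made concrete with a toy sketch (data structures invented for illustration): a memorized list is simply replayed, while a model has to be traversed to reconstruct the same answer:

```python
from collections import deque

# Rote recall: a flat, memorized list is replayed as-is; no desk model needed.
memorized = ["light", "printer", "telephone"]

def recite(memorized_list):
    return list(memorized_list)

# Model-based recall: stored relationships ("X is on the desk", "X has Y")
# must be traversed to reconstruct the answer.
model = {
    "desk": ["laptop", "book on Shackleton"],
    "laptop": ["mouse"],
}

def reconstruct(model, place):
    """Walk the relational model to enumerate everything on `place`."""
    found = []
    queue = deque(model.get(place, []))
    while queue:
        item = queue.popleft()
        found.append(item)
        queue.extend(model.get(item, []))  # e.g. the laptop implies its mouse
    return found

print(recite(memorized))           # replayed by rote
print(reconstruct(model, "desk"))  # reconstructed from relationships
```

In the second case the mouse is never stored as “an item on the desk” at all; it only comes out because the model relates it to the laptop, which is roughly the distinction being drawn.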

Giulio - 19 April 2017 06:14 AM

It’s interesting you have brought up recall and memory.

I have for a while believed (influenced I suppose largely by stuff I have read, but also from a little introspection) that conscious experiences (nonverbal felt experiences, sensations and emotions, as well as semantic narratives and thoughts) are like the ripples or waves on water that are caused by under water currents (these being the unconscious activities). But I have asked myself, if this is the case, what on earth could be the evolutionary purpose of conscious experience if it is just a manifestation of something else?

I wonder now if conscious experience in the moment itself has no evolutionary purpose, but evolved as part of the process of forming and then recalling long term memories - specifically of information relevant to the things our bodies and mind can do in the context of the environment we live in.

Maybe animals developed the ability to experience smell only so they could recall it (that was evolution’s way of doing it); or rather the experience of smelling something is actually part of an animal’s memory formation process of that smell - that’s essentially what it developed for; not because having the experience itself had any value.

Similarly, maybe the experience of having a thought or constructing a narrative in our minds is there to play a part in the memory formation of that thought process (which is going on unconsciously, like the currents under the water). The thought process could occur just unconsciously, but the experience of it is just how we form the memory of it. It’s not as if we have to have conscious experiences of thinking to necessarily think; though conscious thinking is presumably of a specially constrained form (constraints, if flexible and modifiable, being a potentially powerful characteristic eg as in language). We must have many other unconscious mental processes that we can’t form memories of.

These are just ramblings at this point, and I am not sure I believe all of it, and I have no idea if there is any evidence of this. [I don’t think disorders where people can have conscious experiences but cannot form long term memories refutes this though.]

Again, I think you may be attaching a slightly different meaning to the word, “consciousness” than I am. Consciousness is the process of constructing the model. Yes, that involves unconscious activity—we’re not aware of what our brain is doing when it accesses our stored memories and makes a model out of them. But in order to list the items on our desk, we have to be aware of the model. Either that or first memorize the items and then recite them by rote.

That said, I think consciousness is probably overrated, at least for the average person. Most of what we do every day does not involve consciousness. I agree that thinking doesn’t have to be a conscious activity. Epiphanies—the end result of unconscious thinking—are evidence of that.

 
 
Giulio
Total Posts:  158
Joined  26-10-2016
22 April 2017 19:25
 
Antisocialdarwinist - 21 April 2017 09:09 AM

Be that as it may, the point is that the act of recalling the items on your desk requires the process of constructing a “model” of your desk in your imagination, which is A) not required for the act of recognizing any given item on your desk; and B) the process I’m calling consciousness. You may not be aware of everything your brain does to construct the model, but you are—you must be—aware of the model.

I agree with you, and am warming to your use of ‘model’. I feel there is something to this. But I am also as yet not sure what insight this gives other than noting recall as we experience it involves a process of consciously creating and holding onto a ‘model’.

Here’s an alternative to recalling all the items on your desk. Suppose you spent an hour or so memorizing a list of everything on your desk by name: “light, printer, telephone…” When prompted by the stimulus, “Name all the items on your list,” you’d reply by spitting out the list, sort of like singing a song. You could do that by rote, without any awareness of your desk or the items on it—i.e., without constructing the model of your desk in your imagination.

It’s like the difference between memorizing a list of directions (“left on Main Street, right on Sunset Boulevard, third house on the left”) and holding an image of a map of the neighborhood in your imagination.

Let’s stick with maps. I like maps. (As an aside, if you are interested in maps I recommend Alan MacEachren’s excellent book ‘How maps work’.)

The questions that are relevant here are: what is map construction, and can there be different forms of it? And what value-add does consciousness bring to this when it is involved?

At a minimum, by map I mean some structure that encodes geometric or spatial or even logical relationships between a set of ‘things’, and maybe some quantitative descriptors of the relationships. But presumably in this context a map means more than an abstract structure (that could be codified mathematically), ie these ‘things’ in the map must be associated with some semantic meaning (perhaps implying an embedding in other bigger maps, maps of maps etc), or be connected with memories and other mental constructs, or just be associated with types of sensory inputs.
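A minimal ‘map’ in this abstract sense (a set of things, typed relationships between them, and quantitative descriptors on the relationships) might be sketched as a labelled graph; the places and distances below are invented:

```python
# A 'map' reduced to its bare structure: things, typed relationships
# between them, and a quantitative descriptor on each relationship.
# Places and distances are invented for illustration.
map_edges = [
    ("home", "left_of",  "park",  {"distance_m": 300}),
    ("park", "north_of", "shops", {"distance_m": 550}),
    ("home", "path_to",  "shops", {"distance_m": 800}),
]

def related(edges, thing):
    """All (relation, other thing, descriptors) triples involving `thing`."""
    out = []
    for a, rel, b, attrs in edges:
        if thing in (a, b):
            out.append((rel, b if a == thing else a, attrs))
    return out

print(related(map_edges, "home"))
```

The semantic meaning mentioned above would then live in whatever the node labels are hooked up to (other maps, memories, sensory inputs); the structure itself is just relationships plus descriptors.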

I am assuming that the mind does unconsciously construct maps of this form from a very early age, and that animals (including bees, and who knows maybe even some amoeba - see the book Wetware by Dennis Bray) unconsciously construct maps.

So when we consciously construct a map (hold the image in our imagination, as you say), what type of information about the world, or about ourselves in the world, are we capturing that couldn’t be captured by an unconscious map construction process?

This is the question I’d like to answer.

One conjecture could be that the awareness itself of map construction, and of the resulting map ‘held in the imagination’, does not encode information of any value (in terms of increasing the likelihood of survival and gene reproduction) that could not be encoded unconsciously. I think this must be the case for the following reason: if the imagined map can be stored away in long term memory unconsciously and then be later recalled consciously, even though the recall is conscious and may involve effort (unwinding Ariadne’s thread through the maze of our memory, so to speak), the full information content must have been stored unconsciously, even if this was implicit.

So what does awareness in the recall process add?

It would seem that conscious experience for us is a necessary part of the process of formation and recall of certain types of long term memories - perhaps related to why Gerald Edelman refers to the conscious experience as a ‘remembered present’.

My question really at this point is, from a survival/reproduction perspective, what can conscious recall facilitate beyond what could be achieved by unconscious brain activities (call them algorithms) that have access to the same information that is in the memory?

It isn’t clear to me that functionally the awareness aspect facilitates anything more. I suspect, for instance, that what we typically refer to as conscious thinking (deductive and inductive reasoning, concept and relationship formation etc) doesn’t require conscious awareness, i.e. that one day its operation will probably be able to be codified formally/mathematically without the need for an experiencing I.

Maybe the right way to understand the role of awareness is that it is fundamentally there to facilitate a certain type of memory ie the storing and activation of certain types of information.

Maybe one day AI will be able to store and recall the same information content (that is in these types of long term memories) without an associated ‘awareness process’; or maybe there is something about the quality of the information being stored that means its formation and recall - its reactivation for use, let’s say - necessarily involves conscious experience. I don’t know. It does feel as though recalling a memory, holding the parts and the whole, and the potential relationship with other things (on the edge of the map we have created) in our minds, is like bringing something back to life ie taking something inert and activating the potential in it; or the difference between ‘ingredients on the kitchen bench + a cake recipe’ vs ‘the cake that’s been cooked from those ingredients following the recipe’ (so maybe consciousness lets us have our cake and eat it). But probably statements like this are unlikely to lead to much insight given their self-referential nature (being a bit like ‘It does feel like…this feeling…is like a feeling’).

Is any of this consistent with your perspective?

 

 
Antisocialdarwinist
Total Posts:  5765
Joined  08-12-2006
25 April 2017 21:50
 

By (my) definition, consciousness is the process of constructing the model—or map—of reality. So no, I don’t think your position—that consciousness is mere awareness of a process, or awareness of the end result of a process that happens without it—is consistent with mine. The question you’re asking, to my mind, is whether awareness of the model is a necessary part of the process of constructing the model, i.e. a necessary part of consciousness. If not, then it stands to reason that AI could be developed with the capacity to construct a model of reality without being aware of it.

To a certain extent, I’d have to say that awareness is not a necessary part of the process. Autonomous cars (or even Grey Walter’s tortoises) already do that, don’t they? The difference is that their model is objective. That has certain advantages, but it seems to me that it also limits them to existing reality. Since our model is subjective, we can imagine permutations of reality. We can imagine possibilities. Is awareness necessary for constructing models of variations on reality? My gut says it is, but I’ll have to think about it some more.

 
 
Giulio
Total Posts:  158
Joined  26-10-2016
26 April 2017 13:45
 
Antisocialdarwinist - 25 April 2017 09:50 PM

By (my) definition, consciousness is the process of constructing the model—or map—of reality. So no, I don’t think your position—that consciousness is mere awareness of a process, or awareness of the end result of a process that happens without it—is consistent with mine.

That’s not quite my view. My view is that consciousness is a property some mental processes have, and others don’t. Awareness (not necessarily self-awareness) is just another word I am using in this context for that property.

Perhaps unlike you, I believe certain types of information models can be constructed unconsciously, and in fact are constructed that way by many living things. Though this is admittedly definitional. By using the word model more broadly, I am allowing myself to ask what it is that the models constructed consciously can do or represent that unconsciously constructed ones can’t, and what would be the evolutionary benefit. I have noted already that the information content of consciously created models appears to be stored unconsciously.

One perhaps crazy conjecture I put forward above was that consciously created models have no additional information content, and the experience of consciousness is just part of the process used by us (and other animals) for creating and recalling long-term memories (models, if you like).

The question you’re asking, to my mind, is whether awareness of the model is a necessary part of the process of constructing the model

Yes, you can put it that way as well.

[ Edited: 26 April 2017 13:56 by Giulio]
 
Antisocialdarwinist
Total Posts:  5765
Joined  08-12-2006
26 April 2017 19:29
 
Giulio - 26 April 2017 01:45 PM
[...]

The question you’re asking, to my mind, is whether awareness of the model is a necessary part of the process of constructing the model

Yes, you can put it that way as well.

And my answer to that—what the conscious model can do that the unconscious one can’t—is that it allows us to build a particular kind of model, one that’s different from, but analogous to, reality. Autonomous cars can’t imagine what the commute would be like if another lane were added to the highway.

 
 
Giulio
Total Posts:  158
Joined  26-10-2016
27 April 2017 03:38
 
Antisocialdarwinist - 26 April 2017 07:29 PM

And my answer to that—what the conscious model can do that the unconscious one can’t—is that it allows us to build a particular kind of model, one that’s different from, but analogous to, reality. Autonomous cars can’t imagine what the commute would be like if only another lane was added to the highway.

I think your example about autonomous cars is wrong, though it depends what you mean by ‘imagine’. Certainly from a functional or information-processing perspective (ie what could we usefully infer about the commute if a lane were added), this type of alternative-scenario analysis can be reasonably performed by sophisticated algorithms. Just look at AlphaGo - it’s a self-learning pattern-recognition algorithm (so not if-then-else based) that can ‘imagine’ alternative Go-playing strategies and board states, and it can beat the best Go masters in the world (something most experts until recently thought wouldn’t be achievable for at least another ten years, given Go’s mind-boggling complexity). Sure, machine learning can’t yet do a lot of the imaginative things we can, but that may be just a matter of time. Self-programming algorithms are a real area of research. Again, this is all from a functional information-processing perspective (what can be deduced from the model) rather than an ‘experiential’ perspective (what it feels like to hold the model in one’s imagination).
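For what it’s worth, the kind of counterfactual evaluation being claimed for algorithms here can be sketched in a few lines. The traffic model below is invented and grossly simplified; the point is only that a program can score a state it has never observed:

```python
# Toy counterfactual evaluation: score the actual road, then a hypothetical
# variant with one more lane. The delay model is invented and simplistic;
# the point is only that a program can evaluate a state it has never seen.

def commute_minutes(cars, lanes, base=20.0):
    """Invented toy model: travel time grows with cars per lane."""
    return base + (cars / lanes) / 10.0

actual = {"cars": 400, "lanes": 2}
imagined = {**actual, "lanes": actual["lanes"] + 1}  # the counterfactual state

print(commute_minutes(**actual))    # commute as things stand
print(commute_minutes(**imagined))  # 'imagined' commute with an added lane
```

Whether evaluating `imagined` deserves the word ‘imagine’ is of course exactly the question at issue; functionally, though, the counterfactual is scored just as easily as the actual state.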

So I don’t feel you’ve adequately or convincingly answered the question or I haven’t yet understood your answer.

Sorry to keep pressing you (I only do because I’ve learnt from our previous exchanges), but what do you make of the following syllogism I gave earlier:  the information content of any consciously constructed model can be stored in long term memory for later recall; the storing of it is not held in consciousness; therefore the information content of a consciously constructed model must be able to be represented in the unconscious mind.

What’s the ‘so what’ of this? Presumably the information content of a consciously constructed model could be accessed and used by an unconscious process (though I don’t know if there is evidence of this). If this is the case, it is an interesting question why we evolved to consciously experience the recall. Maybe our mind is regularly and unconsciously accessing and using these stored, consciously constructed models, but the only times we consciously recall these occasions are when the mind decides for some reason to create a new long-term memory about the activity of accessing and processing the stored memory. (Again this is pursuing the hypothesis: conscious experience = the process of the mind committing some information management/model construction to memory.)

 

 
Antisocialdarwinist
Total Posts:  5765
Joined  08-12-2006
27 April 2017 08:12
 
Giulio - 27 April 2017 03:38 AM
Antisocialdarwinist - 26 April 2017 07:29 PM

And my answer to that—what the conscious model can do that the unconscious one can’t—is that it allows us to build a particular kind of model, one that’s different from, but analogous to, reality. Autonomous cars can’t imagine what the commute would be like if only another lane was added to the highway.

I think your example about autonomous cars is wrong, though it depends what you mean by ‘imagine’. [...] What do you make of the following syllogism I gave earlier: the information content of any consciously constructed model can be stored in long term memory for later recall; the storing of it is not held in consciousness; therefore the information content of a consciously constructed model must be able to be represented in the unconscious mind. [...] Presumably the information content of a consciously constructed model could be accessed and used by an unconscious process (though I don’t know if there is evidence of this).

I initially thought your syllogism was already accepted as fact, at least in the sports world. (I keep returning to the sports world because that’s the context in which I first became interested in consciousness.) There’s a technique called “visualization,” in which you imagine practicing without actually doing it. The study I’m most familiar with involved two groups of Soviet shooting competitors, both of which already had the basics down. One group was told to practice with rifles and ammunition for X hours a day; the other was told to imagine practicing, without rifles. At the conclusion of the study, the visualization group was found to have improved about as much as the group that had practiced for real. This seems like a perfect example of a consciously constructed model being accessed and used by an unconscious process: the shooters who visualized practicing then went on to shoot in actual competition.

(To shoot well in this particular kind of shooting competition, offhand rifle, the shot must be executed subconsciously. By the time the shooter becomes aware that the sights are aligned on the bullseye and “consciously” pulls the trigger—a process that takes anywhere from 250 to 750 milliseconds—the sights are no longer on the bullseye. “Automating” these two steps—connecting the neurons that recognize proper sight alignment directly to the neurons that pull the trigger, through thousands of hours of repetition or visualization—makes them happen almost instantaneously.)
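That timing argument can be sketched in a few lines. The wobble frequency, amplitude, and bullseye size below are invented for illustration; only the 250-750 ms conscious-reaction window comes from the point above:

```python
import math

WOBBLE_HZ = 1.5       # invented: sights sweep across the bull ~1.5x per second
AMPLITUDE_MM = 8.0    # invented: peak drift of the sight picture
BULL_RADIUS_MM = 2.0  # invented: radius of the bullseye

def sight_offset(t_ms):
    """Where the sights point t_ms after a moment of perfect alignment."""
    return AMPLITUDE_MM * math.sin(2 * math.pi * WOBBLE_HZ * t_ms / 1000.0)

def is_bullseye(reaction_ms):
    """Notice alignment at t=0, fire after the reaction delay: the shot
    lands wherever the sights have drifted to by then."""
    return abs(sight_offset(reaction_ms)) <= BULL_RADIUS_MM

print(is_bullseye(0))    # "automated" pull, ~0 ms latency → True
print(is_bullseye(250))  # conscious pull: sights drift off the bull → False
```

Under any wobble parameters in this ballpark, a quarter-second delay is enough to carry the sights well outside the bull, while the near-instant “automated” pull fires while still aligned.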

But now that I think about it, I’m not so sure it’s a good example at all. The consciously constructed model isn’t really being accessed subconsciously, because the directly connected neurons obviate the need for the model. So maybe you can provide an example of an unconscious process accessing a consciously constructed model? The advantage of unconscious/subconscious processes is precisely that they don’t require a model, which only slows things down.

As for autonomous cars imagining how much better the commute would be with more lanes on the freeway, I don’t see that happening - not unless the car were specifically programmed to do so. What reason would it have for autonomously imagining a “better” commute? For that it would need a subjective point of view, or at least a definition of “better” supplied from someone else’s subjective point of view.

[ Edited: 27 April 2017 08:20 by Antisocialdarwinist]
 
 
icehorse
 
Avatar
 
 
icehorse
Total Posts:  5603
Joined  22-02-2014
 
 
 
29 April 2017 15:51
 

So many ideas here, but I want to ask about this point

from ASD:

That said, I think consciousness is probably overrated, at least for the average person. Most of what we do every day does not involve consciousness. I agree that thinking doesn’t have to be a conscious activity. Epiphanies—the end result of unconscious thinking—are evidence of that.

I mostly feel as though you’re all working with not-yet-well-defined terms. In this case, I would argue that our explicit/talking brain can’t explain most of what we do every day, yet we’re still conscious of it. So:

- the instructions from the brain to the heart to beat - this time - are unconscious.
- the instructions from the implicit brain to the legs to walk are conscious, but not explicitly describable.
- the words I’ve typed here are a product of my explicit brain.

 
 
Giulio
 
Avatar
 
 
Giulio
Total Posts:  158
Joined  26-10-2016
 
 
 
29 April 2017 18:39
 
icehorse - 29 April 2017 03:51 PM

I mostly feel as though you’re all working with not-yet-well-defined terms.

For sure. (If we could define consciousness fully, it is unlikely we would be trying to understand what it is exactly.)

But I’d say more. Part of what a conversation like this does is bring to the surface (make us aware of) the implicit assumptions we make when forming explicit conscious models - in this case, conscious models of the phenomenon of consciousness. There is a lot buried under the surface - in the unconscious, let me say - of any conscious model of a suitably complex phenomenon: we can’t hold it all in conscious awareness in one go; we need to unwind the implications, the relations, etc., like a ball of thread… (By making these remarks here I am, on the sly, responding to ASD. I am still thinking about a more direct response.)

 

 
 < 1 2 3 4 >