Yes - and I don’t even mean that it has to be a skyhook or some sort of top-down process. It just feels that in going from point A (information) to point B (“what it’s like” to experience that information) some entirely new element should enter the picture - some as-yet-undiscovered force or unheard-of algorithm for structuring that information. It may well be, however, that consciousness is constructed entirely of elements and processes we’re already aware of, and simply emerges at a much higher level of complexity than is seen in unconscious machinery.
Yes - the entirely new element, force, and algorithm is the emerging mind/body itself! This is what the argument of emergence in vast systems is all about. New ‘elements’ arise that can best be understood in their own light rather than in terms of their underlying constituents. The semantics arise in the emerged elements themselves! This is what the Life Game (Conway’s Game of Life) demonstrated in a limited and perhaps intellectually clumsy way. Unfortunately Dennett’s comments on the game were not as clear as they could have been (e.g. there was little mention of the role of the programmer in the synthetic ‘emergent’ phenomenon), but they are still worth reading. So the Life Game has a kind of axiomatic basis in the programmer that doesn’t really get away from Gödel’s limitations. But this is not the case when we look at Reg Cahill’s paper (link provided in the other thread).
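For anyone who hasn’t played with it, the Life Game can be sketched in a few lines. This is my own minimal sketch (not Dennett’s presentation, and the function name is mine): the only rules given are local birth/survival counts, yet a higher-level ‘element’ like the glider emerges and behaves as a coherent object the rules never mention.

```python
from collections import Counter

def step(live):
    """Advance Conway's Life one generation. `live` is a set of (x, y) cells."""
    # Count live neighbours of every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # B3/S23 rule: a cell lives next tick if it has exactly 3 neighbours,
    # or 2 neighbours and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic glider: after 4 generations the same shape reappears,
# shifted one cell diagonally - an emergent "object" best described
# at its own level, not at the level of individual cell rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(g == shifted)  # → True
```

The “axiomatic basis in the programmer” point above is visible here too: the rule set and the initial glider pattern are both put in by hand.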
On a personal level, I must say, I like a model such as this. It lends something sacred to the human experience without the need for a dogmatic religion. Mankind as creator and steward of the important things (values, subjectivity, etc.) that emerge with us, at our level of existence or complexity.
Wow - thanks for that Lexie, that’s a really nice summary and compliment!
On another note, wonder if you have any thoughts on the function of consciousness…
This is a really interesting point that I have thought a lot about lately as well. Roughly at one end of the “life-stance spectrum” of the (scientific) community you have what Eucaryote calls the God botherers. Back from there we have what Ayn Rand calls the “mystics”, i.e. those that believe that some form of consciousness is transcendental. Next come the deists - i.e. those that don’t believe in an interventionist god of any description but nevertheless feel that the perceived order of the universe must be explained by a higher intelligence kicking it all off in the first place. Next come agnostics and, after that, soft atheists like myself, i.e. atheists that don’t have an axe to grind against other beliefs and don’t insist that their own convictions are necessarily correct or final, but rather suggest that their own convictions are continually up for grabs - temporary, contingent, and emergent. Finally there are the hard atheists and anti-theists. Yes, I would like others to share my conviction that “getting on with self-actualising” is more important than life-stance.
But the “life stance” a person takes typically has quite a large effect on what they think about the function of consciousness. I would largely agree with Dennett that life & consciousness are just a probabilistic outcome of a QM universe. But how exactly did this QM universe come about? Did an evolved uberspecies have a hand in shaping it? It’s possible in my estimation, without invoking any mysticism whatsoever. However I wouldn’t go as far as saying consciousness is inevitable. That suggests a clockwork universe - and QM tells us this is not the case. I’d prefer to put it a different way. Maybe if you ran the same “universe program” 1,000 times you would find our universe at its age, with its human consciousness, fitting comfortably within three standard deviations of the mean in a bell curve of universal order or self-organised complexity! That said, I think it’s easier to talk of systems having “blind purpose” a lot of the time, like Richard Dawkins often does.
Assuming a philosopher’s zombie is possible, though,
...I definitely wouldn’t…
then it seems as if consciousness could have many potential functions, doesn’t it? So, for fun, I’ll put a few of my musings on why we have consciousness below:
- It’s a mechanism to compress information. ... Or, maybe all the information that we experience consciously still has to be represented subconsciously somewhere, but consciousness allows us to process much faster.
- Subjective experiences are necessary for multi-factor decision making.
...In this model, I imagine consciousness more as feedback - we couldn’t make those types of decisions without constant subjective feedback from the environment. How did that feel? To what degree? What were the subjective consequences? Perhaps some infinitely complex algorithm could produce these results, but it could be that the addition of consciousness was simply the more straightforward answer (and maybe the fastest, when trying not to get eaten).
...it seems as if the level of subtlety and flexibility created by our emotions and subjective experiences would be very hard to recreate in a more binary computer system, programmed to “Do A, Avoid B”, no matter how complex you make it and how many “A’s and B’s” you add.
- Along those same lines, what if consciousness is a system for prioritizing the things which we can productively influence? ... Putting aside the mechanics of how that happens, though, what about why it happens? What information makes the cut and why?
Again, it seems as if this might be a sort of priority system for the things in our environment we can potentially influence. Obviously, basic sensory information tends to get first priority in consciousness as we don’t want to be walking into things or putting our hand on a hot stove. Often followed by problems that would have been significant to survival in the past - social relationships, resource management, etc. Of course, we could likely accomplish that without subjective consciousness - but maybe subjectivity and the level / type of experience that becomes attached to certain information is simply the solution our minds arrived at for quickly adapting to the environment and placing a priority on certain information.
Don’t really have much to add to all of the above. All of it accepts the model of the mind/body as a computational system. The only observation I would make concerns the interaction or interpenetration of instincts-with-consciousness, or genes-with-memes. Your next paragraph seems to touch on this idea.
In both of the above scenarios, actually, I picture consciousness as information in the mind that somehow becomes infused with subjectivity - again, from a survival standpoint and from an information management standpoint, I think it follows that there are many situations where this could be adaptive or useful, above and beyond a philosopher’s zombie model.
- Consciousness was key to self-reflective thought and, beyond that, social thought (understanding what another person might be thinking). I try to imagine even the most advanced computer or robot “thinking about itself”. What would that mean without consciousness? I suppose it could have a program that allowed it to examine its own data and look at its own past performance. It could even evaluate its past performance, I imagine, and identify factors that blocked optimal performance, perhaps even “re-writing” itself to better meet those goals in the future. Again, to my mind, all of this seems pretty plausible for a non-conscious computer or robot with sufficient technology.
What is gained from this picture, then, by adding subjective experience to that process of reflection? My guess would be the sense of what we think of as “free will”, and again, increased ability to adapt and grow. What is a computer going to base its self-evaluation on, after all? Meeting a set of particular objectives, probably. And why should it “care” about those objectives? Why would a computer “want” to do anything? I suppose it “cares” about doing only the things it’s been programmed to do.
Humans, on the other hand, don’t have the benefit of a programmer. We have to “self-program” - so not only do we have to create our own goals and objectives, we must be able to change them at a moment’s notice. In the scenario above, how on earth could the computer do this? What exactly would it change its goals to, and based on what? It seems that in our case, subjectivity might be the answer to our ability to “self-program”. Of course, by “self” program, I don’t mean there’s some sort of ghost in the machine doing the program. I simply mean that to engage in a constant, adaptive feedback loop with our environment, subjectivity may be necessary. As you mentioned before, there’s the possible example of early humans recognizing their leader’s fallibility. If humans were a sort of non-conscious android race, why should they care about such a thing? It seems that subjective experience and the ability to hypothesize about creating a different type of subjective experience might be key.
I think you kind of answer your own question here. Self-programming enables us to flourish in our own environment - home / workplace / community / etc. based on the features / functions / benefits / values that we embody as a species and as fallible moral agents. I don’t think there is any reason why robots or computers couldn’t do the same one day.
- Consciousness is a sort of construction tool for the universe…It simply means that, for whatever reason, we’ve seen that our universe tends to create higher and higher levels of emergence and organization. Who knows why, it just does - and to this end, consciousness is a sort of motivational tool that causes humans to create even higher levels of emergence. How many things has mankind built and accomplished because of the subjective experiences of love, beauty, pride, striving for knowledge, and so on? Things that don’t benefit us in any immediate functional sense, I mean, survival-wise.
Yes, I agree, but many wouldn’t like the underlying suggestion. I kind of like the concept of Gaia even if I can’t yet agree with assigning any self-aware purpose to it. I like Matt Ridley’s concept of “the inexorable coagulation of life” in the same way. If you think about it, what species benefits from evolution in the long term? It adapts, maybe even morphs into a new species, and the ecology it is enmeshed in becomes richer and deeper. In those changes, the species loses its own independent abilities as it picks up its more interdependent abilities. As a bad example, humans now survive in cities and global networks but have lost their abilities to survive in the open plains of their evolutionary past. So what eventually gains from a species tightly adapted to its total environment? It seems Gaia does! Is this by design of the uberspecies? I don’t really expect so. Rather, I think it is ultimately due to the instability of nothingness!!
- Along those lines, I find transcendent experiences of consciousness a bit puzzling…I mean the types of experiences we usually associate with the arts or spirituality, at such a deep level that you really do feel temporarily “changed” or transformed by them.
Ok - all I’d say is that the mystery of consciousness as an example of structured complexity and the mysterious feelings of the arts shouldn’t be bundled together as support for mystery per se! I think Pinker’s suggestion in the last chapter of “How the Mind Works” had some truth to it: all “religion and philosophy are in part the application of mental tools to problems they were not designed to solve”. He suggests pretty much the same thing of the arts as well, as you seem to suggest in the next paragraph. It’s the old problem of adding consciousness to instincts in our very recent evolutionary past.
Evolutionarily speaking, these wouldn’t seem to serve much of a purpose. I know there are theories that they are essentially the equivalent of a “Big Mac” - i.e., a Big Mac sort of over-stimulates the parts of our brain that evolved to crave fatty food, during a time when such an abundance of it would never have actually been available. My personal theory, though (especially since such feelings are often tied to religious experience) is that these states evolved to serve our highest sense of community and selflessness. Certainly throughout history there have been many times when people have had to sacrifice themselves for the perceived greater good. Sometimes for misinformed reasons (i.e., actual human sacrifice) but sometimes for very good reasons (risking your own life to overthrow an evil dictator). My guess is that these experiences that speak to something “greater than ourselves” tend to serve this purpose. It’s also no wonder they tend to be associated with religion - the idea of a god is the first thing that tends to come to mind when you feel something “greater than yourself”.
Lots to talk about here in terms of the result of adding consciousness to instincts. I will address in your next post…
Before looking at this post, just wanted to say that I think social structure, religion, warfare and altruism arise due to the interaction of instincts and consciousness (i.e. not due to instincts alone or some notion of group selection: I think what has been mistakenly called ‘group selection’ is just this interaction of genes and memes). In fact, just as a strong sense of the out-group can lead to warfare, so a strong sense of the in-group can lead to altruism. It is also suggested by some that altruism would not have arisen in a world without warfare. But now it seems we enjoy the benefits of a certain self-sacrifice in society alongside a declining incidence of violence.
Lexie_99 - 21 January 2012 04:42 PM
On yet another note, one of my current interests is how information is bound in consciousness. It seems to me that for any moment of consciousness we experience a swift binding of all sorts of information. Say I walk into a room and a dog runs toward me, my mind instantly integrates:
- The multitude of sensory information in the scene (colors, spatial position, etc.)
- Labels (That’s a dog, he’s running)
- Emotions (He’s scary, I love dogs)
- Plans (run away, I’ll pet him)
and so on. I wonder how much scientists know about this process at this point? It seems that in terms of mental health, so much depends on this process. When looking at emotions, for example, it’s easy for an inappropriate emotion to be “pulled” in certain scenarios despite our best intentions (maybe spiders incite terror and we just can’t help feeling really, really irritated at the sound of that tapping pencil). In terms of cognition, I see this all of the time - autistic children who may experience neural under-connectivity can’t seem to integrate information in a scenario; or a child with word-retrieval problems can “pull” absolutely everything about an object, down to the last detail, except for the actual label. It seems like a sort of magic - of all the little factors that constitute a conscious experience, how do certain things get pulled into the final product, and how can we influence that?
Yes - we’re just getting to a model of normal thinking that behaves a little like an unruly parliament - but eventually votes are cast and decisions taken. We’ve got a long way to go in terms of consciously affecting this model, but neuro-linguistic programming is maybe one approach. My bias is that I think we have to re-institute a sound understanding of values and value-sets directly associated with the two axes (and four quadrants) of consciousness-plus-instincts back into the parliament!
What is the “reason” that creatures have subjective experiences vs. simply running programs like a computer?
Perhaps I’m underestimating the complexity of mouse brains, or maybe there is something about a biological, life-based system that simply lends itself to consciousness in a way that computers and robots don’t. Maybe there’s no real “reason” at all, perhaps nature could have come up with robo-mice but just didn’t, by pure happenstance. Again, though, my thinking is - if the alternative is possible, then is there a specific (evolutionary or functional) reason we ended up with consciousness?
No - I don’t think there is any reason. Some classes of organisms are purely instinctive or computer-like, without brains or nervous systems, while other classes, such as mammals, blindly found “in the vastness of design space” that brains and nervous systems, with varying levels of subjective self-awareness, were a valid way to survive. They stumbled upon consciousness in the same way elephants stumbled upon the amazing appendages called trunks! There’s no reason why trunks should slowly spread to us humans, or consciousness should slowly spread to plants or amoebae…
I think the question that’s ripe for inquiry right now is how to integrate (or in some cases de-integrate) those things that are automatically and subconsciously bound into the conscious experience? Right now it seems we only have practice effect (do it over and over until it becomes habit) and as you said, NLP. I guess because of my job, I eventually get frustrated with the limitations of those tools. I wish we had something more advanced (i.e., some of the neurofeedback that’s been researched now) or some better ideas for how to influence this process in a “low tech” way.
Yeah - I agree. We know a lot more about controlling instinctive biases than curbing silly mindgames. And we have “extended phenotypes” for dealing with the biases (e.g. computer programs that help nurses not make mistakes due to bias in their care of hospital patients) but we are nowhere near the equivalent of helping people tackle their everyday mindgames. I think this is partly because we haven’t yet accepted the error in the mindgames like we have accepted the error in biases. I think it is also to do with pretty much everything previously discussed here. E.g. we need to settle the “fallible moral agency”/“free will” question more definitively and we need to settle the questions of religion, philosophy & science (& emergence) more clearly. We also need to accept a new emergent moral code (which includes feedback) through appreciating the four value sets that arise out of our consciousness-plus-instincts. And we need to see self-actualising as a more basic and worthy aim of life than chosen life-stance (as believer or agnostic or atheist). All this means we need the “Emergent Method”! A method that promotes “value delivery based on values held” more fundamentally than truth-seeking and truth-defending. It is also a method that promotes personal (and social) moral congruence. Thanks for giving me a chance to beat my drum again Lexie! I think if society slowly took on all these “low-tech” ideas, it could be slowly turned around to tackle the really big issues such as our sustainability over the next century. But, hey, I’m just a dreamer! Maybe Eucaryote is right and we’re all about to inevitably drown in a pool of our own excrement on this Petri dish we call Earth!
However I am thinking about setting up a blog page, but it takes time & money and some kind of assurance or belief that people would be interested in participating. For instance there are now over 30,000 members of this forum but literally only a handful participate each day. I think this is a little disconcerting. Maybe we’re ready for change, but not quite yet? Maybe we need to hurt a little more before we can get better?
De Botton seems to be trying to put forward a basic proposal: “What if religions are neither all true nor all nonsense?” I think he does it really well, but maybe the idea of the Emergent Method is way past this!
This is a lot like the last paragraph of Wade’s “The Faith Instinct” book, which is worth a read: “Maybe religion needs to undergo a second transformation ... religion would retain all its old powers of binding people together for a common purpose ... It would touch all the senses and lift the mind. It would transcend self. And it would find a way to be equally true to emotion and to reason, to our need to belong to one another and to what has been learned of the human condition through rational inquiry.”
Both these guys don’t seem to tackle how incredibly difficult this would be to achieve! To me this is the really hard question to consider.
The way to achieving this, I think, is to really deeply acknowledge our fallible moral agency and begin to self-actualise. I think this is the central proposal I can offer by way of solution to the questions intimated by Alain de Botton & Nicholas Wade. So the Emergent Method would be a proposal to answer the really hard question!
Just like (indirect) consciousness – the hard question with respect to our social fabric is not how it works in detail (e.g. in a sense of belonging, in a shared artistic appreciation, etc.) but how it works at all! I am proposing it works because we are most basically (indirect) moral beings (and not just amoral self-interested consumers). This state has arisen from the addition of ‘mysterious’ consciousness to instincts. We are moral beings that in the past have abrogated our moral agency to religions and religious authority. Now we have given up our superstitions and miracle stories and haven’t replaced them with anything yet. In that “giving up” we have also lost the sense of certainty and moral authority and community that those shared religious rituals provided. But there is no going back. So there is only one way forward – to fully embrace our imperfect but emerging moral agency (with our emerging higher consciousness). I believe that if we heroically take this momentous step as a community we can find ourselves and reflect ourselves far more deeply in our societies and social fabric than religion was ever able to facilitate in the past. I suspect it is THE only way we can tackle the big question of our sustainability as a species over the next century.
Am I mad? And will Sam’s new essay on free will trip us up or aid us?
As a fellow Aussie, it may be a bit parochial of me - but I think you will get the drift. He talks about “our national story of creativity”, an arts industry “that could question and express itself” and participants “who respect their job as a professional calling”. He also speaks of Australia as a country where “performance rituals are at the heart of its being” and “the stories we tell ourselves and our children ... have a serious importance”. I think these things are true of all countries and cultures, with or without religion…
So this thread has expanded to include the mystery of the social fabric that arises out of individual consciousness-plus-instincts. It has also expanded to consider how consciousness-plus-instincts might artificially evolve in such a way as to deal with our most pressing challenges as a species - our sustainability on this planet over the next century.
In the last post I said:
Michael Kean - 28 January 2012 09:49 PM
Just like (indirect) consciousness – the hard question with respect to our social fabric is not how it works in detail (e.g. in a sense of belonging, in a shared artistic appreciation, etc.) but how it works at all! I am proposing it works because we are most basically (indirect) moral beings (and not just amoral self-interested consumers). This state has arisen from the addition of ‘mysterious’ consciousness to instincts.
I guess I need to explain the difference between “moral beings” and Adam Smith’s more basic idea of self-interest. I agree that our morality emerges over time from the nature of our being as blindly selfish organisms endowed firstly with survival instincts but secondly with worldly-awareness and self-awareness. So we’re more than simple, blindly self-interested organisms now. We’re complex and aware social beings. We now know something of the “invisible hand” of nature pushing us towards more complex interactions with each other and our environment. We know our self-interest but we also know things like the win-win of the Non-Zero Sum that Robert Wright speaks about.
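The non-zero-sum idea can be made concrete with a toy payoff table (my own construction for illustration, not Wright’s; the payoff numbers are arbitrary). In a zero-sum game one side’s gain is exactly the other’s loss, but in a non-zero-sum game like the classic stag hunt, cooperation leaves both players better off than mutual defection:

```python
# Stag hunt payoffs as (player_1, player_2) tuples. Hunting the stag
# requires cooperation; a lone defector settles for a hare.
stag_hunt = {
    ("cooperate", "cooperate"): (3, 3),  # stag caught together
    ("cooperate", "defect"):    (0, 2),  # cooperator left empty-handed
    ("defect",    "cooperate"): (2, 0),
    ("defect",    "defect"):    (1, 1),  # each hunts hares alone
}

# The joint payoff varies across outcomes, so the game is non-zero-sum:
totals = {moves: a + b for moves, (a, b) in stag_hunt.items()}
print(totals[("cooperate", "cooperate")])  # → 6, the win-win outcome
print(totals[("defect", "defect")])        # → 2
```

The point of the sketch is only that the joint total is not fixed: social arrangements that steer players toward the cooperative cell create value that neither could capture alone.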
Our collective experiences of dreadful warfare with the out-group have also bound us together in the in-group in deeper altruistic ways than were possible before those dreadful experiences. So even warfare has contributed to our modern positive morality in important ways.
But our problem now is how we advance from here. Do we have time to avoid squandering the last of the resources on this Petri dish before we wipe ourselves out in the deepest adversity we have ever faced? Jared Diamond’s “Guns, Germs and Steel: The Fates of Human Societies” would perhaps suggest it is possible, but only if we act in very unnatural ways that we didn’t manage in the past. It seems to me that Wright’s insights and Diamond’s insights are not enough in themselves. We have to overcome mysterious and wayward consciousness itself. For this task we need not just an understanding, or a highly advanced and intelligent morality, but a communal, grass-roots drive to apply this new morality of our higher consciousness as well.
This is what I call the Emergent Method. It requires that we each fully take on the task of our individual self-actualisation as fallible moral beings. This means we drop our past superstitions and beliefs in miracles and fully take on board the awesome responsibility that confronts us as a species. It means we find global purpose and meaning pretty damned quickly. It means we drop our superstitious rituals but certainly not socially galvanising ritual per se. But I’ll admit I haven’t fully defined this idea as yet either. How would we enter into such rituals in such a way as to make us ready as a society not just for war, but for rebellion against the short-sightedness of our own genes and memes (as Richard Dawkins put it)? Edward de Bono’s “Six Hats Method” and “New Thinking for a New Millennium” seem on the right track, but we’re talking about a lot more than just dispute resolution here: we’re talking about a consciousness revolution.
Does anyone have any positive suggestions? Sam? I’d really like to know about them…
Kenneth, where I get hung up is trying to make sense out of intelligence emerging from matter. Intelligence manipulates matter rather well and it’s usually quite obvious when we see it. From our earliest tools we’ve been manipulating matter to suit our needs, and we’ve become very good at it. Did intelligence emerge from matter? While I’m not religious in any organized sense, it just seems logical to me that if one emerged from, or begat the other, a higher intelligence begat matter rather than the other way around.
As we search for other intelligence in the universe - and I’m sure it’s out there - it will probably be recognized by the stuff it’s built, as matter, left on its own, never combines to form the things intelligence does with it. I understand how logic can be very deceiving. Logically, the Earth is flat and stationary, and the sun revolves around it. Anyway, I’m not stating the above as a belief, rather just a logical observation.
Consciousness is not the only thing that materialists know to exist. I can’t speak for all materialists, but I do not limit my evidence of the physical world to the direct experiences of my five senses or my subjective experience. I can’t see a microbe without a microscope, but I know they exist. I can’t see the Galilean moons without a telescope, but I believe they exist. I can’t directly experience anyone else’s experiences, but I don’t doubt they have them.
Arguing that the materialist world view is self-contradictory is just playing with words.
To me, consciousness (awareness) is an idling motor, waiting for someone or something to put it into gear.
I welcome your response but you will no doubt agree that:
Awareness should not stay around, let alone *increase*, with total loss of electrical activity—not even in one case. It looks to me like you, Harris, and Shermer are all in denial regarding life after brain death (perhaps you have simply not seen the research, in which case I have provided it to you and removed any excuse of ignorance).
Studies like this will definitively test your openness to empirical evidence when it conflicts with your beliefs.