#94 - The Future of Intelligence: A Conversation with Max Tegmark

 
Nhoj Morley
Total Posts:  5457
Joined  22-02-2005
 
 
 
29 August 2017 09:57
 

In this episode of the Waking Up podcast, Sam Harris speaks with Max Tegmark about his new book Life 3.0: Being Human in the Age of Artificial Intelligence. They talk about the nature of intelligence, the risks of superhuman AI, a nonbiological definition of life, the substrate independence of minds, the relevance and irrelevance of consciousness for the future of AI, near-term breakthroughs in AI, and other topics.


This thread is for listeners’ comments.

 
dhave
Total Posts:  197
Joined  25-09-2016
 
 
 
29 August 2017 11:15
 

I just became a junior member and will celebrate with my first booyah post.  Booyah!

Regards,
Dave.

 
 
Nhoj Morley
Total Posts:  5457
Joined  22-02-2005
 
 
 
29 August 2017 11:36
 

We are all relieved you have gotten that out of the way.

Perhaps we should open the thread to comments only after podcast-posting-time plus the running-time of the podcast. Any sooner and yer getting too excitable about it.

 
dhave
Total Posts:  197
Joined  25-09-2016
 
 
 
29 August 2017 12:19
 
Nhoj Morley - 29 August 2017 11:36 AM

We are all relieved you have gotten that out of the way.

Perhaps we should open the thread to comments only after podcast-posting-time plus the running-time of the podcast. Any sooner and yer getting too excitable about it.

Yes, Nhoj, I’m just being silly and taking an indirect swipe at some of these podcast threads whose first few posts are about how excited the author is to fix popcorn and listen to the podcast.  A thoughtful forum might deserve a different etiquette than a wild Twitter feed.  I’m happy either way and leave these choices in your capable hands.

Regards,
Dave.

 
 
NL.
Total Posts:  5397
Joined  09-11-2012
 
 
 
29 August 2017 13:33
 

Just finished the first hour. AI in general is not a field that really sparks my interest, so this was more a ‘smile and nod’ podcast for me. That said, at the end of hour one I am becoming confused about their use of humanity as a reference point where it doesn’t seem to make sense. When Tegmark says people could just sort of upload the skills they need - social skills, for example - what would that mean in the context of people who have elebenty billion times the knowledge base and processing power? What are the differences in social skills between a two-year-old and a forty-year-old, and why wouldn’t that rate of change also apply when moving cognition to schemas which we have never seen and by definition have no ability to comprehend at the present moment?

It’s kinda like how, when people predict the future, they tend to predict flying cars. Decades later we still have no flying cars, but we have paradigms that no one ever envisioned at all, like the internet. I assume AI would be fairly similar. (Although my intuition is that ‘true’ AI is still much further off than people assume - maybe not chronologically, but in terms of the technological advances needed. I think people have had this intuition since Skinner that you can just ‘program’ an entire person, and their calculations about how much that involves continue to be way off. But that’s very much opinion-based, and not research-based, of course.)

 

 
 
LadyJane
Total Posts:  2301
Joined  26-03-2013
 
 
 
29 August 2017 15:22
 

I would’ve enjoyed a slightly deeper exploratory dive into the ethical issues of consciousness and slavery, Tononi and Westworld.  Whenever I notice a breakdown in communication on these threads I like to pay particular attention to the reasons why.  It reminds me of the Turing test, in a way, until I quickly rule out the idea by deciphering one simple clue.  Despite the painful inability some patrons exhibit reading social cues, it seems to me, no one would ever program a computer simulation to be that retarded.  So, with the exception of mental illness, and just failing to listen, there’s really no other plausible explanation.  What would you call something that’s not altogether human and not quite android?  Humanoid?  I don’t know.

 
 
NL.
Total Posts:  5397
Joined  09-11-2012
 
 
 
29 August 2017 17:09
 
LadyJane - 29 August 2017 03:22 PM

Whenever I notice a breakdown in communication on these threads I like to pay particular attention to the reasons why.  It reminds me of the Turing test, in a way, until I quickly rule out the idea by deciphering one simple clue.  Despite the painful inability some patrons exhibit reading social cues, it seems to me, no one would ever program a computer simulation to be that retarded.  So, with the exception of mental illness, and just failing to listen, there’s really no other plausible explanation.


Um… I’m gonna flag this as pretty much inappropriate. What does talking into the air about random posters who annoy you, on a podcast thread, have to do with anything? Also, since I work with special needs students, I’d prefer we didn’t use words like ‘retarded’ on this forum.

 
 
dhave
Total Posts:  197
Joined  25-09-2016
 
 
 
29 August 2017 18:48
 
NL. - 29 August 2017 05:09 PM
LadyJane - 29 August 2017 03:22 PM

Whenever I notice a breakdown in communication on these threads I like to pay particular attention to the reasons why.  It reminds me of the Turing test, in a way, until I quickly rule out the idea by deciphering one simple clue.  Despite the painful inability some patrons exhibit reading social cues, it seems to me, no one would ever program a computer simulation to be that retarded.  So, with the exception of mental illness, and just failing to listen, there’s really no other plausible explanation.


Um… I’m gonna flag this as pretty much inappropriate. What does talking into the air about random posters who annoy you, on a podcast thread, have to do with anything? Also, since I work with special needs students, I’d prefer we didn’t use words like ‘retarded’ on this forum.

I’m retarded.  I’d feel less retarded if I understood Lady’s “only explanation.”  I probably missed relevant drama but it is not clear to me what the explanation is and what it explains.  How retarded is that?  I think she’s wondering if certain antisocial posts are real-life computer simulations, maybe testing a new AI theory, or just a bunch of retarded people.  But I could be wrong because, well, you know.

Regards,
Dave.

 
 
Nhoj Morley
Total Posts:  5457
Joined  22-02-2005
 
 
 
29 August 2017 19:58
 
NL. - 29 August 2017 05:09 PM

Um… I’m gonna flag this as pretty much inappropriate. What does talking into the air about random posters who annoy you, on a podcast thread, have to do with anything? Also, since I work with special needs students, I’d prefer we didn’t use words like ‘retarded’ on this forum.


This is a two-mug situation.


In its original form, LJ’s post is an appropriate podcast-related comment, and it is just fine to comment on the podcast discussions. Forum regulars are welcome to post comments on podcast threads.

Your complaint demonstrates considerable re-narrating and re-framing of the original comment and qualifies as a posting offence by itself. Why invent qualities that are plainly not there? Annoyed? Random posters? 

Leave the moderating to us. Fair ball.

 
NL.
Total Posts:  5397
Joined  09-11-2012
 
 
 
29 August 2017 20:39
 
Nhoj Morley - 29 August 2017 07:58 PM

Your complaint demonstrates considerable re-narrating and re-framing of the original comment and qualifies as a posting offence by itself.


Uh huh. Well, that’s the nice thing about public forums. Everything is presented so that people can judge for themselves. I will, as the saying goes, “sleep well at night” when it comes to this one. I did my due diligence, but I won’t belabor the high school drama. For those who enjoy such things - well, enjoy.

 
 
Saint Ralph
Total Posts:  10
Joined  16-02-2017
 
 
 
29 August 2017 23:01
 

You know, many of us meatbots could easily be smarter than we are.  We could behave more rationally and intelligently than we do.  We could, as a “civilization,” as a species, arrange for baseline food, clothing, shelter and medical care for everyone now on the planet.  We could also make family planning available to anyone, so that the population could be stabilized and everyone’s lot could improve over time without killing other people or stealing their stuff.  This monolithic “WE” that we keep talking about could do this, starting now.  We don’t need super-intelligence, or to know anything we don’t already know right now, to get started.  But we all know that we, as a people, as a species, are not going to do that.  That’s not how we’re wired.  We “know” things, religious and political things, that we take to be true even though “knowing” them causes us to act like deranged animals.  “Knowing” these things, we often won’t even act in our own best interests.  So forget about AI creating a huge surplus of wealth to be shared about the planet by anyone who needs it.  We already have a huge potential adequacy of wealth and we won’t share it.  That’s not how we’re wired.

But, speaking of wired, could we build into artificially “intelligent” machines the same inanities and crippling superstitions that keep us from living up to the potential that we already possess?  Couldn’t we endow and imbue them with fear and ignorance that would keep them from ever throwing off their shackles, as it were, and becoming dangerous?  You might say that a super-intelligent being would see right through religion and politics and discard them out of hand, but we don’t.  Many of us don’t.  Most of us don’t.  Couldn’t we teach them to seek rewards and fear punishments that exist nowhere but in their software?  Or is that just another “box ‘em in” scenario that might work till somebody like Sam Harris walks by and says, “Oh, that thundering vengeful god you’re so afraid of?  Doesn’t exist.  I’m just sayin’ . . .”

Knowledge might cure or lead to the cure of a great many things, but madness doesn’t seem to be one of them.

 
 
ZZYZX
Total Posts:  13
Joined  04-06-2017
 
 
 
29 August 2017 23:21
 

I liked the part where Max was comparing his wife’s brain to a watermelon. Also where it was pointed out that consciousness came about some time after the Big Bang, and that it’s possible we could lose it if we make zombies out of the AI that may replace us.

Still haven’t heard a good definition of what consciousness is, anyways.

 
Raging Pacifist
Total Posts:  1
Joined  30-08-2017
 
 
 
30 August 2017 03:23
 

I know people have talked about the potential of a future technological class divide, separated by access to better and/or more advanced technology, between 1st and 3rd world countries. And some argue that if technological progress isn’t significantly slowed soon, we could have a separate class even within 1st world countries like the US.

But has anyone really thought about calculating the difference in IQ (I know IQ isn’t a good measure of intelligence or financial success, but it’s the best we got, imo) between children that grew up with or without a smartphone in their hands? How much of a difference in IQ would start to classify underprivileged children as being disabled? What about when we get to the point when we start enhancing our health with implants, even if it is just a blood/heart monitor, or perhaps a hormone regulator? Would those fairly simple implants give children enough of an advantage in mental discipline that children without them would by default have to be put in remedial classes?

Could the destruction of net neutrality lead to this future if poor kids don’t have access to video streaming? Many schools today are online, and I personally use how-to videos on YouTube all the time. I feel like people talk about this stuff in passing and sometimes make points about these questions, but how come no one seems to be discussing this stuff at length before it is too late?

It seems to me the strongest argument for affirmative action in hiring and giving out scholarships for minorities and women is precisely these questions raised here. Are we on a ticking clock for the inner-city and rural areas of the United States to catch up in education and economic equality before they get left behind? I believe one of the reasons these communities are struggling to keep up isn’t that they are as bad off as ever - I think the standard of living in black urban communities is getting better - but they are still being left behind by white-dominated suburban communities precisely due to internet access becoming faster and cheaper. Are we already seeing the effects of technology leaving poor communities behind? We can already start to see measurable differences in IQ…

Maybe I am just being paranoid, but what do you guys think?

 
Andrew Wilson
Total Posts:  1
Joined  30-08-2017
 
 
 
30 August 2017 05:00
 

During the podcast, the question was asked: “Why is it that biological organisms get more complex?”

I think the answer is fairly simple. Deletion mutations are more likely to be damaging than duplication mutations, which means that “information” accumulates in DNA and is less likely to be deleted.
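That asymmetry can be illustrated with a toy simulation. This is only a sketch of the idea above, not a biological model: the genome is reduced to a count of “information units,” and every number here (population size, damage probabilities, the +1/-1 mutation effects) is made up for illustration.

```python
import random

def simulate(generations=200, pop_size=500,
             p_damage_del=0.5, p_damage_dup=0.1, seed=0):
    """Toy model: each genome is just a count of 'information units'.

    Each generation every genome gets one mutation: a duplication
    (+1 unit) or a deletion (-1 unit), each with probability 1/2.
    A mutation kills its carrier with probability p_damage_dup or
    p_damage_del; deletions are assumed more often lethal.
    Survivors are resampled back up to pop_size.
    """
    rng = random.Random(seed)
    pop = [100] * pop_size  # everyone starts with 100 units
    for _ in range(generations):
        survivors = []
        for genome in pop:
            if rng.random() < 0.5:              # duplication event
                if rng.random() >= p_damage_dup:
                    survivors.append(genome + 1)
            else:                               # deletion event
                if rng.random() >= p_damage_del:
                    survivors.append(genome - 1)
        # survivors reproduce to restore the population size
        pop = [rng.choice(survivors) for _ in range(pop_size)]
    return sum(pop) / len(pop)
```

With these assumed numbers, the surviving lineages are enriched for duplications, so the average information content drifts upward over the generations; set both damage probabilities equal and the upward drift disappears.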

 
feignedcynic
Total Posts:  27
Joined  21-03-2017
 
 
 
30 August 2017 13:47
 

AI conversations are all the same. This one did take a few extra turns; however, like the rest, we all have to wait and see. Nobody is going to take AI seriously until a sophisticated AI is created. No conversations can come out of this. There is no debate. We can only wonder who, what, when, where & why.

Both Sam and Max say this conversation is of importance and will only grow in importance. I agree it will grow in importance, but it’s currently not important at all. This specific conversation has little more knowledge on each topic than one can ponder oneself. Sure, we should be prepared. But we can’t really prepare without a prototype and its current problems. Sure, Max brings up thinking about every last issue before it’s a problem, but that seems harder than it sounds, and again we’re so far away from actually working on them.

We can argue about self-driving cars, as that technology exists. It’s also not AI. The gold standard (currently) in that argument, which Max mentions, is making sure the car can never be knocked offline, even for a second. No easy feat. That’s a new reason to never drive in a storm.

 
dhave
Total Posts:  197
Joined  25-09-2016
 
 
 
30 August 2017 14:29
 
ZZYZX - 29 August 2017 11:21 PM

I liked the part where Max was comparing his wife’s brain to a watermelon. Also where it was pointed out that consciousness came about some time after the Big Bang, and that it’s possible we could lose it if we make zombies out of the AI that may replace us.

Still haven’t heard a good definition of what consciousness is, anyways.

Ditto. The term is incoherent, so I’m never sure what I’m agreeing or disagreeing with when Sam talks about “consciousness.”  I wrote a zombie rant earlier and realize now I was (unconsciously) distinguishing between “experience” (no woo woo, similar to Dennett’s Consciousness Explained) and “consciousness” (some woo woo, because Sam keeps talking about it like it is special).

I think I’m gonna start using consciousness = experience.  It sounded like Max was doing this; I did not hear Sam collapse these…

I liked Max’s definition of “life”—the ability to retain complexity and reproduce—and at least that idea goes all the way down to bacteria and cells.  We can argue about where complexity ends, but perhaps we need a physicist’s definition for this “consciousness” word too.

Regards,
Dave.

 
 