#66 - Living with Robots: A Conversation with Kate Darling

 
Nhoj Morley
Total Posts: 5517
Joined 22-02-2005

01 March 2017 07:07
 

In this episode of the Waking Up podcast, Sam Harris speaks with Kate Darling about the ethical concerns surrounding our increasing use of robots and other autonomous systems.

Living with Robots: A Conversation with Kate Darling


This thread is for listeners’ comments.

[ Edited: 21 September 2017 10:44 by Nhoj Morley]
 
LadyJane
Total Posts: 2357
Joined 26-03-2013

01 March 2017 08:16
 

The ethical dilemmas that arise with the implementation of robot technology are like any discussion surrounding human morality. That is to say, they’re highly subjective. We tend to anthropomorphize these machines when it suits us, but are then quick to divorce ourselves from responsibility when it comes to self-driving cars or murderous drones. Then we can conveniently chalk it up to mechanical malfunction rather than human error, or autonomous weaponry rather than war crimes. I do enjoy the increasing references to Westworld during these conversations. The more human-like the robot, the more empathy we seem willing to extend.

 
 
jaco
Total Posts: 3
Joined 01-03-2017

01 March 2017 15:06
 

Sam, great podcast as usual. You expressed concern about driverless cars being involved in accidents and the lack of public reaction; I think the reason is the relatively small number of them currently on the road. It’s going to be a different story when it’s a 50-50 ratio.

But I think we are in for a wild ride, he he. Imagine when they are smart enough to take matters into their own hands, such as stopping a drunk or otherwise incapacitated driver, and the mayhem and unintended consequences that may result. It’s the same problem as the broader question of how to deal with AI culturally.

Jacques

 
jaco
Total Posts: 3
Joined 01-03-2017

01 March 2017 16:12
 

There is a cultural aspect to our driving habits that I have noticed in my travels. We adapt to it if we are in an area for a sufficient amount of time. We do this because we want to be predictable; we don’t want to be driving near the stereotypical Asian lady driver. A predictable driver is a safer driver. I suppose when everything is driverless, the AIs will develop their own driving culture.

Jacques

 
jaco
Total Posts: 3
Joined 01-03-2017

01 March 2017 16:30
 

I am led to believe by powerful arguments that emotions are the true basis of our humanity. They give purpose and direction to what would otherwise be a zombiesque life. As AI gets more human-like, so that we can relate to it, no doubt it will develop emotions. What happens when our driverless car refuses to take us to the bar, or wants us to go home early? Will it nag you all night long? Will there be a kill switch?...

Jacques

 
NL.
Total Posts: 5500
Joined 09-11-2012

01 March 2017 19:17
 

I have a hard time considering the implications of living with advanced AI in the same way I have a hard time incorporating extraterrestrial aliens into my ethical framework. I got nothing to work with - maybe they’re ET or maybe they’re Pod People or maybe they’re both. In the absence of any specific information, it’s hard to say much about it.


I think there is an implication in Harris’s framework that it would be possible to build AI without humanity (although he’s never spoken to that directly, so maybe not). I’ve also heard him refer to scientists who think a superintelligent machine would necessarily be a super-moral one - and I can see a good case for that. The problem is that human-level intelligence is a case study of one (species), so we have no way of knowing what’s essential and what could have been achieved via different routes. If it’s true that you literally can’t have advanced intelligence without advanced relational ability and all that comes with it (theory of mind, empathy, mirroring, and so on), then super-smart but ‘mindless’ robots wouldn’t really be an issue in the first place. (We already have systems in place for dangerous agents or agent-like entities, after all, from tigers to semi-self-guided weapons, so I assume they’d have to have a level of intelligence beyond even that to reach the level of concern they’re talking about. Again, to me it’s not entirely clear whether that’s even possible without some of the attributes of humanity.)


On privacy concerns, I was kinda expecting them to mention Hello Barbie, My Friend Cayla, and the i-Que Intelligent Robot (I had to Google the specific toy names, but it was a big enough story that I was at least aware of it in a general sense, and I’m not a news junkie by any means, so I assume it was fairly well publicized). Curious what they would have said about those cases.


I thought the uncanny valley phenomenon was interesting; I had never heard of that before. Upon Googling it, I was surprised that none of the theories about why it exists seem to include the instinctive response that something is deeply wrong with one’s communicative partner. At a much subtler (I hope!) level, I deal with this all the time with severe shyness - I get red, look a little panicky when I shouldn’t be, go wide-eyed or avoid eye contact - so other people look at me kinda funny, which in turn makes me feel more uncomfortable. I can only imagine that effect amplified twenty or thirty times into the cold-eyed, glazed-over stare of a robot that should be a sentient being - in evolutionary terms, I think that would be a sign that you are possibly interacting with Norman Bates or something.


Anyways. Interesting podcast, even if AI doesn’t particularly interest me.

 
 
Saint Ralph
Total Posts: 12
Joined 16-02-2017

01 March 2017 20:21
 

You know, this could go the other way, too.  In exploring how we treat robots, we might gain a lot of insight into why we treat each other as we do and what constitutes conscious being.  We might find out that we’re not nearly as autonomous or even as sentient as we thought.  We may have been very little more than “meat-bots” all along.  In a way not entirely different from the way a baby duck imprints on its mother shortly after hatching, we might find that we’ve been programmed through genetic and social evolution to “imprint” on apparent autonomy.  We may have already met the enemy and they might very well be us.

 
 
brandon davis
Total Posts: 22
Joined 01-03-2017

01 March 2017 21:31
 

imho:
AI is only as wise/moral/Friendly as its code
the code is [N] written by All-Too-Humans
as per Black Mirror: AI serves the goals of whoever writes the code

now, if i may babble some heterodoxy without fear of reprimand:
what if humans are the AI in question?
of course a Good Linguistic Philosopher might offer that *by definition* the word-object “artificial” precludes Us from being the AI in question
but the Existential Humanists and Phenomenologists might say that we create/evolve Ourselves the way we do anything else
and our episteme>decisions>actions literally modulate our Epigenetics and our Adaptiveness…
so hey why not, aye?

Cheers,
-b

 
Torn
Total Posts: 5
Joined 04-04-2016

01 March 2017 23:29
 

Two things I see throwing a wrench in autonomous cars:

1. People need someone to blame. In the scenario Sam mentioned, where an autonomous car mows down a pack of kids on a crosswalk for the first time, no one is going to accept that it was the car’s fault and that, had a human been driving, the outcome could have been much worse. People just won’t care, even though it’s a very valid argument and the car probably saved lives on balance. People need to hold someone responsible. While in the end it will still be up to the owner of the vehicle to assume blame in this case, it’s a sure bet that the real ethical argument will center around autonomy. It has to. Our whole punishment system is built around varying degrees of guilt and intent.

2. The fear of poor security. Each vehicle is connected to the automaker’s systems via the cloud. If I’m an opponent of autonomy, I’m going to hammer home the fear of someone maliciously altering code. It’s already been shown that hackers can gain control of vehicles and shut the motors off on the freeway. I don’t have to prove anything; all I have to do is propose a scenario where hackers infect all the vehicles at one time. Tesla vehicles share data to improve their AI functions. “Did you hear that if hackers find a way into one vehicle via the cloud, all of the Tesla vehicles could potentially share the hack simultaneously with each other? It could cause vehicles’ steering wheels to turn all the way to the right and lock at high speed. I’m sure I heard that, and about the hackers stopping cars wirelessly on the freeway. You can Google that. So dangerous. What’s wrong with good old-fashioned American cars the way they are? I want control of my vehicle; they just aren’t safe. Tell people. Get the word out.”
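For what it’s worth, the standard defense against that nightmare is cryptographic signing of updates: the car refuses to install any code that wasn’t signed by the manufacturer. Here is a toy sketch of the idea in Python (all the names are hypothetical, and real systems use asymmetric signatures so the car holds only a public key; the shared-secret HMAC below is just a stand-in so the example runs with the standard library):

import hashlib
import hmac

VENDOR_KEY = b"factory-installed-secret"  # hypothetical key material

def sign_firmware(blob):
    # What the manufacturer's build server would do.
    return hmac.new(VENDOR_KEY, blob, hashlib.sha256).digest()

def install_update(blob, signature):
    # What the car would do: refuse anything not signed with the vendor key.
    expected = hmac.new(VENDOR_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # tampered or unsigned update: reject
    # flash_firmware(blob) would actually apply the update here
    return True

legit = b"v2.1 steering controller firmware"
print(install_update(legit, sign_firmware(legit)))                 # True
print(install_update(b"steer hard right at speed", b"\x00" * 32))  # False

The fear-mongering scenario above amounts to betting that manufacturers get exactly this kind of check wrong - which, to be fair, is not unheard of.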

Done.

 
Saint Ralph
Total Posts: 12
Joined 16-02-2017

02 March 2017 02:11
 

Too late. Did you know that all vehicles manufactured since 2002 can, at the very least, be shut off by satellite command? And that the later the model you have, the more functions can be remotely controlled by the manufacturer, or the NSA, or the National Guard, or any other branch of a concerned government? You very probably own a vehicle like that right now (I don’t; mine’s fleet stock from the ’90s). Do you recall being asked if you wanted a Big Brother-equipped car? That’s because you weren’t. You’re already driving a security sieve.

Do you really think American consumers are going to think all the way through to security problems? The people who use abc123 as the password on all of their electronic devices? As soon as autonomous passenger cars are available at an affordable price, everyone will have one. They would much rather binge-watch Game of Thrones episodes on the way to work than have to worry about what color that traffic light is or isn’t.

 
 
Brian888
Total Posts: 28
Joined 16-02-2017

02 March 2017 05:57
 

The discussion of the difference between interacting with robots and interacting with video game sprites interested me, because we already have a lot of experience with that in the world of acting and film-making.  Actors routinely report that it’s easier for them to “get into the scene” when they are acting against a physical prop and not just a tennis ball on a pole that serves as the stand-in for CGI to be inserted later.  Viewers sometimes can apparently tell the difference as well; there are many people who feel that horror movies, for example, are better when the special effects are real and practical (think of the alien bursting out of John Hurt’s chest, or Rob Bottin’s insane creations in The Thing), because the actors’ reactions feel more authentic.

On the lighter side, I’m reminded of something that I believe Tom Hiddleston said when he did an episode of Sesame Street.  He knew that he was only interacting with puppets (in this case, Kermit), and that it wasn’t even Jim Henson manipulating the puppet anymore.  None of that mattered; when Hiddleston began interacting with Kermit, he actually started to freak out (in a good way), along the lines of “Oh my God, I can’t believe I’m here with Kermit!”

Separately, the concept of sapience in robots also fascinates me.  How do we define sapience?  Is a robot sapient if it can pass the Turing Test?  In other words, if we honestly can no longer determine whether a robot is sapient (let’s define that as having a sense of self and being able to think recursively about one’s own thoughts) or whether it is an ideal Chinese Room or p-zombie that can perfectly mimic sapience but by definition is not sapient, how do we treat that robot?
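To make “can no longer determine” concrete, here is a toy sketch of the imitation game in Python (the judge, human, and machine are hypothetical stand-ins, not any real chatbot): the machine passes exactly when the judge’s accuracy drops to chance.

import random

def turing_test(judge_guess, human_reply, machine_reply, questions):
    # The judge converses with two unlabeled parties and must name the machine.
    parties = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # hide who is behind which label
        parties = {"A": machine_reply, "B": human_reply}
    transcript = [(label, q, reply(q))
                  for q in questions
                  for label, reply in parties.items()]
    guess = judge_guess(transcript)          # judge returns "A" or "B"
    return parties[guess] is machine_reply   # True = machine identified

human = lambda q: "Hmm, let me think... " + q[::-1]
machine = lambda q: "Hmm, let me think... " + q[::-1]  # a perfect mimic
judge = lambda transcript: random.choice(["A", "B"])   # reduced to guessing
trials = [turing_test(judge, human, machine, ["why?"]) for _ in range(10000)]
print(sum(trials) / len(trials))  # ~0.5: no better than chance, so it "passes"

Of course, that only operationalizes indistinguishability from the outside; it says nothing about whether there is anything it is like to be the mimic, which is exactly the Chinese Room / p-zombie worry.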

 
NL.
Total Posts: 5500
Joined 09-11-2012

02 March 2017 07:07
 
Brian888 - 02 March 2017 05:57 AM

Separately, the concept of sapience in robots also fascinates me.  How do we define sapience?  Is a robot sapient if it can pass the Turing Test?  In other words, if we honestly can no longer determine whether a robot is sapient (let’s define that as having a sense of self and being able to think recursively about one’s own thoughts) or whether it is an ideal Chinese Room or p-zombie that can perfectly mimic sapience but by definition is not sapient, how do we treat that robot?


One thing that interests me is whether complex thought - at the level of decision making, using language, and so on - could be separated from feeling. Again, Harris seems to assume it could be when he talks about the dangers of robot soldiers who wouldn’t fear being killed, but is this even possible? If it turns out that thought is deeply related to proprioception / interoception / exteroception, then one would have to build an enormously complex set of sensors, connected to some manner of program for ‘self’, to replicate the base material even needed for thought. Who’s to say all of our AI robots wouldn’t turn out to be more Woody Allen than Terminator, ha ha!

 
 
Mr Wayne
Total Posts: 733
Joined 01-10-2014

02 March 2017 18:17
 

http://www.newsweek.com/how-robots-help-stop-gulf-oil-spill-72449

We were all dependent on remotely operated robots to fix the Gulf oil leak in 2010. I think AI could one day replace the men in the control room. Maybe another leak could be prevented using intelligent machines.

 
CaliforniaDreamin
Total Posts: 7
Joined 27-02-2016

03 March 2017 02:29
 

Sam’s mention of military robots making us more likely to go to war reminded me of something from fiction called the Ares Conventions. In a nutshell, space-faring humans all got together and agreed not to nuke each other back into the Stone Age. Not only was nuking heavily restricted, but certain targets became off-limits (civilian population centers, water purifiers, etc.). The interesting thing about the Ares Conventions is that they made war more accessible, not less!

I also wonder how much of our humanity we lose when we allow others to carry out our own responsibilities. Eddard Stark (of Game of Thrones fame) has a quote about swinging the sword yourself when you condemn a man to death. It has always resonated with me that we should be willing to carry out our own verdicts, in person. Handing this responsibility off to robots would be not only unethical but bad for human culture. What we decide is ethical or just changes over time, so no computer program could ever know just where we are along that progression. Shirking that duty off on robots would be to shirk off our own advancement.

Similarly, I must question a robot making decisions about war. The number of lives saved is a matter of perspective and not something anyone can ever truly know. Did Hiroshima and Nagasaki save lives by ending the war early? Or did they just save American lives? Did anyone care about saving Japanese lives? Should they have? These are not just questions of computational power, but of ethics. I think we are also far from the first people to really consider this. Frank Herbert’s Dune series had a whole history surrounding the outlawing of “thinking machines” because of their danger to mankind. Supplanting them were the Mentats, humans with heightened computational abilities who were essentially trained to fill the role of the thinking machine. Even then, Mentats were usually advisors, not the people in command.

 
Jan_CAN
Total Posts: 1393
Joined 21-10-2016

03 March 2017 06:33
 

I think the greatest concern would be a non/low-thinking robot soldier, under human control, that could kill with impunity.

I also would question giving a robot the ability to make decisions about war.  But then, we (humans) haven’t done such a good job in this regard so far.  Although fictional stories have provided many frightening scenarios and warnings, perhaps it could go the other way, depending on who’s doing the programming.


The Day the Earth Stood Still (1951)  (not the inferior remake with an altered plot)

Klaatu emerges from the saucer and addresses Barnhardt’s assembled scientists, informing them that he represents an interplanetary organization that created a police force of invincible robots like Gort to “patrol the planets in spaceships like this one, and preserve the peace” by automatically annihilating aggressors. “In matters of aggression, we have given them absolute power over us. This power cannot be revoked.” Klaatu concludes with, “It is no concern of ours how you run your own planet, but if you threaten to extend your violence, this Earth of yours will be reduced to a burned-out cinder. Your choice is simple: join us, and live in peace, or pursue your present course and face obliteration.” Klaatu and Gort depart in the spaceship.

 

 
 
Otto117
Total Posts: 11
Joined 06-03-2017

07 March 2017 18:52
 

I enjoyed this conversation as a general overview of the ethical issues that robot creation, use, and interaction raise. It is always a pleasure to hear these issues discussed dispassionately and unflinchingly - though Sam did flinch a bit when it came to discussing “child sex dolls,” albeit without the moralism that typically infects such discussions.

There was one area where both Sam and Kate were under-informed. Kate mentioned that “virtual child pornography” (i.e., pornographic images depicting no actual person or part thereof) was “legal,” but this isn’t the case. The Supreme Court merely determined that such images could not be legally considered “child pornography.” But drawings of children (both ‘virtual’ and just plain drawings) have been and are prosecuted as obscenity.

The test for obscenity ostensibly requires the prosecution to convince the jury that the work appeals to “prurient interest” in light of “contemporary community standards,” and to show both that the work depicts or describes sexual conduct “in a patently offensive way” and that it lacks “serious literary, artistic, political, or scientific value.” In practice, however, these requirements are no barrier to conviction. It is self-evident that such drawings would satisfy the first two criteria in any community in the United States (or pretty much anywhere else, for that matter). As for “serious” value, the defense can establish that only by calling art critics and art professors to testify convincingly to the work’s “serious” artistic value - and good luck finding critics or professors who believe such material actually possesses serious artistic value, are willing to be investigated by the government (incident to offering testimony), and are prepared to be vilified by the prosecution and its experts. It’s safe to say that any images that might be “saved” by a “serious artistic value” defense either are created by artists of such renown that everything they do is considered of serious value, or fail the prurience threshold (i.e., they are artistic but not particularly erotic).

Moreover, under federal sentencing guidelines, obscenity convictions for drawings and virtual images of children earn exactly the same prison sentence as convictions for actual child pornography. (Incidentally, there is no “virtual child pornography” that is so realistic as to be indistinguishable, or even nearly indistinguishable, from real children. Also, Kate’s statement that images of bestiality are not illegal is incorrect: bestiality pornography depicting only adults has been successfully prosecuted as obscenity.)

Regarding “child sex dolls,” there are no robotic models, and given the economics involved (and the tiny market), there are unlikely to be any in the near future. The inert variety are, with few exceptions, rather rudimentary, or just ill-made, and the few that aren’t cost upwards of $7,000. If there has been a Customs seizure or obscenity prosecution of “child sex dolls” in the United States, it hasn’t been publicized, unlike cases in Canada, Australia, the U.K., and Norway. But prosecution is hardly unimaginable: since “sexual conduct” under the Miller test includes a “lewd exhibition of the genitals,” prosecutors would only need to argue that the genitals on such dolls are lewdly exhibited by definition.

As for evidence of harm, various researchers have shown a correlation between the availability of actual child pornography, as well as drawings, cartoons, and virtual images (including 3D computer games), and a reduction in sex crimes against children. (Set aside, for the moment, the insurmountable ethical problems with actual child pornography.) Given this evidence across multiple societies, there is little reason to begin with the assumption that sex robots will be any different, but it is certainly worth considering. However, one must be careful to avoid the same a priori assumptions that infect arguments against drawings and inert sex dolls, e.g., that such materials reinforce abhorrent desire patterns and fuel criminal sexual rampages.

 