
Monday, March 16, 2015

Are we destined to fall in love with our robots?

If we could look into the near future, would we see people who might otherwise be single instead living in a love relationship with a human-created non-human, basically in love with a personal robot?
[Image: from a Svedka Vodka ad]
We can see the elements nowadays that, if put together, indicate one thing: that one day in the not too distant future we will see people who are in love with their personal robots. It doesn't even need to be a fully anthropomorphized bot; it could simply be a digitized personal assistant on one's phone, as exemplified in the film "Her".

As many reading this may know, a few years ago I published a short sci-fi story of my own about just that issue. Feel free to download and read a free copy. It's titled "Simon's Beautiful Thought" (if you would prefer to pay for a 99-cent copy, it's also available on Amazon, and thanks for the support either way). There's also a video trailer for it if you like.

My story came out a couple of years before Spike Jonze and Joaquin Phoenix's film "Her", though I should note the film had been in the works for over ten years. I had never heard of the "Her" project until I saw the film's promos; I just felt from the Zeitgeist that it was time for this story to be written. Most likely written again, really, as I'm sure someone had already written one like it; but it's all about the timing.

I've done well on that front, if I do say so, predicting, writing and releasing stories prior to more famous versions coming out as novels or films. Another example is my short story "Expedition of the Arcturus", first published in the online hard sci-fi magazine PerihelionSF.com (also now in ebook and audiobook formats, and yes, here's the video trailer). It wasn't long after that the film "Interstellar" started to be talked about, went into production, and was eventually promoted and released.

In my story, entire families are sent to a planet aboard Earth's first generational spaceship in order to save humankind, whereas in "Interstellar" a team is sent out of a similar need, but with more time than Earth has in my story. Two very different stories based on the same need, offering questions, if not answers, to specific issues.

We need to examine these things ahead of time, to think about them before it's too late to do anything about them. That's where science fiction, speculative fiction and Futurism come into play. And speaking of that, allow me to interject here an article titled "20 Crucial Terms Every 21st Century Futurist Should Know". It's good to at least be aware of these things, and at most, it's a fascinating field to get involved in.

One of those things we should be aware of and explore is the proposed Laws of Robotics: a set of very useful protections against some very bad things possibly happening. It doesn't take a lot of foresight to see that robots or AIs and humans will have very close relationships.

Relationships that may very well transition into situations where there is little difference in experience between "normal" interhuman relationships and non-human or transhuman ones. So we really do need to consider the various elements of the topic: basically, how robots will relate to us, how we will relate to robots, and how we will relate to and treat one another. The treatment of robots is another topic, one famously explored in Asimov's "I, Robot".

I see these relationships going one of two ways. Mostly, people will get along with their "tool" and that will be the end of it. But for some, the love interest will grow and either seem rewarding and useful, or become dark and destructive. I think the orientation in my story was a bit more realistic, considering what these assistants will be designed for, while the film "Her" took the more popular / entertaining (salacious?), if not more sensationalist, view.

Science fiction and speculative fiction should tell a good story, but they should also point out realistic possibilities to give us a good view into the future so that we can be more prepared for what is to come. Much of science fiction has turned dark to show us the negative effects of technology on humanity, but it's not realistically all going to be dark. Dark is just more fun and offers more of a roller coaster ride experience in entertainment.

Some examples of this: the excellent 1979 film "Alien", which was in a way a reaction to the more positive 1977 film "Close Encounters of the Third Kind". "Alien" scared the hell out of many people. The reaction to that film was another Steven Spielberg film, 1982's "E.T. the Extra-Terrestrial", which had a far more positive effect. There were also, of course, Star Trek and Star Wars, but those dealt more with overall societal issues, such as war and galactic federations.

We have heard much of late from famous and great thinkers about Artificial Intelligence and the dangers it poses if we do not pay attention now, first and foremost, to some very necessary things. To follow that progression, by the way, we have the first film of "The Terminator" franchise in 1984, an iconic year (considering the George Orwell book of the same name), itself a reaction against the optimism of "E.T."

Terminator and its Skynet were prime examples of the worst that could come from AI if it is allowed to run unabated. 1970's "Colossus: The Forbin Project" was a much earlier example of this, with echoes back to Kubrick's 1968 film "2001: A Space Odyssey", based on Arthur C. Clarke's short story "The Sentinel", with its demented and destructive HAL 9000 computer AI.
SPOILER: it wasn't HAL's fault but that of politicians in Washington, D.C., as uncovered in the 1984 film "2010", based on Clarke's 1982 novel "2010: Odyssey Two".

Some of those who have raised the question of control and protections for humanity regarding digital autonomous beings are none other than Stephen Hawking, Bill Gates and Elon Musk, all very concerned about the power that AIs might end up with, and the very real need for us to control them before we give them too much control over us, or too much autonomy.

Controls as in the "Three Laws of Robotics", which Isaac Asimov first came up with, believe it or not, in his 1942 short story "Runaround". These are laws he published in science fiction, and they have since been updated and will continue to be improved; at the very least, he set the standard and the basis for recognizing that we would indeed need such laws built into robots and AIs.

By the way, while we're talking about all this: 
"The first use of the word Robot was in Karel Čapek's play R.U.R. (Rossum's Universal Robots) (written in 1920)". Writer Karel Čapek was born in Czechoslovakia (Czech Republic) and I read his play many years ago.

Isaac Asimov's "Three Laws of Robotics":
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
He later introduced a fourth or zeroth law that outranked the others:
  • 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
This is further explored in the article "Do We Need Asimov's Laws?" The readers' comments below that article are also interesting, and one offers a counterargument to the usefulness of the laws in "Why Asimov's Three Laws of Robotics Can't Protect Us."
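Just to make that strict ordering concrete, here is a toy sketch in Python. It is my own illustration, not anything from Asimov or from any real robotics system, and the Action fields and the "violates" checks are hypothetical stand-ins: a proposed action is simply rejected by the highest-ranked law it violates, with the Zeroth Law outranking all the rest.

```python
# Toy illustration only: a strict priority ordering of Asimov-style laws.
# The Action fields and the "violates" checks are hypothetical stand-ins,
# not anything a real robot or AI could actually evaluate today.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_humanity: bool = False       # Zeroth Law concern
    harms_a_human: bool = False        # First Law concern
    disobeys_an_order: bool = False    # Second Law concern
    endangers_the_robot: bool = False  # Third Law concern

# Laws listed from highest priority (checked first) to lowest.
LAWS = [
    ("Zeroth Law", lambda a: a.harms_humanity),
    ("First Law",  lambda a: a.harms_a_human),
    ("Second Law", lambda a: a.disobeys_an_order),
    ("Third Law",  lambda a: a.endangers_the_robot),
]

def evaluate(action: Action) -> str:
    """Return the highest-ranked law the action violates, or 'Permitted'."""
    for name, violated in LAWS:
        if violated(action):
            return f"Rejected: violates the {name}"
    return "Permitted"

if __name__ == "__main__":
    print(evaluate(Action("fetch coffee")))
    print(evaluate(Action("ignore an order in order to protect itself",
                          disobeys_an_order=True, endangers_the_robot=True)))
```

Even in a toy like this, the hard part clearly isn't the ordering; it's deciding what counts as "harm" in the first place, which is essentially the objection raised in "Why Asimov's Three Laws of Robotics Can't Protect Us."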

Well that's all very interesting and these are things we need to consider, but let's get back to the point of this article....

As I saw it in the film "Her", the AI didn't follow the laws to a logical conclusion (for me), whereas in my story, "Simon's Beautiful Thought", they were followed. Is my story better? That's for the readers to decide, and my tale was merely a short story, but in my view they are in some ways equal. Both give readers or viewers a good projection into two very different possible futures.

I read a short sci-fi story when I was younger about a guy on another planet who was a top robot field handler. He could handle up to the considered limit of nine, which very few could do. If I remember correctly, these were like clones with attached appliances, so that someone could wear an appliance himself and thereby control the cybernauts in the field to do manual labor too grueling for humans to do.

Being that kind of wild-west mining planet, it had towns with bars and brothels to service the labor managers and handlers. Our hero falls in love with a prostitute and decides to run off with her. But at the end he discovers a control appliance under the bed where he meets his "lover", much like the one he uses to control his clones, and he realizes in horror that this whole time he had basically been in love with himself. He was the one guiding the clone "prostitute" to pleasure him as he needed and desired.

So not only was it a fake "hooker/john" relationship, it had no bearing on the realm of "relationships" whatsoever, except to himself. Is that what we really want? It may even sound desirable in some ways to some, but unless you have a personality problem, I don't think that is actually the case.


For some more background on the topic, there is a Huffington Post article on the robot paramour paradigm:

Also, for a quick survey of sci-fi and our obsession with sex and robotics:

While many of us will not "fall in love" with our digitized assistants, there will most definitely be those who fall fast in love with their non-human entity or entities (as some will have multiples). The near future is going to be an odd place indeed.

Considering that people now fall in love with things like toasters and electrical fixtures residing on the street, it's no surprise that some are now falling in love with their RealDolls (life-sized sex dolls, appropriately weighted and articulated), which have no animation to them at all. They cost around $7,000, and some people have more than one. These types of objectophiles will most certainly become bonded to their purchased non-human "toys", which may be inanimate, animated, interactive, or eventually some combination of these.

Consider this. Mashable reports:

"With the press of a button, Barbie’s embedded microphone turns on and records the voice of the child playing with her. The recordings are then uploaded to a cloud server, where voice detection technology helps the doll make sense of the data. The result? An inquisitive Barbie who remembers your dog’s name and brings up your favorite hobbies in your next chitchat."

Is it only a matter of time, then, before those who are now against gay marriage will one day look back fondly on the days when that was their only concern? Will they have to contend with those who want to marry their dolls, their AIs, or fully manufactured cybernaut devices?

My concern with all this isn't that people have a diverse array of things they can explore. More power to them in that; mostly. We have after all only scratched the surface of what it means to be human.

Humankind is a mere adolescent in its development, and there is much, much more to come, not the least of which is our own physical and mental development. Once we have AIs freely available, they will alter and further develop us in ways we can now only imagine, just by their mere existence and by our interacting with them on a regular and long-term basis.

But we have to imagine it all now, before it's too late.

My concern in all this isn't so much that people might fall in love with autonomous individuals, even if digitally created. We do that now with naturally organically created individuals in relationships.

My concern is how people might fall in love with themselves, at least in the beginning, before robots become sophisticated enough to have their own mental / personality kernels, with their own Id / Ego / Superego, if you will, to use an old Freudian paradigm.

At that point, it will be no different than a real human relationship.

However, before that point, while AIs remain mostly a reflection of who we are, we will have a period where we are basically falling for ourselves, like falling in love with our own reflection in a mirror. What might that period in robotics development do to humanity? Will it alter and develop us in a new direction, into beings narcissistically involved only with ourselves? And if that were to become widespread, where would it leave humanity as a whole?

I see that as a far greater threat to humanity and our overall development than an AI taking over the world, as Skynet does in the Terminator film franchise. That might even be a better ending for humanity than for us to turn into the narcissistic mental midgets we see in the media all around us nowadays. "Selfies" are rampant, and entertainment news media is all about those whom it is "all about".

We have, in point of fact, become enamored with ourselves. For more on this, see "Psychogenic Photopenia-A New Disorder?"

If given the ability to develop further in that direction, how many may choose perceived perfection in a relationship over the more problematic but richer ones we have with our fallible fellow human beings?

There are many and varied, new and novel things in our future, and I look forward to them all: to exploring them, to considering and becoming involved in many of them. Still, as a species we do need to look before we leap, and consider well the new paths before us.

And now, this....
Rise of artificial intelligence is changing attitudes on robot romance

And, this....

The Washington Post | Love in the time of bots

April 9, 2015
Source: The Washington Post — March 17, 2015 | Dominic Basulto
Artificial intelligence thought leader Ray Kurzweil has suggested that a real life human & AI romance might be possible in as little as 15 years.
In his review of the 2013 Spike Jonze film Her, Kurzweil said he expected similar types of advances by the year 2029, “Samantha herself I would place at 2029, when the leap to human level AI would be reasonably believable.”
Kurzweil says your romantic partner might not need a physical body, as long as there’s a “virtual visual presence.” Kurzweil sees this happening via virtual reality experience.
“With emerging eye mounted displays that project images onto the wearer’s retinas and also look out at the world,” he said, “we will soon be able to do exactly that. When we send nanobots into the brain — a circa 2030s scenario by my timeline — we will be able to do this with all senses, and intercept other people’s emotional responses.” [...]

Monday, March 2, 2015

American Ideals: Tobacco companies, Congress, Robotics and a thing called Ethics

I love his show; John Oliver is doing a great job on "Last Week Tonight with John Oliver", poking fun at idiot South American leaders (who are poking back) and at Tobacco companies, the scourge of our world.
#JeffWeCan

Now, all that being said, a word about Tobacco companies, American corporations in general, and the state of the world. You see, America isn't the only country being abused. A quick word about Russia, which has been hijacked by an ex-KGB thug in Putin.

Putin, in trying to support the Russian people's ideals as he sees them, is failing to be a true leader who could lead his proud nation into the future. He has twisted things about so that he seems like a good thing for many, but is only a good thing for the elite few and is bringing down the disdain of an entire planet on his people.

There are many being misled worldwide: the Russian people, the American people, and many in the Muslim community being charmed into absolute absurdity by ISIL.

We can thank those responsible, the various entities mimicking the Tobacco industry's debacle of past decades, for so much of the nonsense that is now happening all through our lives, in our American Congress and among our citizenry.

Tobacco, years ago, was having an image issue. They were selling an addictive drug in a form that literally killed hundreds of thousands of people worldwide. How does one get over that kind of brand suicide? You confuse the issue, distract, redirect and, more importantly, misdirect. You twist reality to the point that Satan smiles upon you.

Also note that many of these tactics were old KGB disinformation tactics, and there's your American patriotism for you. Years of watching the KGB taught Britain's MI6 a great deal about how to alter reality to suit its desires and agendas.

The American CIA learned from MI6, and the Tobacco industry learned from them, as did others. Because what happens in prisons, in the military, and in the clandestine services eventually trickles down into civilian organizations, companies and lives.

And that's what the people the tobacco industry hired did: they fixed its image, and therefore its profits, and so for many years its bottom line. When that failed in America, they moved on to other countries, and are now killing many overseas at younger and younger ages.

It no longer matters that their products kill people. Right? Because all that is important is what the profit-making entity does to preserve itself, its profits, its power. As it was with them, so it is with others, even those who hold the public interest as their charter, even though they couldn't really care less.

There is actually more to all this, an even darker side, as recently pointed out in a Salon magazine article, "Republicans’ deadly political strategy: Ruining our country hurts the Democratic Party".

Nothing seems to matter to Republicans, viewing their actions of twisting elections and facts, gerrymandering, etc., except Republicans. Their remaining in power is far more important to them than doing the right thing for America.

If it could be proved to them that Democrats (or any other party) were better for America and that they themselves were damaging our country, they would still fight to remain in power, being the zombie party they have become.

That alone is reason to kill off the GOP if you ask me. Grand Old Party, my backside. They have become a party to sit on and not look up to.

Years ago, some moron decided that corporations are people. Laws were passed to protect stockholders over the citizenry. Stockholders are therefore more important than the people being sold products and services.

Think about that for a moment.

Who would make such laws? On the surface it sounds wise that a for-profit company, which is there to make a profit, should make profits its business.

But at what cost? And to whom? If corporations are anything, they are robotic or cybernetic, not human. They are an extension of humanity, not a part of it.

At what point does a company stop and say, "Wait, are we stepping over the line?" But now it just has to respond, "Naw, we just have to make profits. We're considered a person too, even though we're a corporation, and so our only goal is to make our stockholders richer, within the structure of the laws we can circumvent." Great, lucky us.

I would like to make a suggestion....

Isaac Asimov, many years ago, came up with the three basic laws of robotics in order to protect humankind. They have since been refined by others, but I would like to suggest we apply them now to corporations, just as a place to start.

If an American company sells a product in America, or more especially if it exports or sells that product abroad (made in America or in the local country), then it also has a moral, ethical and legal requirement to follow certain rules, and those rules can start with, and then be honed from, the following self-evident laws:

1) A robot (that is, a corporation) may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot (yes, yes, a corporation, you're getting it now) must obey the orders given it by human beings (here actually referring to the ideal of humanity, and not just its leaders), except where such orders would conflict with the First Law.

3) A corporation (originally a robot) must protect its own existence as long as such protection does not conflict with the First or Second Law.

And in that last one lies the problem, most of the time lately. Corporations will do anything to maintain their bottom line and their existence, when in some cases a corporation should be allowed to die. To say that the bottom line is existence and profit at any cost is to beg for horrors to happen, sooner or later, here or there, in plain sight or behind closed doors, or in poor foreign countries.

It is something to consider, to act on, to make an American ideal.

It is a concept, an entity, that could be respected around the world, and, not too far in the future, off-world, when and as that becomes a thing.

We need to stop being fools (myself included) and stop paying none of this any attention. We also need to consider that this may very well already be "a thing", a thing we need to address now, before we are deep into off-world endeavors; that time is approaching fast, and once those ventures are established, it may be too late to do anything about it.

We need to act on all this before it's too late. If it's not too late already.

I'm not the only one feeling like this; John Oliver has his own take on the infrastructure side of things, as also mentioned in this Salon article. It's really all about the same things, though.