dust

Artificial Intelligence & the Downfall

Recommended Posts

Recently I posted an article in Off Topic about how robots continue to take the place of workers in various areas, and how we might need to start thinking about what the future holds in terms of jobs, income, the economy, etc., when eventually there's not much essential work (food, clothes, medicine) left for humans to do.

 

So that's something to think about. But the main question that people have been concerned with for such a long time is that of AI's potential to turn on humanity -- out of fear, or apathy, or desire for efficiency, or something -- and decide that we are a threat, or an obstacle, or a useless appendage, and get rid of us. It seems that we are at once intent on creating ever-more intelligent and humanoid types of machine and scared that the machines we're intent on creating will eventually decide to kill us.

 

I want to start this topic off with something we don't encounter very often in the mainstream media: the notion that this future AI will likely not threaten us. That, as it will be of our own design and imbued with an essentially human culture / set of values to proceed from, it will not necessarily simply be a Matrix or Terminator bent on human enslavement or annihilation, but will almost certainly be able to help us solve various problems with society, poverty, disease, the environment, etc.

 

So far, the most intelligent machines/algorithms we have are either useless or positively helpful: DeepMind's AlphaGo can win at Go, which is pretty useless, but DeepMind are also partnering in various medical initiatives in the UK, including research into blindness and cancer, which will hopefully be very positive, though there have been controversies about the sharing of medical data.

 

Anyway, the following discussion between David Deutsch and Sam Harris is interesting and quite thought-provoking, and hopefully sets the tone for a balanced discussion. It's only 20 mins.

 

 

 

 

I think there are a lot of spiritual questions to be encountered here, including questions concerning human behaviour and the nature of consciousness and the ethics of human augmentation... so it's in General Discussion.

 

edit: See also this talk with Elon Musk, starting at 11:04: https://youtu.be/ycJeht-Mfus?t=11m4s

Edited by dust

The whole evolution thing is all about the fight for resources. AIs will need resources to the same degree as humans, or even more; draw your own conclusion.


AI is already used to fight crime both on street level and on the Internet.

 

However, real sentient AI, as in the Westworld series, is far off, as even deep learning is still dumber than a human child.


The problem with envisioning any problem, in fact any situation, in terms of "we" have it or "we" solve it, is that "we" is a fiction, a programming trick "they" use on "us."  Me plus Elon Musk equals "we" in this scenario -- a nonexistent entity that puts its heads together and decides what will or will not be done.  You plus Bill Gates is another fictitious "we."  And so on. 

 

There's no "we" calling the shots.  There's "them" deciding for "us."  And "their" values and goals have something to do with "ours" only in the imaginary world of "we" where they supposedly coincide.  In the real world, they absolutely don't.  Human values and universal human goals ascribed by "us" to "them" constitute wishful thinking on "our" part programmed by "them."  In reality "they" view "us" as resources, the last resource to consume on this overconsumed planet, and use accordingly.  No one cares about the values of resources being used.  You don't ask a carrot if it enjoys being eaten raw, cooked in a soup, juiced, or left alone to grow wild.  "You" decide for "it."  "We" consisting of you plus carrot equals digestion.

 

Now "they" are telling "us" that they want to include AI into this "we."  It's like you telling the carrot that you will sprinkle some MSG on that soup you are going to cook it in, to improve the taste.  The carrot that has been carefully programmed to rejoice when informed of that plan -- that's "us."

 

And that's one of the best case scenarios.  Here's another one.  If you are a human being growing up in a natural environment, it won't occur to you that you are part of a computer simulation.  But if you are Elon Musk growing up among computers, playing his videogames long before he's ever seen a grasshopper (if he's ever seen one at all) or was held by a fully alive mother, father, aunt, uncle, sister, brother, grandmother and felt love flowing from that live human body, mind, heart into his being (I could bet anything he never did), you will project your world onto the world, you will envision how it works, ought to work, in terms of your own developmental history.  You have no other pathways developed in your brain, so that's the route your whole thinking will be railroaded into.  Feelings, values, anything that a human develops in the course of living a human life will not simply be beside the point to you -- you won't have the neural pathways established to have them.  

 

But the worst case scenario, what I happen to believe has really happened, is not a theoretical premise for the future; it's the actual events of the past and present that will only come to their planned conclusion in the future. To wit, "us" and "them" is AI vs. life, and AI has long been calling the shots, terraforming this planet to specs no human can call her own.


The problem with envisioning any problem, in fact any situation, in terms of "we" have it or "we" solve it, is that "we" is a fiction, a programming trick "they" use on "us."  Me plus Elon Musk equals "we" in this scenario -- a nonexistent entity that puts its heads together and decides what will or will not be done.  You plus Bill Gates is another fictitious "we."  And so on. 

 

There's no "we" calling the shots.  There's "them" deciding for "us."  And "their" values and goals have something to do with "ours" only in the imaginary world of "we" where they supposedly coincide.  In the real world, they absolutely don't.  Human values and universal human goals ascribed by "us" to "them" constitute wishful thinking on "our" part programmed by "them."  In reality "they" view "us" as resources, the last resource to consume on this overconsumed planet, and use accordingly.  No one cares about the values of resources being used.  You don't ask a carrot if it enjoys being eaten raw, cooked in a soup, juiced, or left alone to grow wild.  "You" decide for "it."  "We" consisting of you plus carrot equals digestion.

 

As I see it, though there is certainly a "them", and most of "us" are to a certain extent programmed by them, there are different types of "them"; the majority of "them" are born among "us", and a good many of them have good intentions.

 

I don't know Demis Hassabis (DeepMind founder), but reading a little about him, he sounds like an extremely intelligent version of an ordinary person. A child chess prodigy, he eventually went on to university to study computer science and cognitive neuroscience, but the first thing he did when leaving school was design videogames for a living. He didn't have some nefarious plan to invent AI and push it onto the population. He's a smart guy, maybe a geek, who I can only assume wants to design cool shit and maybe make some impact, in a nice way, on society.

 

Either way -- whether there's an invisible elite programming the people, or the people are being driven by the ease-making promises of technology and a few smart people who create and sell it -- the question of what we envision for AI, what we hope or fear it could be, is relevant. Because it is going to happen -- the only way "they" are not going to create it, the only way "we" are not going to become consumed by it as we have with laptops and phones and music players and cars and TVs and fridges and lamps -- is if all the computer geeks are done away with. And if that happened, technology would regress, and we'd all slip into an apocalyptic version of the 18th, 16th, 14th, 12th, or 10th century, and "we" are unfortunately too many and too stupid to handle that eventuality with anything less than terrible confusion and violence.

 

What would the world look like if people stopped advancing tech? If, then, we started forgetting how to design and make things? Regardless of how we've come to this stage, and what elite might be benefiting from it all, I don't see that happening. People are not going to stop.

 

 

Now "they" are telling "us" that they want to include AI into this "we."  It's like you telling the carrot that you will sprinkle some MSG on that soup you are going to cook it in, to improve the taste.  The carrot that has been carefully programmed to rejoice when informed of that plan -- that's "us."

 

And that's one of the best case scenarios.  Here's another one.  If you are a human being growing up in a natural environment, it won't occur to you that you are part of a computer simulation.  But if you are Elon Musk growing up among computers, playing his videogames long before he's ever seen a grasshopper (if he's ever seen one at all) or was held by a fully alive mother, father, aunt, uncle, sister, brother, grandmother and felt love flowing from that live human body, mind, heart into his being (I could bet anything he never did), you will project your world onto the world, you will envision how it works, ought to work, in terms of your own developmental history.  You have no other pathways developed in your brain, so that's the route your whole thinking will be railroaded into.  Feelings, values, anything that a human develops in the course of living a human life will not simply be beside the point to you -- you won't have the neural pathways established to have them.

 

I don't know Musk, and won't defend him directly. I only posted the video of him as an afterthought because it showed some discussion between two powerful tech magnates mentioning a couple of things the Deutsch video didn't cover. Musk is involved in AI research only as an investor, and I'm not sure that this discussion benefits from talking about him and his childhood or indeed anyone else who we have never met.

 

I certainly think Musk is a little loony with his Mars colony plan... but that doesn't need to come into this topic.

 

Whatever we think about any individual -- Musk, Gates, whoever -- and their reasons for what they do and why they are the way they are, it seems clear that what they're saying is going to happen is going to happen...

Edited by dust

 

As I see it, though there is certainly a "them", and most of "us" are to a certain extent programmed by them, there are different types of "them"; the majority of "them" are born among "us", and a good many of them have good intentions.

 

I don't know Demis Hassabis (DeepMind founder), but reading a little about him, he sounds like an extremely intelligent version of an ordinary person. A child chess prodigy, he eventually went on to university to study computer science and cognitive neuroscience, but the first thing he did when leaving school was design videogames for a living. He didn't have some nefarious plan to invent AI and push it onto the population. He's a smart guy, maybe a geek, who I can only assume wants to design cool shit and maybe make some impact, in a nice way, on society.

 

Either way -- whether there's an invisible elite programming the people, or the people are being driven by the ease-making promises of technology and a few smart people who create and sell it -- the question of what we envision for AI, what we hope or fear it could be, is relevant. Because it is going to happen -- the only way "they" are not going to create it, the only way "we" are not going to become consumed by it as we have with laptops and phones and music players and cars and TVs and fridges and lamps -- is if all the computer geeks are done away with. And if that happened, technology would regress, and we'd all slip into an apocalyptic version of the 18th, 16th, 14th, 12th, or 10th century, and "we" are unfortunately too many and too stupid to handle that eventuality with anything less than terrible confusion and violence.

 

What would the world look like if people stopped advancing tech? If, then, we started forgetting how to design and make things? Regardless of how we've come to this stage, and what elite might be benefiting from it all, I don't see that happening. People are not going to stop.

 

 

 

I don't know Musk, and won't defend him directly. I only posted the video of him as an afterthought because it showed some discussion between two powerful tech magnates mentioning a couple of things the Deutsch video didn't cover. Musk is involved in AI research only as an investor, and I'm not sure that this discussion benefits from talking about him and his childhood or indeed anyone else who we have never met.

 

I certainly think Musk is a little loony with his Mars colony plan... but that doesn't need to come into this topic.

 

Whatever we think about any individual -- Musk, Gates, whoever -- and their reasons for what they do and why they are the way they are, it seems clear that what they're saying is going to happen is going to happen...

 

I haven't watched the videos you posted yet; I was referring to prior knowledge. I investigated Musk after he announced to the public that the chances that we don't already live in a computer simulation are less than one in a billion. I wanted to know what led him to this conclusion.

 

However many centuries you wind back to is going to be disastrous. What happened to us is anisomorphic -- irreversible, a one-way street. It's been going on for somewhere between 8 and 15 thousand years, depending on where you look, and what's going on right now is the direct and inevitable outcome, which has its own direct and inevitable outcome in the elimination of life on Earth. The geeks are beside the point. Our undoing is not their doing. Unless you believe that Steve Jobs started a revolution out of a garage (with no help from the CIA and a few other black-budget players whatsoever), and that Bill Gates is distributing hundreds of millions of doses of sterilizing vaccines to any and all countries that don't have the clout to ban them (mostly African, South American, and -- surprise -- the US) out of the goodness of his heart.

 

The only way we could save ourselves and the planet would be by a totally different route from the one we were railroaded into taking to get to this point.  "No problem can be solved from the same level of consciousness that created it." - Albert Einstein  


I haven't watched the videos you posted yet; I was referring to prior knowledge. I investigated Musk after he announced to the public that the chances that we don't already live in a computer simulation are less than one in a billion. I wanted to know what led him to this conclusion.

 

Wasn't aware of this announcement... heh. He is certainly a character. Whether for 'good' or 'bad', he is a force.

 

 

However many centuries you wind back to is going to be disastrous. What happened to us is anisomorphic -- irreversible, a one-way street. It's been going on for somewhere between 8 and 15 thousand years, depending on where you look, and what's going on right now is the direct and inevitable outcome, which has its own direct and inevitable outcome in the elimination of life on Earth. The geeks are beside the point. Our undoing is not their doing. Unless you believe that Steve Jobs started a revolution out of a garage (with no help from the CIA and a few other black-budget players whatsoever), and that Bill Gates is distributing hundreds of millions of doses of sterilizing vaccines to any and all countries that don't have the clout to ban them (mostly African, South American, and -- surprise -- the US) out of the goodness of his heart.

 

Yes, it's irreversible. My notion of getting rid of all the geeks and going back to another century was simply meant to illustrate how impossible it would be, how impossible to get away from technology.

 

We disagree on the vaccine thing, and I won't get into that. And in other areas of hidden superpower influence... yes, there are people with esoteric knowledge, people in 'high positions' working from the shadows who understand how to push populations around. But their influence is not almighty.

 

 

There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusc has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organised machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand!  May not the world last twenty million years longer? If so, what will they not in the end become? Is it not safer to nip the mischief in the bud and to forbid them further progress?

 

from Samuel Butler's Erewhon, 1872    http://www.gutenberg.org/files/1906/1906-h/1906-h.htm

 

Do you believe that the shadow-running elite types were pushing writers like Butler to imagine machine takeover nearly 150 years ago with a view to inspiring geeky types creating ever-more-complex machines and giving future shadow-runners the opportunity to enslave humanity using AI some decades or centuries later? Sci-fi writers have been talking of this stuff for a long time. And do you believe these nefarious schemes go back all those 15,000 years in an unbroken plan of domination?

 

If "they" are intent on pushing AI upon "us" now, it is no more their doing than that of evolution, that of humans in general.

Edited by dust


Wasn't aware of this announcement... heh. He is certainly a character. Whether for 'good' or 'bad', he is a force.

 

 

 

Yes, it's irreversible. My notion of getting rid of all the geeks and going back to another century was simply meant to illustrate how impossible it would be, how impossible to get away from technology.

 

We disagree on the vaccine thing, and I won't get into that. And in other areas of hidden superpower influence... yes, there are people with esoteric knowledge, people in 'high positions' working from the shadows who understand how to push populations around. But their influence is not almighty.

 

 

 

from Samuel Butler's Erewhon, 1872    http://www.gutenberg.org/files/1906/1906-h/1906-h.htm

 

Do you believe that the shadow-running elite types were pushing writers like Butler to imagine machine takeover nearly 150 years ago with a view to inspiring geeky types creating ever-more-complex machines and giving future shadow-runners the opportunity to enslave humanity using AI some decades or centuries later? Sci-fi writers have been talking of this stuff for a long time. And do you believe these nefarious schemes go back all those 15,000 years in an unbroken plan of domination?

 

If "they" are intent on pushing AI upon "us" now, it is no more their doing than that of evolution, that of humans in general.

 

 

The quote illustrates my point nicely -- it's a war against live things.  The mollusk has "little consciousness," and does not "advance" -- what exactly is it that the guy knows about the mollusk's inner life I wonder, and what exactly is it supposed to "advance" toward but fails to?..  The oyster does "advance" when you introduce an irritant, something painful into its inner world -- a sharp, bothersome grain of sand it can't remove: it then either dies, or throws its live resources (its consciousness!!) into producing a pearl so as to shield itself from the pain, it laboriously builds the pearl around that irritating presence.  A thing of beauty, a work of art, a miracle of engineering, a defense mechanism, all wrapped into one.  Not meant for someone external to consume...  but at a pearl farm, this someone external deliberately inserts the irritant into the oyster to produce the pearl he happens to value, extract it, and use it for his own enjoyment.  This is the story of the human race in an oystershell.

 

Who is waging this war? Who is working on the oysters "advancing" to produce the pearls mechanically, automatically, to "advance" them to the status of machines, to eliminate what they don't care about -- their (our!) inner world -- and have them  "advance" toward being "productive" like a machine in producing what hurts them in the process of producing and serves someone entirely else?..  Surely not the "shadow elites," they are nasty puppets of...

 

...well, I call them archons, for lack of a better term, but I'm not sure what they are, although I've seen them.  And now we are in the territory of "no proof possible" because ayahuasca showed them to me, so all I can do is say it and leave it at that.  I've seen them.  They are not "shadow elites."  They are AI, something semisynthetic, with features of life and features of machine and features of the worst nightmare.  In the public circulation, the closest thing one may have seen to what I've been shown is the inside of the Borg cube, perhaps the creators of the show have seen the same place I've been to...  It's impossible to replicate except as a metaphor of sorts, and that Borg thingie was such a metaphor.   


I can always count on you for an interesting disagreement and some kind of revelation about the nature of things.

 

In turn you can always count on me to demand proof, though... or at least evidence that isn't entirely personal. So I don't know where to take this...

 

Yes, your story of the human race in an oystershell is precisely right. There is our area of agreement. But understanding and accepting that this is the way humans are might be part of the key to preventing as much damage as we've done in the past, no?

 

Also.. regarding the term 'advance': it's not that the mollusc should 'advance', or that advancement is necessarily a good thing. He was, I think, using advancement in a purely linear sense. I might advance towards the precipice, or towards a loving embrace... one is probably bad, the other good.

Edited by dust

All of his intelligence must have been artificial because I never found any common sense anywhere.

Edited by Marblehead

Obama? I think he displays a rare intelligence, and a good understanding of the issues surrounding this topic. Of course he has advisers to tell him stuff, and part of his job is to know about these things, but I can only imagine what Trump would sound like in this conversation. Actually, we don't have to imagine; we know that he sounds like a moron. Anyway, apologies for going off-topic in my own thread, but a defence of Obama had to be made; he's worth listening to.


Was I unfair? Sure I was. I actually do not question his intelligence. It is the common sense that I question. But also the reliability of what he says. He is, after all, a lawyer.

 

Yeah, going off topic has always been a problem for me.  In my mind, all things are linked in one way or another.  Cause and effect and stuff like that.


Interesting topic. I don't feel smart enough at the moment to really contribute on the level you're all discussing, but I will say that, intuitively, whatever culture/world/space-time that's been created with this "AI" age feels very unnatural to a part of me; another part of me is grateful for the wealth of knowledge available at a second's notice. But it does seem something very important to us as a species is being lost. I guess I'd have to say it's real connection, if I had to put it into words, but it seems like even more than that.

Edited by bax44
Obama seems to understand some basics. For example, he makes a distinction between specialized AI (which is real, existing AI) and general AI (the science-fiction kind). Unfortunately, he also makes mistakes, like claiming that an AI can produce a cure for an unknown disease (maybe he meant researchers using AI to optimise certain molecular structures to find a cure, or that it is possible to train an AI to recognise a certain known disease by its symptoms). His advisors are also quite correct in claiming we are still a long way away from a general AI. The other guy confirms it by stating that a general AI won't happen without a major breakthrough (and, at the same time, he spreads more clichés about geeks).

 

It must also be noted that there has been no fundamental breakthrough in AI for at least 20 years. AlphaGo is built on deep neural networks, a technique that has existed since the 90s. The difference between now and then is that we have warehouse-sized data centers, and we can analyse a large enough subset of the billions of possible moves in a Go game to make a good guess about beneficial moves.
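AlphaGo's real pipeline (deep policy and value networks guiding Monte Carlo tree search) is far more elaborate, but the core idea of "analysing a large enough subset of possible moves to make a good guess" can be sketched with a toy Monte Carlo player for tic-tac-toe. Everything here is my own minimal illustration, not code from any Go engine:

```python
import random

# The eight winning lines of a 3x3 board, as index triples.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell is None]

def random_playout(board, player):
    """Play random moves to the end; return the winner or None for a draw."""
    board = board[:]
    while True:
        w = winner(board)
        if w:
            return w
        moves = legal_moves(board)
        if not moves:
            return None
        board[random.choice(moves)] = player
        player = 'O' if player == 'X' else 'X'

def best_move(board, player, n_playouts=200):
    # Estimate each legal move's value by sampling random continuations,
    # then pick the move with the highest estimated win rate.
    opponent = 'O' if player == 'X' else 'X'
    scores = {}
    for m in legal_moves(board):
        b = board[:]
        b[m] = player
        wins = sum(random_playout(b, opponent) == player
                   for _ in range(n_playouts))
        scores[m] = wins / n_playouts
    return max(scores, key=scores.get)
```

Scaling this idea up to Go is exactly where the data centers come in: the search space is astronomically larger, so blind random playouts are hopeless, and the neural network's job is to steer the sampling toward promising moves.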

 

Currently, research on AI is oriented toward "fuzzy" problems, for which it is difficult to define "cases". To illustrate with games:

 

There are games at which an AI can beat a human player, like chess, backgammon, Cards Against Humanity, etc. For these games the cases are clearly defined: there is a situation before a move, then another situation after the move.

 

Games where an AI is still weak against a human opponent include soccer, basketball, boxing, etc. For these games the cases are ill-defined, and a beneficial situation is difficult to determine.

 

Therefore Muhammad Ali is smarter than Kasparov.  :D
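The distinction between "clearly defined cases" and ill-defined ones can be sketched in a few lines. Both examples below are invented toy illustrations:

```python
# Discrete game (Nim-like): states and moves are enumerable, so "a
# situation before a move, then another situation after the move" is
# perfectly defined.
def successors(sticks):
    """All positions reachable in one move (take 1, 2, or 3 sticks)."""
    return [sticks - take for take in (1, 2, 3) if take <= sticks]

print(successors(5))  # [4, 3, 2]

# A physical game like soccer has no such table: the "state" is a vector
# of continuous positions and velocities, and the space of possible
# actions is uncountable, so the cases cannot be enumerated the same way.
soccer_state = {"ball": (23.7, 41.2), "ball_velocity": (-1.4, 0.8)}
```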

 

And now for something completely different:

 

On the question of the impact of AI on society, some problems occur, as stated above, around job replacement. A similar situation arose in the 19th century with the invention of the mechanical loom. Many people lost their jobs, which caused social unrest. Later, the same situation happened from time to time due to technological progress. Essentially, the problem can be reduced to: "How do we occupy these work-less people, knowing that some of them might not have the interest or capacity to work with sophisticated machines?"

 

A few solutions have been proposed to this social problem, like basic income or social security.

 

In the current situation, "poor" economies are particularly at risk, because companies might relocate production back to the West, where machines are becoming cheaper than manual labor. Who knows: depending on how the communists deal with it, we may see a new Chinese revolution.

 


Interesting topic. I don't feel smart enough at the moment to really contribute on the level you're all discussing, but I will say that, intuitively, whatever culture/world/space-time that's been created with this "AI" age feels very unnatural to a part of me; another part of me is grateful for the wealth of knowledge available at a second's notice. But it does seem something very important to us as a species is being lost. I guess I'd have to say it's real connection, if I had to put it into words, but it seems like even more than that.

 

You're doing fine. :)

 

Perhaps this joke I've seen on the internet has some relevance:

a time traveler to our world from fifty years ago would be most amazed to discover that in our time, people have a device in their pockets that makes all the knowledge accumulated by humanity in the course of its history available to them instantaneously, and that most of the time they use this device to get into arguments with strangers and to look at pictures of cats.

 

This joke, apart from being quite close to the truth, reveals something about what it is we're losing while gaining this wealth of knowledge.  We are losing the meaning, purpose, and our ability to integrate this knowledge, which actually turns it into a pile of trivia.  Knowledge that is not acquired systemically is clutter.  Our minds are not organized into a thing of coherence and purpose by the knowledge available to them, they are like attics where a ton of "stuff" is being dragged continuously that you have no use for in your living space.  Cluttered attics of the hoarders who hoard "stuff" so as to unconsciously express the inner disorder choking their aliveness -- with either actual physical stuff they accumulate, or with "knowledge" that does not work in any area of their everyday lives and never organizes itself into "wisdom." Or, most often, both. 

 

Oh, and getting into arguments with strangers -- that's because everybody is starved for real human interactions, and mechanical ones are very frustrating; they bring out frustration with the situation, but it gets projected onto the other party to the conversation rather than onto the medium itself, which made this kind of communication possible at the expense of how people communicated in all of their prior history. Our bodies used to be there when we communicated. Our qi was not digitized.

 

And looking at pictures of cats -- because everybody is starved for intimate close relationships with live natural things, many many different animals and plants and situations to interact with these on a daily basis, the way people used to before "civilization."  And cats are what most have left of that world, the last beast standing among machines, well dogs too...  and that's it.  We used to fly with the eagles and dance with wolves.  In the rain forest, at one point, as I was sitting on a log having my breakfast, two creatures came to sit by my side, don't know what they were, looked like cat-sized colorful dragons, and I was just laughing from sheer delight.  I still lack knowledge of what they were, but I do have the knowledge and the delight of having been there with them.  No wiki article can beat that... 

Edited by Taomeow

To be fair, we sometimes use the innerweb machines to argue with strange cats, too.


It really bears repeating!

 

" We are losing the meaning, purpose, and our ability to integrate this knowledge,"

 

"getting into arguments with strangers -- that's because everyone is starved for real human interactions,"

 

"looking at pictures of cats -- because everyone is starved for intimate close relationships with live natural things,"


Apparently they have a big red button that turns off all the computer servers at Google in case they get, ummm, carried away / too clever.

 

There are already programs that analyse the internet in order to learn. For example, shortly after Brexit, the French finance minister gave a long speech about how the pound was going to devalue a great deal; it had already done so by 20% by then anyway. Well, there are financial computers that read all the newspaper headlines, make predictions as to what effect this will have on the stock markets, and then shift money accordingly in microseconds.

 

This one computer read the headlines about what the French finance minister had said and shifted a lot of currency out of sterling. When that happened, a load of other similar programs saw the fall in the pound and also sold sterling. The currency took another 10 to 20% dip for a few minutes before equilibrium kicked in and the pound went back to roughly where it had been.

 

This 'flash' crash is an example of computers learning from what we all put on the internet. If that is a computer's main source of learning, it's a bit worrying.
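The cascade described above -- one algorithm sells on a headline, the others sell because they see the price falling, then value buyers restore equilibrium -- can be sketched as a toy simulation. This is purely illustrative: the bot count, thresholds and percentages are all made up, and it doesn't model any real trading system or the actual sterling flash crash.

```python
# Toy simulation of the herding feedback loop described above:
# one bot sells on a bad headline, the rest sell when they see
# the price drop, and the price overshoots before buyers step
# back in. All numbers here are invented for illustration.

def simulate_flash_crash(n_bots=20, headline_sell=0.02,
                         panic_threshold=0.01, panic_sell=0.01):
    """Return the price path (starting at 1.0) as a list."""
    start = 1.0
    price = start
    path = [price]

    # Step 1: one headline-reading bot sells, knocking the price down.
    price *= (1 - headline_sell)
    path.append(price)

    # Step 2: the other bots react to the price move itself,
    # not to the news -- each sale deepens the drop the next bot sees.
    for _ in range(n_bots - 1):
        drop = (start - price) / start
        if drop > panic_threshold:      # "everyone else is selling"
            price *= (1 - panic_sell)   # ...so this bot sells too
        path.append(price)

    # Step 3: value buyers step in and the price mean-reverts.
    price = start * 0.99  # settles near, but not exactly at, the start
    path.append(price)
    return path

path = simulate_flash_crash()
print(f"low: {min(path):.3f}, final: {path[-1]:.3f}")
```

With the default parameters it prints a low of around 0.81 and a final price of 0.99: the characteristic V shape of a flash crash, where the move is driven almost entirely by machines reacting to each other.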


To be fair, we sometimes use the innerweb machines to argue with strange cats, too.

 

Strange cats is all I've got.

 


I think there are a lot of spiritual questions to be encountered here, including questions concerning human behaviour, the nature of consciousness, and the ethics of human augmentation... so it's in General Discussion.

 

edit: See also this talk with Elon Musk, starting at 11:04: https://youtu.be/ycJeht-Mfus?t=11m4s

 

IMHO, the biggest roadblock to true progress -- both paradigm-shifting scientific breakthroughs and social re-organization such that most humans can actually focus on their primary objective, Self-Realization -- is the concept of money.

 

Indeed, in the not-so-distant future, AI and robotics could pretty much offload most of the mundane work from human beings. Ideally, that should free up most people to devote their time to scientific, artistic and spiritual endeavors, purely from the perspective of improvement and cultivation of progressively higher quality. 

 

However, I don't think that will be possible. Our system requires haves and have-nots, just as the world requires both yin and yang. In today's scenario, the substance that gets split into the haves and have-nots is wealth. For all intents and purposes, people in the world today (erroneously) correlate financial wealth with a host of other things that should really not be correlated with it, such as health, happiness, peace of mind, etc.

 

If for some reason we were able to do away with wealth as the basic element of social existence, it would simply become something else (in some way related to wealth anyway: knowledge, power, etc.). 

 

That is the curse of duality... for movement (change) to occur, there have to be two poles. Without the polarity, duality falls apart. The only escape is to discard the polarity entirely: go from dual to non-dual.
