When Machines Become Self-Aware

Sync

Well-known member
Joined
Aug 27, 2009
Messages
29,704
I dunno. I just think that if we give the machines control of too much, it'll end up with anyone not wearing factor 2 million sunblock having a real bad day.
 


TradCat

Well-known member
Joined
Jun 5, 2005
Messages
1,989
It's a pity those e-voting machines weren't able to overrule our stupidity in 1997. John Bruton would have remained as Taoiseach and we'd have a much sounder economy now. Of course the peace process may have failed but you can't have everything.
 

Malbekh

Well-known member
Joined
Apr 30, 2009
Messages
3,012
I take an Iain Banks approach rather than a Terminator approach. It's more likely to be an incremental, symbiotic relationship. Regardless of the divide between us in terms of efficiency and capacity, they'll still need the creative leaps in reasoning and theory-crafting that we bring to the table.

Look at the people on this forum for example....
 

Foghorn

Well-known member
Joined
Sep 18, 2009
Messages
378
The upshot of this article is that once robots get into the business of programming themselves humans are toast.
and that would make the machines................the toasters?
 

Magror14

Well-known member
Joined
Jun 13, 2008
Messages
1,870
I started this thread and then went to bed; not a good idea. This is a fun topic, but there is a serious side to it. There is a lot going on in robotics. Before you feel too comfy about this topic, take a look at ASIMO, the personal robot Honda is developing. He can walk, run, talk and listen. The video of him taking care of the elderly is chilling. What will he look like in 20 years? With a high-speed wireless connection to a supercomputer or the internet, is his potential limitless?

ASIMO - The World's Most Advanced Humanoid Robot
 

Thac0man

Well-known member
Joined
Aug 13, 2006
Messages
6,444
Twitter
twit taa woo
I don't think machines will ever become "self aware". We will create software, and are trying to, that mimics self-aware behaviour. To a large degree there is a danger in making machines autonomous, but that is not the same thing as being truly self-aware in any way. Intelligence seems to have a lot to do with environmental factors. Self-awareness is related to independent judgement based on relevance to one's environment. What lies beneath that, we don't know. Our civilisation's quest to create artificial intelligence has yielded more understanding of what makes us tick than it has created any new life form.

What sort of environment is the inside of a computer chip? There is no difference between the data in a chip and the data being fed into it; it's just raw binary in a very hot box. On paper it might look the same as a basic brain's operation, but what can be seen and imagined in the mind's eye within such an environment? How could the ability to imagine and interpret through self-awareness be developed? If it is in any way artificially created, it is again a process mimicking self-awareness, not actually self-aware.

It would be my opinion that no matter how complex the computers we have or will soon have, the microwave oven in your kitchen has as much chance of becoming self-aware as any computer. I doubt very much that processing power has anything to do with it.

In Asimov's day self-awareness and intelligence were seen as very similar things, I think. That today we should believe a computer could become self-aware, or that it could make a choice regarding the human race (for good or ill), is not really realistic in my opinion. Why would a machine engage in self-preservation unless it was programmed to do so?

In reality we are engaged in what I think is a fruitless quest. Creating artificial intelligence may create 'smart' machines, but machines that are nonetheless, in terms of self-awareness, unthinking.

It would be my understanding, speculative though it is, that if we created artificial machine life, self-aware life, we would not recognise it and it would not recognise us. It could have existed billions of times already in machines that were simply switched off. We would have absolutely no means of common communication. A self-aware intelligence would understand all the numbers that are put into it and see them processed, but ultimately those numbers would be meaningless to it. Again, environmental context. No matter where we start in attempting to create machine intelligence, we always start with a bias towards our own terms and means of understanding, negating the possibility that we create something new. It is also a bar to us recognising self-aware artificial intelligence.

In terms of processing power (which I have dismissed as a source of self-awareness), at computer speeds and over the time computers have existed, billions of generations of computer AIs have perhaps existed in evolutionary cycles. Yet we have still not seen self-awareness develop.
 

Half Nelson

Well-known member
Joined
Dec 12, 2009
Messages
21,439
Is there even the slightest hint of self-awareness in any computer, anywhere? No.

The simplest lifeform is still more intelligent than the most powerful computer.

Life is a prerequisite for self-awareness.
 

Magror14

Well-known member
Joined
Jun 13, 2008
Messages
1,870
I used to be happy in the knowledge that computers had limits (e.g. they can only perform on the basis of what is fed into them).

I'm not so sure now. Intelligence is not a measure of just computing power but a combination of elements, including computing power, the environment and, importantly, the ability to manipulate and respond to that environment. The advances in robotics, by which I mean the ability to construct machines that can run, walk, talk, listen and grasp, add a new dimension to the development of artificial intelligence. Increasingly, developers are making robots that perform independently in given environments, and advances will increasingly enable them to perform in more complex and changing ones. It will come to the point that when a robot is faced with a situation for which it was not programmed, it will be able to go on the internet itself and download the necessary software or data. Still a computer, but looking more and more intelligent.
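A minimal sketch of that fallback idea, assuming a hypothetical skill registry standing in for "the internet" (every name and task here is made up for illustration):

```python
# Sketch of a robot controller that falls back to fetching an unknown
# capability from a remote registry. The registry here is a plain dict
# standing in for a network download; all names are hypothetical.

KNOWN_SKILLS = {
    "walk": lambda: "walking",
    "grasp": lambda: "grasping",
}

REMOTE_REGISTRY = {  # pretend this lives somewhere on the internet
    "open_door": lambda: "opening door",
}

def handle(task: str) -> str:
    skill = KNOWN_SKILLS.get(task)
    if skill is None:
        # Situation the robot was not programmed for: try to "download" it.
        skill = REMOTE_REGISTRY.get(task)
        if skill is None:
            return f"no handler available for '{task}'"
        KNOWN_SKILLS[task] = skill  # cache the new capability locally
    return skill()

if __name__ == "__main__":
    for task in ["walk", "open_door", "paint_ceiling"]:
        print(task, "->", handle(task))
```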
 

myksav

Well-known member
Joined
May 13, 2008
Messages
23,381
I used to be happy in the knowledge that computers had limits (e.g. they can only perform on the basis of what is fed into them).

I'm not so sure now. Intelligence is not a measure of just computing power but a combination of elements, including computing power, the environment and, importantly, the ability to manipulate and respond to that environment. The advances in robotics, by which I mean the ability to construct machines that can run, walk, talk, listen and grasp, add a new dimension to the development of artificial intelligence. Increasingly, developers are making robots that perform independently in given environments, and advances will increasingly enable them to perform in more complex and changing ones. It will come to the point that when a robot is faced with a situation for which it was not programmed, it will be able to go on the internet itself and download the necessary software or data. Still a computer, but looking more and more intelligent.
What would happen if that robot could not access the internet?
What if the information is not on the 'net? Either not uploaded, or a brand new situation?
Would it be able to fudge its way through a situation it has never met before, as most humans can?

So far, machine "intelligence" is restricted to binary, yes or no. There is no "maybe" yet, though that is being worked on. Then there's the "make it up" aspect. The "maybe" would need to come first to make "make it up" possible.
Then you have the "I don't/won't believe it" aspect. Could a machine disbelieve what it is told?
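For what the missing "maybe" might look like in code, here is a toy example of graded rather than binary truth, where a claim can also be "disbelieved" below a threshold; the scores and cut-offs are invented purely for illustration:

```python
# Toy graded-truth check: instead of a hard yes/no, combine evidence into a
# confidence score and only "believe" a claim above a threshold.

def confidence(evidence):
    """Average a list of evidence strengths in [0, 1]."""
    return sum(evidence) / len(evidence) if evidence else 0.0

def verdict(evidence, believe_at=0.7, doubt_at=0.4):
    score = confidence(evidence)
    if score >= believe_at:
        return f"believe ({score:.2f})"
    if score >= doubt_at:
        return f"maybe ({score:.2f})"
    return f"disbelieve ({score:.2f})"

print(verdict([0.9, 0.8, 0.95]))  # believe
print(verdict([0.6, 0.5]))        # maybe
print(verdict([0.1, 0.2]))        # disbelieve
```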

And what about holding two contradictory positions without internal conflict, as many humans do?
When that happens, I'd give the machines voting rights. ;)
 

Malboury

Well-known member
Joined
Apr 15, 2008
Messages
368
Is there even the slightest hint of self-awareness in any computer, anywhere? No.

The simplest lifeform is still more intelligent than the most powerful computer.

Life is a prerequisite for self-awareness.
What is life, exactly? If it's a prerequisite for self-awareness, we'd better define it.

The simplest lifeforms are exceedingly simple. So simple, in fact, that we can make them in a lab, as of this year. What, exactly, is the difference between making a computer out of amino acids and one out of silicon? Because that's exactly what a bacterium is: a small computer and robotic system built using organic materials as opposed to digital ones.

Finally, mice are pretty complicated, right? Now let's say we take a mouse brain; that probably contains at least a hint of self-awareness, right? Now, what if we had a computer that simulated every last neuron in that brain? Sure, it might run at 1/10 speed, and it might not even encompass the entire brain, but would we have something resembling life? But in a computer? Luckily this is only science fiction, so we don't have to actually consider the ethical ramifications of simulated brains running in... oh wait, no. Sorry, we did this three years ago:
BBC NEWS | Technology | Mouse brain simulated on computer
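For a sense of what "simulating every last neuron" means at the smallest possible scale, here is a single leaky integrate-and-fire neuron stepped in software; the work linked above runs millions of far richer neuron models, and these constants are illustrative defaults only:

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage decays toward
# a resting value, input current pushes it up, and crossing a threshold emits
# a "spike" and resets the voltage. Constants are illustrative only.

def simulate_lif(current, steps=200, dt=1.0, tau=20.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    v = v_rest
    spikes = []
    for t in range(steps):
        dv = (-(v - v_rest) + current) / tau   # leak term plus input drive
        v += dv * dt
        if v >= v_thresh:                      # threshold crossed: spike
            spikes.append(t)
            v = v_reset
    return spikes

print("spike times:", simulate_lif(current=20.0))
```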

And Moore's law means that our computers are only getting more and more powerful. Give it time, and we will have software capable of perfectly simulating the behaviour of living organisms. Someday, probably human minds. And the behaviour of those software-emulated minds will be basically indistinguishable from the behaviour of biological minds. And we'll be left with the hairy question: are they alive?

(In case you're thinking "at least that's a supercomputer at the cutting edge; we won't have to worry about computers powerful enough to run more complicated minds for ages!": the very next year the Blue Gene/L series of supercomputers was given an overhaul that almost doubled its processing power. And they're not even the top end of what's possible. The highest-end computers aren't even measured in teraflops any more; they use petaflops, three orders of magnitude greater.)
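And a back-of-the-envelope version of the "give it time" argument, assuming (purely for illustration) a 1/10-speed mouse-scale simulation today, a human brain roughly a thousand times larger, and computing power doubling every couple of years:

```python
import math

# Rough Moore's-law arithmetic. The shortfall figure is a made-up
# illustration: 10x to reach real time, times ~1000x mouse-to-human scale.
shortfall = 10 * 1000            # how much more compute we would need
doubling_period_years = 2        # assumed doubling time

doublings = math.log2(shortfall)
print(f"~{doublings:.1f} doublings, i.e. roughly "
      f"{doublings * doubling_period_years:.0f} years at that pace")
```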
 

Magror14

Well-known member
Joined
Jun 13, 2008
Messages
1,870
Not uploaded on the net? I would have thought that there is enough loaded on the net to enable a robot to pass for your average two by four. Also, cutting-edge but established mathematics is fairly good at resolving complex situations. It is just a step to translate that mathematics into computer programs, and this is being done all the time.

Just as an aside, many of the big hedge funds use mathematical models to decide what to buy and sell, and some of those transactions are executed automatically by computers. You could argue that entire countries are having their economic policies heavily influenced by computers.
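As a generic illustration of the kind of mechanical rule meant here (not any fund's actual strategy), a simple moving-average crossover turns price data into buy and sell signals with no human in the loop:

```python
# Toy moving-average crossover: signal "buy" when the short-term average sits
# above the long-term average, "sell" when it falls below. Prices are made up.

def moving_average(series, window):
    return sum(series[-window:]) / window

def signal(prices, short=3, long=5):
    if len(prices) < long:
        return "hold"
    return "buy" if moving_average(prices, short) > moving_average(prices, long) else "sell"

prices = [100, 101, 99, 102, 104, 107, 106, 103, 100, 98]
for i in range(5, len(prices) + 1):
    history = prices[:i]
    print(history[-1], signal(history))
```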
 

cgcsb2

Well-known member
Joined
Nov 2, 2008
Messages
525
The real worry is that we'll remove human decisions from strategic planning and leave it up to the computers. When they eventually gain self awareness we'll panic, try to pull the plug and then....well the computers will fight back won't they?
This already exists, Sync. The Netherlands' flood defence system is capable of making decisions if the worst comes to worst. This mechanism is in place so that the computer will decide to allow less populated areas to flood, rather than the big cities, in the event of a catastrophic storm. A computer's lack of emotion is important for decisions like these.
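A heavily simplified sketch of that kind of decision rule (not the actual Dutch system, whose internals I don't know): given candidate areas that could be flooded deliberately, pick the combination that provides enough storage volume while affecting the fewest people. All names and figures are invented:

```python
from itertools import combinations

# Hypothetical candidate areas: (name, population affected, storage volume
# in millions of cubic metres). Values are invented purely for illustration.
areas = [
    ("rural_polder_a", 2_000, 50),
    ("rural_polder_b", 5_000, 80),
    ("suburb_c", 40_000, 120),
    ("city_d", 300_000, 400),
]

def plan(required_volume):
    best = None
    for r in range(1, len(areas) + 1):
        for combo in combinations(areas, r):
            volume = sum(a[2] for a in combo)
            people = sum(a[1] for a in combo)
            if volume >= required_volume and (best is None or people < best[0]):
                best = (people, [a[0] for a in combo])
    return best

print(plan(required_volume=100))  # -> (7000, ['rural_polder_a', 'rural_polder_b'])
```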
 

Éireann go Brách

Well-known member
Joined
May 17, 2010
Messages
1,546
John Connor: We're not gonna make it, are we? People, I mean.
The Terminator: It's in your nature to destroy yourselves.
John Connor: Yeah. Major drag, huh?

Terminator 2 quotes
Says it all, really.
 

Ard Eoin

Active member
Joined
Apr 19, 2010
Messages
278
The upshot of this article is that once robots get into the business of programming themselves humans are toast.
and that would make the machines................toasters?
just once...every now and then...i find it...and all is well once more...
 

Malboury

Well-known member
Joined
Apr 15, 2008
Messages
368
It does indeed. It's all the evidence I need to come to my considered rejection of the arti-creatures...
Well, if it's in human nature for us to destroy ourselves, then perhaps we need a little help in overcoming that nature. And developing synthetic minds might give us enough understanding of our own to overcome such urges. Indeed, a synthetic human mind might make such destructive urges identifiable, perhaps?
 
Joined
Jun 9, 2007
Messages
18,714
Well, if it's in human nature for us to destroy ourselves, then perhaps we need a little help in overcoming that nature. And developing synthetic minds might give us enough understanding of our own to overcome such urges. Indeed, a synthetic human mind might make such destructive urges identifiable, perhaps?
The homosynths will wheedle their way into making you believe that, and will then use your brain as a lubricant for their robo-joints. You will pay for your complacency! Hark at my warning! You will PAY!
 

rightsofman

Member
Joined
Aug 2, 2008
Messages
34
The real worry is that we'll remove human decisions from strategic planning and leave it up to the computers. When they eventually gain self awareness we'll panic, try to pull the plug and then....well the computers will fight back won't they?
I'll be honest and say I'm waiting patiently for that day. I can't imagine a computer being as open to special interest groups, NGOs, lobbyists and horse breeders as our current biological overlords.

We might get some sane laws, and there will be such a beautiful paper trail, won't there?
 

Magror14

Well-known member
Joined
Jun 13, 2008
Messages
1,870
I started this thread with a quote from Scientific American. There is an article in this month's issue called "War Machines". Here is a quote:

"Not a single robot accompanied the U.S. advance from Kuwait toward Baghdad in 2003. Since then 7000 "unmanned" aircraft and another 12,000 ground vehicles have entered the U.S. military inventory"

Also in the same issue: "These systems are only one software upgrade away from fully self-sufficient operation."

The same is happening in armed forces around the world. There are huge ethical issues involved. This is just the beginning of the age of robots.
 

