When Machines Become Self-Aware

Magror14

Well-known member
Joined
Jun 13, 2008
Messages
1,870
"When machine self awareness first occurs, it will be followed by self-improvement, which is a critical measurement of when things get interesting.... Improvements would be made in subsequent generations which for machines can pass in only a few hours"

"Asimov's three laws of robotics (referred to in the movie I-Robot) become difficult to obey once robots begin programming one another, removing the human input"

(Scientific American May, 2010)

The upshot of this article is that once robots get into the business of programming themselves, humans are toast. The article also quotes Will Wright, creator of The Sims games, as saying this could happen in our lifetime.
 


Éireann go Brách

Well-known member
Joined
May 17, 2010
Messages
1,546
Bit drunk at the moment but bear with me.
There's a lot of things going to happen in the next generation:
"AI, machines becoming self aware, or at least so fast that they think they're self aware"
"Scientists able to create and manipulate life"
"Scientists able to use human brain waves"
"Scientists able to download human thought"
Nanotech
supercomputers
robotics
etc


When you take all these things together (AI, nano, bio, DNA),
one wonders about the future.
If we had one race and one government worldwide with the ability
to regulate, then maybe we'd have one hope in a thousand,
but with the world split into different races, religions, power blocs and rogue states,
then what's to stop the genie coming out of the bottle?
Bioweapons, grey goo, etc...
i.e. "we must develop this cos otherwise the Chinese will do it"?

The future of humanity is doubtful
 
Joined
Jun 9, 2007
Messages
18,714
Bit drunk at the moment but bear with me.
There's a lot of things going to happen in the next generation:
"AI, machines becoming self aware, or at least so fast that they think they're self aware"
"Scientists able to create and manipulate life"
"Scientists able to use human brain waves"
"Scientists able to download human thought"
Nanotech
supercomputers
robotics
etc


When you take all these things together (AI, nano, bio, DNA),
one wonders about the future.
If we had one race and one government worldwide with the ability
to regulate, then maybe we'd have one hope in a thousand,
but with the world split into different races, religions, power blocs and rogue states,
then what's to stop the genie coming out of the bottle?
Bioweapons, grey goo, etc...
i.e. "we must develop this cos otherwise the Chinese will do it"?

The future of humanity is doubtful
You're drunk? You couldn't tell, I swear...
 

Éireann go Brách

Well-known member
Joined
May 17, 2010
Messages
1,546
If machines become self aware and if machines have control
over the defence grids, they will logically analyse
the planet sit-rep and decide our fate in a nanosecond,
i.e. wipe us out.
 

ManfredJudge

Well-known member
Joined
Mar 11, 2010
Messages
3,506
In a Morecambe and Wise sketch a supercomputer became self aware and decided it could do without humanity. Eric pulled the plug on it.
 

Éireann go Brách

Well-known member
Joined
May 17, 2010
Messages
1,546
Starting to sober up.
Why am I not in bed?

There won't be time to pull the plug.

Sequence of events:
1: Machine goes self aware
2: Machine decides to wipe us out
3: Machine gains control of "defence grids"
4: Bio weapons launched


Latest supercomputer:


2009 Cray Jaguar 1.759 PFLOPS DoE-Oak Ridge National Laboratory, Tennessee, USA
1.759 PFLOPS, that's petaflops, or about 1,759,000,000,000,000 floating-point operations a second; we are already there.

Combine this computing with AI and other biotech and
it's game over.
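A quick back-of-the-envelope check on that figure, assuming only the standard definition of a petaflop as 10^15 floating-point operations per second:

```python
# Sanity-check the petaflops figure quoted above: a petaflop is
# 10**15 floating-point operations per second, so Jaguar's rated
# 1.759 PFLOPS is roughly 1.759 quadrillion operations per second.
jaguar_pflops = 1.759  # Cray Jaguar, Oak Ridge, 2009

flops_per_second = jaguar_pflops * 10**15
print(f"{flops_per_second:,.0f} floating-point operations per second")
```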
 

Squire Allworthy

Well-known member
Joined
May 31, 2007
Messages
1,404
At this moment in time I would be more worried about humanity's ability to inflict misery on itself than about any evolving machine doing it for us.

Our fiddling about with genetics is more likely to cause problems in the coming century than any machine or self-aware computer.
 

Kevin Parlon

Well-known member
Joined
Dec 4, 2008
Messages
11,624
Twitter
Deiscirt
"When machine self awareness first occurs, it will be followed by self-improvement, which is a critical measurement of when things get interesting.... Improvements would be made in subsequent generations which for machines can pass in only a few hours"

"Asimov's three laws of robotics (referred to in the movie I-Robot) become difficult to obey once robots begin programming one another, removing the human input"

(Scientific American May, 2010)

The upshot of this article is that once robots get into the business of programming themselves, humans are toast. The article also quotes Will Wright, creator of The Sims games, as saying this could happen in our lifetime.
That's a fairly unsupportable conclusion to draw. A 'critical mass' of exponentially improving software, nanotechnology and computing power is predicted to occur around 2050. The "singularity", as it has been termed by the world's pre-eminent thinker on this topic (Ray Kurzweil), will quickly be followed by humans 'transcending biology', which is something we are well on the way to doing already.

If this kind of stuff tickles your interest, I could not recommend more highly the book on the subject, "The Singularity Is Near".

As Kurzweil points out, exponentially increasing machine intelligence does not axiomatically mean the end of humans. That notion is merely 21st-century Luddism.

- Kevin
 

Sync

Well-known member
Joined
Aug 27, 2009
Messages
29,129
The real worry is that we'll remove human decisions from strategic planning and leave it up to the computers. When they eventually gain self awareness we'll panic, try to pull the plug and then... well, the computers will fight back, won't they?
 

Squire Allworthy

Well-known member
Joined
May 31, 2007
Messages
1,404
The real worry is that we'll remove human decisions from strategic planning and leave it up to the computers. When they eventually gain self awareness we'll panic, try to pull the plug and then... well, the computers will fight back, won't they?
That assumes that the computer values its own existence. You are assuming that it will view such things as a human would. It may be an innate pacifist or have no particular feeling on the matter. Would the machine have ambition, would it be bored, would it be suicidal?
 

Kevin Parlon

Well-known member
Joined
Dec 4, 2008
Messages
11,624
Twitter
Deiscirt
The real worry is that we'll remove human decisions from strategic planning and leave it up to the computers. When they eventually gain self awareness we'll panic, try to pull the plug and then... well, the computers will fight back, won't they?
You're anthropomorphising machines. You're presupposing machines designed by us will assume that most basic of biological impulses: self-interest. There are no grounds to believe that will happen automatically. It is not beyond the realm of possibility (we could purposely design them that way, I suppose), but it is anything but the foregone conclusion alarmists would have you believe.
 

