Post
by teraflop122 » Sat Jul 15, 2006 7:13 am
...Actually, robots capable of HUMAN thought are most likely decades away.
My own theory is that computers can indeed reach a state of self-aware intelligence, but that intelligence would need to be based on the computer's own brain - digital components processing 0s and 1s - NOT an imitation of an organic brain, which processes data holistically. With such a different type of mind, the first machine intelligences would most likely be incomprehensible to humans, and incapable of communicating with us effectively. Our brains would be so different, as would our thoughts. You would not be able to determine what a machine intelligence "wants."
In that scenario, it is very unlikely that a machine intelligence would seek the death of humans, because humans and machines would exist on two different...not levels, but planes, perhaps. A machine intelligence might as easily take humans for a geological process as for living things. The evils that drive humans, like greed and hatred, simply wouldn't apply to an intelligence as alien as a machine's.
As things stand today, the problem is that we don't know how we ourselves think, so trying to make a robot think is a shot in the dark. I'm fairly sure that even a simple computer with a PDA-class processor would be capable of relatively advanced learning and human-like behavior - but only with the appropriate algorithms and programs.
It is my belief that human technologists will eventually stumble upon a very special algorithm, maybe just a couple of lines of code long, which, when put into a computer system, will statistically always yield a learning intelligence.
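For what it's worth, there are already real learning algorithms whose core update rule is only a couple of lines long - the classic perceptron (Rosenblatt, 1958) is one. This sketch is NOT the hypothetical "special algorithm" above, just an illustration of how short a working learning rule can be; the function names and the toy AND dataset are my own choices for the example:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred        # all the "learning" happens in
            w[0] += lr * err * x1     # these three update lines:
            w[1] += lr * err * x2     # nudge each weight toward
            b += lr * err             # whatever reduces the error
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn logical AND, a linearly separable problem the rule can solve:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Of course, the gap between "a two-line update rule that learns AND" and a rule that reliably produces general intelligence is exactly the open question.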